File comparison of huge files
Post 302588239 by dude2cool on Saturday 7th of January 2012, 06:36 PM
You have around 6.5 GB in /, which is where your /tmp lives, so sort is running out of temporary space.
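You can double-check which filesystem /tmp sits on and how much room is left with something like:

Code:
df -h / /tmp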

Is there any other filesystem on your hard disk with free space you can use? If so, use the -T option. Look it up:

man sort

Pretty easy: find a directory/filesystem where you have ample free space and specify it with -T.

Code:
-T directory    Specifies the directory in which to place temporary files
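For example, assuming /home is on a filesystem with plenty of free space (substitute whatever mount point actually has room on your box, and note /home/sorttmp is just an example path), a minimal sketch:

Code:
# create a scratch directory on the roomy filesystem
mkdir -p /home/sorttmp

# point sort's temporary files there instead of the default /tmp
sort -T /home/sorttmp file1 > file1.sorted
sort -T /home/sorttmp file2 > file2.sorted

sort cleans up its temporary files when it finishes, so the space is only needed while it runs. (With GNU sort you can also set the TMPDIR environment variable instead of passing -T.)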

