07-14-2009
How about breaking the file into multiple small files, sorting each one, and merging them later?
As pointed out, the disk may be full; check free space with "df" (or per-directory usage with "du") in UNIX/Linux.
Try Perl/Python to do an intelligent sort, a bubble sort or something similar. (Please note: at this point I am not thinking of the big 'O' or what is most efficient; just giving a few ideas.)
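A minimal sketch of that split/sort/merge approach (the file names are placeholders):

```shell
# Split the big file into 1,000,000-line pieces, sort each piece on its
# own, then merge the already-sorted pieces with sort -m, which needs
# very little memory.
split -l 1000000 bigfile.txt piece.
for f in piece.*; do
    sort "$f" -o "$f"        # sort each piece in place
done
sort -m piece.* > bigfile.sorted
rm -f piece.*
```

sort -m only merges already-sorted inputs, so it runs in a single cheap pass. GNU sort does essentially this internally using temporary files, so splitting by hand mainly helps when those temp files must live on a different (non-full) disk; GNU sort's -T option can also point its temporary files at a disk that still has space.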
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
I have a file containing many records separated by a % that I would like to sort uniquely (and if possible with a count of dupes) while maintaining the integrity of each record.
File looks like this:
%
srcip: 5.6.7.8
srcburb: internal
dstip: 1.2.3.4
dstport: 2000
dstburb: external... (12 Replies)
Discussion started by: earnstaf
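One way to sketch this, assuming each record is delimited by a line containing only % (the file name is a placeholder): join each record's lines with tabs so the record becomes a single line, then let sort and uniq -c do the duplicate counting.

```shell
# One line per record: join the lines between "%" markers with tabs,
# then sort the flattened records and count identical ones.
awk '/^%$/ { if (rec != "") print rec; rec = ""; next }
           { rec = rec (rec == "" ? "" : "\t") $0 }
     END   { if (rec != "") print rec }' records.txt |
sort | uniq -c | sort -rn
```

Pipe the result through tr '\t' '\n' if you want the surviving records unfolded again.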
2. Solaris
Can anyone give me a command to delete duplicate records without sort.
Suppose the records are like below:
345,bcd,789
123,abc,456
234,abc,456
712,bcd,789
the output should be
345,bcd,789
123,abc,456
The key for the records is the 2nd and 3rd fields; fields are separated by a comma (,). (2 Replies)
Discussion started by: svenkatareddy
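A common awk idiom covers this (sketch; input.txt is a placeholder): print a record only the first time its (2nd field, 3rd field) key is seen, drop later duplicates, and preserve the original order with no sort at all.

```shell
# Print a record only the first time its (field2, field3) key appears.
awk -F, '!seen[$2 FS $3]++' input.txt
```

For the sample data above this keeps 345,bcd,789 and 123,abc,456 and drops the two later records with duplicate keys.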
3. Shell Programming and Scripting
Can anyone give me a command to delete duplicate records without sort.
Suppose the records are like below:
345,bcd,789
123,abc,456
234,abc,456
712,bcd,789
the output should be
345,bcd,789
123,abc,456
The key for the records is the 2nd and 3rd fields; fields are separated by a comma (,). (19 Replies)
Discussion started by: svenkatareddy
4. Shell Programming and Scripting
Hi,
I am new to scripting. I need a script to sort the records in a file and then split them into different files.
For example, the file is:
H1......................
H2......................
D2....................
D2....................
H1........................... (15 Replies)
Discussion started by: Sunitha_edi82
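A sketch of one way to do it, assuming the first two characters (H1, H2, D2, ...) identify the record type and are also the split key (names are placeholders):

```shell
# Sort the records, then append each one to a file named after its
# two-character record type (H1.out, D2.out, ...).
sort input.txt |
awk '{ out = substr($0, 1, 2) ".out"; print > out }'
```

awk keeps each output file open, so this is a single pass even with many records.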
5. Shell Programming and Scripting
Hello,
I have got one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameters and generate another output file.
What will be the best and fastest way to extract the new file?
sample file format :--... (2 Replies)
Discussion started by: learner16s
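For a file that size a single streaming pass is usually the fastest option. A hedged sketch (the field number, the "ACTIVE" value and the file names are invented placeholders, since the real format isn't shown):

```shell
# Stream the file once and keep only the rows whose 3rd comma-separated
# field matches; awk never loads the whole file into memory.
awk -F',' '$3 == "ACTIVE"' input.csv > extracted.csv
```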
6. Shell Programming and Scripting
I was trying to use the AIX 6.1 sort command to sort fixed-length data records, sorting by specific columns only. It took some time to figure out how to get it to work, so I wanted to share the solution. The sort man page wasn't much help, because it talks about field delimiters (default space... (1 Reply)
Discussion started by: CheeseHead1
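The usual trick for fixed-length records (a sketch; the positions and file name are made up): name a field delimiter that never occurs in the data, so the whole line becomes field 1, then give sort character positions within that field.

```shell
# '|' never occurs in the data, so each whole line is field 1;
# -k1.6,1.8 then sorts on characters 6 through 8 of every line.
sort -t '|' -k1.6,1.8 fixed.dat
```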
7. UNIX for Dummies Questions & Answers
Hi everyone.
I am a newbie to Linux stuff. I have this kind of problem which I couldn't solve alone. I have a text file with records separated by empty lines, like this:
ID: 20
Name: X
Age: 19
ID: 21
Name: Z
ID: 22
Email: xxx@yahoo.com
Name: Y
Age: 19
I want to grep records that... (4 Replies)
Discussion started by: Atrisa
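awk's paragraph mode fits this exactly: with RS set to the empty string, each blank-line-separated block is read as one record, so a pattern match prints the whole block. A sketch (the pattern and file name are examples):

```shell
# RS= turns on paragraph mode: one record per blank-line-separated
# block. ORS='\n\n' puts the blank separators back on output.
awk -v RS= -v ORS='\n\n' '/Age: 19/' people.txt
```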
8. UNIX for Dummies Questions & Answers
Hi all,
So, I've got a monster text document comprising a list of various company names and associated info, just in one long list. I need to sort them alphabetically by name...
The text document looks like this:
Company Name:
the_first_company's_name_here
Address:... (2 Replies)
Discussion started by: quee1763
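One decorate-sort-undecorate sketch, assuming each block's second line is the company name, blocks are separated by blank lines, and no record contains a literal tab (the file name is a placeholder):

```shell
# Fold each blank-line block onto one tab-joined line, sort those lines
# on field 2 (the company name), then unfold back into blocks.
awk -v RS= -v FS='\n' -v OFS='\t' '{ $1 = $1; print }' companies.txt |
sort -t "$(printf '\t')" -k2,2 |
awk -v FS='\t' -v OFS='\n' '{ $1 = $1; print $0 "\n" }'
```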
9. Shell Programming and Scripting
Dear All,
I have two files, both containing 10 million records each, comma-separated (CSV format).
One file is input.txt other is status.txt.
Input.txt-> contains fields with one unique id field (primary key we can say)
Status.txt -> contains two fields only:1. unique id and 2. status
... (8 Replies)
Discussion started by: vguleria
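sort plus join handle this comfortably even at 10 million rows, since both work from disk. A sketch, assuming the unique id is the first comma-separated column in both files:

```shell
# Sort both files on the id column, then join them on it; each output
# row has the id, the input.txt fields, then the status field.
sort -t, -k1,1 input.txt  > input.sorted
sort -t, -k1,1 status.txt > status.sorted
join -t, -1 1 -2 1 input.sorted status.sorted > merged.txt
```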
10. Shell Programming and Scripting
I have a file which has number of pipe delimited records.
I am able to read the records, but I want to sort them after reading.
i=0
while IFS="|" read -r usrId dataOwn expire email group secProf startDt endDt smhRole RoleCat DataProf SysRole MesgRole SearchProf
do
print $usrId $dataOwn... (4 Replies)
Discussion started by: harish468
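The simplest fix is to sort before the loop rather than after: pipe the sorted file into the while. A sketch, sorting on the first field (the file name is a placeholder):

```shell
# Sort the pipe-delimited records on field 1, then read them back in
# the usual IFS='|' loop, already in order.
sort -t '|' -k1,1 users.txt |
while IFS='|' read -r usrId dataOwn rest; do
    printf '%s %s\n' "$usrId" "$dataOwn"
done
```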
LEARN ABOUT DEBIAN
clfmerge
clfmerge(1) logtools clfmerge(1)
NAME
clfmerge - merge Common-Log Format web logs based on time-stamps
SYNOPSIS
clfmerge [--help | -h] [-b size] [-d] [file names]
DESCRIPTION
The clfmerge program is designed to avoid using sort to merge multiple web log files. Web logs for big sites consist of multiple files in
the >100M size range from a number of machines. For such files it is not practical to use a program such as gnusort to merge the files
because the data is not always entirely in order (so the merge option of gnusort doesn't work so well), but it is not in random order (so
doing a complete sort would be a waste). Also the date field that is being sorted on is not particularly easy to specify for gnusort (I
have seen it done but it was messy).
This program is designed to simply and quickly sort multiple large log files with no need for temporary storage space or overly large buffers in memory (the memory footprint is generally only a few megs).
OVERVIEW
It will take a number (from 0 to n) of file-names on the command line, it will open them for reading and read CLF format web log data from them all. Lines which don't appear to be in CLF format (NB they aren't parsed fully, only minimal parsing to determine the date is performed) will be rejected and displayed on standard-error.
If zero files are specified then there will be no error, it will just silently output nothing; this is for scripts which use the find command to find log files and which can't be counted on to find any log files, it saves doing an extra check in your shell scripts.
If one file is specified then the data will be read into a 1000 line buffer and it will be removed from the buffer (and displayed on standard output) in date order. This is to handle the case of web servers which date entries on the connection time but write them to the log at completion time and thus generate log files that aren't in order (Netscape web server does this; I haven't checked what other web servers do).
If more than one file is specified then a line will be read from each file, the file that had the earliest time stamp will be read from
until it returns a time stamp later than one of the other files. Then the file with the earlier time stamp will be read. With multiple
files the buffer size is 1000 lines or 100 * the number of files (whichever is larger). When the buffer becomes full the first line will
be removed and displayed on standard output.
OPTIONS
-b buffer-size
Specify the buffer-size to use; if 0 is specified, the sliding-window sorting of the data (which improves speed) is disabled.
-d Set domain-name mangling to on. This means that if a line starts with www.company.com as the name of the site that was requested then that would be removed from the start of the line and the GET / would be changed to GET http://www.company.com/, which allows programs like Webalizer to produce good graphs for large hosting sites. Also it will convert the domain name to lower case.
EXIT STATUS
0 No errors
1 Bad parameters
2 Can't open one of the specified files
3 Can't write to output
AUTHOR
This program, its manual page, and the Debian package were written by Russell Coker <russell@coker.com.au>.
SEE ALSO
clfsplit(1),clfdomainsplit(1)
Russell Coker <russell@coker.com.au> 0.06 clfmerge(1)