09-01-2011
Make sure you are sorting in the C locale. Other locales can be 10 times slower.
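A minimal sketch of the locale tip above: forcing `LC_ALL=C` makes sort compare raw bytes instead of applying locale collation rules, which is usually much faster on large files.

```shell
# Byte-order sort in the C locale; typically much faster than UTF-8 locales
printf 'banana\ncherry\napple\n' | LC_ALL=C sort
# apple
# banana
# cherry
```

Note that C-locale order is byte order, so uppercase letters sort before lowercase.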
9 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi,
May I know, if a pipe-separated file is large, what is the best method to calculate the unique row count of the 3rd column and to get a list of the unique values of that column?
Thanks in advance! (20 Replies)
Discussion started by: deepakwins
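One common approach (a sketch, not taken from the thread; the file name `data.psv` is hypothetical) is a single awk pass that prints each distinct 3rd-column value the first time it is seen and keeps a running count:

```shell
# Sketch: list unique 3rd-column values of a pipe-separated file and count them
printf 'a|b|x\nc|d|y\ne|f|x\n' > data.psv   # small sample standing in for the large file
awk -F'|' '!seen[$3]++ { print $3; n++ } END { print "unique values:", n+0 }' data.psv
# x
# y
# unique values: 2
```

This reads the file once and needs memory only for the distinct values, so it scales better than sorting the whole file.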
2. UNIX for Dummies Questions & Answers
This may sound like a trivial problem, but I still need some help:
I have a file with ids and I want to split it 'n' ways (could be any number) into files:
1
1
1
2
2
3
3
4
5
5
Let's assume 'n' is 3, and we cannot have the same id in two different partitions. So the partitions may... (8 Replies)
Discussion started by: ChicagoBlues
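One way to sketch such a partitioning (an assumption, not the thread's accepted answer; `ids.txt` and the `partition.N` names are hypothetical) is to advance to the next output file only when the id changes, so equal ids always land in the same partition. This assumes the input is sorted so that equal ids are adjacent, as in the sample above:

```shell
# Sketch: round-robin ids into n partition files, never splitting an id group
printf '1\n1\n1\n2\n2\n3\n3\n4\n5\n5\n' > ids.txt
awk -v n=3 '$1 != prev { part = (part % n) + 1; prev = $1 }
            { print > ("partition." part) }' ids.txt
```

With n=3 this writes ids 1 and 4 to partition.1, ids 2 and 5 to partition.2, and id 3 to partition.3.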
3. Shell Programming and Scripting
Input file
---------
12:name1:|host1|host1|host2|host1
13:name2:|host1|host1|host2|host3
14:name3:
......
Required output
---------------
12:name1:host1(2)|host1(1)
13:name2:host1(2)|host2(1)|host3(1)
14:name3:
where (x) - Count how many times field appears in last column
... (3 Replies)
Discussion started by: greycells
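The tallying idea can be sketched in awk (my sketch, not a reply from the thread; `hosts.txt` is a hypothetical file name): split the third `:` field on `|`, count each token, and rebuild the line with `name(count)` pairs. Note that awk's `for (k in ...)` iterates in an unspecified order, so the order of the counted hosts may vary:

```shell
# Sketch: count '|'-separated tokens in the 3rd ':' field of each line
printf '13:name2:|host1|host1\n14:name3:\n' > hosts.txt
awk -F: '{
    m = split($3, h, "|"); split("", cnt); out = ""
    for (i = 1; i <= m; i++) if (h[i] != "") cnt[h[i]]++
    for (k in cnt) out = out (out ? "|" : "") k "(" cnt[k] ")"
    print $1 ":" $2 ":" out
}' hosts.txt
# 13:name2:host1(2)
# 14:name3:
```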
4. Shell Programming and Scripting
I would really appreciate a solution for this:
invoice# client#
5929 231
4358 231
2185 231
6234 231
1166 464
1264 464
3432 464
1720 464
9747 464
1133 791
4930 791
5496 791
6291 791
8681 989
3023 989 (2 Replies)
Discussion started by: hemo21
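If the goal is a per-client invoice count (my guess at the intent; `inv.txt` is a hypothetical file name), a single awk pass over column 2 suffices, skipping the header line. The output order of `for (c in cnt)` is unspecified, so pipe through sort if a stable order matters:

```shell
# Sketch: count invoices per client number (column 2), skipping the header
printf 'invoice# client#\n5929 231\n4358 231\n1166 464\n' > inv.txt
awk 'NR > 1 { cnt[$2]++ } END { for (c in cnt) print c, cnt[c] }' inv.txt | sort
# 231 2
# 464 1
```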
5. UNIX for Dummies Questions & Answers
Hi. I am not sure the title gives an optimal description of what I want to do.
I have several text files that contain data in many columns. All the files are organized the same way, but the data in the columns might differ. I want to count the number of times data occur in specific columns,... (0 Replies)
Discussion started by: JamesT
6. UNIX for Dummies Questions & Answers
I would like to print unique lines without sort or uniq. Unfortunately the server I am working on does not have sort or uniq. I have not been able to contact the administrator of the server for several weeks to ask him to add them. (7 Replies)
Discussion started by: cokedude
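The classic answer to this one (assuming awk is available on the server, which the thread does not confirm) is the idiom `!seen[$0]++`: it prints a line only the first time it appears, preserving the original order with no sort or uniq needed:

```shell
# Print each distinct line once, in first-seen order, without sort or uniq
printf 'a\nb\na\nc\nb\n' | awk '!seen[$0]++'
# a
# b
# c
```

`seen[$0]++` is 0 (false) on first sight and non-zero afterwards, so negating it selects exactly the first occurrence.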
7. Shell Programming and Scripting
I have one script as below:
#!/bin/ksh
Outputfile1="/home/OutputFile1.xls"
Outputfile2="/home/OutputFile2.xls"
InputFile1="/home/InputFile1.sql"
InputFile2="/home/InputFile2.sql"
echo "Select hobby, class, subject, sports, rollNumber from Student_Table" >> InputFile1
echo "Select rollNumber... (3 Replies)
Discussion started by: Sharma331
8. Shell Programming and Scripting
Hi,
I have an input file that I have sorted in a previous stage by $1 and $4. I now need something that will take the first record from each group of data based on the key being $1
Input file
1000AAA|"ZZZ"|"Date"|"1"|"Y"|"ABC"|""|AA
1000AAA|"ZZZ"|"Date"|"2"|"Y"|"ABC"|""|AA... (2 Replies)
Discussion started by: Ads89
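Since the file is already sorted by the key, taking the first record of each group reduces to the same first-occurrence idiom keyed on field 1 (a sketch under that assumption, with made-up sample data shaped like the thread's):

```shell
# Keep only the first '|'-separated record per value of field 1,
# assuming the input is already sorted by that field
printf '1000AAA|"ZZZ"|1\n1000AAA|"ZZZ"|2\n2000BBB|"YYY"|1\n' |
awk -F'|' '!seen[$1]++'
# 1000AAA|"ZZZ"|1
# 2000BBB|"YYY"|1
```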
9. UNIX for Beginners Questions & Answers
Dear community, I am facing a problem and I kindly ask your help:
I have 4 different data sets consisting of 3 different types of array.
In each file, column 1 is the chromosome position, column 2 is the SNP id, etc. Let's say I have the following (bim) datasets:
x2014:
1 rs3094315... (4 Replies)
Discussion started by: fondan
LEARN ABOUT DEBIAN
clfmerge
clfmerge(1) logtools clfmerge(1)
NAME
clfmerge - merge Common-Log Format web logs based on time-stamps
SYNOPSIS
clfmerge [--help | -h] [-b size] [-d] [file names]
DESCRIPTION
The clfmerge program is designed to avoid using sort to merge multiple web log files. Web logs for big sites consist of multiple files in
the >100M size range from a number of machines. For such files it is not practical to use a program such as gnusort to merge the files
because the data is not always entirely in order (so the merge option of gnusort doesn't work so well), but it is not in random order (so
doing a complete sort would be a waste). Also the date field that is being sorted on is not particularly easy to specify for gnusort (I
have seen it done but it was messy).
This program is designed to simply and quickly sort multiple large log files with no need for temporary storage space or overly large buffers in memory (the memory footprint is generally only a few megs).
OVERVIEW
It will take a number (from 0 to n) of file-names on the command line, it will open them for reading and read CLF format web log data from
them all. Lines which don't appear to be in CLF format (NB they aren't parsed fully, only minimal parsing to determine the date is performed) will be rejected and displayed on standard-error.
If zero files are specified then there will be no error; it will just silently output nothing. This is for scripts which use the find command to find log files and which can't be counted on to find any log files; it saves doing an extra check in your shell scripts.
If one file is specified then the data will be read into a 1000 line buffer and it will be removed from the buffer (and displayed on standard output) in date order. This is to handle the case of web servers which date entries on the connection time but write them to the log at completion time and thus generate log files that aren't in order (Netscape web server does this - I haven't checked what other web servers do).
If more than one file is specified then a line will be read from each file, the file that had the earliest time stamp will be read from
until it returns a time stamp later than one of the other files. Then the file with the earlier time stamp will be read. With multiple
files the buffer size is 1000 lines or 100 * the number of files (whichever is larger). When the buffer becomes full the first line will
be removed and displayed on standard output.
OPTIONS
-b buffer-size
Specify the buffer size to use; if 0 is specified, the sliding-window sorting of the data (which improves the speed) is disabled.
-d Set domain-name mangling to on. This means that if a line starts with www.company.com as the name of the site that was requested then that would be removed from the start of the line and the GET / would be changed to GET http://www.company.com/, which allows programs like Webalizer to produce good graphs for large hosting sites. It will also convert the domain name to lower case.
EXIT STATUS
0 No errors
1 Bad parameters
2 Can't open one of the specified files
3 Can't write to output
AUTHOR
This program, its manual page, and the Debian package were written by Russell Coker <russell@coker.com.au>.
SEE ALSO
clfsplit(1),clfdomainsplit(1)
Russell Coker <russell@coker.com.au> 0.06 clfmerge(1)