There is no way for me to know if this will help at all, but I had some fun playing. All of this was on a 16 core 16GB desktop running Cygwin. Linux often performs better on some things because Cygwin is really sitting on top of the Windows runtime library.
As an extrapolation from my testing below:
Try this: since you have four cores, run four independent sort threads, one per core; break the large file into 256 chunks, sort each, and then merge the 256 chunks (NMERGE in the documentation). This will improve throughput.
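As a minimal sketch of that suggestion, assuming GNU coreutils sort (`--parallel` sets the thread count, `--batch-size` is the NMERGE limit on how many temporary files get merged at once); `bigfile` and `sorted.out` are placeholder names for the real 300GB input and output:

```shell
# tiny stand-in for the huge input file
printf 'banana\napple\ncherry\n' > bigfile
# four sort threads, merge up to 256 temp files at once, 64M memory per pass
sort --parallel=4 --batch-size=256 -S 64M -o sorted.out bigfile
cat sorted.out
```

On the real file you would raise `-S` to a good fraction of RAM so fewer temporary chunks get spilled to disk in the first place.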
Your data presents problems.
1. The key is the entire line. Using a tag sort (some kind of key compression on the first 64 bytes to turn the data into an integer, or a set of three integers) speeds up comparison times by an order of magnitude. With a numeric comparison on that first radix key (if you like the term), sort uses three 64-bit words, built by translating each of the 64 characters to 3 bits (64 x 3 bits = 192 bits = three words), so a comparison is three compare ops versus having to compare 64 characters.
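A hedged sketch of the tag-sort idea with awk: pack a few leading characters into one integer key, sort numerically on the key, then strip it. The file name and the 5-bit-per-character packing are illustrative assumptions (5 bits is simpler to demo than the 3-bit translation described above), not the exact method:

```shell
printf 'delta x\nalpha y\ncharlie z\n' > data.txt
awk '{
    key = 0
    # pack up to 8 leading characters, 5 bits each, into one integer key;
    # characters outside a-z (and past end of line) map to 0
    for (i = 1; i <= 8; i++) {
        c = (i <= length($0)) ? index("abcdefghijklmnopqrstuvwxyz", substr($0, i, 1)) : 0
        key = key * 32 + c
    }
    # zero-padded key, tab, original line
    printf "%015d\t%s\n", key, $0
}' data.txt | sort -k1,1n | cut -f2- > tag_sorted.txt
cat tag_sorted.txt
```

Each comparison is now one numeric compare on the key instead of a character-by-character compare of the whole line; ties on the key would need a secondary compare in real use.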
2. Locality. Line 1301010 may really need to come after line 14 in the final output.
By setting NMERGE to a reasonable value (300GB / 256 ≈ 1.17GB per chunk), the first pass has much less of a locality problem, since each 1.17GB chunk easily fits in memory.
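The chunk-size arithmetic above, as a quick check (300 is the 300GB figure from the question):

```shell
# 300GB input split into 256 merge chunks
awk 'BEGIN { printf "%.2f GB per chunk\n", 300 / 256 }'
```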
3. sort uses the disk intensively. Make sure TMPDIR is assigned to a filesystem on a separate disk with lots of CONTIGUOUS free space, i.e. pretty much wiped empty.
An SSD will make a huge difference in performance.
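A minimal sketch of pointing sort's scratch space somewhere else; here `mktemp -d` stands in for a directory on a separate, mostly-empty (ideally SSD) disk:

```shell
scratch=$(mktemp -d)          # in real use: a directory on its own fast disk
printf '3\n1\n2\n' > input.txt
# sort honors TMPDIR for its temp files; -T "$scratch" would do the same
TMPDIR="$scratch" sort -o output.txt input.txt
cat output.txt
```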
I dummied up a file with 27,183,337 lines, almost exactly 2GB, and ran it on Cygwin. The box has 16 cores and 16GB of memory. This run used a 7200rpm drive, not an SSD.
Notice that the user + sys time adds up to more than the elapsed time. Why? Parallelism.
This is about 40% of the wall time compared with the slow disk. sys is better due to shorter I/O request queues, and it appears to me to be the limiting factor. The number of comparisons was the same, as was a lot of the other overhead, so user is about the same.
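To reproduce the shape of that measurement (not the numbers, which depend entirely on your disk and CPU), time a parallel sort yourself; with `--parallel`, user + sys can exceed the wall time. This assumes bash's `time` keyword and GNU `shuf`:

```shell
seq 2000000 | shuf > timing_input.txt       # random-order test data
time sort --parallel=4 -o timing_sorted.txt timing_input.txt
```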