Help optimizing sort of large files


 
# 29  
Old 11-17-2014
Quote:
Originally Posted by kogorman3
Nice to know. I tried e4defrag and it showed a fragmentation score of 0 on all directories.

I'm still a bit new to this, even after peeking at the source code. But it seems to me that there are two distinct phases to GNU sort. Both are merge sorts, but there's a big difference between merging in RAM and merging disk files.
Not really.

If your files can fit in memory, either way you do it -- cache or sort buffers -- it will be held in RAM.
Quote:
The main point of my optimizing effort turned out to be minimizing the number of file merges.
Yes, too many files at once is bad since the disk has to seek them individually.

I don't think this has as much to do with the size of the temp files as with their number. Doing a lot of seeking on a non-SSD disk makes its performance really bad. I once measured a disk's random read performance with caching disabled -- hopping from one sector to another, reading then moving on, gets you fifty kilobytes per second on a disk good for 100 MB/s. Using all available RAM is about as bad as turning off caching, by the way. Worse, actually -- memory pressure will start pushing out useful things, making programs wait pointlessly for bits of code to be paged back in when they're needed.

Cache could also explain the differing times your runs take. If significant parts of your file are left in cache, that could speed up the next run.
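If you want to repeat that kind of measurement yourself, something along these lines works -- direct I/O keeps the page cache out of the picture. The device name and offsets here are just placeholders, so adjust them for your own disk:
Code:
# Sequential throughput: read 512 MB straight off the device, bypassing the page cache.
sudo dd if=/dev/sdX of=/dev/null bs=1M count=512 iflag=direct

# Seek-heavy throughput: 200 single 4K reads at scattered offsets, one seek each.
time for i in $(seq 1 200); do
    skip=$(( (RANDOM * 32768 + RANDOM) % 25000000 ))   # random 4K-block offset; keep it within the disk
    sudo dd if=/dev/sdX of=/dev/null bs=4k count=1 skip=$skip iflag=direct 2>/dev/null
done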
# 30  
Old 11-17-2014
Quote:
Originally Posted by Corona688
Not really.

If your files can fit in memory, either way you do it -- cache or sort buffers -- it will be held in RAM. Yes, too many files at once is bad since the disk has to seek them individually.
I'm testing with my idea of a small file: 13 GB. My targets are more like TB-sized. These are the cases that motivate optimizing GNU sort. For modest-sized files, I wouldn't bother.

Quote:
I don't think this has as much to do with the size of the temp files as with their number. Doing a lot of seeking on a non-SSD disk makes its performance really bad. I once measured a disk's random read performance with caching disabled -- hopping from one sector to another, reading then moving on, gets you fifty kilobytes per second on a disk good for 100 MB/s. Using all available RAM is about as bad as turning off caching, by the way. Worse, actually -- memory pressure will start pushing out useful things, making programs wait pointlessly for bits of code to be paged back in when they're needed.
Size and number are linked. If they're bigger, there are fewer of them. So I would love for the cache to contain all of the data from the files being merged. I don't know how to arrange that, but I'm hoping my data will guide me to it indirectly.

Quote:
Cache could also explain the differing times your runs take. If significant parts of your file are left in cache, that could speed up the next run.
True. I do not do anything to clear the caches before each run, but the tests are done by scripts that always do them in the same order, so I would expect more uniformity than I've seen so far.
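If cache does turn out to be skewing the comparisons, I could have the script flush it between runs -- something like this, I believe (needs root; this is the standard Linux knob, so treat it as a sketch):
Code:
# Flush dirty pages to disk, then drop the page cache, dentries and inodes
# so the next timed run starts cold.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches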
# 31  
Old 11-18-2014
Quote:
Originally Posted by kogorman3
Size and number are linked. If they're bigger, there are fewer of them.
It doesn't work that way. It doesn't run 16384 individual merges simultaneously, then 4096 merges simultaneously, then 1024, and so on. It always merges the number of files you tell it to at a time, as many times as it takes to work through the whole list.

I don't think small sorts hurt you, especially since they're small enough to be cached. What hurts you are merges on too many files at once for the disk to seek between, reducing its I/O throughput.

Remember you are trying to find a "sweet spot" where CPU use and disk throughput are both at peak -- where the system can sustain full disk and cpu use.
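GNU sort exposes both knobs directly (--buffer-size for the in-RAM runs, --batch-size for how many temp files get merged at once), so you can sweep them and watch where the sweet spot falls. A rough sketch -- the file name, value ranges and temp directory are placeholders:
Code:
#!/bin/bash
# Sweep sort's buffer size and merge batch size, timing each combination.
for buf in 64M 256M 1G; do
    for batch in 8 16 32 64; do
        echo "buffer=$buf batch=$batch"
        /usr/bin/time -v sort --buffer-size=$buf --batch-size=$batch \
            -T /mnt/tmp -o /dev/null bigfile.txt
    done
done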

Quote:
True. I do not do anything to clear the caches before each run, but the tests are done by scripts that always do them in the same order, so I would expect more uniformity than I've seen so far.
Check if you're eating into swap sometimes. Hitting swap could have severe performance penalties that throw off your tests.
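A quick way to check while a run is going (standard tools):
Code:
# Nonzero si/so columns while sort is running means it's hitting swap.
vmstat 5
# Or compare swap usage before and after a run:
free -h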
# 32  
Old 11-18-2014
You could also check out external sorting. Since you know the data is much larger than RAM, this may be higher performance since it's purely sequential. You can control exactly how much your disk is being hit by what when.

Basically, split your n-terabyte file into chunks small enough to fit purely in memory -- 50% of memory size is a good guess -- and sort them, then merge them. This will give you full control of how many files are being read when, to the point the limit becomes disk speed (as it always was, ideally).
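A minimal sketch of that with stock GNU tools -- split -C keeps lines intact, sort -m does the final merge. The input name, chunk size and work directory are placeholders, so scale them to your RAM:
Code:
#!/bin/bash
# External sort: split into RAM-sized chunks, sort each chunk, then one merge pass.
INPUT=bigfile.txt
WORK=/mnt/work
mkdir -p "$WORK"

split -C 8G -d "$INPUT" "$WORK/chunk."                 # 8G chunks of whole lines
for f in "$WORK"/chunk.*; do
    sort -S 12G -o "$f.sorted" "$f" && rm "$f"         # buffer > chunk so each sort stays in RAM
done
sort -m --batch-size=32 -o sorted.txt "$WORK"/chunk.*.sorted   # single sequential merge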
# 33  
Old 11-18-2014
Quote:
Originally Posted by Corona688
It doesn't work that way. It doesn't run 16384 individual merges simultaneously, then 4096 merges simultaneously, then 1024, and so on. It always merges the number of files you tell it to at a time, as many times as it takes to work through the whole list.
That makes no sense to me. If I tell it to make 1GB temporaries, my 13GB test file will make 13 of them and probably merge just once. If I tell it to make 1MB temporaries, it will make 13,000 of them, and likely have to merge the outputs of the first set of merges at least one extra time, handling the same data more times with more disk I/O.

My real TB-sized inputs will see this even more.

BTW, on skimming the sort source code, I found that by default it tries to make the buffer-size parameter 1/8 of physical memory: 4 GB in my case. That was so slow that I started this investigation. Smaller turned out to be better, possibly because neither the TLB nor the caches could cope with such a huge working set during the initial in-RAM merges.

Quote:
I don't think small sorts hurt you, especially since they're small enough to be cached. What hurts you are merges on too many files at once for the disk to seek between, reducing its I/O throughput.

Remember you are trying to find a "sweet spot" where CPU use and disk throughput are both at peak -- where the system can sustain full disk and cpu use.
Not sure what you mean about small sorts; I can't have them. I have large files, requiring large sorts. The files are large enough that I do not expect anything from previous merges to be available if the data has to be merged again, and I expect the data from different inputs to be far enough apart on disk to require seeks no matter what I do. Accordingly, I'd like to minimize the number of merge passes.

Even with these large sizes, sweet spots are exactly what I'm looking for.

Quote:
Check if you're eating into swap sometimes. Hitting swap could have severe performance penalties that throw off your tests.
I'm looking. But with the modest parameters I'm giving it, I hardly expect my 32-GB RAM to need to swap. The largest buffer-size I've ever tried is 4g, and that was already a bad idea and I don't do that any more. I'm distinctly sub-G now.
# 34  
Old 11-18-2014
Quote:
Originally Posted by kogorman3
That makes no sense to me. If I tell it to make 1GB temporaries, my 13GB test file will make 13 of them and probably merge just once. If I tell it to make 1MB temporaries, it will make 13,000 of them, and likely have to merge the outputs of the first set of merges at least one extra time, handling the same data more times with more disk I/O.
This is because we are talking about different problems. You are talking about how many files it must split into -- I am talking about how many files are being merged at once.

Merging fewer files at once will mean less disk thrashing but more passes. Since you do not have an SSD input, the performance penalty for sustained disk thrashing -- skipping from file1 to file2 to file3 to file4 and back to file1 -- is very, very high, easily a factor of hundreds. This could also be throwing off your results if the disk only thrashes sometimes, due to unfortunate coincidences of access patterns and cache.

Using the SSD as temp space may still be useful too, even if it's smaller than your file. It's another independent "spindle" you could be using to merge partial files in, particularly with the 'external sort' idea. Create n 20-gig sorted files on your SSD, merge them into one big sorted file on your main spindle, rinse, repeat.
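Concretely, that might look something like this -- the mount points, sizes and file names are placeholders:
Code:
# Chunk sorts write their output and temp files to the SSD instead of the busy main spindle.
sort -T /ssd/tmp -S 8G -o /ssd/part.03.sorted /bigdisk/chunks/part.03

# Merge one SSD-full of sorted parts into a single big run on the main disk,
# then delete them and repeat with the next batch of parts.
sort -m -T /ssd/tmp -o /bigdisk/run.01.sorted /ssd/part.*.sorted
rm /ssd/part.*.sorted

# Finish with one last merge of the /bigdisk/run.*.sorted files.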

Last edited by Corona688; 11-18-2014 at 12:32 PM..
# 35  
Old 11-18-2014
If you look at the first 10 characters of each row and create a file for each possible combination of those first 10 characters, you would have 59,049 files, if my math is correct. You can then put each row into its corresponding file. Then, when you sort each file, it is already sorted to the 10th character, and the remaining sort is easier to do and easier to do in parallel. Since you would not want multiple processes appending lines to the same file at the same time, the initial bucket sort should probably be done as a single process. Then each of the 59,049 files can be sorted with as many processes as you want. Plus, each file will be smaller than what you have now.

You can also break up the files by using the first three characters as directory names; that way, each directory three levels down would have 1/9th of the total files. This would also make it easier to bucket sort on more than 10 characters. I included a file of each possible combination of characters for the first 10 characters. This approach assumes an even distribution of data across the first 10 characters.
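A rough sketch of that bucketing pass in awk (one writer process, as described; the directory scheme, paths and the parallel sort afterwards are just illustrative):
Code:
# Distribute rows into bucket files named by their first 10 characters,
# grouped under directories named by the first 3 characters.
awk '{
    dir  = "buckets/" substr($0, 1, 3)
    file = dir "/" substr($0, 1, 10)
    if (!(dir in made)) { system("mkdir -p " dir); made[dir] = 1 }
    print >> file
    close(file)          # stay under the open-file limit (simple, though not fast)
}' bigfile.txt

# Then sort the buckets in parallel, e.g. four at a time:
find buckets -type f | xargs -P 4 -I{} sort -o {}.sorted {}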

I would still load the data into a database, though...