Help optimizing sort of large files


 
# 22  
Old 11-14-2014
Hi.

You may wish to look over the documentation for msort. It has numerous options that standard sort does not have. Some might be useful for your task:
Code:
msort - sort records in complex ways
http://billposer.org/Software/msort.html
http://freecode.com/projects/msort

However, for tasks that can be done with both:
Quote:
... if GNU sort or BSD sort is capable of doing what you want, it will generally be faster [than msort]. The exact ratio varies with the details of the sort and the nature of the input, but in my tests, where msort and GNU sort are capable of performing the same sort, GNU sort is typically several times faster than msort. BSD sort seems to be slightly faster than GNU sort.
Best wishes ... cheers, drl
# 23  
Old 11-14-2014
Quote:
Originally Posted by Corona688
sort is a merge-sort, which has no need or use for gigantic memory buffers -- unless you want to use more memory than is available and eat into swap, that is. If you have very high-performance swap that can be useful. Otherwise, leave --buffer-size out and let it manage itself.
I'm pretty sure that the buffer-size parameter controls only pass 1, which is where actual sorting occurs. It controls the size of the first set of temporary files. All subsequent passes are merges. Moreover, the default on my system is similar to what I get when I specify 4g, and is distinctly sub-optimal.
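This is roughly how I've been checking that -- the paths and the 4G figure are placeholders for my setup, not a recommendation:
Code:
# Sketch only: /data/big.txt and /srv/sorttmp stand in for my real paths.
# Give pass 1 an explicit buffer and a dedicated temp directory, then watch
# the run files it writes appear in the -T directory.
sort --buffer-size=4G -T /srv/sorttmp -o /data/big.sorted /data/big.txt &

# On my system the pass-1 runs show up as files named sort??????; their sizes
# track the buffer setting (minus per-line bookkeeping overhead).
ls -lh /srv/sorttmp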

Quote:
Originally Posted by Corona688
--parallel should be a big performance gain -- if you have enough memory that it doesn't need to thrash your disk, and fast enough disks to keep up. If not, it will just make things worse.
In testing, the gain was real but not big. The elephant in the room here seems to be the number of passes, and I/O time is a large percent of the total. While threading improves overlap, they're still competing for use of the same input and output files and directories.
Quote:
Originally Posted by Corona688
I don't see any something-for-nothing solutions here. You won't squeeze out anything but percents here and there unless you deal with the bottlenecks. Every time you tell it "use more resources" and it slows down, that's a bottleneck. Every time you tell it "use less files" and it speeds up, that's a bottleneck.

1) More RAM -- the more the OS can cache, the less it has to wait on the disk. Brute force, but there's a reason RAM is popular: it works really well.
2) A different temp space. If you put /tmp/ on a different disk spindle than the file you are sorting, you can get the bandwidth of two disks instead of splitting the bandwidth of one disk several ways (and eliminate a lot of disk thrashing time). It doesn't have to be /tmp/ of course, sort -T puts the files wherever you ask.
3) Faster swap. Eat up more RAM than you have available and depend on an SSD to make up the difference. This could be good, though it sounds rather complicated to me.
Fair enough. But RAM is already 32GB, which maxes out my motherboard. Input, temporaries and output are already on three separate drives. Swap is separate and on an SSD, but I don't think I'm using it. So I'm squeezing out percentages. Fortunately, some of them are worth the effort; I'm already running about twice as fast as the built-in defaults, which have over-large buffers and not enough streams in a batch.
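For reference, the shape of the runs I've been timing looks roughly like this -- the mount points and numbers are placeholders for my three-drive layout, not tuned values:
Code:
# Rough shape of the command being timed; paths and numbers are placeholders.
#   --buffer-size : smaller than the default my build picks on its own
#   --batch-size  : merge more temporary run files in each pass
#   -T            : temporaries on their own spindle, separate from input and output
time sort --buffer-size=1G --batch-size=512 --parallel=4 \
     -T /mnt/tempdrive \
     -o /mnt/outdrive/big.sorted \
     /mnt/indrive/big.txt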
# 24  
Old 11-14-2014
Quote:
Originally Posted by kogorman3
In testing, the gain was real but not big. The elephant in the room here seems to be the number of passes, and I/O time is a large percent of the total. While threading improves overlap, they're still competing for use of the same input and output files and directories.
I know. I am concerned you might be making your buffers too large -- settings that are ideal for one process will demand twice what your computer can handle once you double the process count. Halve the buffer size and number of files every time you double the number of threads. Individually their efficiency could go down a bit, but collectively they can get masses more work done if you have enough cache to keep up with them.
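Something like this little loop (paths and the 4G starting buffer are placeholders, and it writes the result to /dev/null just to time the sort itself) would show whether that trade-off pays off on your hardware:
Code:
# Benchmarking sketch -- paths and the 4G starting point are placeholders.
# Double the thread count while halving the per-sort buffer, and time each combo.
for threads in 1 2 4 8; do
    bufmb=$(( 4096 / threads ))    # 4G, 2G, 1G, 512M
    echo "== --parallel=$threads --buffer-size=${bufmb}M =="
    /usr/bin/time -f '%e elapsed  %U user  %S sys' \
        sort --parallel="$threads" --buffer-size="${bufmb}M" \
             -T /mnt/tempdrive -o /dev/null /mnt/indrive/big.txt
done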

You might look into increasing the writeback time on your temp partition. Spending all your I/O time writing changes that are going to be invalidated 60 seconds later could be a waste; let the disk cache do the thrashing.

Which means leaving some RAM for disk cache of course. Using all available RAM forces it to do more I/O. 50/50 for sort vs cache might be a good split off the cuff.

I wonder if your SSD might be a better temp folder than swap -- like you say, you're not using it, and forcing your system to eat into swap can really interfere with caching. An SSD could support a much larger number of streams, potentially.
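On Linux the writeback knobs I had in mind are the vm.dirty_* sysctls -- they're system-wide rather than per-partition, and the values below are only a starting point to experiment with:
Code:
# Rough idea only; values are examples, not a recommendation, and they apply
# to the whole system rather than just the temp partition.
sudo sysctl vm.dirty_writeback_centisecs=1500   # wake the flusher threads less often
sudo sysctl vm.dirty_expire_centisecs=6000      # let dirty pages sit ~60s before forced writeback
sudo sysctl vm.dirty_background_ratio=10        # background flush threshold (% of RAM)
sudo sysctl vm.dirty_ratio=40                   # hard ceiling before writers start blocking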

Last edited by Corona688; 11-14-2014 at 04:14 PM..
# 25  
Old 11-14-2014
Well, a defragged disk helps, as sort makes sequential passes while merging data. Sequential access is the fastest pattern, provided the adjacent file pages really are adjacent disk pages.

If sort has a big RAM footprint, it can move out-of-place items farther on each pass: each pass reads from all of the sorted input streams into RAM and writes the smallest remaining record to the newest temporary output.

Fast sort space is critical, too, more critical than fast swap, but both are nice.
# 26  
Old 11-15-2014
Quote:
Originally Posted by Corona688
I know. I am concerned you might be making your buffers too large -- settings that are ideal for one process will demand twice what your computer can handle once you double the process count. Halve the buffer size and number of files every time you double the number of threads. Individually their efficiency could go down a bit, but collectively they can get masses more work done if you have enough cache to keep up with them.

You might look into increasing the writeback time on your temp partition. Spending all your I/O time writing changes that are going to be invalidated 60 seconds later could be a waste; let the disk cache do the thrashing.

Which means leaving some RAM for disk cache of course. Using all available RAM forces it to do more I/O. 50/50 for sort vs cache might be a good split off the cuff.

I wonder if your SSD might be a better temp folder than swap -- like you say, you're not using it, and forcing your system to eat into swap can really interfere with caching. An SSD could support a much larger number of streams, potentially.

My SSD is nowhere near big enough for /tmp, so that is assigned to a dedicated 2TB SATA drive. It would be large enough for these tests on a 14GB file, but my real workload has TB-sized inputs, so I test accordingly.

The tests seem to confirm that my buffers are too large. I was testing with 1 GB and larger, based on what the default settings are. But I'm going to be redoing the tests with smaller buffers and larger batches. So long as I can keep it down to 2 passes through the data, it may be much better that way. I wouldn't be surprised to find the best arrangement is 2,000 512-megabyte temporary files merged in a single pass. It all depends on how smart the merge is. If it bubble-sorts, of course, I'm sunk.
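Back-of-the-envelope, this is how I've been sanity-checking the pass count -- the sizes are placeholders for my test file and settings, and since real run files come out somewhat smaller than the buffer, it underestimates the run count a bit:
Code:
# Pass-count sanity check; sizes are placeholders for my test file and settings.
FILE_BYTES=$(( 14 * 1024 * 1024 * 1024 ))   # 14 GB test input
BUF_BYTES=$(( 512 * 1024 * 1024 ))          # 512 MB --buffer-size
BATCH=2000                                  # --batch-size

# Each buffer-load becomes (roughly) one temporary run file in pass 1.
RUNS=$(( (FILE_BYTES + BUF_BYTES - 1) / BUF_BYTES ))
echo "pass 1 produces about $RUNS runs"

if   [ "$RUNS" -le "$BATCH" ];             then echo "one merge pass"
elif [ "$RUNS" -le $(( BATCH * BATCH )) ]; then echo "two merge passes"
else                                            echo "three or more merge passes"
fi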

I'll have to research writeback time. However, when my input file is larger than RAM, I would expect all temps to be flushed before they get used.

---------- Post updated at 07:30 AM ---------- Previous update was at 07:19 AM ----------

Quote:
Originally Posted by DGPickett
Well, a defragged disk helps, as sort makes sequential passes while merging data. Sequential access is the fastest pattern, provided the adjacent file pages really are adjacent disk pages.

If sort has a big RAM footprint, it can move out-of-place items farther on each pass: each pass reads from all of the sorted input streams into RAM and writes the smallest remaining record to the newest temporary output.

Fast sort space is critical, too, more critical than fast swap, but both are nice.
Defragged? I'm not sure how to do that on Linux, so I am using a fresh 2-TB drive formatted ext4 and directing sort to use it for temporaries. It's otherwise empty.

I've got 32 GB RAM and a 64-bit CPU. That's big enough for the whole test file, but the parameters to sort don't let it work that way.

Sort space vs swap space? Not sure, but I guess you mean disk. They're all SATA-III, 7200-RPM.

---------- Post updated at 08:00 AM ---------- Previous update was at 07:30 AM ----------

Quote:
Originally Posted by drl
Hi.

You may wish to look over the documentation for msort. It has numerous options that standard sort does not have. Some might be useful for your task:
Code:
msort - sort records in complex ways
http://billposer.org/Software/msort.html
http://freecode.com/projects/msort

However, for tasks that can be done with both, GNU sort will generally be faster.

Best wishes ... cheers, drl
Thanks for the pointer. I had never seen msort. A quick look at the documentation makes it clear that for the current task, GNU sort will be just fine, but I'll keep msort in mind.

Last edited by kogorman3; 11-15-2014 at 12:16 PM..
# 27  
Old 11-17-2014
Quote:
Originally Posted by kogorman3
Defragged? I'm not sure how to do that on Linux, so I am using a fresh 2-TB drive formatted ext4 and directing sort to use it for temporaries. It's otherwise empty.
ext4 partitions are relatively easy to defrag, being designed with runtime defragmentation in mind (yes, runtime -- no need to unmount) via the e4defrag utility. There's no point defragging an empty partition, but check that your input and output partitions aren't a mess after all this testing.
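Checking costs almost nothing; -c only reports a per-file score and an overall verdict without moving anything. The mount points below are placeholders for your input and output partitions:
Code:
# Mount points are placeholders for the input and output partitions.
# -c only reports fragmentation; run without -c to actually defragment.
sudo e4defrag -c /mnt/indrive
sudo e4defrag -c /mnt/outdrive

# Defragment a specific file or directory in place, while mounted:
sudo e4defrag /mnt/outdrive/big.sorted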
Quote:
I've got 32 GB RAM and a 64-bit CPU. That's big enough for the whole test file, but the parameters to sort don't let it work that way.
The process of merge-sorting doesn't work that way. No matter how big your buffers are, it has to do the same number of merges on the same number of elements of the same sizes, nearly all of them tiny: starting with billions of 2-element merges, then half as many 4-element merges, and so on. (A slight oversimplification, but the merging options don't substantially change this.) That's why pushing buffers to ridiculous sizes is so little help -- they're nearly always dead weight except for the final merge, when it's never going to be big enough to matter anyway.

Last edited by Corona688; 11-17-2014 at 11:17 AM..
# 28  
Old 11-17-2014
Quote:
Originally Posted by Corona688
ext4 partitions are relatively easy to defrag, being designed with runtime defragmentation in mind (yes, runtime -- no need to unmount) via the e4defrag utility. There's no point defragging an empty partition, but check that your input and output partitions aren't a mess after all this testing.
Nice to know. I tried e4defrag and it showed a fragmentation score of 0 on all directories.

Quote:
The process of merge-sorting doesn't work that way. No matter how big your buffers are, it has to do the same number of merges on the same number of elements of the same sizes, nearly all of them tiny: starting with billions of 2-element merges, then half as many 4-element merges, and so on. (A slight oversimplification, but the merging options don't substantially change this.) That's why pushing buffers to ridiculous sizes is so little help -- they're nearly always dead weight except for the final merge, when it's never going to be big enough to matter anyway.
I'm still a bit new to this, even after peeking at the source code. But it seems to me that there are two distinct phases to GNU sort. Both are merge sorts, but there's a big difference between merging in RAM and merging disk files. The main point of my optimizing effort turned out to be minimizing the number of file merges. Here's how I think of it now:

If all of the data can fit in the specified buffer, no temporary files will be created, and the output of the single in-RAM merge will go to the output file.

Otherwise, buffer-loads of data will be merged in RAM, and output to temporary files. There will be at least 2 of these, and perhaps a great many. These will be merged <batch-size> files at a time, with all the I/O time you'd expect. It's therefore desirable to ensure that no more than <batch-size> temporary files are created. If that's impossible, then limit it to the square, or even the cube of that number. Both buffer-size and batch-size affect the achievement of these goals.
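One practical wrinkle (the numbers below are examples, not my exact settings): as I understand it, --batch-size can't exceed the process's open-file limit, since every file in a merge batch is held open at once, so asking for a very wide merge may also mean raising ulimit -n.
Code:
# Numbers are examples. As I understand it, --batch-size is bounded by the
# open-file limit, because each file in a merge batch stays open simultaneously.
ulimit -n            # show the current per-process limit
ulimit -n 4096       # raise it for this shell (may need root or limits.conf changes)

# Then a wide merge can collapse ~2000 pass-1 runs in a single pass:
sort --buffer-size=512M --batch-size=2000 \
     -T /mnt/tempdrive -o /mnt/outdrive/big.sorted /mnt/indrive/big.txt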

Of course, even this is an oversimplification. It ignores effects of the TLB, of the L1 and L2 caches, and the kernel's buffer cache. Thus my interest in measuring actual times.

This is not going well, and I don't know why. Successive runs report quite different times. It's true the system is used for other things, but not heavily -- it's all my personal use. This is showing up in elapsed time, but not much in either system or user time. Odd. Moreover, the results of the second runs were consistently worse than the first. I'm trying third runs now.
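What I plan to try next, in case cache state is what's skewing the elapsed times (dropping caches needs root and affects the whole system, so only between benchmark runs; paths and settings are placeholders):
Code:
# Between runs: flush dirty pages and drop the page cache so each run starts cold.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# GNU time's verbose mode breaks out elapsed, user, sys and major page faults:
/usr/bin/time -v \
    sort --buffer-size=512M --batch-size=2000 --parallel=4 \
         -T /mnt/tempdrive -o /mnt/outdrive/big.sorted /mnt/indrive/big.txt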