UNIX for Advanced & Expert Users: Help optimizing sort of large files
Post by kogorman3, 11-15-2014, 11:00 AM
Quote:
Originally Posted by Corona688
I know. I am concerned you might be making your buffers too large -- settings that are ideal for one process become twice what your computer can handle once you double the number of processes. Halve the buffer size and the number of files every time you double the number of threads. Individually their efficiency may go down a bit, but collectively they can get far more work done if you have enough cache to keep up with them.

You might look into increasing the writeback time on your temp partition. Spending all your I/O time writing changes that are going to be invalidated 60 seconds later could be a waste; let the disk cache do the thrashing.

That means leaving some RAM for disk cache, of course. Using all available RAM for sort forces more I/O. Off the cuff, a 50/50 split between sort and cache might be a good one.

I wonder if your SSD might be a better temp folder than swap -- like you say, you're not using it, and forcing your system to eat into swap can really interfere with caching. An SSD could potentially support a much larger number of streams.

My SSD is nowhere near big enough for /tmp, so /tmp is assigned to a dedicated 2 TB SATA drive. The SSD would be large enough for these tests on a 14 GB file, but my real workload has TB-sized inputs, so I test accordingly.
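For the record, pointing sort's temporaries at the dedicated drive is just a matter of -T or TMPDIR; a minimal sketch, with /mnt/tmp2 standing in for that drive's mount point (the path is only an assumption):
Code:
# Send sort's temporary files to the dedicated 2 TB drive instead of /tmp.
sort -T /mnt/tmp2 bigfile > bigfile.sorted

# Equivalently, via the environment (sort honors TMPDIR when -T is not given):
TMPDIR=/mnt/tmp2 sort bigfile > bigfile.sorted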

The tests seem to confirm that my buffers are too large. I was testing with 1 GB and larger, based on the default settings, but I'm going to redo the tests with smaller buffers and larger batches. So long as I can keep it down to two passes through the data, it may be much better that way. I wouldn't be surprised to find the best arrangement is 2,000 512-megabyte temporary files merged in a single pass; something like the sketch below. It all depends on how smart the merge is. If it bubble-sorts, of course, I'm sunk.
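As far as I can tell from the GNU sort documentation, the merge step is an NMERGE-way merge of up to --batch-size runs at once (default 16), not anything quadratic, and the batch size is bounded by the open-file limit. A hedged sketch of the kind of invocation I have in mind; the numbers and the /mnt/tmp2 temp path are placeholders, not measured settings:
Code:
# 512 MB runs: a 14 GB test file yields about 28 runs and a 1 TB input about 2,048,
# so --batch-size=2048 should keep the merge to a single pass (check ulimit -n first).
sort --buffer-size=512M --batch-size=2048 --parallel=4 \
     -T /mnt/tmp2 bigfile > bigfile.sorted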

I'll have to research writeback time. However, when my input file is larger than RAM, I would expect all the temporaries to be flushed to disk before they are read back anyway.
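For anyone following along, the writeback knobs appear to live under /proc/sys/vm and are system-wide rather than per-partition; a minimal sketch with placeholder values:
Code:
# How old dirty pages must get before the flusher writes them back (centiseconds),
# and how often the flusher threads wake up.
cat /proc/sys/vm/dirty_expire_centisecs
cat /proc/sys/vm/dirty_writeback_centisecs

# Raise both (example values only; this affects the whole system, not just the temp drive).
sudo sysctl vm.dirty_expire_centisecs=6000
sudo sysctl vm.dirty_writeback_centisecs=3000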

---------- Post updated at 07:30 AM ---------- Previous update was at 07:19 AM ----------

Quote:
Originally Posted by DGPickett
A well-defragmented disk helps, as sort does sequential passes merging data. As an access pattern, sequential is the fastest, provided the adjacent file pages really are adjacent disk pages.

If sort has a big RAM footprint, it can move out-of-place items farther on each pass: each pass reads from all the sorted input streams into RAM, and the smallest remaining item is written to the newest temp output.

Fast sort space is critical, too, more critical than fast swap, but both are nice.
Defragged? I'm not sure how to do that on Linux, so I am using a fresh 2 TB drive formatted as ext4 and directing sort to use it for temporaries. It's otherwise empty.
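In case it's useful, e2fsprogs does ship fragmentation-reporting tools for ext4; a hedged sketch, assuming the drive is mounted at /mnt/tmp2 and sort's temporaries keep their default names:
Code:
# Per-file extent counts for whatever temporaries sort leaves behind
# (a single extent means the file is unfragmented).
sudo filefrag /mnt/tmp2/sort*

# Fragmentation report for the whole filesystem (drop -c to actually defragment).
sudo e4defrag -c /mnt/tmp2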

I've got 32 GB of RAM and a 64-bit CPU. That's big enough to hold the whole test file, but the parameters to sort don't let it work that way.

Sort space vs. swap space? Not sure, but I guess you mean disk. They're all SATA III, 7200 RPM drives.

---------- Post updated at 08:00 AM ---------- Previous update was at 07:30 AM ----------

Quote:
Originally Posted by drl
Hi.

You may wish to look over the documentation for msort. It has numerous options that standard sort does not. Some might be useful for your task:
Code:
msort - sort records in complex ways
http://billposer.org/Software/msort.html
http://freecode.com/projects/msort

However, for tasks that can be done with both:

Best wishes ... cheers, drl
Thanks for the pointer. I had never seen msort. A quick look at the documentation makes it clear that for the current task, GNU sort will be just fine, but I'll keep msort in mind.

Last edited by kogorman3; 11-15-2014 at 12:16 PM..
 
