Help optimizing sort of large files


 
# 1  
Old 11-07-2014
Help optimizing sort of large files

I'm doing a hobby project that has me sorting huge files with somewhat monotonous keys. It's very slow: the current file is about 300 GB and has been sorting for a day. I know that sort has --batch-size and --buffer-size parameters, but I'd like a jump start if possible, to limit the number of days I spend fooling around finding what works. The man page doesn't even tell me what the default values are.

I have a 4-core 64-bit AMD processor, 32 GB RAM and plenty of hard drive space.

I would normally do a preliminary radix sort on files this size, but it's a bit awkward with these keys. The keys represent positions in a game and consist of 64 bytes of just 3 values: x, o and -, always beginning with x. They tend to form clusters that are similar at the beginning, and vary from run to run. It's not clear how to form the radix. Here are records from various spots in a typical run....

Code:
xo-xoooox---x-------ooox------------xxo--x----------o-xx-------- -45 20 -1 1
xo-xo-oox---x-------ooox------------xxo--x----------o-xx-------- +45 19 05 1
xo-xo-oox---x-------ooox------------x-o--x----------o-xx-------- -45 18 37 2
xo-xo-o-x---x-------ooox------------x-o--x----------o-xx-------- +45 17 07 3
xo-xo-o-x---x-------ooox--------------o--x----------o-xx-------- -45 16 36 4

xxxooxx-oooxo-------ooo--xo-----------o--xxo-----------x----xoxx -31 26 23 8
xxxooxx-oooxo--------oo--xo-----------o--xxo-----------x----xoxx +31 25 20 9
xxxoo-x-oooxo--------oo--xo-----------o--xxo-----------x----xoxx +35 24 20 457
xxxoo-x-oooxo--------o---xo-----------o--xxo-----------x----xoxx -35 23 22 458
xx-oo-x-oooxo--------ox--xo-----------o--xxo-------o---x---xxoxx -35 25 02 0


xx-ooxx-oooxo--------o---x------------o--xxo-------------------x +34 18 55 5344
xx-ooxx-oooxo--------o---x---------------xxo-------------------x -34 17 38 5345
xx-ooxx-ooo-o--------o---x---------------xxo-------------------x +34 16 11 5346
xx-ooxx-oo--o--------o---x---------------xxo-------------------x -34 15 10 5347
xx-oo-x-oo--o--------o---x---------------xxo-------------------x +34 14 26 1237242

# 2  
Old 11-07-2014
Can you post what you've tried and a sample of the input and output...
# 3  
Old 11-07-2014
All I've tried is a plain old call to sort(1). I'm unhappy with the time it takes.

It doesn't make sense to post the inputs or outputs, as they are over 300 GB in size, and are much like the little snippets in the original posting.

The sort time is roughly one day. I have hundreds of these to do. Any improvement will be well worth the effort, but I'm hoping for some expert advice to get me started. Otherwise, I'll just try some stuff.
# 4  
Old 11-07-2014
One obvious way to use all your CPUs is to split the file into N hunks, run a sort on each, and pipe their outputs to "sort -m <pipe1> . . . <pipeN>". It helps if the file is, or can be made into, a regular array of fixed-length records rather than line-delimited text, but it is not hard to write or find a tool that seeks to a starting offset of N bytes plus a linefeed and stops after M bytes at a linefeed. The bash/ksh construct <(...) is great for managing the pipes. N should probably be 2-3x the number of CPU cores, assuming a lot of I/O block time.
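To make that concrete, here is a rough, untested sketch of the split-then-merge idea, assuming GNU coreutils and bash; the chunk count, buffer size, and file and directory names are placeholders, and split -n l/8 stands in for the seek-to-linefeed tool described above:
Code:
export LC_ALL=C                           # plain byte comparisons, no locale overhead
TMP=/some/scratch/dir                     # placeholder scratch directory

split -n l/8 infile "$TMP/chunk."         # 8 hunks, cut on line boundaries

for c in "$TMP"/chunk.*; do               # sort each hunk in the background
    sort --buffer-size=2g -o "$c.sorted" "$c" &
done
wait

sort -m "$TMP"/chunk.*.sorted -o outfile  # single merge pass
rm -f "$TMP"/chunk.*

With only a handful of hunks the same thing can be driven entirely through process substitution, e.g. sort -m <(sort -S2g part1) <(sort -S2g part2), which skips the intermediate .sorted files.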

Reducing the data by encoding will also help: "xo-" is worth 6 bits, not 24, and "1237242" is worth 4 bytes, not 7.
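The densest packing is binary, but if you want the file to stay sortable as plain text, a related trick is to pack each group of 3 board cells (27 possible combinations) into one printable character drawn from an increasing ASCII range, shrinking the 64-byte key to 22 bytes while preserving LC_ALL=C order. A rough, untested awk sketch; the code alphabet and file names are made up for the example:
Code:
awk '
BEGIN {
    v["-"] = 0; v["o"] = 1; v["x"] = 2      # same relative order as the C locale
    codes = "ABCDEFGHIJKLMNOPQRSTUVWXYZ["   # 27 characters in increasing ASCII order
}
{
    key = $1; packed = ""
    for (i = 1; i <= 64; i += 3) {
        g = substr(key, i, 3)
        while (length(g) < 3) g = g "-"     # pad the final, short group
        n = 9*v[substr(g,1,1)] + 3*v[substr(g,2,1)] + v[substr(g,3,1)]
        packed = packed substr(codes, n + 1, 1)
    }
    $1 = packed
    print
}' infile > infile.packed

Decoding is the mirror image of the same loop, so the readable form can be regenerated after sorting.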

Putting the data into an RDBMS means you can maintain indices on the keys, allowing sorted random access or sorted storage.
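For what it's worth, here is a minimal sqlite3 sketch of that route, untested at anything like this scale; the database, table and column names are invented, and the four numeric fields are simply imported as text:
Code:
sqlite3 positions.db <<'SQL'
CREATE TABLE positions (board TEXT, f2 TEXT, f3 TEXT, f4 TEXT, f5 TEXT);
.separator " "
.import infile positions
CREATE INDEX idx_board ON positions(board);
SQL

# Ordered output then comes from the index rather than a separate sort run:
sqlite3 -separator ' ' positions.db \
    'SELECT * FROM positions ORDER BY board;' > outfile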

Faster storage is also key to high performance in big data. Distributing the data over many spindles and controllers gives higher bandwidth.
# 5  
Old 11-07-2014
Quote:
Originally Posted by kogorman3
All I've tried is a plain old call to sort(1). I'm unhappy with the time it takes.

It doesn't make sense to post the inputs or outputs, as they are over 300 GB in size, and are much like the little snippets in the original posting.

The sort time is roughly one day. I have hundreds of these to do. Any improvement will be well worth the effort, but I'm hoping for some expert advice to get me started. Otherwise, I'll just try some stuff.
I wasn't asking you to post the entire input and output... just a representative sample, i.e. about 10 lines of input and 10 lines of output...
# 6  
Old 11-08-2014
My own results

Being the impatient sort, I tried a few things.

It appears that, on my machine at least, the default --buffer-size is 4 GB, which yields 1.8 GB temporary files. I don't know why the temporaries come out at roughly half the buffer size; I surmise that pass 1 runs qsort on a full buffer to produce the initial temporaries.

It appears that, on my machine at least, the default --batch-size is 16, yet for some reason the next level of temporaries is formed by merging 15 smaller ones into a bigger one. Again, don't ask me why it comes up one short.

As I fooled with the settings, the execution times I got fell into discrete steps, roughly multiples of each other.

I surmise that the execution time is determined largely by the number of passes over the data. I am able to get that down to two passes on most of my datasets, and 3 at most. I did this by upping the limit on open files to the hard limit of 4096, and setting the batch size to 1000, plus setting the buffer size to 10 GB. I'm still experimenting to find a sweet spot.

I think staging multiple sorts might be counter-productive, since if I'm right, two is the minimum number of passes: one to create the temporaries and one to write the final result.

The idea of encoding the data is interesting. I'll explore it when I'm sure everything else works. I really like having readable data when debugging, which I am still doing.

BTW, you already have more than 10 lines of input. You can sort them to see the output. I sort on the whole record for this testing phase. Nothing to see here, folks.

My bash code now looks something like this:
Code:
ulimit -Sn hard
export TMPDIR=/mnt/h2/tmp
export LC_ALL=C
export SORT="--buffer-size=1000 --buffer-size=10g"

sort $SORT <infile> --output <outfile>


# 7  
Old 11-08-2014
Quote:
Originally Posted by kogorman3
... ... ...

I surmise that the execution time is determined largely by the number of passes over the data. I am able to get that down to two passes on most of my datasets, and 3 at most. I did this by upping the limit on open files to the hard limit of 4096, and setting the batch size to 1000, plus setting the buffer size to 10 GB. I'm still experimenting to find a sweet spot.

... ... ...

My bash code now looks something like this:
Code:
ulimit -Sn hard
export TMPDIR=/mnt/h2/tmp
export LC_ALL=C
export SORT="--buffer-size=1000 --buffer-size=10g"

sort $SORT <infile> --output <outfile>

Did you perhaps intend to use:
Code:
export SORT="--batch-size=1000 --buffer-size=10g"

instead of setting the buffer size twice?