Help optimizing sort of large files


 
# 8  
Old 11-09-2014
There is no way for me to know if this will help at all, but I had some fun playing. All of this was on a 16-core, 16 GB desktop running Cygwin. Linux often performs better on this kind of thing because Cygwin is really sitting on top of the Windows runtime library.

As an extrapolation from my testing below:
Try -
Code:
sort --parallel=4 --batch-size=256

Since you have four cores, this should improve throughput: sort runs four independent sorting threads (one per core), and the large file is split into sorted chunks that are then merged up to 256 at a time (NMERGE in the documentation).
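For concreteness, a complete invocation might look like the sketch below. The paths, buffer size, and output name are placeholders rather than values from this thread; LC_ALL=C gives plain byte-order collation, which is much cheaper than locale-aware comparison.
Code:
# Hypothetical full command line: 4 sorting threads, up to 256 temporary
# files merged per pass, an explicit memory buffer, and byte-order collation.
# All paths and the 8g buffer size are placeholders.
export LC_ALL=C
sort --parallel=4 --batch-size=256 --buffer-size=8g \
     -o /mnt/out/bigfile.sorted /mnt/data/bigfile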

Your data presents problems.

1. The key is the entire line. Using a tag sort (some kind of key compression on the first 64 bytes to turn the data into an integer or a set of 3 integers) speeds up comparison times by an order of magnitude. To make a numeric comparison on the first radix key (if you like the term), use
Code:
-k1n -k2n -k3n

This will have sort compare three 64-bit values (three compare operations per pair of lines) instead of comparing up to 64 characters.
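A small usage note, sketched below with assumptions about the data: -k1n defines a key running from field 1 to the end of the line, so if the intent is three separate single-field keys, bound each one as -k1,1n, -k2,2n, -k3,3n. The example assumes whitespace-separated numeric leading fields, which may not match the real data.
Code:
# Sketch assuming the first three whitespace-separated fields are numeric
# and determine the order.  -kN,Nn bounds each key to a single field, so
# sort compares three numbers rather than text running to the end of the line.
sort -k1,1n -k2,2n -k3,3n -o sorted bigfile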

2. Locality. Line 1301010 may really need to come right after line 14 in the final output. Setting NMERGE to a reasonable value (300 GB / 256 ≈ 1.17 GB per chunk) creates a first-pass scenario where locality is less of an issue, since a 1.17 GB chunk fits easily in memory.

3. sort uses the disk intensively. Make sure TMPDIR points to a filesystem on a separate disk with lots of CONTIGUOUS free space, i.e., pretty much wiped empty.
An SSD will make a huge difference in performance.
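A minimal setup, with a placeholder mount point, might be:
Code:
# Point sort's temporaries at a mostly empty filesystem on its own disk
# (or SSD).  /mnt/scratch is a placeholder for whatever is actually available.
export TMPDIR=/mnt/scratch/sorttmp
mkdir -p "$TMPDIR"
df -h "$TMPDIR"    # temporaries can approach the size of the input, so check first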

I dummied up a file with 27183337 lines, almost exactly 2 GB, and ran it on Cygwin.
The box has 16 cores and 16 GB of memory.

Code:
$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 2025
cpu time               (seconds, -t) unlimited
max user processes              (-u) 256
virtual memory          (kbytes, -v) unlimited


Run using a 7200 rpm drive, not the SSD:
Code:
export TMPDIR=/cygdrive/e/tmp # not the SSD drive
time sort --parallel=16 --batch-size=32 -o ./sorted bigfile
real    3m2.016s
user    4m24.744s
sys     3m1.015s

Notice that user + sys adds up to more than the elapsed time. Why? Parallelism.

Code:
export TMPDIR=/cygdrive/f/tmp # this is the SSD drive
time sort --parallel=16 --batch-size=32 -o ./sorted bigfile
real     1m18.223s
user    4m19.007s
sys     2m19.016s

This is about 40% of the wall time of the run on the slow disk. sys is better due to shorter I/O request queues, and I/O appears to me to be the limiting factor. The number of comparisons was the same, as was a lot of the other overhead, so user is about the same.
# 9  
Old 11-09-2014
Quote:
Originally Posted by Don Cragun
Did you perhaps intend to use:
Code:
export SORT="--batch-size=1000 --buffer-size=10g"

instead of setting the buffer size twice?
Indeed I did. In fact, I thought I had edited that typo, but clearly I didn't.
# 10  
Old 11-09-2014
I wrote a "locality of reference" sort once: mmap64() the file and start sorting by making two one-line lists, merging them into a sorted two-line linked list, doing that twice, merging the two two-line lists into a four-line list, and so on. A binary item counter tells you what is needed next. Since it works with lists of length 1 1 1 1 2 2 4 1 1 1 1 2 2 4 8 1 1 1 1 2 2 4 1 1 1 1 2 2 4 8 16 ..., it exploits the levels of cache, RAM, and VM in use. Short lists sort in L1 cache, really long lists sort in VM at disk speed, and lists in between, in between. The premise is that the speed problem is bandwidth, not CPU speed.
# 11  
Old 11-09-2014
Quote:
Originally Posted by jim mcnamara
There is no way for me to know if this will help at all, but I had some fun playing. All of this was on a 16 core 16GB desktop running Cygwin. Linux often performs better on some things because Cygwin is really sitting on top of the Windows runtime library.

... {snipped} ...
Very interesting.

I had completely missed the --parallel option in reading the man page. I use this machine for everything, so I'm not sure I'll always want to swamp the cores, but I'll surely use some. I think this will help just the first pass -- when the temp files are first created -- because I'm going to try really hard to tweak parameters to make the second pass the last one where there's a single output file.

I'm just starting, and my largest input file is 1.3 TB, so SSDs at that scale are too expensive for me. The only way to use just the one I already own, at least the part not already cluttered with my stuff, would be to chop the sort into many segments, and then the merge phase would have to run on spinning drives anyway. Maybe in 10 years.

I may compress the records eventually, but I'm running tests now that show that user time is at most 25% of the total time. That limits the effectiveness. Plus, each run is a one-time thing, and I worry about the overhead of the compression since it has to be a separate process.
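A cheap way to gauge that overhead before committing to a full run is to time gzip at a few levels on a sample of the data. The sketch below uses a made-up 1 GB sample and the placeholder name bigfile.
Code:
# Rough estimate of the CPU cost of compressing the temporaries.
# The 1 GB sample size and the name bigfile are arbitrary.
head -c 1G bigfile > /tmp/sample
for lvl in 1 6 9
do
  echo "gzip -$lvl:"
  time gzip -$lvl < /tmp/sample > /dev/null
done
rm -f /tmp/sample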

I've got a script that is testing combinations of batch-size, buffer-size, and compress-program (with various levels of compression), and I'll add --parallel to it. I've been curious about sort for some years, and this seems to be the moment when I really want to investigate.

---------- Post updated at 02:07 PM ---------- Previous update was at 11:54 AM ----------

Quote:
Originally Posted by DGPickett
I wrote a "locality of reference" sort once: mmap64() the file and start sorting by making two one-line lists, merging them into a sorted two-line linked list, doing that twice, merging the two two-line lists into a four-line list, and so on. ... {snipped} ...
An excellent idea for a cache-aware in-memory sort. I have never before seen this idea extended to VM like this. However, it won't work for my TB-scale files without TB's of swap space. I'm also not quite ready to rewrite GNU sort to use the idea.

Thinking of alternatives: I read somewhere that phase 1 of the existing sort uses qsort on a full buffer, and emits it as the final result if the input is small, or as temporary files otherwise. I know qsort is a favorite because it's unusually good in its average performance and is in-place. I once wrote a sort that used a heapsort algorithm in phase 1 to produce longer temporaries, averaging twice the size of the working storage, with guaranteed n log n performance. Too bad it wouldn't be a good general-purpose replacement: it incurs memory fragmentation when using variable-length records. The qsort implementation doesn't, because it cleans out the workspace on every buffer; the heapsort method can't do that and still generate the long outputs.

---------- Post updated at 02:13 PM ---------- Previous update was at 02:07 PM ----------

I'm now running some tests with a "short" 14 GB file. Some extra eyes on the script would be appreciated, as I've been caught in one error already in this thread.
Code:
#!/bin/bash
# find times for sorting as compared to copying time.

ulimit -Sn hard
export TMPDIR=/mnt/tmptmp
export DATA=s2740.posn
export DEST=/mnt/h2/tmp/junk
export TEMP=/mnt/tmptmp/junk
export LC_ALL=C

cd /mnt/h1/posn
echo prep
df -h .
df -h /mnt/tmptmp
df -h /mnt/h2/tmp

# do something to make the cache state consistent
rm -f $TEMP $DEST
bash -c 'cp $DATA $TEMP; cp $TEMP $DEST'
echo

echo -n first copy then second copy
rm -f $TEMP $DEST
time bash -c 'cp $DATA $TEMP'
time bash -c 'cp $TEMP $DEST'
echo

echo -n copy both
rm -f $TEMP $DEST
time bash -c 'cp $DATA $TEMP; cp $TEMP $DEST'
echo

echo -n sort with default params
rm -f $TEMP $DEST
time sort $DATA --output=$DEST
echo

echo -n sort with n=16, buffer=4g
rm -f $TEMP $DEST
time sort --buffer-size=4g --batch-size=16 $DATA --output=$DEST
echo


buf=1
while [ $buf -lt 16 ]
do
  bat=10
  while [ $bat -lt 101 ]
  do
    for par in 1 2 4
    do
      echo -n sort with parallel=$par buffer=${buf}g batch=$bat and gzip from none to 9
      # baseline run with uncompressed temporaries
      rm -f $TEMP $DEST
      time sort --parallel=$par --buffer-size=${buf}g --batch-size=$bat $DATA --output=$DEST
      # repeat with gzip-compressed temporaries at each compression level
      for lvl in 1 2 3 4 5 6 7 8 9
      do
        export GZIP="-$lvl"
        rm -f $TEMP $DEST
        time sort --compress-program=gzip --parallel=$par --buffer-size=${buf}g --batch-size=$bat $DATA --output=$DEST
      done
      unset GZIP
      echo
    done
    bat=$(( bat + 10 ))
  done
  buf=$(( buf + 10 ))
done

---------- Post updated at 03:02 PM ---------- Previous update was at 02:13 PM ----------

Oops. I had omitted the --compress-program parameter from the runs that were meant to use gzip. Consider it fixed in the script above.

Last edited by kogorman3; 11-09-2014 at 06:15 PM.. Reason: improve comment
# 12  
Old 11-12-2014
There is no swap space use with mmap() unless you choose the more exotic options. The basic mmap() pages the file directly into RAM, giving you a way to work with huge files using all of RAM as a cache. If you overmap a file, you can write pages. Empty pages can exist, supporting sparse matrices. This is a very powerful door into application use of VM and full exploitation of RAM.

Sorting small lists into larger ones in steps lets the problem periodically expand into the next level of storage: from the caches to RAM to the disk. The only flaw is that it might be better to merge more than two runs at a time, so that by the time you reach the disk-sized merges there is just one, or at least fewer. The merge could use some sort of comparison tournament tree, sized to fit in L1 cache, where each replacement item is inserted into the tree before the smallest item is taken from it. Note that each item will take a whole page of RAM, or two, and N to N+2 lines of L1 cache, depending on the cache and item sizes and the page/line offsets. Trying to fill the L1 cache is fraught with difficulty, as many items may demand the same cache bucket, so it pays to shoot a bit low, say 1/4 to 1/2 of the L1 cache size.
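At the shell level, the closest analogue to that wide merge is sort -m, which combines many already-sorted runs in a single pass. A sketch, with made-up chunk file names:
Code:
# Merge pre-sorted chunk files in one pass rather than pairwise.
# chunk.* stand for runs that are each already sorted under LC_ALL=C;
# --batch-size caps how many files are merged at once.
LC_ALL=C sort -m --batch-size=64 -o merged chunk.*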
# 13  
Old 11-12-2014
You don't need to use an SSD for temp files - you can use it as the swap device.

Although you probably wouldn't like the results if your GUI gets swapped out, it should speed up the sort by a huge factor.
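For what it's worth, pressing a spare SSD partition into service as swap looks roughly like the sketch below. /dev/sdX2 is a placeholder, and mkswap destroys whatever is on that partition, so be certain of the device name first.
Code:
# Hypothetical: dedicate a spare SSD partition (placeholder /dev/sdX2) to swap.
# mkswap destroys whatever is on the partition, so verify the device name first.
sudo mkswap /dev/sdX2
sudo swapon -p 10 /dev/sdX2   # higher priority than the existing swap
swapon -s                     # confirm the new swap area is active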
# 14  
Old 11-12-2014
Since you have 300 GB of data, you are better off loading it into a database. Oracle would work well, especially if you used Advanced Compression for OLTP with 11gR2 or possibly 12c. You can load it into a table with each line of the file as one row in the database and each character in the first half of the line as an individual CHAR(1) column. You could then create an index on the columns that you want to sort by, and you would already have it sorted. You can also store the data as an index-organized table. If this is not for profit, you probably don't need an Oracle license; you just need to know how to install Oracle and create a database. You can also use MySQL to get nearly the same thing.
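A rough sketch of the MySQL route, simplified to a single CHAR(64) key column rather than 64 CHAR(1) columns. The database, table, and file names are invented, and it assumes the data contains no tab characters (MySQL's default field delimiter):
Code:
# Load the file into an indexed table so reads in key order come back sorted.
# All names are placeholders; adjust the key width to the real data.
mysql --local-infile=1 sortdb <<'SQL'
CREATE TABLE big_lines (
  sort_key CHAR(64) NOT NULL,
  rest     TEXT,
  KEY idx_sort_key (sort_key)
);
LOAD DATA LOCAL INFILE '/mnt/data/bigfile'
  INTO TABLE big_lines
  LINES TERMINATED BY '\n'
  (@line)
  SET sort_key = LEFT(@line, 64), rest = SUBSTRING(@line, 65);
-- Reading it back in key order: SELECT CONCAT(sort_key, rest) FROM big_lines ORDER BY sort_key;
SQL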