UNIX for Advanced & Expert Users: Help optimizing sort of large files. Post 302926355 by kogorman3, Sunday 23 November 2014, 02:51 PM
Quote:
Originally Posted by DGPickett
I suppose you could write a sort that used a tournament space as big as available RAM to write temp files, each holding one sorted sequence, and then all the files could be merged by sort -m, or a tree of sort -m's: an always-two-pass sort.

The idea of the progressive merge sort was to live within the cache and RAM limits dynamically, without knowing where they are. And in real life, the amount of available RAM or cache sometimes varies with other activity, so it is not good to have a fragile fit.

BTW, the tournament write can produce strings far longer than the tournament table size. If the input were already sorted, it would be written as one string. You always write the smallest item >= the last item written, so with random data I am guessing a 64K-leg tournament would write about 96K items per sorted string, but my math is a bit weak. Someone show me up.

If you wanted to code it in a single process, you could write all the sorted strings to one temp drive, write the starting offsets of each sorted string to a second temp file, and fix the tournament size at something that fits comfortably within upper cache. The input file could be mmap()'d, so the actual records to be written stay in RAM and only offsets sit in the tournament table. After the first pass you know you have N strings to merge, so mmap() the temp file and merge the strings into the output. Since there are just 3 files open at a time, no fd-limit woes. The mmap() lets you read from N different points in the first temp file. The one temp-file write of all the input data creates a huge change in working set size, while writing just one main file at a time minimizes write-buffer cost. The original reading, although through mmap() page faults, is generally sequential. The second read uses just N to 2N pages to buffer the sequential string reads. The latency is even pretty low, essentially zero from the last read to the first write.
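For concreteness, the two-pass scheme quoted above (write RAM-sized sorted runs, then merge them all with sort -m) can be sketched roughly as follows. The file names, the spare-drive path, the 8G run size, and the use of split in place of a real tournament/replacement-selection pass are illustrative assumptions, not anything from the thread. (For what it's worth, the classical result for replacement selection is that random input yields runs averaging about twice the tournament size, so a 64K-leg tournament would tend toward roughly 128K-item strings.)

Code:
#!/bin/bash
# Hedged sketch of the quoted two-pass idea, not anyone's actual script.
# Pass 1 writes sorted runs that fit in RAM; pass 2 merges them all with
# a single sort -m, which only merges and never re-sorts.
# File names, the spare-drive path, and the 8G run size are assumptions.

IN=bigfile.txt            # hypothetical input file
RUNDIR=/mnt/spare/runs    # hypothetical empty spare drive
mkdir -p "$RUNDIR"

# Pass 1: cut the input into line-aligned chunks of at most 8 GiB and
# sort each one in place. (A real tournament pass would produce runs
# roughly twice this size on random input, but the structure is the same.)
split -C 8G "$IN" "$RUNDIR/run."
for r in "$RUNDIR"/run.*; do
    sort -S 8G -o "$r" "$r"
done

# Pass 2: merge every run in one pass. GNU sort merges at most
# --batch-size inputs at once (bounded by the open-file limit, ulimit -n),
# so raise it above the number of runs to avoid intermediate merges.
sort -m --batch-size=256 "$RUNDIR"/run.* > sorted_output.txt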
I may be dense, but I don't see the point of such suggestions. My files are huge, much bigger than any SSD I can afford. No matter how you cut it, the file won't fit in RAM or on my SSD drive, and there is going to be a lot of comparison and a lot of disk-head motion in the process of the sort. I'm using GNU sort because I have it and it works out of the box in such cases.

Having played around with my test file long enough to become familiar with the basics and get a more accurate picture of how sort works, I sorted my 1.4 TB file starting at noon yesterday, using sort's default settings other than directing temporary files to an empty spare drive.

It started at around 12:30 PM yesterday and finished at 9:37 AM today. I guess I had misread the code that computes the buffer size; I was expecting 2 GB temporaries, but in fact they were more like 11.24 GB, and there were 117 such files, the last one shorter than the rest. It took around 4 minutes each to create the early ones, and they were all done in about 8.5 hours. Then 112 of them were merged in 7 batches of 16, creating files of 179.8 GB each and taking about 7.5 hours. Finally the 7 large and the 5 remaining small temporaries were merged into the output in about 6 hours.
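For reference, the default-settings run described above boils down to an invocation like the one below; the file and mount-point names are placeholders, since the post doesn't give them.

Code:
# Rough reconstruction of the run above; paths are guesses, not the
# poster's actual ones.
# -T sends the ~11 GB temporary run files to the empty spare drive;
# everything else (buffer size, 16-way merge batches) is left at the
# GNU sort defaults.
sort -T /mnt/spare-tmp -o /mnt/result/sorted.txt /mnt/data/bigfile-1.4TB.txt

# The run files and the intermediate 16-way merges show up here as it works:
ls -lh /mnt/spare-tmp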

For comparison, just copying the file to the temp drive and then to the result drive took 6.6 hours, versus about 21 hours for the sort.

I'm hoping to cut hours off that time by setting a bigger --batch-size so that none of the data gets merged twice. I may also play with --buffer-size to see if there's a way to reduce compute overhead. I'll start by cutting the temporary size in half and doubling the batch size.
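One way that plan might look, using GNU sort's actual option names; the specific sizes are only guesses at what "half the temporary size" and a single-pass merge would require, not values from the thread.

Code:
# Illustrative tuning sketch only; the exact numbers are assumptions.
# A smaller --buffer-size should roughly halve the run size (so about
# twice as many runs), and a larger --batch-size lets all of them merge
# in a single pass, avoiding the intermediate 16-way merges.
# --batch-size is bounded by the open-file limit (ulimit -n).
sort -T /mnt/spare-tmp \
     --buffer-size=6G \
     --batch-size=256 \
     -o /mnt/result/sorted.txt /mnt/data/bigfile-1.4TB.txt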

Last edited by kogorman3; 11-23-2014 at 04:20 PM. Reason: more info
 
