11-14-2014
Quote: Originally Posted by Corona688
sort is a merge-sort, which has no need or use for gigantic memory buffers -- unless you want to use more memory than is available and eat into swap, that is. If you have very high-performance swap that can be useful. Otherwise, leave --buffer-size out and let it manage itself.
I'm pretty sure that the --buffer-size parameter controls only pass 1, which is where the actual sorting occurs; it sets the size of the first set of temporary files. All subsequent passes are merges. Moreover, the default on my system is similar to what I get when I specify 4g, and is distinctly sub-optimal.
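One way to see the pass-1 behavior described above (a sketch only; the file names and sizes here are made up for illustration, not taken from the thread):

```shell
# Generate roughly 1.4 MB of shuffled numeric data.
seq 1 200000 | shuf > /tmp/sample.txt

# Tiny buffer: sort fills 1 MB, sorts it in memory, writes it out as one
# temporary run, and repeats -- later passes then merge those runs.
sort -n --buffer-size=1M -T /tmp -o /tmp/out_small.txt /tmp/sample.txt

# Buffer larger than the input: the whole file sorts in one in-memory pass,
# with no merge passes at all.
sort -n --buffer-size=64M -o /tmp/out_big.txt /tmp/sample.txt

# Both produce identical sorted output; only the pass structure differs.
cmp /tmp/out_small.txt /tmp/out_big.txt && echo "outputs match"
```

The point of the comparison is that --buffer-size shapes how much work is left for the merge passes, not the final result.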
Quote: Originally Posted by Corona688
--parallel should be a big performance gain -- if you have enough memory that it doesn't need to thrash your disk, and fast enough disks to keep up. If not, it will just make things worse.
In testing, the gain was real but not big. The elephant in the room seems to be the number of passes, and I/O time is a large percentage of the total. While threading improves overlap, the threads are still competing for the same input and output files and directories.
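A minimal way to measure that gain yourself (illustrative data set only; real workloads in this thread are far larger, and the benefit depends on how I/O-bound the run is):

```shell
# Small shuffled test file -- substitute your own data for a real measurement.
seq 1 500000 | shuf > /tmp/par_test.txt

# Compare wall-clock times with one sorting thread versus four.
time sort -n --parallel=1 --buffer-size=64M -o /dev/null /tmp/par_test.txt
time sort -n --parallel=4 --buffer-size=64M -o /dev/null /tmp/par_test.txt
```

On a small in-memory case like this the threads are not fighting over temp files, so the speedup here is a best case, not what a multi-pass disk-bound sort will show.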
Quote: Originally Posted by Corona688
I don't see any something-for-nothing solutions here. You won't squeeze out anything but percents here and there unless you deal with the bottlenecks. Every time you tell it "use more resources" and it slows down, that's a bottleneck. Every time you tell it "use fewer files" and it speeds up, that's a bottleneck.
1) More RAM -- the more the OS can cache, the less it has to wait on the disk. Brute force, but there's a reason RAM is popular, it works really well.
2) A different temp space. If you put /tmp/ on a different disk spindle than the file you are sorting, you can get the bandwidth of two disks instead of splitting the bandwidth of one disk several ways (and eliminate a lot of disk thrashing time). It doesn't have to be /tmp/ of course, sort -T puts the files wherever you ask.
3) Faster swap. Eat up more RAM than you have available and depend on an SSD to make up the difference. This could be good, though it sounds rather complicated to me.
Fair enough. But RAM is already 32GB, which maxes out my motherboard. Input, temporaries and output are already on three separate drives. Swap is separate and on an SSD, but I don't think I'm using it. So I'm squeezing out percentages. Fortunately, some of them are worth the effort; I'm already running about twice as fast as the built-in defaults, which have over-large buffers and not enough streams in a batch.
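For reference, the knobs discussed in this thread combine roughly as follows. This is a hedged composite, not the poster's actual command: the paths and values are placeholders, and in the real setup the input, the -T temp space, and the -o output would each live on separate drives.

```shell
# Placeholder input file standing in for the real multi-GB data.
seq 1 100000 | shuf > /tmp/tune_in.txt

# --buffer-size : size of the in-memory runs written in pass 1
# --parallel    : number of sorting threads
# --batch-size  : temp files merged per pass (the "streams in a batch"
#                 mentioned above; the GNU default is 16)
# -T            : temp directory, ideally on its own spindle
sort -n --buffer-size=8M --parallel=2 --batch-size=32 \
     -T /tmp -o /tmp/tune_out.txt /tmp/tune_in.txt

# Verify the result is fully sorted.
sort -nc /tmp/tune_out.txt && echo "sorted"
```

Raising --batch-size is what attacks the pass count directly: merging more temporary files per pass means fewer passes over the data.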
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
I am trying to understand the webserver log file for an error which has occurred on my live web site.
The webserver access file is very big in size, so it's not possible to open this file using the vi editor. I know the approximate time the error occurred, so I am interested in looking for the log file... (4 Replies)
Discussion started by: sehgalniraj
2. Shell Programming and Scripting
Hi All
I have approximately 10 files that are at least 100+ MB in size. I am importing them into a DB to output them to the web. What I need to do first is clean the files up so I don't have unnecessary rows in the DB. Below is what the file looks like:
Ignore the <TAB> annotations as that... (4 Replies)
Discussion started by: caddyjoe77
3. UNIX for Dummies Questions & Answers
How do we check whether 'large files' support is enabled on a Unix box -- HP-UX B11.11 (2 Replies)
Discussion started by: ranj@chn
4. UNIX for Dummies Questions & Answers
I was wondering how sort works.
Does file size and time to sort increase geometrically?
I have a 5.3 billion line file I'd like to use with sort -u. I'm wondering if that will take forever because of a geometric expansion?
If it takes 100 hours that's fine but not 100 days.
Thanks so much. (2 Replies)
Discussion started by: dcfargo
5. Shell Programming and Scripting
hello all,
kindly I need your help. I made a script to print specific lines from a huge file of about 3 million lines. The output of the script will be about 700,000 lines... the problem is that the script is too slow... it kept working for 5 days and the output was only 200,000 lines !!!
the script is... (16 Replies)
Discussion started by: m_wassal
6. Shell Programming and Scripting
Hello everyone!
I have 2 types of files in the following format:
1) *.fa
>1234
...some text...
>2345
...some text...
>3456
...some text...
.
.
.
.
2) *.info
>1234 (7 Replies)
Discussion started by: ad23
7. UNIX for Dummies Questions & Answers
Hi all,
I have a problem searching hundreds of CSV files; the search takes too long (over 5 min).
The CSV files are "," delimited and have 30 fields per line, but I always grep the same 4 fields -- so is there a way to grep just those 4 fields to speed up the search?
Example:... (11 Replies)
Discussion started by: Whit3H0rse
8. Solaris
Hello everyone. Need some help copying a filesystem. The situation is this: I have an oracle DB mounted on /u01 and need to copy it to /u02. /u01 is 500 Gb and /u02 is 300 Gb. The size used on /u01 is 187 Gb. This is running on solaris 9 and both filesystems are UFS.
I have tried to do it using:... (14 Replies)
Discussion started by: dragonov7
9. UNIX for Advanced & Expert Users
Hello all -
I am new to this forum and fairly new to learning UNIX, and I am having some difficulty preparing a small shell script. I am trying to make a script to sort all the files given by the user as input (either the exact full name of the file or, say, the files matching criteria like all files... (3 Replies)
Discussion started by: pankaj80
10. Shell Programming and Scripting
Hello,
I have a very large file of around 2 million records which has the following structure:
I have used the standard awk program to sort:
# wordfreq.awk --- print list of word frequencies
{
    # remove punctuation
    #gsub(/_]/, "", $0)
    for (i = 1; i <= NF; i++)
        freq[$i]++
}
END {
for (word... (3 Replies)
Discussion started by: gimley
LEARN ABOUT DEBIAN
wrap-and-sort
WRAP-AND-SORT(1) General Commands Manual WRAP-AND-SORT(1)
NAME
wrap-and-sort - wrap long lines and sort items in Debian packaging files
SYNOPSIS
wrap-and-sort [options]
DESCRIPTION
wrap-and-sort wraps the package lists in Debian control files. By default the lists will only be split into multiple lines if the entries are
longer than 80 characters. wrap-and-sort sorts the package lists in Debian control files and all .install files. Besides that, wrap-and-sort
removes trailing spaces in these files.
This script should be run in the root of a Debian package tree. It searches for control, control.in, copyright, copyright.in, install, and
*.install in the debian directory.
OPTIONS
-h, --help
Show this help message and exit.
-a, --wrap-always
Wrap all package lists in the Debian control file even if the entries are shorter than 80 characters and could fit on one line.
-s, --short-indent
Only indent wrapped lines by one space (default is in-line with the field name).
-b, --sort-binary-packages
Sort binary package paragraphs by name.
-k, --keep-first
When sorting binary package paragraphs, leave the first one at the top. Unqualified debhelper(7) configuration files are applied to
the first package.
-n, --no-cleanup
Do not remove trailing whitespace.
-d path, --debian-directory=path
Location of the debian directory (default: ./debian).
-f file, --file=file
Wrap and sort only the specified file. You can specify this parameter multiple times. All supported files will be processed if no
files are specified.
-v, --verbose
Print all files that are touched.
AUTHORS
wrap-and-sort and this manpage have been written by Benjamin Drung <bdrung@debian.org>.
Both are released under the ISC license.
DEBIAN
Debian Utilities WRAP-AND-SORT(1)