It appears that, on my machine at least, the default --buffer-size is 4 GB, which yields 1.8 GB temporary files. Don't ask me why the results are roughly half-sized; I surmise that pass 1 uses qsort on a full buffer to produce the initial temporaries.
It appears that, on my machine at least, the default --batch-size is 16, yet for some reason the next level of temporaries is formed by merging 15 small ones into a bigger one. Again, don't ask me why the disparity of -1.
As I fooled with things, the execution times I got were in discrete steps, approximate multiples of each other.
I surmise that the execution time is determined largely by the number of passes over the data. I am able to get that down to two passes on most of my datasets, and three at most. I did this by upping the limit on open files to the hard limit of 4096, setting the batch size to 1000, and setting the buffer size to 10 GB. I'm still experimenting to find a sweet spot.
I think staging multiple sorts might be counter-productive, since if I'm right, 2 is the minimum number of passes -- one to create temporaries, one to write the final result.
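If that model is right, the pass count follows from the data size, the buffer size, and the merge fan-in. A back-of-the-envelope sketch (my own model, not anything from sort's source; all sizes here are hypothetical stand-ins):

```shell
# Rough model: pass 1 writes ceil(D/S) sorted runs; each merge pass
# combines up to N runs into one. D and S in GB; N = --batch-size.
awk -v D=2000 -v S=10 -v N=1000 'BEGIN {
    runs = int((D + S - 1) / S)    # initial sorted runs from pass 1
    passes = 1                     # the run-creation pass itself
    while (runs > 1) {
        runs = int((runs + N - 1) / N)
        passes++
    }
    printf "passes=%d\n", passes   # 2000 GB / 10 GB buffer -> passes=2
}'
```

With a fan-in of 1000, anything up to 1000 initial runs merges in a single extra pass, which is why two passes are reachable on most datasets.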
The idea of encoding the data is interesting. I'll explore it when I'm sure everything else works. I really like having readable data when debugging, which I am still doing.
BTW, you already have more than 10 lines of input. You can sort them to see the output. I sort on the whole record for this testing phase. Nothing to see here, folks.
My bash code now looks something like this:
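(The snippet itself did not survive the page extraction; a minimal sketch, assuming the settings described above, might look like the following. The file names and the small demo sizes are stand-ins so it runs anywhere.)

```shell
# Stand-in input so the command runs anywhere; the real dataset is huge.
printf 'banana\napple\ncherry\n' > input.dat

# Large fan-in and buffer keep the pass count low; the post's actual
# values were --batch-size=1000 and --buffer-size=10G after ulimit -n 4096.
sort --batch-size=100 --buffer-size=64M -T /tmp -o sorted.dat input.dat
cat sorted.dat
```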
I am trying to understand the web server log file for an error which has occurred on my live web site.
The web server access log is very big, so it's not possible to open this file using the vi editor. I know the approximate time the error occurred, so I am interested in looking for the log file... (4 Replies)
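Since the poster knows the approximate time, one common trick is to pull out just that time window with sed instead of opening the file in an editor; a sketch, assuming a hypothetical timestamp layout and window:

```shell
# Stand-in log; real access logs use formats like [08/Nov/2014:09:30:12].
printf '09:29 ok\n09:31 error\n09:33 ok\n09:40 ok\n' > access.log

# Print only lines in the 09:30-09:35 window; sed streams the file
# and never loads it all into memory the way an editor would.
sed -n '/^09:3[0-5]/p' access.log
```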
Hi All
I have approximately 10 files that are at least 100+ MB in size. I am importing them into a DB to output them to the web. What I need to do first is clean the files up so I don't have unnecessary rows in the DB. Below is what the file looks like:
Ignore the <TAB> annotations as that... (4 Replies)
I was wondering how sort works.
Do file size and time to sort increase geometrically?
I have a 5.3 billion line file I'd like to use with sort -u, and I'm wondering if that will take forever because of a geometric expansion.
If it takes 100 hours, that's fine, but not 100 days.
Thanks so much. (2 Replies)
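For context, GNU sort does an external merge sort, so the cost grows as n log n -- close to linear, not geometric. A quick sketch of how gently the log factor grows:

```shell
# n * log2(n) for increasing n: each 10x in size costs only slightly
# more than 10x the comparisons, so 5.3 billion lines is tractable
# (disk bandwidth for the merge passes is the more likely bottleneck).
awk 'BEGIN {
    for (n = 1e6; n <= 1e10; n *= 10)
        printf "n=%.0e  n*log2(n)=%.3e\n", n, n * log(n) / log(2)
}'
```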
hello all,
kindly I need your help. I made a script to print specific lines from a huge file of about 3 million lines. The output of the script will be about 700,000 lines... the problem is the script is too slow: it kept working for 5 days and the output was only 200,000 lines!!!
the script is... (16 Replies)
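A common cause of that kind of slowdown is re-reading the big file once per wanted line; reading it exactly once usually fixes it. A sketch, assuming (hypothetically) that the wanted lines are chosen by line number, with stand-in file names:

```shell
# Stand-in inputs: wanted.txt lists line numbers, big.txt is the data.
printf '2\n4\n' > wanted.txt
printf 'aa\nbb\ncc\ndd\nee\n' > big.txt

# First file loads the wanted numbers into a set (NR==FNR only while
# reading it); the big file is then scanned exactly once.
awk 'NR == FNR { want[$1]; next } FNR in want' wanted.txt big.txt
```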
Hi all,
I have a problem with searching hundreds of CSV files; the search is lasting too long (over 5 min).
The CSV files are "," delimited and have 30 fields on each line, but I always grep the same 4 fields -- so is there a way to grep just those 4 fields to speed up the search?
Example:... (11 Replies)
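One option is to let awk test only the fields of interest instead of grepping entire lines; a sketch with hypothetical field numbers (2 and 4) and a stand-in file -- substitute the four fields actually searched:

```shell
# Stand-in CSV; in the real data each line has 30 comma-separated fields.
printf 'a,foo,c,d\na,x,c,bar\na,x,c,d\n' > sample.csv

# Test only the chosen fields; exact equality on a field is also
# cheaper than a regex scan across the whole line.
awk -F, '$2 == "foo" || $4 == "bar"' sample.csv
```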
Hello everyone. I need some help copying a filesystem. The situation is this: I have an Oracle DB mounted on /u01 and need to copy it to /u02. /u01 is 500 GB and /u02 is 300 GB. The space used on /u01 is 187 GB. This is running on Solaris 9 and both filesystems are UFS.
I have tried to do it using:... (14 Replies)
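On Solaris 9, `ufsdump 0f - /u01 | (cd /u02 && ufsrestore rf -)` is the usual UFS-aware way to copy only the used blocks (187 GB fits in 300 GB even though the source filesystem is 500 GB); the database should be shut down first so the copy is consistent. The same idea as a portable tar pipe, shown with stand-in directories:

```shell
# Stand-in directories for /u01 and /u02; the file name is hypothetical.
mkdir -p u01 u02
echo 'datafile contents' > u01/system01.dbf

# tar copies only what exists (used space), preserving permissions.
(cd u01 && tar cf - .) | (cd u02 && tar xpf -)
cat u02/system01.dbf
```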
Hello all -
I am new to this forum and fairly new to learning Unix, and I am finding some difficulty preparing a small shell script. I am trying to make a script to sort all the files given by the user as input (either the exact full name of the file, or, say, the files matching some criteria, like all files... (3 Replies)
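A minimal sketch of such a script, assuming the user passes the file names (or a shell glob) as arguments; the demo file and names here are hypothetical:

```shell
# sortfiles.sh (hypothetical name): sort each named file into NAME.sorted.
# A demo file is created here so the loop has something to work on.
printf '3\n1\n2\n' > demo.txt

for f in demo.txt; do            # in the real script: for f in "$@"; do
    sort "$f" -o "$f.sorted"
done
cat demo.txt.sorted
```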
Hello,
I have a very large file of around 2 million records which has the following structure:
I have used the standard awk program to sort:
# wordfreq.awk --- print list of word frequencies
{
    # remove punctuation (keep letters, digits, underscores, and blanks)
    gsub(/[^[:alnum:]_ \t]/, "", $0)
    for (i = 1; i <= NF; i++)
        freq[$i]++
}
END {
    for (word in freq)
        printf "%s\t%d\n", word, freq[word]
}
(3 Replies)
Discussion started by: gimley
LEARN ABOUT SUSE
sort
SORT(1) User Commands SORT(1)
NAME
sort - sort lines of text files
SYNOPSIS
sort [OPTION]... [FILE]...
sort [OPTION]... --files0-from=F
DESCRIPTION
Write sorted concatenation of all FILE(s) to standard output.
Mandatory arguments to long options are mandatory for short options too. Ordering options:
-b, --ignore-leading-blanks
ignore leading blanks
-d, --dictionary-order
consider only blanks and alphanumeric characters
-f, --ignore-case
fold lower case to upper case characters
-g, --general-numeric-sort
compare according to general numerical value
-i, --ignore-nonprinting
consider only printable characters
-M, --month-sort
compare (unknown) < `JAN' < ... < `DEC'
-n, --numeric-sort
compare according to string numerical value
-R, --random-sort
sort by random hash of keys
--random-source=FILE
get random bytes from FILE (default /dev/urandom)
-r, --reverse
reverse the result of comparisons
--sort=WORD
sort according to WORD: general-numeric -g, month -M, numeric -n, random -R, version -V
-V, --version-sort
sort by numeric version
Other options:
--batch-size=NMERGE
merge at most NMERGE inputs at once; for more use temp files
-c, --check, --check=diagnose-first
check for sorted input; do not sort
-C, --check=quiet, --check=silent
like -c, but do not report first bad line
--compress-program=PROG
compress temporaries with PROG; decompress them with PROG -d
--files0-from=F
read input from the files specified by NUL-terminated names in file F; If F is - then read names from standard input
-k, --key=POS1[,POS2]
start a key at POS1 (origin 1), end it at POS2 (default end of line)
-m, --merge
merge already sorted files; do not sort
-o, --output=FILE
write result to FILE instead of standard output
-s, --stable
stabilize sort by disabling last-resort comparison
-S, --buffer-size=SIZE
use SIZE for main memory buffer
-t, --field-separator=SEP
use SEP instead of non-blank to blank transition
-T, --temporary-directory=DIR
use DIR for temporaries, not $TMPDIR or /tmp; multiple options specify multiple directories
-u, --unique
with -c, check for strict ordering; without -c, output only the first of an equal run
-z, --zero-terminated
end lines with 0 byte, not newline
--help display this help and exit
--version
output version information and exit
POS is F[.C][OPTS], where F is the field number and C the character position in the field; both are origin 1. If neither -t nor -b is in
effect, characters in a field are counted from the beginning of the preceding whitespace. OPTS is one or more single-letter ordering
options, which override global ordering options for that key. If no key is given, use the entire line as the key.
SIZE may be followed by the following multiplicative suffixes: % 1% of memory, b 1, K 1024 (default), and so on for M, G, T, P, E, Z, Y.
With no FILE, or when FILE is -, read standard input.
*** WARNING *** The locale specified by the environment affects sort order. Set LC_ALL=C to get the traditional sort order that uses
native byte values.
AUTHOR
Written by Mike Haertel and Paul Eggert.
REPORTING BUGS
Report sort bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
COPYRIGHT
Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.
SEE ALSO
The full documentation for sort is maintained as a Texinfo manual. If the info and sort programs are properly installed at your site, the
command
info coreutils 'sort invocation'
should give you access to the complete manual.
GNU coreutils 7.1 July 2010 SORT(1)