Help optimizing sort of large files
Post 302925515 by kogorman3, Monday 17 November 2014, 03:33 PM
Quote:
Originally Posted by Corona688
ext4 partitions are relatively easy to defrag, being designed with runtime defragmentation in mind (yes, runtime -- no need to unmount) via the e4defrag utility. There's no point defragging an empty partition, but check that your input and output partitions aren't a mess after all this testing.
Nice to know. I tried e4defrag and it showed a fragmentation score of 0 on all directories.
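
For anyone following along, e4defrag can also report the score without changing anything. A minimal sketch (the mount point is a placeholder for your own input and output partitions):

Code:
  # Check-only mode (-c): report the fragmentation score, defragment nothing.
  # Usually needs root; /mnt/data is an example path.
  sudo e4defrag -c /mnt/data

  # Actual defragmentation runs on the mounted filesystem:
  sudo e4defrag /mnt/data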

Quote:
The process of merge-sorting doesn't work that way. No matter how big your buffers are, it has to do the same number of merges on the same number of elements of the same sizes, nearly all of them tiny... Starting with billions of 2-element merges, half the number of 4-element merges, etc, etc, etc. (A little oversimplification, but the merging options don't substantially change this.) That's why pushing buffers to ridiculous sizes is so little help -- they're nearly always dead weight except for the final merge, when it's never going to be big enough to matter anyway.
I'm still a bit new to this, even after peeking at the source code. But it seems to me that there are two distinct phases to GNU sort. Both are merge sorts, but there's a big difference between merging in RAM and merging disk files. The main point of my optimizing effort turned out to be minimizing the number of file merges. Here's how I think of it now:

If all of the data can fit in the specified buffer, no temporary files will be created, and the output of the single in-RAM merge will go to the output file.

Otherwise, buffer-loads of data will be merged in RAM and written out to temporary files. There will be at least 2 of these, and perhaps a great many. These will then be merged <batch-size> files at a time, with all the I/O time you'd expect. It's therefore desirable to ensure that no more than <batch-size> temporary files are created, so the file-merge phase finishes in a single pass. If that's impossible, limit the count to the square, or even the cube, of that number: each additional merge pass handles another factor of <batch-size> files, at the cost of rewriting all the data once more. Both buffer-size and batch-size affect the achievement of these goals (see the sketch below).
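
To make that concrete, here's a sketch of sizing the buffer so the temporary files fit in one merge pass. The file names, sizes, and temp directory are assumptions for illustration, not my actual data:

Code:
  # ~200 GB of input with a 16 GB buffer yields ceil(200/16) = 13
  # temporary files, under GNU sort's default --batch-size of 16,
  # so the file-merge phase completes in a single pass.
  # LC_ALL=C gives plain byte comparison, much faster than locale collation.
  LC_ALL=C sort --buffer-size=16G --batch-size=16 \
      -T /scratch/tmp -o output.txt input.txt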

Of course, even this is an oversimplification: it ignores the effects of the TLB, the L1 and L2 caches, and the kernel's buffer cache. Hence my interest in measuring actual times.
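
For the measurements, the external GNU time binary (not the shell builtin) separates elapsed, user, and system time and adds page-fault and context-switch counts that hint at cache behavior. A sketch, assuming GNU time is installed at /usr/bin/time and reusing the placeholder paths above:

Code:
  # -v prints elapsed/user/sys plus max RSS, page faults, and
  # context switches for the whole sort run.
  /usr/bin/time -v sort --buffer-size=16G -T /scratch/tmp \
      -o output.txt input.txt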

This is not going well, and I don't know why. Successive runs report quite different times. It's true the system is used for other things, but not heavily -- it's all my personal use. The variation shows up in elapsed time, but not much in either system or user time. Odd. Moreover, the second runs were consistently slower than the first. I'm trying third runs now.
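
One plausible culprit for the variance is the kernel's page cache: a second run may start while writeback from the first is still in flight, or with the cache holding the previous run's temporary files instead of useful data, which would affect elapsed time without showing up in user or system time. If that's it, dropping caches between runs should make the timings comparable. A sketch; needs root:

Code:
  # Flush dirty pages, then drop the page cache, dentries, and inodes
  # so every run starts cold. Harmless, but the next run pays to re-warm.
  sync
  echo 3 | sudo tee /proc/sys/vm/drop_caches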
 
