Help optimizing sort of large files -- Post 302925455 by Corona688 on 11-17-2014 at 10:10 AM
Quote:
Originally Posted by kogorman3
Defragged? I'm not sure how to do that on Linux, so I am using a fresh 2-TB drive formatted ext4 and directing sort to use it for temporaries. It's otherwise empty.
ext4 partitions are relatively easy to defrag, being designed with runtime defragmentation in mind (yes, runtime -- no need to unmount) via the e4defrag utility. There's no point defragging an empty partition, but check that your input and output partitions aren't a mess after all this testing.
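Checking costs nothing -- the -c option only reports a fragmentation score and never modifies anything (the mount point /mnt/data below is just a stand-in for your own partition; see the e4defrag man page at the end of this post):
Code:
  # Report current vs. ideal fragmentation counts and a score; -c never defragments
  e4defrag -c /mnt/data

  # Actually defragment, printing per-file fragment counts; run as root for full effect
  e4defrag -v /mnt/data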
Quote:
I've got 32 GB RAM and a 64-bit CPU. That's big enough for the whole test file, but the parameters to sort don't let it work that way.
The process of merge-sorting doesn't work that way. No matter how big your buffers are, it has to do the same number of merges on the same number of elements of the same sizes, and nearly all of them are tiny: billions of 2-element merges, half as many 4-element merges, and so on, doubling until the final pass -- on the order of n log n comparisons regardless of buffer size. (That's a slight oversimplification, but the merging options don't substantially change it.) That's why pushing buffers to ridiculous sizes helps so little: they're dead weight for every pass except the final merge, and no realistic buffer will ever be big enough to hold that one anyway.
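That said, the knobs GNU sort does give you are worth setting sensibly: a scratch disk for temporaries, a moderate buffer, and a wider merge fan-in so fewer passes hit the disk. A sketch, assuming GNU coreutils sort -- the file names, sizes, and mount point here are placeholders to tune for your own hardware:
Code:
  # Moderate buffer (-S), dedicated temp disk (-T), wider merges (--batch-size);
  # LC_ALL=C uses cheap byte comparisons instead of locale-aware collation
  LC_ALL=C sort -S 4G -T /mnt/scratch --batch-size=64 --parallel=4 \
      -o bigfile.sorted bigfile

In practice the win usually comes from LC_ALL=C and from keeping temporaries off the input/output disk, not from a giant -S.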

Last edited by Corona688; 11-17-2014 at 11:17 AM.
 

E4DEFRAG(8)                    System Manager's Manual                    E4DEFRAG(8)

NAME
       e4defrag - online defragmenter for ext4 filesystem

SYNOPSIS
       e4defrag [ -c ] [ -v ] target ...

DESCRIPTION
       e4defrag reduces fragmentation of extent-based files. The file targeted by
       e4defrag must be created on an ext4 filesystem made with the "-O extent"
       option (see mke2fs(8)). The targeted file gets more contiguous blocks,
       which improves file access speed.

       target is a regular file, a directory, or a device that is mounted as an
       ext4 filesystem. If target is a directory, e4defrag reduces fragmentation
       of all files in it. If target is a device, e4defrag finds its mount point
       and reduces fragmentation of all files under that mount point.

OPTIONS
       -c     Get the current fragmentation count and the ideal fragmentation
              count, and calculate a fragmentation score from them. This score
              indicates whether running e4defrag on the target is worthwhile.
              When used with the -v option, the current and ideal fragmentation
              counts are printed for each file. This option also reports the
              average data size per extent, which shows whether the file already
              has ideal extents. Note that the maximum extent size is 131072 KB
              on an ext4 filesystem with a 4 KB block size. If this option is
              specified, target is never defragmented.

       -v     Print error messages and the fragmentation count before and after
              defragmentation for each file.

NOTES
       e4defrag does not support swap files, files in the lost+found directory,
       or files allocated in indirect blocks. When target is a device or a mount
       point, e4defrag does not defragment files in the mount points of other
       devices. Non-privileged users can run e4defrag on their own files, but
       the score is not printed if the -c option is specified; it is therefore
       best run as root.

AUTHOR
       Written by Akira Fujita <a-fujita@rs.jp.nec.com> and Takashi Sato
       <t-sato@yk.jp.nec.com>.

SEE ALSO
       mke2fs(8), mount(8).

e4defrag version 2.0 (May 2009)