Help optimizing sort of large files
Post 302925218 by Corona688 in UNIX for Advanced & Expert Users, Friday 14th of November 2014, 03:04:46 PM
Quote:
Originally Posted by kogorman3
In testing, the gain was real but not big. The elephant in the room here seems to be the number of passes, and I/O time is a large percentage of the total. While threading improves overlap, the threads are still competing for the same input and output files and directories.
I know. I am concerned you might be making your buffers too large -- settings that are ideal for one process are twice what your machine can handle the moment you run two of them. Halve the buffer size and the number of files every time you double the number of threads. Individually their efficiency may go down a bit, but collectively they can get masses more work done if you have enough cache to keep up with them.
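A rough sketch of what I mean, assuming GNU sort and input that has already been split in half -- the sizes, paths, and file names here are only placeholders:

Code:
  # one process gets the whole budget
  sort -S 16G --batch-size=64 -T /tmp/sort -o out.txt in.txt

  # two processes: halve the buffer and the merge fan-in for each
  sort -S 8G --batch-size=32 -T /tmp/sort1 -o half1.sorted half1.txt &
  sort -S 8G --batch-size=32 -T /tmp/sort2 -o half2.sorted half2.txt &
  wait

  # merge the already-sorted halves in a cheap final pass
  sort -m -o out.txt half1.sorted half2.sorted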

You might look into increasing the writeback time on your temp partition. Spending all your I/O time writing changes that are going to be invalidated 60 seconds later could be a waste; let the disk cache do the thrashing instead.
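If the temp partition is on Linux, the vm.dirty_* sysctls are the usual knobs for that -- the numbers below are only illustrative, not tuned recommendations:

Code:
  # let dirty pages age longer before the kernel forces them out (centiseconds)
  sysctl -w vm.dirty_expire_centisecs=6000      # often defaults to 3000 (30 s)
  sysctl -w vm.dirty_writeback_centisecs=3000   # often defaults to 500 (5 s)

  # allow more dirty data to pile up in RAM before writeback kicks in
  sysctl -w vm.dirty_background_ratio=20
  sysctl -w vm.dirty_ratio=60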

Which means leaving some RAM for disk cache, of course. Giving sort all available RAM starves the cache and forces more I/O. Off the cuff, 50/50 between sort and cache might be a good split.
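One way to express that split, assuming GNU sort and a Linux /proc/meminfo (the paths are placeholders):

Code:
  # hand sort roughly half of physical RAM, leave the rest to the page cache
  mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  sort -S "$((mem_kb / 2))K" -T /tmp/sort -o out.txt in.txt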

I wonder if your SSD might be a better temp folder than swap -- like you say, you're not using it, and forcing your system to eat into swap can really interfere with caching. An SSD could potentially support a much larger number of streams.
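Pointing sort's temporary files at the SSD is just a matter of -T or TMPDIR; /mnt/ssd is a hypothetical mount point here:

Code:
  # spill sort's temporary runs onto the SSD instead of the default /tmp
  mkdir -p /mnt/ssd/sorttmp
  sort -S 8G --parallel=4 -T /mnt/ssd/sorttmp -o out.txt in.txt

  # or set it for everything launched from this shell
  export TMPDIR=/mnt/ssd/sorttmp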

Last edited by Corona688; 11-14-2014 at 04:14 PM..
 
