03-28-2019
When processing extremely large files, you might consider using split first.
Then, on a multicore machine, spawn several awk or grep processes from a shell script to work on the pieces in parallel, as sketched below.
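A minimal sketch of that approach (the file name, pattern, chunk count and output names are made up for illustration; -n l/N needs GNU split):

#!/bin/bash
# Hypothetical input file and pattern -- adjust to taste.
BIGFILE=big.log
PATTERN='ERROR'

# Split into 8 pieces of roughly equal size without breaking lines
# (-n l/8 is a GNU split extension).
split -n l/8 "$BIGFILE" chunk_

# One background grep per chunk, each writing its own partial result.
for f in chunk_*; do
    grep "$PATTERN" "$f" > "$f.out" &
done
wait    # block until every background grep has finished

# Combine the partial results and clean up the chunks.
cat chunk_*.out > results.txt
rm -f chunk_*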
There are also GNU tools, such as GNU parallel, which offer the same parallelism without hand-written shell logic.
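A sketch with GNU parallel doing the splitting and job management itself (file name and pattern are again placeholders):

# --pipepart splits big.log on line boundaries into ~100 MB blocks and
# feeds each block to a separate grep, one job per CPU core by default.
# -k keeps the output in the same order as the input file.
parallel -k --pipepart -a big.log --block 100M grep 'ERROR' > results.txt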
Either approach is a bit tougher to program, but processing time can drop significantly if you have spare cores and disks fast enough to keep them busy.
Memory also comes into play: since split reads the files, the operating system will cache them in memory if enough is available, making the subsequent awk or grep reads much faster.
The limits are, of course, the free memory on the system and the file system cache configuration in general.
In default configurations, the file system cache can use a large portion of free memory on most Linux/UNIX systems I've seen.
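On Linux, for example, free(1) shows how much memory the cache currently occupies:

# The "buff/cache" column is memory the kernel uses to cache file data;
# it is handed back automatically when applications need it.
free -h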
Hope that helps
Regards
Peasant.