Shell Programming and Scripting: Want to improve the performance of script
Post 302496816 by methyl on Tuesday 15th of February 2011 10:48:40 AM
Suggest you forget trying to search multi-gigabyte flat files with Shell programs and look at writing a proper database application using something like Oracle.
Is this data in a database already, I wonder?
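If a full Oracle installation is too big a first step, the size of the gain can be estimated with SQLite, which ships with most Linux distributions. This is only a minimal sketch, assuming a pipe-delimited flat file; the file name, table name, column names and key value below are placeholders, not details from the original thread:

    #!/bin/sh
    # Pay the load cost once: import the flat file into an indexed table,
    # then run keyed queries instead of re-reading gigabytes per lookup.
    DB=lookup.db
    FLATFILE=bigfile.dat    # hypothetical pipe-delimited input

    sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS records (rec_time TEXT, rec_key TEXT, rec_value TEXT);"
    sqlite3 "$DB" "CREATE INDEX IF NOT EXISTS idx_records_key ON records(rec_key);"

    # Bulk import with '|' as the field separator, then query by key.
    sqlite3 "$DB" ".separator |" ".import $FLATFILE records"
    sqlite3 "$DB" "SELECT COUNT(*) FROM records WHERE rec_key = 'ABC123';"

With Oracle the bulk load would normally go through SQL*Loader or an external table, but the principle is the same: load once, index once, and let the database do the searching.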
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

improve performance by using ls better than find

Hi, I'm searching for files over many AIX servers with the rsh command using this request: find /dir1 -name '*.' -exec ls {} \; and then counting them with "wc", but I would like to improve this search because it takes too long, and to replace find directly with the ls command, but "ls *. " doesn't work. and... (3 Replies)
Discussion started by: Nicol
3 Replies
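For a counting job like the one in that thread, the usual speed-up is to drop the per-file "ls" entirely and count find's output directly. A rough sketch, keeping the name pattern exactly as posted and using a placeholder hostname for the remote AIX box:

    # Count matching files without spawning one ls process per file.
    find /dir1 -name '*.' | wc -l

    # The same one-liner pushed to a remote server over rsh
    # (hostname is a placeholder):
    rsh aixhost01 "find /dir1 -name '*.' | wc -l"

Running the pipeline on the remote side also means only the final count travels back over the network, not one line per file.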

2. Shell Programming and Scripting

How to improve grep performance...

Hi All, I am using the grep command to find the string "abc" in one file. The content of the file is *********** abc = xyz def= lmn ************ I have given the below-mentioned command to redirect the output to a tmp file: grep abc file | sort -u | awk '{print #3}' > out_file Then I am searching... (2 Replies)
Discussion started by: pooga17
2 Replies
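For the grep question above, the common advice is to let a single awk do both the match and the field extraction, and to de-duplicate the already-reduced output afterwards. A sketch, assuming the third whitespace-separated field is what the poster's '{print #3}' was meant to select:

    # One pass: match the line and print only the wanted field,
    # so sort -u works on a much smaller stream.
    awk '/abc/ {print $3}' file | sort -u > out_file

This avoids pushing every matching line through sort before the interesting field has been cut out.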

3. UNIX for Dummies Questions & Answers

Improve Performance

hi, can someone tell me which ways I can improve disk I/O and system process performance? Kindly suggest some commands so I can try them on my test machine. thanks, Mazhar (2 Replies)
Discussion started by: mazhar99
2 Replies
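For the monitoring question above, a few standard observation tools are the usual starting point; the sampling intervals below are arbitrary, and iostat/sar come from the sysstat package on most Linux distributions:

    # CPU, memory, paging and run-queue, sampled every 5 seconds:
    vmstat 5
    # Per-device disk throughput and utilisation, every 5 seconds:
    iostat -x 5
    # Per-process view of the current CPU and memory consumers:
    top
    # Historical CPU and disk figures, if sar data collection is enabled:
    sar -u
    sar -d

Watching these while the workload runs usually shows whether the bottleneck is the disks, memory pressure or a runaway process.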

4. Shell Programming and Scripting

Any way to improve performance of this script

I have a data file of 2 GB. I need to do all these, but it's taking hours; is there anywhere I can improve performance? thanks a lot #!/usr/bin/ksh echo TIMESTAMP="$(date +'_%y-%m-%d.%H-%M-%S')" function showHelp { cat << EOF >&2 syntax extreme.sh FILENAME Specify filename to parse EOF... (3 Replies)
Discussion started by: sirababu
3 Replies

5. Shell Programming and Scripting

Improve the performance of a shell script

Hi Friends, I wrote the below shell script to generate a report on alert messages received on a day. But for processing around 4500 lines (alerts) the script is taking around 30 minutes. Please help me to make it faster and improve the performance of the script. I would be very... (10 Replies)
Discussion started by: apsprabhu
10 Replies
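The script itself is not shown in the preview, but with alert-report scripts in this size range the usual culprit is a read loop that starts one or more external commands per line. A hedged sketch of the general fix, with a made-up delimiter, pattern and field number rather than anything from the poster's script:

    # Slow pattern: one grep and one cut process per alert line.
    #   while read line; do
    #       echo "$line" | grep 'CRITICAL' | cut -d'|' -f3
    #   done < alerts.log

    # Faster: a single awk process makes one pass over the whole file.
    awk -F'|' '/CRITICAL/ {print $3}' alerts.log

On a few thousand lines the difference is typically minutes versus a second or two, because process start-up time dominates the loop version.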

6. Shell Programming and Scripting

How to improve the performance of parsers in Perl?

Hi, I have around 100,000 (one lakh) records. I have used XML for the creation of the data. I have used these 2 Perl modules: use XML::DOM; use XML::LibXML; The data will look like this, and most of it is textual entries. <eid>19000</eid> <einfo>This is the ..........</einfo> ......... (3 Replies)
Discussion started by: vanitham
3 Replies

7. Programming

Help with improve the performance of grep

Input file: #content_1 12314345345 242467 #content_14 436677645 576577657 #content_100 3425546 56 #content_12 243254546 1232454 . . Reference file: content_100 (1 Reply)
Discussion started by: cpp_beginner
1 Replies
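Assuming the goal in that thread is to pull out the input lines whose content tag appears in the reference file, a single grep run with a pattern file usually beats looping over the keys one by one. File names are placeholders:

    # -F: treat each reference line as a fixed string, not a regex.
    # -w: match whole words, so content_1 does not also hit content_100.
    # -f: read all the patterns from the reference file in one go.
    grep -F -w -f reference_file input_file > matched_lines

grep builds its matching machinery once for the whole pattern set, so the cost is one pass over the input file regardless of how many keys the reference file holds.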

8. UNIX for Dummies Questions & Answers

How to improve the performance of this script?

Hi, I wrote a script to convert dates to the format I want. It works fine but the conversion is taking a lot of time. Can someone help me tweak this script? #!/bin/bash file=$1 ofile=$2 cp $file $ofile mydates=$(grep -Po '\d+/\d+/\d+' $ofile) # gets 8/1/13 mydates=$(echo "$mydates" | sort |... (5 Replies)
Discussion started by: vikatakavi
5 Replies
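For the date-conversion thread, the slow part is usually extracting the dates and then running one external command per date; rewriting them in a single awk pass avoids that. The target format is not shown in the preview, so yyyy-mm-dd and the assumption that two-digit years mean 20xx are purely illustrative:

    # Usage: ./fixdates.sh infile outfile  (hypothetical wrapper, same $1/$2 as the thread)
    # Rewrites every m/d/yy date such as 8/1/13 as 2013-08-01 in one pass.
    awk '{
        while (match($0, /[0-9]+\/[0-9]+\/[0-9]+/)) {
            d = substr($0, RSTART, RLENGTH)
            split(d, p, "/")
            repl = sprintf("20%02d-%02d-%02d", p[3], p[1], p[2])
            $0 = substr($0, 1, RSTART - 1) repl substr($0, RSTART + RLENGTH)
        }
        print
    }' "$1" > "$2"

Because the rewritten text no longer contains slashes, the while loop cannot match it again and simply moves on to the next date on the line.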

9. Programming

Improve the performance of my C++ code

Hello, Attached is my very simple C++ code to remove any sequences that are substrings of each other, i.e. any redundant sequence is removed to get unique sequences. Similar to the sort | uniq command, except there is reverse-complementarity for DNA sequences. The program runs well with a small dataset, but... (11 Replies)
Discussion started by: yifangt
11 Replies

10. Shell Programming and Scripting

Bash script search, improve performance with large files

Hello, For several of our scripts we are using awk to search patterns in files with data from other files. This works almost perfectly except that it takes ages to run on larger files. I am wondering if there is a way to speed up this process or have something else that is quicker with the... (15 Replies)
Discussion started by: SDohmen
15 Replies
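The classic fix for awk jobs that look values up in a second file is to read the smaller file into an array first (the FNR==NR idiom) so the large file is scanned exactly once, instead of once per pattern. File names and field positions below are placeholders:

    # First file: remember every key.  Second file: print the lines whose
    # second field is one of the remembered keys.
    awk 'FNR == NR { keys[$1]; next }
         $2 in keys' keys.txt bigdata.txt > matched.txt

Array lookups in awk are hash lookups, so the run time grows with the size of the two files rather than with their product, which is what makes the difference on large inputs.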
ATSADC(1)                              local                             ATSADC(1)

NAME
    atsadc, atsa1, atsaftp, atsahttp -- counter-collection

SYNOPSIS
    atsadc [ t n ] [ ofile ]
    atsa1 [ t n ]
    atsaftp
    atsahttp

DESCRIPTION
    System activity-data can be gathered on special request of a user [see atsar(1)] or automatically, on a routine basis, as described here. Usually the kernel maintains statistical counters that are incremented as various system actions occur. These include counters for CPU utilization, disk utilization, memory utilization and various network statistics.

    The program atsadc and the shell-script atsa1 are used to collect, save, and process these counters.

    The program atsadc (the data collector) samples system data n times with an interval of t seconds between samples, and writes in binary format to ofile or (default) to standard output. The sampling interval t should be greater than 1 second. If t and n are omitted, a special reset-record is written. This facility is used when booting to a multi-user state, to mark the time at which the counters restart from zero. For example, the reset-mark can be added to the daily data by the command:

        /usr/local/bin/atsadc /var/log/atsar/atsa`date +%d`

    Note that this entry is written to the /etc/rc.d/init.d/atsar file.

    The shell-script atsa1 is used to collect and store data in the binary file /var/log/atsar/atsadd where dd is the current day of the month. The arguments t and n cause records to be written n times at an interval of t seconds, or once if omitted. Furthermore this script takes care that log-files older than a week are removed once a day. A file containing following entries should be added to the /etc/cron.d directory to produce records every 20 minutes during working hours and hourly otherwise:

        0     *    * * 0-6 root /usr/local/bin/atsa1
        20,40 8-17 * * 1-5 root /usr/local/bin/atsa1

    See crontab(1) for details.

    The shell-script atsaftp counts the new transfers registered in the FTP-logfile(s) since the previous time this script was activated; the new counters are stored in the /var/log/atsar/ftpstat file in ASCII format. The names of the FTP-logfiles to be watched are specified in the /etc/atsar.conf configuration-file.

    The shell-script atsahttp counts the new transfers registered in the HTTP-logfile(s) since the previous time this script was activated; the new counters are stored in the /var/log/atsar/httpstat file in ASCII format. The names of the HTTP-logfiles to be watched are specified in the /etc/atsar.conf configuration-file.

    Both scripts must be activated just before the program atsadc is started, which also collects these counters.

FILES
    /var/log/atsar/atsadd
        Daily data file, where dd are digits representing the day of the month.

SEE ALSO
    atsar(1), crontab(1)

AUTHOR
    Gerlof Langeveld, AT Computing (gerlof@ATComputing.nl)

AT Computing                         July 2004                           ATSADC(1)