Hi all, we have a performance issue in Unix reading a file line by line. I am looking at processing all the records.
Description: our script reads data from a flat file, picks up the first four characters of each line, sets up variables according to that value, and appends the final output to another flat file, as shown below.
Concern: the script works fine, but reading line by line is slow. We are looking for something that reads all the lines at once, dynamically identifies the first four characters, sets up the individual variables accordingly, and finally appends the values.
Please find attached the script and a sample input data file.
actual script:
---------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------
Your script is slowed down considerably by, on average, 5 calls to external programs and subshells for each line of the input file, which adds up to 50,000 (!) calls with the 10,000-line input sample in post #1.
Instead of :
Code:
record_type=`echo $record | sed 's/\(^....\).*/\1/'`
try
Code:
record_type=${record%"${record#????}"}
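As a quick illustration of how these nested parameter expansions peel off the first four characters (the sample record value here is made up):

```shell
record="1111|some|sample|data"
# ${record#????} strips the first four characters, leaving the tail;
# ${record%"$tail"} then strips that tail from the end, leaving the head.
record_type=${record%"${record#????}"}
echo "$record_type"   # prints 1111
```

Both expansions happen inside the shell, so no subshell or external utility is forked.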
And for all the expr statements in the case statement, you can use ksh's arithmetic expansions:
For example instead of
Code:
a2=` expr ${a2} + 1 `; a3=` expr ${a3} + 2 `
try:
Code:
a2=$(( a2 + 1 )); a3=$(( a3 + 2))
and instead of
Code:
a7=` expr ${a7} + 1 `; a5=` expr ${a5} + 3 `
try:
Code:
a7=$(( a7 + 1 )); a5=$(( a5 + 3 ))
and so on for all the other lines with ` expr ...` statements
--
Also instead of
Code:
cat test2.tlog | while read line1
do
line_no=` expr ${line_no} + 1 `
process_each_record ${line1}
done
You can try:
Code:
while read line1
do
line_no=$((line_no + 1))
process_each_record "${line1}"
done < test2.tlog
Note that there should be double quotes around $line1 to avoid unintended field splitting and wildcard expansion by the shell.
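A small sketch of the difference the quoting makes (the variable value here is made up):

```shell
line1='a   b   c'
set -- $line1      # unquoted: the shell splits the value on whitespace
echo $#            # prints 3
set -- "$line1"    # quoted: passed through as one argument, spacing preserved
echo $#            # prints 1
```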
If you do this throughout the script, it will result in zero calls to external programs or subshells, which should bring a dramatic performance gain.
--
Yet another option - but that is a matter of taste - is not to cut off the first four characters at all, but instead use the case statement's pattern matching:
Code:
case $record in
1111*)
    foo
    ;;
1112*)
    bar
    ;;
...
esac
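Putting that together, a minimal runnable sketch of the pattern-matching approach; the record value and the counter assignments are illustrative, not taken from the original script:

```shell
# Hypothetical record and counters, in the style of the original script.
a2=0; a3=0
record="1111|rest-of-line"
case $record in
1111*) a2=$((a2 + 1)) ;;   # prefix match: no substring extraction needed
1112*) a3=$((a3 + 1)) ;;
*)     : ;;                # unrecognised record type: ignore
esac
echo "$a2 $a3"             # prints 1 0
```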
Last edited by Scrutinizer; 05-04-2016 at 08:52 PM..
If we take your original code (after converting the DOS <carriage-return><linefeed> line terminators into UNIX <linefeed> (AKA <newline>) single-character line terminators, and modifying your sample input file (test2.txt) the same way) and time running your script 7 times on a MacBook Pro built about 2 years ago (2.8GHz Intel Core i7, 1TB SSD, OS X El Capitan Version 10.11.4), the average time output looks like:
Code:
real 1m13.54s
user 0m25.80s
sys 0m45.76s
(i.e., 73.54 seconds).
If we modify your code using the suggestions Scrutinizer supplied (using a logical equivalent of:
Code:
record_type=${record%"${record#????}"}
to extract the first four characters of each record), and also get rid of the test for the existence of the output file and redirect the output from the read loop (which opens and closes the output file once, instead of once for each line read from your input file, getting rid of 9,999 opens and closes when processing your sample input), and time the following script:
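The I/O restructuring described here - a single redirection on the `done` instead of a per-line append inside the loop - has roughly this shape; the output file name is hypothetical, and `IFS=` and `-r` are added so read takes each line verbatim:

```shell
# out.txt is opened once for the whole loop, not once per line.
line_no=0
while IFS= read -r line1
do
    line_no=$((line_no + 1))
    printf '%s %s\n' "$line_no" "$line1"
done < test2.tlog > out.txt
```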
You didn't say which version of the Korn shell you're using. The above code works with any Korn shell. If you have a 1993 or later version of ksh, you can change the line:
Code:
case ${1%"${1#????}"} in
to:
Code:
case ${1:0:4} in
and further reduce the average running time to:
Code:
real 0m0.17s
user 0m0.14s
sys 0m0.02s
That is better than a 99.75% reduction from your original script's running time.
If you are using a 1988-vintage ksh and don't have a /bin/ksh93 that you can use, we can still incorporate Scrutinizer's second suggestion by changing the above case statement to just:
Code:
case $1 in
and change the patterns from the form:
Code:
(1111) assignments...
to:
Code:
(1111*) assignments...
and still reduce the average running time to:
Code:
real 0m0.28s
user 0m0.24s
sys 0m0.03s
which is still about a 99.62% reduction from your original script's running time and also works with any version of the Korn shell.
I hope this gives you some idea of how significant the improvement in running time can be when you get rid of unneeded invocations of external utilities and unneeded output file opens and closes.
In addition, it will not matter much performance-wise, but the function could be further reduced to something like this, making it a bit easier to understand and thus more maintainable.
In principle, the idea of switching to awk for the entire processing is not a bad one, although you shouldn't expect another performance improvement as noticeable as the one gained before.
But - you can't use shell syntax inside awk. E.g.
Code:
${record%"${record#????}"} ---> substr(record, 1, 4)
a2=$(( a2 + 1 )) ---> a2+=1
$xyz ---> xyz (unless you want to access field xyz)
And, don't cat a file into awk's stdin - awk can open and read a file itself.
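For instance, the counting logic might be sketched in awk like this; the record types and counter names are illustrative, and the input file is passed directly as an argument, with no cat:

```shell
# Hypothetical awk translation of the per-record-type counters.
awk '
substr($0, 1, 4) == "1111" { a2 += 1; a3 += 2 }
substr($0, 1, 4) == "1112" { a7 += 1; a5 += 3 }
END { print a2, a3, a7, a5 }
' test2.tlog
```

One awk process handles every line, replacing thousands of per-line forks.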
Kindly let me know if you have any suggestions on this.
---------- Post updated at 01:12 AM ---------- Previous update was at 12:41 AM ----------
Thanks a lot Don for your support...
My initial script took around 21 min 35 sec, with the new code it is taking 15 sec.
I would have thought that by now you would know that the awk command language and the shell command language are not the same.
I could rewrite the 62 line, 1,595 character ksh93 script I suggested to instead be a ksh script invoking awk that would run about twice as fast as the ksh93 script, and still produce exactly the same output as the other three scripts.
But, I would never attempt to do that if I thought you were going to try to convert that readable, maintainable, understandable 42 line, 840 character script into an unreadable, unmaintainable, not understandable 1-liner. And, if I were to create such a script, it would not contain an unneeded use of cat that would only slow it down (just like the cat in your original script did).
---------------------------------
I'm glad to hear that one of the three scripts I suggested is working well for you.