Bash script search, improve performance with large files


# 9  
Would you mind also timing the proposal in post #3?
# 10  
Quote:
Originally Posted by RudiC
Would you mind also timing the proposal in post #3?

I actually did, but I edited it into the post afterwards.


Code:
awk  prijslijst_filter.csv lowercase_winnaar.csv  9,51s user 0,13s system 99% cpu 9,647 total

Since the difference between the grep and this newer awk is only a matter of seconds, I am not sure which one I am going to use. The awk one is preferred, as it is a drop-in replacement for the current one, but the grep one is still quite a lot faster.


Grep also has the advantage that it handles the ignore-case part better. I never seem to get that working properly with the awk version, even with the forced lowercase on both files.
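For the ignore-case part, here is a minimal sketch of both routes, assuming GNU awk (gawk) is available and reusing the file names from the earlier posts; the output file matched.csv and the redirections are just illustrative:

Code:
# grep reads one pattern per line from the filter file; -i makes it case-insensitive
grep -i -f prijslijst_filter.csv lowercase_winnaar.csv > matched.csv

# gawk-only equivalent: IGNORECASE=1 makes the ~ match case-insensitive,
# so neither the patterns nor the data need to be lowercased first
gawk -v IGNORECASE=1 '
NR==FNR   {SRCH = SRCH DL $0; DL = "|"; next}
$0 ~ SRCH {print}
' prijslijst_filter.csv lowercase_winnaar.csv > matched.csv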




I just tried your awk solution again, RudiC, and it seems something is wrong with it. I did not check the first time because I had to leave right after testing it (the files got overwritten afterwards).


It seems the part you gave does not produce any output files for the rest of the script to work with.


Code:
awk '
# first file: collect all filter words into one alternation regex "word1|word2|..."
NR==FNR                 {SRCH=SRCH DL $0
                         DL = "|"
                         next
                        }
# second file: any line matching a filter word goes to the "removed" file
tolower($0) ~ SRCH      {print > "'"$PAD/removed_woord_blaat33.csv"'"
                         next
                        }
# everything else goes to the "filtered" file
                        {print > "'"$PAD/filtered_winnaar_blaat33.csv"'"
                        }
' prijslijst_filter.csv lowercase_winnaar.csv


I tried it with and without time to see if that caused the issue, but it did not change the outcome. Neither of the new files is created.
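A few quick sanity checks that may narrow this down, assuming the snippet runs inside the same script where PAD is set (the script name below is just a placeholder):

Code:
echo "PAD is: '$PAD'"          # an empty PAD would redirect into /removed_woord_blaat33.csv
ls -ld "$PAD"                  # the target directory must exist and be writable
wc -l prijslijst_filter.csv lowercase_winnaar.csv   # both input files must exist and be non-empty

bash -x ./yourscript.sh        # placeholder name; tracing shows how the redirection targets expand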

Last edited by SDohmen; 03-28-2019 at 11:29 AM.. Reason: new info
# 11  
When processing extremely large files you might consider using split first.
Then, on a multicore machine, spawn several awk or grep processes from the shell script to work on the pieces in parallel.
There are also GNU tools which offer this kind of parallelism without extra shell logic.

It is a bit tougher to program, but processing time will be reduced significantly if you have spare cores and the disks are fast enough to keep up.

Memory also comes into play: split reads the files, and the operating system will cache them in memory if enough is available, which makes the subsequent awk or grep processes much faster on read operations.

Of course, the limits are the free memory on the system and the file system cache configuration in general. In the default configuration, the file system cache can use a large portion of the free memory on most Linux/Unix systems I've seen.
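As a rough illustration of the GNU-tool route, assuming GNU parallel is installed and reusing the file names from the earlier posts (note the big file is read twice, once per output file):

Code:
# matching lines -> removed file; --pipepart splits the file into line-aligned
# blocks read directly from disk, -k keeps the output in the original order
parallel --pipepart -a lowercase_winnaar.csv --block 10M -k \
    grep -i -f prijslijst_filter.csv > "$PAD/removed_woord.csv"

# non-matching lines -> filtered file
parallel --pipepart -a lowercase_winnaar.csv --block 10M -k \
    grep -i -v -f prijslijst_filter.csv > "$PAD/filtered_winnaar_2.csv"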

Hope that helps
Regards
Peasant.
# 12  
Quote:
Originally Posted by Peasant
When processing extremely large files you might consider using split first.
Then, on a multicore machine, spawn several awk or grep processes from the shell script to work on the pieces in parallel.
There are also GNU tools which offer this kind of parallelism without extra shell logic.

It is a bit tougher to program, but processing time will be reduced significantly if you have spare cores and the disks are fast enough to keep up.

Memory also comes into play: split reads the files, and the operating system will cache them in memory if enough is available, which makes the subsequent awk or grep processes much faster on read operations.

Of course, the limits are the free memory on the system and the file system cache configuration in general. In the default configuration, the file system cache can use a large portion of the free memory on most Linux/Unix systems I've seen.

Hope that helps
Regards
Peasant.

This sounds very interesting, but there are two issues.


1. I have to split the files into smaller files (around 5k lines each, I guess), which isn't a big deal but is a little bit annoying.
2. Since this is running in a script, I have no idea how to call multiple instances of awk at the same time. Everything I know says that a script handles each part one after another, not at the same time. If you have an idea how to accomplish that, please let me know, since it does sound interesting/promising (see the sketch below).
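A rough sketch of the split-and-background approach, assuming GNU split and reusing the file and variable names from the script in the earlier posts (the part_ prefix and the merged file names are just illustrative):

Code:
# split the big file into 4 roughly equal, line-aligned pieces: part_aa .. part_ad
split -n l/4 lowercase_winnaar.csv part_

# start one awk per piece in the background (&), each writing its own output pair
for p in part_*; do
    awk '
    NR==FNR            {SRCH=SRCH DL $0; DL="|"; next}
    tolower($0) ~ SRCH {print > ("'"$PAD"'/removed_" FILENAME); next}
                       {print > ("'"$PAD"'/filtered_" FILENAME)}
    ' prijslijst_filter.csv "$p" &
done

# wait blocks until all background awk processes have finished
wait

# merge the partial results and clean up
cat "$PAD"/removed_part_*  > "$PAD/removed_woord.csv"
cat "$PAD"/filtered_part_* > "$PAD/filtered_winnaar_2.csv"
rm -f part_* "$PAD"/removed_part_* "$PAD"/filtered_part_*

Each background awk re-reads the small filter file, which costs little; the merge at the end restores a single output pair for the rest of the script.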


CPU and memory aren't the issue, as both are sufficient. The only thing that can stall the script is the other scripts that run alongside it. I tried spreading them out as much as possible, but some just take quite long to run, and that's why I want to slim them down so they don't run at the same time.
# 13  
Quote:
Originally Posted by RudiC
You might want to build an "alternation regex", with not too many keywords, and modify the matching slightly. Compare performance of

Code:
awk '
NR==FNR                 {SRCH=SRCH DL $0
                         DL = "|"
                         next
                        }
tolower($0) ~ SRCH      {print > "'"$PAD/removed_woord.csv"'"
                         next
                        }

                        {print > "'"$PAD/filtered_winnaar_2.csv"'"
                        }
' file3 file4 

real    0m2,328s
user    0m2,318s
sys    0m0,005s

to this


Code:
time awk '
NR==FNR         {id[$0]
                 next
                }
                {for (SP in id) if (tolower($0) ~ SP)    {print > "'"$PAD/removed_woord.csv"'"
                                                 next
                                                }
                }
                {print > "'"$PAD/filtered_winnaar_2.csv"'"
                }
' file3 file4
real    0m17,038s
user    0m16,995s
sys    0m0,025s

seems to make a factor of roughly 7. The output seems to be identical. Please try and report back.



I just did this one again and got it working. I noticed the -F";" was missing, so I added it and it worked flawlessly. The complete script now runs in about 20 seconds, where it took more than 7 minutes before.
# 14  
Congrats, that would be a performance gain of roughly a factor of 21!


I'd be surprised if the script needed the -F";", as it doesn't work on individual fields but only on the entire line, $0.
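For what it's worth, a tiny check with made-up sample data shows that the field separator does not influence a match against $0:

Code:
printf 'foo;BAR;baz\n' | awk        'tolower($0) ~ /bar/ {print "match"}'
printf 'foo;BAR;baz\n' | awk -F";"  'tolower($0) ~ /bar/ {print "match"}'
# both print "match" -- -F";" only matters once $1, $2, ... are referenced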