Speeding up shell script with grep


 
# 1  
Old 03-25-2017

Hi guys, hoping someone can help.

I have two files, both containing UK phone numbers.

master is a file which has been collated over a few years and currently contains around 4 million numbers.

new is a file which also contains around 4 million numbers. I need to split new into two separate files: one with numbers that already exist in master, and one with numbers that don't. I can do this, but it takes around 80 hours to complete! Can anyone offer any suggestions on how to speed this up?

Code:
while read -r phone_number; do

        echo "checking master for phone number $phone_number"

        if grep "$phone_number" master.csv; then
                echo "$phone_number already exists in master file"
                echo "$phone_number" >> unusable_numbers
        else
                echo "$phone_number looks good we can use this saving to usable_numbers"
                echo "$phone_number" >> usable_numbers
        fi

done < new

Any help would be greatly appreciated.

Last edited by RudiC; 03-25-2017 at 06:58 AM..
# 2  
Old 03-25-2017
Quote:
Originally Posted by dunryc
master is a file which has been collated over a few years and currently contains around 4 million numbers.

new is a file which also contains around 4 million numbers. I need to split new into two separate files: one with numbers that already exist in master, and one with numbers that don't. I can do this, but it takes around 80 hours to complete! Can anyone offer any suggestions on how to speed this up?
Use grep -f: instead of writing a shell loop, grep can do it in one step (and presumably much faster):

Code:
grep -f new_phone master > in_new_phone_and_master

You might want to try this with some small files first to get a feeling for what it produces. You can also use all the other grep options in conjunction with this, especially -F (use fixed strings for matching), which speeds up grep's operation considerably. Also notice the -v option, which inverts the outcome. See the man page of grep for details.
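Putting those options together, a minimal sketch of the full split might look like this. It assumes the phone number is the first comma-separated field of master.csv and that new holds one bare number per line; the sample numbers below are invented test data, not anything from the thread:

```shell
# Hypothetical sample data: number in field 1 of master.csv, one number per line in new.
printf '07700900001,2015\n07700900002,2016\n' > master.csv
printf '07700900002\n07700900003\n' > new

# Extract the phone column, then match those as fixed (-F), whole-line (-x) strings.
cut -d',' -f1 master.csv > master_numbers
grep -Fx  -f master_numbers new > unusable_numbers   # numbers already in master
grep -Fxv -f master_numbers new > usable_numbers     # numbers not yet in master
```

The -x flag matters here: without it, a short number could match as a substring of a longer one.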

I hope this helps.

bakunin
# 3  
Old 03-25-2017
In addition to what bakunin suggested, you might also consider the following to more closely match the output produced by your current script...

Making the wild assumptions that:
  1. master.csv is a character separated values file with comma as the character separating fields, and
  2. the field containing the phone number in master.csv is the 1st field
how long does the following script take:
Code:
awk -v fn=1 -F',' '
FNR == NR {
	p[$fn]
	next
}
{	print "checking master for phone number", $1
	if($1 in p) {
		print $1, "already exists in master file"
		print > "unusable_numbers"
	} else {
		print $1, "looks good we can use this saving to usable_numbers"
		print > "usable_numbers"
	}
}' master.csv new

to do the same job?

If the field number in master.csv is not the 1st field, change the value assigned to the fn variable from 1 to the field number of the field containing the phone number.

If the field separators in master.csv are not commas, change the character in the -F option-argument to the desired character.
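The FNR == NR lookup idiom the script relies on can be seen on tiny sample files (the numbers here are invented test data):

```shell
# Hypothetical sample data to show the FNR == NR lookup idiom.
printf '07700900001,Alice\n07700900002,Bob\n' > master.csv
printf '07700900002\n07700900003\n' > new

awk -F',' '
FNR == NR { p[$1]; next }                    # first file: build lookup table
{ print $1, (($1 in p) ? "known" : "new") }  # second file: O(1) hash check
' master.csv new
```

While the first file is being read, FNR (per-file line number) equals NR (overall line number), so those lines only populate the array; every later line is tested against it in constant time instead of re-scanning master.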
# 4  
Old 03-25-2017
EDIT: Sorry, forget my solution; there was a flaw in the logic of that script which meant it wasn't fit for purpose, and it certainly wasn't any faster. Apologies!

Last edited by drysdalk; 03-25-2017 at 07:29 AM..
# 5  
Old 03-25-2017
Using grep -f works, but with two files as large as indicated it will take serious time, and may eventually run out of memory. Try
Code:
sort master.csv new | uniq -d

then use the resultant file in a similar way (with uniq -u) to extract the unique values from either original file.
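A minimal sketch of the full split along these lines, assuming each file holds one bare number per line with no internal duplicates (master.csv's phone column would first need extracting, e.g. with cut; the sample numbers are invented test data):

```shell
# Hypothetical sample data; one number per line, no duplicates within a file.
printf '07700900001\n07700900002\n' > master_numbers
printf '07700900002\n07700900003\n' > new

sort master_numbers new | uniq -d > unusable_numbers   # lines present in both files
sort new unusable_numbers | uniq -u > usable_numbers   # lines present only in new
```

Where available, comm on pre-sorted files does the same job in one pass each: comm -12 for the intersection, comm -23 for lines only in the first file.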
Comparison of both approaches on files of ~20k lines each:
Code:
time grep -ffile2 file1
real    0m0.352s
user    0m0.280s
sys     0m0.052s
time sort file[12] | uniq -d
real    0m0.037s
user    0m0.032s
sys     0m0.004s

EDIT: Times spent for two files with roughly 4E6 entries each, and about 1E6 lines of overlap (on a two-processor Linux host):
Code:
time sort file[12] | uniq -d > f1
real    0m14.975s
user    0m27.048s
sys     0m0.792s
time sort file1 f1 | uniq -u > f2
real    0m9.331s
user    0m16.488s
sys     0m0.572s


Last edited by RudiC; 03-25-2017 at 01:22 PM..