Search multiple patterns in multiple files


 
# 1  
Old 01-20-2011
Search multiple patterns in multiple files

Hi,
I have to write a script that searches for a list of numbers in a set of zipped files.
For example, file1.txt contains the numbers. File1.txt holds about 500,000 numbers, and I have to search for each number in the zipped files (there are around 1000 zipped files, each about 5 MB).
If a number is not found in any zipped file, I have to write it to an output file.

file1.txt
--------
Code:
7234834
2342346
65745654634
345423534
.
.
.
.
783458934
345345

Search all these numbers in zipped files.
Code:
abc.txt.gz.processed
xyz.txt.gz.processed
ere.txt.gz.processed
gfdf.txt.gz.processed
dfg.txt.gz.processed
dgg.txt.gz.processed
.
.
.
kjh.txt.gz.processed

outputfile.txt
Code:
number 35345, not found.
number 345345, not found.
number 87979, not found.
number 234234234, not found.
.
.
.
number 234234234, not found.
number 234234234, not found.



Sample zipped file format (I am providing 2 records of the zipped file):

Code:
KKKKK 1454545345 842011011920025500000001287009909427909 031378055730681 KKKKKK AAA MMMMMMM034535345345345345
.
.
.
.
 
KKKKK 1454545345 842011011920025500000001287009909427909 03156456456546 KKKKKK AAA MMMMMMM034535345345345345

The highlighted field (e.g. 031378055730681) is the number to search for.

I wrote a script, but it is taking too much time: around 2 minutes to search for one number, so searching all 500,000 numbers would take 500,000 * 2 minutes. That is not a feasible solution, because I have to run this script daily. If I run the command in the background, Unix throws an error that it can't fork any more processes.

The script that I wrote is:
Code:
#!/usr/bin/ksh
for num in `cat file1.txt`
do
find . -name "*processed" -print | xargs gunzip -c | grep -q $num || echo "$num not found" >> outputfile.txt &
done

Please help me fine-tune this script so that I can get the output in less time.
Thanks

Last edited by Franklin52; 01-21-2011 at 03:49 AM.. Reason: Please use code tags
# 2  
Old 01-20-2011
Do you have room to unzip all the files somewhere?

Unzip all files and append to one big file (your havelist), then use awk to check each line of your havelist against file1.txt:

Code:
( find . -name "*processed" -print | xargs gunzip -c ) > /scratch/havelist
awk ' NR == FNR { F[i++]=$0; next}
    { for(i in F)if(index($0, F[i])) delete F[i]; }
    END { for(i in F) print "number "F[i]" not found." } ' file1.txt havelist > outputfile.txt
rm /scratch/havelist

awk may chew a big chunk of memory as it loads the 5,000,000 numbers into its array, and you won't get any output till it's done, but it will be much quicker than your original attempt.
Some more efficiency can be gained if you can say that each processed-file line matches at most 1 number from file1.txt (i.e. a line in a processed file doesn't contain 2 or more numbers you are looking for).

But, with record counts this large you should really be considering using a database rather than flat/zipped files.
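Another route worth sketching: let grep -F load the whole number list once and print every number it actually sees, then diff that against the full list. This assumes GNU grep (for the -o flag); the demo data below stands in for the real file1.txt and *.processed files.

```shell
# Sketch of an alternative, assuming GNU grep (-o) is available.
# The demo data below stands in for the real file1.txt and *.processed files.
d=$(mktemp -d) && cd "$d"
printf '%s\n' 7234834 2342346 999999 > file1.txt
printf 'KKK 7234834 rec\nKKK 2342346 rec\n' | gzip > a.txt.gz.processed

# Decompress everything once; -oFf prints each fixed-string match it finds:
find . -name "*processed" -print | xargs gunzip -c \
    | grep -oFf file1.txt | sort -u > found.txt

# Anything in file1.txt but not in found.txt was never seen:
sort -u file1.txt | comm -23 - found.txt \
    | awk '{print "number " $0 " not found."}' > outputfile.txt
cat outputfile.txt
```

comm needs both inputs sorted, which the two sort -u calls guarantee; the comparison itself is then a single linear merge rather than a pattern scan per number.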

---------- Post updated at 08:47 AM ---------- Previous update was at 08:15 AM ----------

If you just don't have room to extract to a scratchfile you can do the search through a pipe:

Code:
( find . -name "*processed" -print | xargs gunzip -c ) | awk ' NR == FNR { F[i++]=$0; next}
    { for(i in F)if(index($0, F[i])) delete F[i]; }
    END { for(i in F) print "number "F[i]" not found." } ' file1.txt - > outputfile.txt


Last edited by Chubler_XL; 01-20-2011 at 06:55 PM.. Reason: Orginal solution listed processed lines not in file1.txt
# 3  
Old 01-21-2011

Thanks Chubler for the reply...
Can you please explain what is going on inside the command?

The output file made is 11.4 GB
Code:
-rw-r--r-- 1 user group 11388572164 Jan 21 11:32 outputfile.txt

Is the error occurring because of the large size of the file?
It's giving the error after running the command:
Code:
awk ' NR == FNR { F[i++]=$0; next}
{ for(i in F)if(index($0, F[i])) delete F[i]; }
END { for(i in F) print "number "F[i]" not found." } ' file1.txt havelist > outputfile.txt

-----------------------------------------------------------------------
Code:
Syntax Error The source line is 1.
The error context is
NR == >>> <<<
awk: 0602-500 Quitting The source line is 1.

-----------------------------------------------------------------------

Please suggest

---------- Post updated at 02:13 AM ---------- Previous update was at 01:10 AM ----------

After executing it, I am getting the below error.
Code:
awk: 0602-561 There is not enough memory available now.
 The input line number is 3.47414e+06. The file is Bigfile.
 The source line number is 2.




---------- Post updated at 04:02 AM ---------- Previous update was at 02:13 AM ----------

Hi Chubler_XL,
Thanks for the quick reply.
It's working, but it is still taking too much time. Is it possible to search in a faster way?

Last edited by Franklin52; 01-21-2011 at 03:51 AM..
# 4  
Old 01-23-2011
With some tweaking it may still be possible to get this brute-force solution to work fast enough, but it's not looking good. I suspect you are running out of physical memory and the system is swapping. How big is file1.txt, and how much physical memory do you have on your system?

You could consider retaining some of the work from previous scans. This really depends on your dataset and leads to the following questions about your data.

How static is it?
I'd assume the zip file contents don't change much, but perhaps you remove old zips and add new ones?

How about the contents of file1.txt: is it completely different each night? Are any items searched for again at later dates? (For example, if we know that XYZ wasn't in the zips last night and it's searched for again, all we need to scan are the files added since last night's scan.)
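That incremental idea can be sketched with a stamp file and find -newer: touch the stamp after each run, and the next run only decompresses archives modified since. Demo data only; the stamp-file name is illustrative.

```shell
# Sketch of the incremental idea: keep a stamp file from the previous run
# and only decompress archives modified after it. Demo data only.
d=$(mktemp -d) && cd "$d"
printf 'old record\n' | gzip > old.txt.gz.processed
touch last_scan                # stamp left by "last night's" run
sleep 1                        # ensure a strictly newer mtime on the next file
printf 'new record\n' | gzip > new.txt.gz.processed

# Only archives newer than the stamp get scanned tonight:
find . -name "*processed" -newer last_scan -print \
    | xargs gunzip -c > tonight.txt
touch last_scan                # reset the stamp for tomorrow
cat tonight.txt
```

Numbers already reported as "not found" last night then only need to be re-checked against tonight's new archives, not the whole set.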

Last edited by Chubler_XL; 01-23-2011 at 06:26 PM..
# 5  
Old 01-23-2011
What Operating System and version are you running? This is very important. I haven't seen the "fork" error in many years.
We gather that you have ksh.

Quote:
#!/usr/bin/ksh
for num in `cat file1.txt`
do
find . -name "*processed" -print | xargs gunzip -c | grep -q $num || echo "$num not found" >> outputfile.txt &
done
The script posted makes no sense because it does not search for zipped files.


If "file1.txt" contains 5,000,000 numbers, this level of blunt processing is absurd in a Unix shell when searching 5 GB of data.
You appear to want to search a specific field at a specific position within a record, but the sample data you provided in "file1.txt" does not match the exact length of the highlighted field.


Do you have use of a professional Systems Analyst?
Do you have a database engine (e.g. Oracle) and use of professional Database Programmers?

IMHO you are way out of your depth. Hire a professional.



The background "&" within a 5,000,000-iteration loop is why you are getting "fork" errors. It would take a seriously special kernel build to create a Unix which could cope with 5,000,000 concurrent processes (hmm, tempted to try it). I am surprised that you did not crash the computer with this irresponsible, uninformed and ignorant code.
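For what it's worth, the usual fix for that class of fork error is to cap the number of concurrent background jobs. A minimal, purely illustrative sketch (the subshell body is a placeholder, not the real search):

```shell
# Sketch: cap concurrent background jobs instead of forking one per number.
# The subshell body is a placeholder standing in for the real search.
d=$(mktemp -d)
max=4 n=0
for num in 1 2 3 4 5 6 7 8
do
    ( echo "searched $num" >> "$d/log" ) &   # placeholder "search"
    n=$((n+1))
    if [ "$n" -ge "$max" ]; then
        wait                                 # drain the batch before forking more
        n=0
    fi
done
wait
wc -l < "$d/log"
```

With the cap, the process table never holds more than $max search jobs at once, so the kernel's per-user process limit is never hit.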


PS: Given a decent commercial database engine and some top-class database programmers, this problem is solvable.

Last edited by methyl; 01-23-2011 at 06:46 PM.. Reason: Footnote:, layout and more layout
# 6  
Old 01-24-2011
Dear Chubler_XL,
Thanks for your involvement in solving the issue that I am facing.
The file file1.txt will be different each day. It contains fixed-length data like:
87654321089
09987625347
78346347655
23489237489
.
.
.
.
73246782364
23423423444

And I have to search all this data in zipped files.
Sample Zipped file record:
--------------------------
KKKKK 1454545345 842011011920025500000001287009909427909 031378055730681 KKKKKK AAA MMMMMMM034535345345345345
.
.
.
.

KKKKK 1454545345 842011011920025500000001287009909427909 03156456456546 KKKKKK AAA MMMMMMM034535345345345345
The numbers in file1.txt appear in the zipped files at a particular position, say from position 60 to position 97.
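Since the field position is fixed, one could sketch a version that does a single hash lookup per record instead of scanning all patterns. This assumes the number starts at column 60 and is at most 38 characters wide (adjust substr() to the real layout); the demo data below stands in for the real files.

```shell
# Sketch exploiting the fixed position: one hash lookup per record instead of
# scanning all patterns. Assumes the number starts at column 60 and is at
# most 38 characters wide; adjust substr() to the real layout. Demo data only.
d=$(mktemp -d) && cd "$d"
printf '%s\n' 031378055730681 000000000000001 > file1.txt
printf '%-59s%s\n' 'KKKKK 1454545345 HEADER' 031378055730681 \
    | gzip > a.txt.gz.processed

find . -name "*processed" -print | xargs gunzip -c | awk '
    NR == FNR { want[$0]; next }         # pass 1: load the numbers to find
    { key = substr($0, 60, 38)           # pass 2: cut the fixed-width field
      gsub(/ /, "", key)                 # strip any space padding
      if (key in want) delete want[key]  # O(1) lookup per record
    }
    END { for (n in want) print "number " n " not found." }
' file1.txt - > outputfile.txt
cat outputfile.txt
```

The inner loop is now constant time per record regardless of how many numbers are in file1.txt, instead of one index() call per remaining pattern.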

The number to be searched for may or may not be repeated the next day.

File1.txt size:
---------------
3 MB

Physical memory
--------
size inuse free pin virtual
memory 8388608 8372636 15972 2270610 5978749
pg space 20971520 319929

work pers clnt other
pin 1925282 0 0 345328
in use 5910340 0 2462296

PageSize PoolSize inuse pgsp pin virtual
s 4 KB - 7994892 319929 1989842 5601005
m 64 KB - 23609 0 17548 23609

Operating System:
-----------------
UNIX-AIX

AIX version:
------------
AIX legzone1 3 5 00C15CD44C00

---------------------------------------------------------------------------------

Dear Methyl,
Actually the file name is abc.txt.gz.processed; that's why I am searching for "*processed". We are first trying to search with the help of shell scripting, and if that is not possible, then we will use a database to search.

Last edited by vsachan; 01-24-2011 at 01:42 AM..
# 7  
Old 01-24-2011
@vsachan
Having re-read my post, I was a bit blunt yesterday. On my computer I can't unzip a file with the wrong file extension.

I have written one-off searches of large numbers of reasonably large compressed text files in the shell and realise the practical limits. On your scale this is not a job for shell programming.

This data came from somewhere and I would be very surprised if it only ever existed as a flat file. I suppose that you might be trying to find data rejections in context?

I am very concerned that this now appears to be a fuzzy search. The length of the search string and the position and size of the searchable data vary between your posts. This makes the software design so much more difficult, and I withdraw any implication that this task is feasible.

On further reflection, I strongly advise that you get a Systems Analyst and a Database Designer on this job, with a view to possibly using a database approach. To my mind it is too early to engage a Database Programmer.