awk -F "|" '
# hash the search list
NR==FNR {L[$1]=0; next}
# now proceed with the data files
# print if the following is true
($4~/^3078=/ && ($2 in L) && L[$2]++==0)
' searchlist.txt datafile1.txt datafile2.txt
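A minimal sketch of how this behaves, using made-up pipe-delimited sample data (the field layout and values here are invented for illustration):

```shell
# Hypothetical sample files: searchlist.txt holds the keys,
# datafile1.txt is pipe-delimited with the key in field 2.
cat > searchlist.txt <<'EOF'
AAA
BBB
EOF

cat > datafile1.txt <<'EOF'
x|AAA|y|3078=1|z
x|AAA|y|3078=2|z
x|CCC|y|3078=3|z
x|BBB|y|9999=4|z
EOF

awk -F "|" '
NR==FNR {L[$1]=0; next}                   # hash the search list
($4~/^3078=/ && ($2 in L) && L[$2]++==0)  # first 3078= line per key
' searchlist.txt datafile1.txt
# prints: x|AAA|y|3078=1|z
```

The `L[$2]++==0` test is what makes it "first match only": it is true the first time a key is seen and false afterwards.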
To search for 3078= anywhere in the line:
Code:
awk -F "|" '
# hash the search list
NR==FNR {L[$1]=0; next}
# now proceed with the data files
# print if the following is true
(/\|3078=/ && ($2 in L) && L[$2]++==0)
' searchlist.txt datafile1.txt datafile2.txt
Quote:
Originally Posted by Lucas_0418
What's your environment? I found that your code works in my Cygwin.
Maybe you could put all the patterns in a file, then use grep's -m option to get the first matching line.
Code:
while read a
do
grep -h -m 1 "$a" *.*
done < yourpatternfile
grep -m1 stops reading each file after its first match, so with multiple files you get one line per file.
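A quick sketch of that behavior with throwaway files (the filenames and contents are hypothetical):

```shell
printf 'foo\nbar\nfoo\n' > a.txt
printf 'bar\nfoo\n' > b.txt

# -m 1 stops reading each file after its first match;
# -h suppresses the "filename:" prefix on the output lines.
grep -h -m 1 "foo" a.txt b.txt
# prints "foo" twice: the first match from each file
```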
awk is much more flexible:
Code:
awk -F "|" -v low=745 -v high=755 '
# build the Lookup hash
BEGIN {for (i=low; i<=high; i++) L["4="i]}
# main loop
# if $2 is in the Lookup hash and the line contains a field beginning with 3078=
($2 in L) && /\|3078=/ {
print
# delete from the Lookup hash
delete L[$2]
}
' datafile*.txt
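A minimal end-to-end sketch with invented data (with the range 745-755, the 4=750 lines qualify and 4=760 does not):

```shell
# Hypothetical pipe-delimited data: field 2 is a 4=NNN tag,
# 3078= may appear in a later field.
cat > datafile1.txt <<'EOF'
h|4=750|x|3078=a|y
h|4=750|x|3078=b|y
h|4=760|x|3078=c|y
h|4=745|x|9999=z|y
EOF

awk -F "|" -v low=745 -v high=755 '
BEGIN {for (i=low; i<=high; i++) L["4="i]}    # build the lookup hash
($2 in L) && /\|3078=/ {print; delete L[$2]}  # first matching line per key
' datafile1.txt
# prints: h|4=750|x|3078=a|y
```

Deleting the key after the first hit plays the same role as `L[$2]++==0` in the earlier script: each key can only print once across all the data files.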
Last edited by MadeInGermany; 03-20-2014 at 08:10 AM..