Since the strings tested aren't regular expressions, using the regular expression operator is, at best, unnecessarily expensive. At worst, if the strings are allowed to contain regular expression metacharacters, it can lead to an erroneous result.
I suggest using index() instead. For non-trivial data sets, it will also speed things up dramatically.
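To make the pitfall concrete, here is a minimal, self-contained illustration (the strings are made up): index() does a literal substring test, returning the 1-based position of the match or 0 if there is none, so metacharacters in the data cannot cause false matches.

Code:
# Demo of "~" (regex match) vs. index() (literal substring test).
BEGIN {
    s = "a.c"                      # contains the regex metacharacter "."
    print ("abc" ~ s)              # prints 1: as a regex, "." matches "b" -- a false positive
    print (index("abc", s) > 0)    # prints 0: "a.c" does not occur literally in "abc"
    print (index("xa.cy", s) > 0)  # prints 1: literal occurrence at position 2
}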
Testing a near-worst-case scenario: the file contains 1501 lines, and only the last line contains a string that is a substring of another. Note that while gawk was used, testing with mawk and nawk showed similar improvements:
Regards,
Alister
Thanks!
I get it now. Actually, I have hundreds of thousands of lines.
I just thought of another scenario: is it possible to do this with two columns? I.e., a line should be skipped only when both col2 and col4 are substrings at the same time.
I tried using two arrays to loop:
But there was no output. The second loop seems to be the problem; any suggestions, please? Thanks a lot!
Quote:
Originally Posted by yifangt
I get it now. Actually, I have hundreds of thousands of lines.
That's the type of information that should always be mentioned in the initial post. Please keep that in mind going forward.
Quote:
Originally Posted by yifangt
I just thought of another scenario: is it possible to do this with two columns? I.e., a line should be skipped only when both col2 and col4 are substrings at the same time.
I tried using two arrays to loop:
But there was no output. The second loop seems to be the problem; any suggestions, please? Thanks a lot!
I haven't given it much thought, but upon cursory examination, your logic is clearly flawed: if i is in a, you jump to the next line. That's wrong. If I understood the task, before you can skip a line, both i must be in a and j must be in b.
Furthermore, inclusion is not final: if later lines prove that an earlier line's $2 and $4 are substrings, then that earlier line must be excluded. Note that an earlier line's $2 and $4 may be disqualified by different subsequent lines, so you must track that as well.
Was there some copy-paste malfunction in your post? There are two END sections, nearly identical, which doesn't make sense (multiple END pattern-action pairs are allowed, but in this case I don't see the point of them).
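For the record, awk does run every END block, in order; e.g., this prints "first" and then "second":

Code:
awk 'END { print "first" } END { print "second" }' /dev/null

So the duplicate END section is legal; it just serves no purpose here.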
One simple, if not optimal, way to solve the problem is to handle each column individually and then join the results.
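For instance (an untested sketch; the file name data.txt and the survivors_by_field helper are mine, and I'm assuming a line is dropped only when its $2 AND its $4 are each a proper substring of those fields on other lines): filter each column separately, then keep the union of the two survivor sets, since a line that survives either single-column filter must be kept.

Code:
# survivors_by_field FIELD FILE: print the line number (NR) of every line
# whose FIELD is not a proper substring of that same field on any other line.
survivors_by_field() {
    awk -v f="$1" '
        { v[NR] = $f }
        END {
            for (i in v) {
                ok = 1
                for (j in v)
                    if (i != j && v[i] != v[j] && index(v[j], v[i])) { ok = 0; break }
                if (ok) print i
            }
        }' "$2"
}

# A line is deleted only when BOTH columns are disqualified, so the lines
# to keep are the UNION of the per-column survivors.
{ survivors_by_field 2 data.txt; survivors_by_field 4 data.txt; } |
sort -nu |
awk 'NR == FNR { keep[$1]; next } FNR in keep' - data.txt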
You are right: "If i is in a, you jump to the next line. That's wrong. ... before you can skip a line, both i must be in a and j must be in b." ... "Was there some copy-paste malfunction in your post?" Sorry for that!
This part is not what I want: "One simple, if not optimal, way to solve the problem is to handle each column individually and then join the results."
My logic is: only if i in a is a substring of $2 AND j in b is a substring of $4 at the same time should the line be skipped. delete a[i] will skip the whole line, because a[$2]=$0. If $2 and $4 are handled separately and then joined, some lines are deleted that should not be! For example, Line 9 should not be deleted even though its $2 is identical to that of Line 1, because $4 of Line 9 is not a substring of $4 of Line 1. There must be no cross-comparing between $2 and $4! I thought of combining the two columns into a single string, but that obviously does not work either, since the strings do not all start with the same characters, so substring relationships are lost. I have a hard time grasping arrays in awk. Thanks a lot!
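If I've understood that rule, here is a minimal two-pass sketch of it (untested; it reads the file twice, and data.txt is illustrative): a line is dropped only when one single other line's $2 contains this line's $2 and that same line's $4 contains this line's $4.

Code:
# Usage: awk -f pairskip.awk data.txt data.txt
NR == FNR { two[NR] = $2; four[NR] = $4; n = NR; next }    # pass 1: store the pairs
{
    skip = 0
    for (i = 1; i <= n; i++) {
        if (i == FNR) continue                     # never compare a line to itself
        if (index(two[i], $2) && index(four[i], $4) &&
            (two[i] != $2 || four[i] != $4)) {     # assumption: exact duplicate pairs are kept
            skip = 1
            break
        }
    }
    if (!skip) print                               # pass 2: print the surviving lines
}

Beware that this does O(n^2) comparisons, so on hundreds of thousands of lines it will be slow; it is only meant to pin down the semantics.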
a bit verbose and most likely not optimal:
This can probably be generalized to specify any number of fields AND-ed together...
That's not correct because it only accounts for cases where one line's field pair matches another line's field pair. The case where a line's fields are substrings of fields of two different lines is unaccounted for.