I'll just explain the new pieces of code: $h{$F[0]}{$F[3]}++ - This creates a "hash of hashes", with the first key being the 1st field of your file and the second key being the 4th field. This technique is described in the "Intermediate Perl" book. So after running this code over your sample file, the %h hash structure looks something like this:
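As a rough illustration (the real contents depend on your data; this sample file is made up), given an input like:

  apple  x  y  red
  apple  x  y  red
  apple  x  y  green
  pear   x  y  red
  pear   x  y  yellow

the hash would come out as:

  %h = (
      apple => { red => 2, green => 1 },
      pear  => { red => 1, yellow => 1 },
  );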
As you can see, for each $F[0] value, all of the $F[3] values are present as keys of the underlying inner hash. The values stored in those inner hashes are the number of occurrences of each $F[3] value. The code would also work without storing that count, using $h{$F[0]}{$F[3]}=1 instead; the hash would then look like this:
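With the same made-up sample as above, the =1 variant would give:

  %h = (
      apple => { red => 1, green => 1 },
      pear  => { red => 1, yellow => 1 },
  );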
Now all we have to do is print, for each main hash key (the $F[0] field), the number of keys present in its inner hash. To do this we iterate over the $F[0] values with for $i (keys %h) { ... }, and inside the loop we assign the inner hash keys to the @x array: @x = keys %{$h{$i}};. So for each $i value, @x will look like this:
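Again using the made-up sample, the loop would see something like (hash key order in Perl is not guaranteed):

  $i = "apple"  ->  @x = ("red", "green")
  $i = "pear"   ->  @x = ("red", "yellow")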
Now all we have to do is print $i and the number of elements in the @x array: print "$i\t" . ($#x+1) . "\n" ($#x is the last index of @x, so $#x+1 is the element count).
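Putting the fragments together, the whole thing can be run as a one-liner along these lines (a sketch; "yourfile" is a placeholder, and -a/-n provide the @F autosplit and the read loop):

  perl -ane '$h{$F[0]}{$F[3]}++;
             END { for $i (keys %h) { @x = keys %{$h{$i}}; print "$i\t" . ($#x+1) . "\n" } }' yourfile

For the made-up sample above this would print:

  apple   2
  pear    2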
Hello,
I have a bash shell script and I use awk to print certain columns of one file and direct the output to another file. If I do a less or cat on the file it looks correct, but if I email the file and open it with Outlook the lines outputted by awk are concatenated.
Here is my awk line:... (6 Replies)
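One possible cause of that symptom is that the file has Unix (LF-only) line endings, which some Windows programs do not display as line breaks. A minimal Perl sketch that adds a carriage return to each line before mailing, assuming that is indeed the cause (file names are placeholders):

  perl -pe 's/$/\r/' awk_output.txt > awk_output_crlf.txt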
Hi All,
I have a huge file (4GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. sort, uniq, and awk '!x++' are not working as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
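Since the file is too big for an in-memory hash, one option is to keep the "seen" keys in a disk-backed hash via Perl's DB_File module. A sketch, assuming DB_File (Berkeley DB) is available and using placeholder file names:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DB_File;

  # Tie the "seen" hash to an on-disk Berkeley DB file so memory use stays small.
  tie my %seen, 'DB_File', 'seen.db' or die "tie: $!";

  open my $in,  '<', 'input.txt'  or die "input: $!";
  open my $out, '>', 'output.txt' or die "output: $!";

  while (my $line = <$in>) {
      print {$out} $line unless $seen{$line}++;   # keep only the first occurrence
  }

  close $in;
  close $out;
  untie %seen;
  unlink 'seen.db';   # remove the temporary index when done

Unlike sort -u, this preserves the original order of the first occurrences.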
Hi friends,
I have multiple files. For now, let's say I have two of the following style
cat 1.txt
cat 2.txt
output.txt
Please note that my files are not sorted, and in the output file I need an extra column that says which file each line came from. I have more than 100... (19 Replies)
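One way to add that column is to rely on Perl's $ARGV variable, which holds the name of the file currently being read. A sketch (the tab separator and output file name are assumptions; the file list can be a glob covering all 100+ files):

  perl -lne 'print "$_\t$ARGV"' 1.txt 2.txt > output.txt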
Hi
I have used comm -13 <(sort 1.txt) <(sort 2.txt) to get the lines that are present in file 2 but not in file 1, but somehow I am getting the entire file 2. I would expect a few uncommon lines from my data, not all of them. Is there anything wrong with the way I used the command?
my... (1 Reply)
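When comm reports nearly every line as unique, the usual culprit is invisible differences such as trailing spaces or DOS carriage returns. As a cross-check that ignores trailing whitespace, the same comparison can be done with a hash (a sketch, using the file names from the post):

  perl -lne 's/\s+$//; if (@ARGV) { $in1{$_} = 1 } else { print unless $in1{$_} }' 1.txt 2.txt

While 1.txt is being read @ARGV is still non-empty, so its lines are stored; the lines of 2.txt are then printed only if they were not seen in 1.txt.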
Hi
My problem is a slightly complicated one. I have 2 files which look like this:
file 1
abbsss:aa:22:34:as akl abc 1234
mkilll:as:ss:23:qs asc abc 0987
mlopii:cd:wq:24:as asd abc 7866
file2
lkoaa:as:24:32:sa alk abc 3245
lkmo:as:34:43:qs qsa abc 0987
kloia:ds:45:56:sa acq abc 7805
i... (5 Replies)
Hello everyone,
Maybe somebody could help me with an awk script.
I have this input (field separator is comma ","):
547894982,M|N|J,U|Q|P,98,101,0,1,1
234900027,M|N|J,U|Q|P,98,101,0,1,1
234900023,M|N|J,U|Q|P,98,54,3,1,1
234900028,M|H|J,S|Q|P,98,101,0,1,1
234900030,M|N|J,U|F|P,98,101,0,1,1... (2 Replies)
file 1
Sun Mar 17 00:01:33 2013 submit , Name="1234"
Sun Mar 17 00:01:33 2013 submit , Name="1344"
Sun Mar 17 00:01:33 2013 submit , Name="1124"
..
..
..
..
Sun Mar 17 00:01:33 2013 submit , Name="8901"
file 2
Sun Mar 17 00:02:47 2013 1234 execute SUCCEEDED
Sun Mar 17... (24 Replies)
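The post is cut off here, but a common task with logs like these is to check, for each Name submitted in file 1, whether file 2 contains a matching "execute SUCCEEDED" entry. A sketch under that assumption (the regexes and output format are guesses):

  perl -lne 'if (@ARGV) { $ok{$1} = 1 if /(\d+)\s+execute\s+SUCCEEDED/ }
             elsif (/Name="(\d+)"/) { print "$1\t", ($ok{$1} ? "SUCCEEDED" : "no result") }' file2 file1

Note that file 2 is read first so the succeeded IDs are known before file 1 is scanned.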
I would like to print unique lines without sort or uniq. Unfortunately the server I am working on does not have sort or uniq. I have not been able to contact the administrator of the server to ask him to add them for several weeks. (7 Replies)
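If Perl (or awk) is available on that server, a one-pass "seen" hash prints each line only the first time it appears, with no need for sort or uniq (a sketch; "file" is a placeholder):

  perl -ne 'print unless $seen{$_}++' file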
I have a directory of files, I can show the number of lines in each file and order them from lowest to highest with:
wc -l *|sort
15263 Image.txt
16401 reference.txt
40459 richtexteditor.txt
How can I also print the number of unique lines in each file?
15263 1401 Image.txt
16401... (15 Replies)
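One way to get both counts is to let Perl track a per-file "seen" hash and print the totals when it reaches the end of each file (a sketch; the output format follows the example above):

  perl -lne '
      $total++;
      $uniq++ unless $seen{$_}++;
      if (eof) {                      # end of the current file
          print "$total $uniq $ARGV";
          $total = $uniq = 0;
          %seen = ();
      }
  ' *.txt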