Because there's always a first occurrence that equals 0, which does not count, and it has the same array index, the second value overwrites the first; so it's always the second value of the same pattern that gets saved, whenever a second value with the same pattern exists.
This time the key was paying attention to the index and to how awk saves into the array.
Thanks.
There is no test for 0. It is not necessarily the second line with a given value for the 1st three fields that is saved in the array; it is the last line with a given value for the 1st three fields that is saved. If there is one line with 900, -, and 1000 as the 1st three fields on the line, respectively, a[$1, $2, $3]'s value (or in this case a["900", "-", "1000"]'s value) will be that entire line. If there is more than one line with 900, -, and 1000 as the 1st three fields, a[$1, $2, $3]'s value will be the last line starting with those three values.
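A minimal sketch of that behavior (the file name dups.txt is a placeholder; the assignment is the one described above):

  awk '
      # A later line with the same 1st three fields overwrites the earlier
      # entry, so each array element ends up holding the last such line.
      { a[$1, $2, $3] = $0 }
      END { for (i in a) print a[i] }
  ' dups.txt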
When processing an array with:
  for (i in a)
the elements are processed in a random order (not necessarily the order in which they were found in the input file). This is why aia used sort -n to print the output in the same order as the (sorted) input file.
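A sketch of the full pipeline, then (the input file name and the numeric 1st field are assumptions):

  # for (i in a) visits elements in an unspecified order, so sort -n
  # restores numeric order on the way out.
  awk '{ a[$1, $2, $3] = $0 } END { for (i in a) print a[i] }' input.txt | sort -n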
I have a list which contains all the jar files shipped with the product I am involved with. In this list, some jar files appear again and again, but under different folders.
My input file looks like this:
/path/1/to a.jar
/path/2/to a.jar
/path/1/to... (10 Replies)
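One hedged way to spot those repeats, assuming the jar name is the last /-separated component of each line and the list lives in list.txt (both are assumptions):

  # Count how often each jar basename occurs; print the ones that
  # appear under more than one path.
  awk -F/ '{ count[$NF]++ }
       END { for (jar in count) if (count[jar] > 1) print jar }' list.txt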
For example, suppose I have a file which contains data as follows:
$cat data
800,2
100,9
700,3
100,9
200,8
100,3
Now I want the output as
200,8
700,3
800,2
The key is the first three characters; I don't want any records that have duplicate keys.
Like sort +0.0 -0.3 data, can we use... (9 Replies)
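A two-pass awk sketch that matches the expected output (it treats the 1st comma-separated field as the three-character key, which holds for this sample):

  # 1st pass counts each key; 2nd pass prints only lines whose key
  # occurred exactly once, then sort orders the survivors.
  awk -F, 'NR == FNR { count[$1]++; next }
           count[$1] == 1' data data | sort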
I have a huge txt file containing millions of trade records.
For example:
Trade.txt (the first 8 lines in the file are header info)
COB_DATE,TRADE_ID,SOURCE_SYSTEM_TRADE_ID,TRADE_GROUP_ID,
TRADE_TYPE,DEALER_NAME,EXTERNAL_COUNTERPARTY_ID,
EXTERNAL_COUNTERPARTY_NAME,DB_COUNTERPARTY_ID,... (6 Replies)
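The question is cut off above, but a common starting point for de-duplicating such a file is sketched here (keeping the 8 header lines intact and using the whole line as the key, which is an assumption):

  # Pass the 8 header lines through untouched, then print only the
  # 1st occurrence of each remaining line.
  awk 'FNR <= 8 { print; next }
       !seen[$0]++' Trade.txt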
Can anyone help me remove duplicate records from 2 separate files in UNIX?
Please find sample records for both files below.
cat Monday.dat
3FAHP0JA1AR319226MOHMED ATEK 966504453742 SAU2010DE
3LNHL2GC6AR636361HEA DEUK CHOI 821057314531 KOR2010LE
3MEHM0JG7AR652083MUTLAB NAL-NAFISAH... (4 Replies)
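A sketch of one approach, assuming the 2nd file is named Tuesday.dat (the real name is truncated above) and that whole lines are compared:

  # Remember every line of Monday.dat, then print only the lines of
  # Tuesday.dat that were not seen there.
  awk 'NR == FNR { seen[$0]; next }
       !($0 in seen)' Monday.dat Tuesday.dat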
Hi,
I want to remove duplicate records, including the first occurrence, based on column 1. For example:
input file (filer.txt):
-------------
1,3000,5000
1,4000,6000
2,4000,600
2,5000,700
3,60000,4000
4,7000,7777
5,999,8888
expected output:
----------------
3,60000,4000
4,7000,7777... (5 Replies)
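Reading the file twice gives exactly that output; a minimal sketch:

  # 1st pass counts each column-1 key; 2nd pass keeps only the rows
  # whose key appeared exactly once, dropping every duplicate group.
  awk -F, 'NR == FNR { count[$1]++; next }
           count[$1] == 1' filer.txt filer.txt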
Hi All,
I want to remove rows from File1.csv by comparing a column/field in File2.csv. If the columns match, then I want that row deleted from File1 using a shell script (awk). Here is an example of what I need.
File1.csv:
RAJAK,ACTIVE,1
VIJAY,ACTIVE,2
TAHA,ACTIVE,3... (6 Replies)
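Since File2.csv is truncated above, this sketch assumes its 1st field carries the values to match against File1.csv's 1st field:

  # Collect the keys to drop from File2.csv, then print only the
  # File1.csv rows whose 1st field is not among them.
  awk -F, 'NR == FNR { drop[$1]; next }
           !($1 in drop)' File2.csv File1.csv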
I was reading this thread. It looks like a simpler way to say this is to keep only unique lines based on field or column 1.
https://www.unix.com/shell-programming-scripting/165717-removing-duplicate-records-file-based-single-column.html
Can someone explain this command please? How are there no... (5 Replies)
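The command itself is cut off above, but the linked thread's usual idiom is the one sketched here (the file name and separator are assumptions):

  # !seen[$1]++ is true only the 1st time a column-1 value appears:
  # seen[$1] is still 0 (false) at that point, and the ++ bumps it,
  # so later lines with the same key are suppressed.
  awk -F, '!seen[$1]++' file.csv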
Hi,
I want to display the file names and, in the 2nd column, the record count for the files created today.
I have written the command below, which lists the file names, but piping that command to the wc -l command
is not working for me.
ls -l... (5 Replies)
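Parsing ls -l output tends to break; a hedged alternative sketch (GNU/BSD find assumed; -mtime -1 only approximates "created today"):

  # For each regular file modified within the last day, print its
  # name and its line count.
  find . -maxdepth 1 -type f -mtime -1 -exec sh -c '
      for f; do
          printf "%s %s\n" "$f" "$(wc -l < "$f")"
      done' sh {} +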
Join and merge multiple files with duplicate key and fill void columns
Hi guys,
I have many files that I want to merge:
file1.csv:
1|abc
1|def
2|ghi
2|jkl
3|mno
3|pqr
file2.csv: (5 Replies)
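file2.csv is truncated above, so this sketch assumes it uses the same key|value layout; the output format (one row per key, one column per file, empty where a file has no value for that key) is also an assumption:

  awk -F'|' '
      FNR == 1 { nf++ }                  # bump the file counter per input file
      { v = val[$1, nf]                  # append this value to the key/file cell
        val[$1, nf] = (v == "") ? $2 : v "," $2
        keys[$1] = 1 }
      END {
          for (k in keys) {
              line = k
              for (i = 1; i <= nf; i++)
                  line = line "|" val[k, i]   # an empty string fills a void column
              print line
          }
      }' file1.csv file2.csv | sort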