awk 2 fields duplicate and 1 different
Posted by numele on 08-31-2010, 05:24 PM

I have a file from which I need to remove duplicates. Lines count as duplicates when they share the same 3rd field, and I need to keep only one line per 3rd-field value. Here is a sample file:
Code:
xxx.xxx:x:CISCO1.CLEVE61W:ERIE.NET:x:x:x:x:
xxx.xxx:x:CISCO2.CLEVE62W:OHIO.NET:x:x:x:x:
xxx.xxx:x:CISCO2.CLEVE62W:NORTH.NET:x:x:x:x:
xxx.xxx:x:CISCO3.CLEVE62W:OHIO.NET:x:x:x:x:
xxx.xxx:x:CISCO4.CLEVE62W:OHIO.NET:x:x:x:x:
xxx.xxx:x:CISCO4.CLEVE62W:ERIE.NET:x:x:x:x:

In this example the 3rd field is duplicated on the CISCO2 and CISCO4 lines. Whenever that happens, I always need to keep the line that also has OHIO.NET in it and remove the other, like this:
Code:
xxx.xxx:x:CISCO1.CLEVE61W:ERIE.NET:x:x:x:x:
xxx.xxx:x:CISCO2.CLEVE62W:OHIO.NET:x:x:x:x:
xxx.xxx:x:CISCO3.CLEVE62W:OHIO.NET:x:x:x:x:
xxx.xxx:x:CISCO4.CLEVE62W:OHIO.NET:x:x:x:x:

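One way to do this is a two-pass awk over the colon-separated file: the first pass notes which 3rd-field keys have an OHIO.NET line, and the second pass prints the OHIO.NET line for those keys and the first line seen for every other key. A minimal sketch, assuming the input is named file, the hostname is always field 3, the network is always field 4, and each duplicated key has at most one OHIO.NET line:
Code:
awk -F: '
    NR == FNR {                        # pass 1: remember keys that have an OHIO.NET line
        if ($4 == "OHIO.NET") ohio[$3] = 1
        next
    }
    $4 == "OHIO.NET" { print; next }   # pass 2: OHIO.NET lines always win
    !($3 in ohio) && !seen[$3]++       # keep the first line for keys with no OHIO.NET entry
' file file

The file name is passed twice so awk reads it once per pass, and the output keeps the input order.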
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Extract duplicate fields in rows

I have an input file with the formatting:
6000000901 ;36200103 ;h3a01f496 ;
2000123605 ;36218982 ;heefa1328 ;
2000273132 ;36246985 ;h08c5cb71 ;
2000041207 ;36246985 ;heef75497 ;
Each field is separated by a semicolon. Sometimes, the second file is... (6 Replies)
Discussion started by: anhtt
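The post is truncated, but the sample data suggests pulling out the rows whose second field repeats (36246985 above). A rough two-pass awk sketch, assuming the file is named infile and that "duplicate fields" really means a repeated 2nd field:
Code:
awk -F';' 'NR == FNR { count[$2]++; next }   # pass 1: count each 2nd field
           count[$2] > 1                     # pass 2: print rows whose 2nd field repeats
' infile infile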

2. Shell Programming and Scripting

compare fields in a file with duplicate records

Hi: I've been searching the net but didn't find a clue. I have a file in which, for some records, some fields coincide. I want to compare one (or more) of the dissimilar fields and retain the one record that fulfills a certain condition. For example, on this file: 99 TR 1991 5 06 ... (1 Reply)
Discussion started by: rleal

3. Shell Programming and Scripting

awk sed cut? to rearrange random number of fields into 3 fields

I'm working on formatting some attendance data to meet a vendor's requirements for uploading to their system. With some help on the forums here, I have the data close. But they've since changed what they want. The vendor wants me to submit three fields to them. Field 1 is the studentid field,... (4 Replies)
Discussion started by: axo959

4. Shell Programming and Scripting

Find duplicate based on 'n' fields and mark the duplicate as 'D'

Hi, In a file, I have to mark duplicate records as 'D' and the latest record alone as 'C'. In the below file, I have to identify whether duplicate records exist based on Man_ID, Man_DT and Ship_ID, and I have to mark the record with the latest Ship_DT as "C" and the others as "D" (I have to create... (7 Replies)
Discussion started by: machomaddy
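The post is cut off, but the visible spec fits a two-pass awk: pass 1 finds the latest Ship_DT per (Man_ID, Man_DT, Ship_ID) key, pass 2 tags each record. A rough sketch, assuming whitespace-separated input named file, the three key fields in columns 1-3, Ship_DT in column 4, and dates in a sortable format such as YYYYMMDD:
Code:
awk '
    NR == FNR {                          # pass 1: find the latest Ship_DT per key
        k = $1 SUBSEP $2 SUBSEP $3
        if ($4 > latest[k]) latest[k] = $4
        next
    }
    {                                    # pass 2: tag the latest as C, the rest as D
        k = $1 SUBSEP $2 SUBSEP $3
        print $0, ($4 == latest[k] ? "C" : "D")
    }
' file file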

5. Shell Programming and Scripting

Join fields from files with duplicate lines

I have two files. file1.txt:
1 abc
2 def
2 dgh
3 ijk
4 lmn
file2.txt:
1 opq
2 rst
3 uvw
My desired output is:
1 abc opq
2 def rst
2 dgh rst
3 ijk uvw
(2 Replies)
Discussion started by: xan.amini
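This is the classic awk lookup-join. A minimal sketch, assuming both files are whitespace-separated and each key appears at most once in file2.txt:
Code:
awk 'NR == FNR { val[$1] = $2; next }   # pass 1: map key -> value from file2.txt
     $1 in val { print $0, val[$1] }    # pass 2: append the match to each file1.txt line
' file2.txt file1.txt

Lines with no match in file2.txt (like 4 lmn) are dropped, which matches the desired output.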

6. Shell Programming and Scripting

How to print 1st field and last 2 fields together and the rest of the fields after it using awk?

Hi experts, I need to print the first field first, then the last two fields should come next, and then I need to print the rest of the fields.
Input:
a1,abc,jsd,fhf,fkk,b1,b2
a2,acb,dfg,ghj,b3,c4
a3,djf,wdjg,fkg,dff,ggk,d4,d5
Expected output:
a1,b1,b2,abc,jsd,fhf,fkk... (6 Replies)
Discussion started by: 100bees
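Since the field count varies per line, building the output string by index handles any width. A minimal sketch, assuming the input file is named input:
Code:
awk -F, -v OFS=, '
{
    out = $1 OFS $(NF-1) OFS $NF     # first field, then the last two
    for (i = 2; i <= NF - 2; i++)    # then the middle fields, in their original order
        out = out OFS $i
    print out
}' input

For the first sample line this prints a1,b1,b2,abc,jsd,fhf,fkk.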

7. Shell Programming and Scripting

awk - compare 1st 15 fields of record with 20 fields

I'm trying to compare 2 files for differences in a select number of fields. When differences are found, it will write the whole record of the second file, appending '|C', out to a delta file. Each record will have 20 fields, but I only want to do a comparison of the 1st 15 fields. The 1st field of... (7 Replies)
Discussion started by: sljnk
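The post is truncated, but a common shape for this task is: match records between the files on field 1, then compare fields 1-15 as a unit. A rough sketch, guessing a | delimiter (suggested by the '|C' marker) and field 1 as the match key; both are assumptions:
Code:
awk -F'|' '
    NR == FNR {                      # file1: store fields 1-15, keyed on field 1
        r = ""
        for (i = 1; i <= 15; i++) r = r SUBSEP $i
        ref[$1] = r
        next
    }
    {                                # file2: rebuild the same slice and compare
        r = ""
        for (i = 1; i <= 15; i++) r = r SUBSEP $i
        if (($1 in ref) && ref[$1] != r) print $0 "|C"
    }
' file1 file2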

8. Shell Programming and Scripting

Remove duplicate lines from file based on fields

Dear community, I have to remove duplicate lines from a file containing a very large number of rows (millions?) based on the 1st and 3rd columns. The data are like this:
Region 23/11/2014 09:11:36 41752
Medio 23/11/2014 03:11:38 4132
Info 23/11/2014 05:11:09 4323... (2 Replies)
Discussion started by: Lord Spectre
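This is the textbook awk dedupe idiom: print a line only the first time its key is seen. A one-liner sketch, assuming whitespace-separated columns and an input file named infile:
Code:
awk '!seen[$1,$3]++' infile

It is a single pass, so it scales to millions of rows; memory grows only with the number of distinct ($1,$3) pairs.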

9. Shell Programming and Scripting

awk sort based on difference of fields and print all fields

Hi, I have a file as below:
<field1> <field2> <field3> ... <field_num1> <field_num2>
I am trying to sort based on the difference of <field_num1> and <field_num2> in descending order and print all fields. I tried this and it doesn't sort on the difference field. Appreciate your help. cat... (9 Replies)
Discussion started by: newstart
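A simple decorate-sort-undecorate pipeline handles this: prepend the difference with awk, sort on it numerically in reverse, then strip it again. A minimal sketch, assuming whitespace-separated fields with the two numbers last and an input file named file:
Code:
awk '{ print $(NF-1) - $NF, $0 }' file |   # prepend the difference to each line
    sort -k1,1nr |                          # sort numerically, descending
    cut -d' ' -f2-                          # drop the helper column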

10. UNIX for Beginners Questions & Answers

Discarding records with duplicate fields

Hi, My input looks like this (tab-delimited):
grp1 name2 firstname M 55 item1 item1.0
grp1 name2 firstname F 55 item1 item1.0
grp2 name1 firstname M 55 item1 item1.0
grp2 name2 firstname M 55 item1 item1.0
Using awk, I am trying to discard the records with common fields 2, 4, 5, 6, 7... (4 Replies)
Discussion started by: beca123456
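The post is cut off, but if "discard the records with common fields" means keep only rows whose combination of those fields is unique, a two-pass awk would do it. A sketch, assuming tab-delimited input named infile and that the key really is fields 2, 4, 5, 6 and 7:
Code:
awk -F'\t' '
    NR == FNR { count[$2,$4,$5,$6,$7]++; next }   # pass 1: count each key combination
    count[$2,$4,$5,$6,$7] == 1                    # pass 2: keep rows whose key is unique
' infile infile
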
DUFF(1)                   BSD General Commands Manual                  DUFF(1)

NAME
     duff -- duplicate file finder

SYNOPSIS
     duff [-0HLPaeqprtz] [-d function] [-f format] [-l limit] [file ...]
     duff [-h]
     duff [-v]

DESCRIPTION
     The duff utility reports clusters of duplicates in the specified files
     and/or directories. In the default mode, duff prints a customizable
     header, followed by the names of all the files in the cluster. In excess
     mode, duff does not print a header, but instead for each cluster prints
     the names of all but the first of the files it includes.

     If no files are specified as arguments, duff reads file names from
     stdin.

     Note that as of version 0.4, duff ignores symbolic links to files, as
     that behavior was conceptually broken. Therefore, the -H, -L and -P
     options now apply only to directories.

     The following options are available:

     -0      If reading file names from stdin, assume they are
             null-terminated, instead of separated by newlines. Also, when
             printing file names and cluster headers, terminate them with
             null characters instead of newlines. This is useful for file
             names containing whitespace or other non-standard characters.

     -H      Follow symbolic links listed on the command line. This overrides
             any previous -L or -P option. Note that this only applies to
             directories, as symbolic links to files are never followed.

     -L      Follow all symbolic links. This overrides any previous -H or -P
             option. Note that this only applies to directories, as symbolic
             links to files are never followed.

     -P      Don't follow any symbolic links. This overrides any previous -H
             or -L option. This is the default. Note that this only applies
             to directories, as symbolic links to files are never followed.

     -a      Include hidden files and directories when searching recursively.

     -d function
             The message digest function to use. The supported functions are
             sha1, sha256, sha384 and sha512. The default is sha1.

     -e      Excess mode. List all but one file from each cluster of
             duplicates. Also suppresses output of the cluster header. This
             is useful when you want to automate removal of duplicate files
             and don't care which duplicates are removed.

     -f format
             Set the format of the cluster header. If the header is set to
             the empty string, no header line is printed.

             The following escape sequences are available:

             %n      The number of files in the cluster.
             %c      A legacy synonym for %d, for compatibility reasons.
             %d      The message digest of files in the cluster. This may not
                     be combined with -t as no digest is calculated.
             %i      The one-based index of the file cluster.
             %s      The size, in bytes, of a file in the cluster.
             %%      A '%' character.

             The default format string when using -t is:

                   %n files in cluster %i (%s bytes)

             The default format string for other modes is:

                   %n files in cluster %i (%s bytes, digest %d)

     -h      Display help information and exit.

     -l limit
             The minimum size of files to be sampled. If the size of files in
             a cluster is equal to or greater than the specified limit, duff
             will sample and compare a few bytes from the start of each file
             before calculating a full digest. This is strictly an
             optimization and does not affect which files are considered by
             duff. The default limit is zero bytes, i.e. to use sampling on
             all files.

     -q      Quiet mode. Suppress warnings and error messages.

     -p      Physical mode. Make duff consider physical files instead of hard
             links. If specified, multiple hard links to the same physical
             file will not be reported as duplicates.

     -r      Recursively search into all specified directories.

     -t      Thorough mode. Distrust digests as a guarantee for equality. In
             thorough mode, duff compares files byte by byte when their sizes
             match.

     -v      Display version information and exit.

     -z      Do not consider empty files to be equal. This option prevents
             empty files from being reported as duplicates.

EXAMPLES
     The command:

           duff -r foo/

     lists all duplicate files in the directory foo and its subdirectories.

     The command:

           duff -e0 * | xargs -0 rm

     removes all duplicate files in the current directory. Note that you have
     no control over which files in each cluster are selected by -e (excess
     mode). Use with care.

     The command:

           find . -name '*.h' -type f | duff

     lists all duplicate header files in the current directory and its
     subdirectories.

     The command:

           find . -name '*.h' -type f -print0 | duff -0 | xargs -0 -n1 echo

     lists all duplicate header files in the current directory and its
     subdirectories, correctly handling file names containing whitespace.
     Note the use of xargs and echo to remove the null separators again
     before listing.

DIAGNOSTICS
     The duff utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
     find(1), xargs(1)

AUTHORS
     Camilla Berglund <elmindreda@elmindreda.org>

BUGS
     duff doesn't check whether the same file has been specified twice on the
     command line. This will lead it to report files listed multiple times as
     duplicates when not using -p (physical mode). Note that this problem
     only affects files, not directories.

     duff no longer (as of version 0.4) reports symbolic links to files as
     duplicates, as they're by definition always duplicates. This may break
     scripts relying on the previous behavior.

     If the underlying files are modified while duff is running, all bets are
     off. This is not really a bug, but it can still bite you.

BSD                            January 18, 2012                            BSD