To remove duplicates from pipe delimited file - Post 302865857 by drl, 10-20-2013, 07:35 AM
Hi.

We once needed code that would run on a number of different systems, yet produce consistent results. We ran into the situation that the uniq utility was not consistent among those systems, so we introduced an option:
Code:
--last
allows over-writing, effectively keeping the most-recently
seen instance. Some versions of uniq on other *nix systems
(e.g. Solaris) keep the most recent; the default is
compatibility with GNU/Linux uniq, which keeps the first
occurrence.

By substituting this idea for the system version of uniq, we were able to produce consistent results.
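
Our Perl code is not shown here, but a rough sketch of the keep-the-last idea in awk might look like the following, where the pipe delimiter, the use of field 1 as the comparison key, and the file name infile are assumptions made only for the example:
Code:
awk -F'|' '
    NR > 1 && $1 != key { print saved }   # key changed: emit the last line kept for the previous run
    { key = $1; saved = $0 }              # same key: overwrite, so the most recent line survives
    END { if (NR) print saved }           # flush the final run
' infile

Remembering a line only the first time its key appears, instead of overwriting on every line, would give the GNU-style keep-the-first behaviour instead.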

I think this problem can be approached with the sort idea of danmero, but with the stable option set, and a "final filter" that eliminates duplicates. Because the file is already sorted, no storage beyond a single saved line is needed: in the final filter, if the key fields of the incoming record differ from those of the saved record, write out the saved line and save the new one. If the fields are the same, save the new instance of the line, overwriting the old. Our code was in perl, but awk could be used just as easily.
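
A minimal sketch of that pipeline in shell and awk, assuming the key is the first pipe-delimited field and the data lives in a file called data.txt (both placeholders): the stable sort groups records by key without disturbing their relative input order, and the final filter is the same overwrite idea as the sketch above, so the last record seen for each key is the one written out.
Code:
# group records by field 1, keeping input order within each key (-s = stable)
sort -t'|' -k1,1 -s data.txt |
awk -F'|' '
    NR > 1 && $1 != key { print saved }   # new key: write out the saved line
    { key = $1; saved = $0 }              # same key: save the new instance of the line
    END { if (NR) print saved }           # write out the last saved line
'

Only one saved record is held in memory at any time, which is what makes the approach cheap on a file that is already sorted.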

Best wishes ... cheers, drl

Last edited by drl; 10-20-2013 at 08:44 AM..
 
