Full Discussion: removing duplicates
Post 302212315 by stevie_velvet, Monday 7th of July 2008, 07:00:11 AM
Bump.
Are there any clever sed/awk users out there who know how to ignore blank lines and can integrate that into the above examples?
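In case it helps, here is a minimal awk sketch (the earlier examples in this thread aren't quoted here, so it stands alone): it drops repeated non-blank lines while passing blank lines through untouched. The file names are placeholders.

    # blank lines (NF == 0) are printed as-is; non-blank lines are
    # printed only the first time they are seen
    awk 'NF == 0 { print; next } !seen[$0]++' input.txt > output.txt

If blank lines should be discarded rather than kept, awk 'NF && !seen[$0]++' input.txt would do that instead.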
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Removing duplicates

Hi, I've been trying to remove duplicate lines with similar columns in a fixed-width file and it's not working. I've searched the forum but nothing comes close. I have a sample file:
27147140631203RA CCD *
27147140631203RA PPN *
37147140631207RD AAA
47147140631203RD JNA...
Discussion started by: giannicello
12 Replies
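A hedged sketch for the question above. The real key columns aren't stated, so the 16-character prefix used here is only an assumption, as is the file name; the idea is to keep the first line seen for each fixed-width key.

    # key = first 16 characters of each record; first occurrence wins
    awk '!seen[substr($0, 1, 16)]++' fixedwidth.txt > deduped.txt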

2. UNIX for Dummies Questions & Answers

removing duplicates and sort -k

Hello experts, I am trying to remove all lines in a csv file where the 2nd column is a duplicate. I am trying to use sort with the key parameter: sort -u -k 2,2 File.csv > Output.csv
File.csv:
File Name|Document Name|Document Title|Organization
Word Doc 1.doc|Word Document|Sample...
Discussion started by: orahi001
3 Replies
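Since that file is pipe-delimited, sort needs to be told the field separator; otherwise -k 2,2 keys on whitespace-separated words rather than the second column. A likely fix:

    sort -t '|' -u -k 2,2 File.csv > Output.csv

Note that sort -u keeps only one of each set of lines sharing the key, which may or may not be the one wanted.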

3. Shell Programming and Scripting

Removing duplicates

Hi, I have a file in the below format:
test test (10)
to to (25)
see see (45)
and I need the output in the format of:
test 10
to 25
see 45
Can someone help me?
Discussion started by: imdadulla
6 Replies
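A minimal sketch for the transformation above, assuming three whitespace-separated fields per line (file names are placeholders): print the first field and the third field with its parentheses stripped.

    awk '{ gsub(/[()]/, "", $3); print $1, $3 }' infile > outfile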

4. UNIX for Advanced & Expert Users

removing duplicates.

Hi All, in UNIX we have a file and we have to remove the duplicates based on one specific column. Can anybody tell me the command?
ex: file1
id,name
1,ww
2,qwq
2,asas
3,asa
4,asas
4,asas
o/p:
1,ww
2,qwq
3,asa
Discussion started by: raju4u
7 Replies
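Assuming the duplicate key is the first comma-separated column and the first occurrence should be kept (the quoted sample output may be trimmed), a common awk idiom is:

    # keep the first line seen for each value of column 1
    awk -F, '!seen[$1]++' file1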

5. Shell Programming and Scripting

Removing duplicates

I have a test file with the following 2 columns:
Col 1 | Col 2
T1 | 1 <= remove
T5 | 1
T4 | 2
T1 | 3
T3 | 3
T4 | 1 <= remove
T1 | 2 <= remove
T3 ...
Discussion started by: gctex
7 Replies
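The quoted question is cut off, so the rule is a guess: the rows marked "<= remove" are consistent with keeping, for each Col 1 value, only the row with the largest Col 2. Under that assumption (and with a placeholder file name), a sketch:

    # remember the highest Col 2 seen per Col 1 key; print one row per key
    # note: output order is not preserved
    awk -F'|' '{ key = $1; val = $2 + 0
                 if (!(key in best) || val > best[key]) { best[key] = val; line[key] = $0 } }
               END { for (key in line) print line[key] }' testfile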

6. Emergency UNIX and Linux Support

Removing all the duplicates

I want to remove all the duplicates in a file. I don't want even a single entry kept. For the input data:
12345|12|34
12345|13|23
3456|12|90
15670|12|13
12345|10|14
3456|12|13
I need the below data in one file:
15670|12|13
and the below data in another file
Discussion started by: pandeesh
9 Replies
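In the example the key looks like the first pipe-delimited field: keys seen exactly once stay in the first file, everything else goes to the second. A hedged two-pass sketch (output names are placeholders; the input file is read twice):

    awk -F'|' 'NR == FNR { cnt[$1]++; next }
               cnt[$1] == 1 { print > "unique.txt"; next }
               { print > "dups.txt" }' data data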

7. Shell Programming and Scripting

Help in removing duplicates

I have an input file abc.txt with info like: abcd rateuse inklite robet rateuse abcd. I need to remove the duplicates (e.g. abcd, rateuse) from the file and place the result back in abc.txt; if needed it can go into another file. Can anyone help me with this? :(
Discussion started by: rkrish
4 Replies
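If the entries are one per line (the quoting above makes that ambiguous), the usual idiom keeps the first occurrence of each line; writing back to the same file needs a temporary file. The temporary name is a placeholder.

    awk '!seen[$0]++' abc.txt > abc.txt.tmp && mv abc.txt.tmp abc.txt

If instead every copy of a duplicated entry should be removed, a counting pass like the one sketched for discussion 6 above would be needed.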

8. UNIX for Dummies Questions & Answers

Removing duplicates from a file

Hi All, I am merging files coming from 2 different systems; while doing that I am getting duplicate entries in the merged file:
I,01,000131,764,2,4.00
I,01,000131,765,2,4.00
I,01,000131,772,2,4.00
I,01,000131,773,2,4.00
I,01,000168,762,2,2.00
I,01,000168,763,2,2.00...
Discussion started by: Sri3001
5 Replies
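For whole-line duplicates in a merged file there are two common sketches, depending on whether the original order matters (file names are placeholders):

    sort -u merged.txt > merged.dedup             # order may change
    awk '!seen[$0]++' merged.txt > merged.dedup   # original order preserved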

9. Shell Programming and Scripting

Removing duplicates except the last occurrence

Hi All, I have a file like below:
@DB_FCTS\src\Data\Scripts\Delete_CU_OM_BIL_PRT_STMT_TYP.sql
@DB_FCTS\src\Data\Scripts\Delete_CDP_BILL_LBL_MSG.sql
@DB_FCTS\src\Data\Scripts\Delete_OM_BIDDR.sql
@DB_FCTS\src\Data\Scripts\Insert_CU_OM_LBL_MSG.sql...
Discussion started by: mechvijays
11 Replies
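To drop duplicates while keeping the last occurrence of each line, a common trick is to reverse the file, keep first occurrences, and reverse back. A sketch assuming GNU tac is available (file names are placeholders):

    tac scripts.lst | awk '!seen[$0]++' | tac > scripts.dedup

On systems without tac, tail -r may serve the same purpose.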

10. Shell Programming and Scripting

Removing duplicates from new file

I have two files. I want to remove/delete all the duplicate lines in file2, which are unix, unix2 and unix3.
Discussion started by: sagar_1986
2 Replies
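The file contents weren't quoted, so this is a guess at the intent: drop from file2 every line that also appears in file1. Two common sketches (the output name is a placeholder):

    awk 'NR == FNR { a[$0]; next } !($0 in a)' file1 file2 > file2.clean
    grep -vxFf file1 file2 > file2.clean    # fixed-string, whole-line matches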