There are endless ways of achieving this. If you want a one-liner:
It uses 'uniq -c' to produce lines like "2 10" and "1 20". Then awk filters out all the non-repeats with $1>1, clears the leading count with $1="", and loops, printing the rest of the line as many times as the count.
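The one-liner itself isn't quoted above; a hypothetical reconstruction from that description, shown here on sample data piped in with printf, might look like:

```shell
# Reprint each repeated line as many times as it occurred.
# uniq -c prefixes each line with its count; awk keeps counts > 1,
# blanks the count field with $1="", and prints the remainder n times.
# Note: $1="" leaves a leading space when awk rebuilds the line.
printf '10\n10\n20\n' |
uniq -c |
awk '$1 > 1 { n = $1; $1 = ""; for (i = 0; i < n; i++) print $0 }'
```

With a real file, replace the printf with `uniq -c yourfile | awk ...` (sorting first if duplicates aren't adjacent).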
Hi Everyone,
I have a flat file of 1000 unique records like the following, e.g.:
Andy,Flower,201-987-0000,12/23/01
Andrew,Smith,101-387-3400,11/12/01
Ani,Ross,401-757-8640,10/4/01
Rich,Finny,245-308-0000,2/27/06
Craig,Ford,842-094-8740,1/3/04
...
Now I want to duplicate... (9 Replies)
Hi all:
Let's suppose I have a file like this (but with many more records).
XX ME 342 8688 2006 7 6 3c 60.029 -38.568 2901 0001 74 4 7603 8
969.8 958.4 3.6320 34.8630
985.5 973.9 3.6130 34.8600
998.7 986.9 3.6070 34.8610
1003.6 991.7 ... (4 Replies)
I have a .DAT file like below
23666483030000653-B94030001OLFXXX000000120081227
23797049900000654-E71060001OLFXXX000000220081227
23699281320000655 E71060002OLFXXX000000320081227
22885068900000652 B86860003OLFXXX592123320081227
22885068900000652 B86860003ODL-SP592123420081227... (8 Replies)
Hi friends,
Need your help.
item , color ,desc
==== ======= ====
1,red ,abc
1,red , a b c
2,blue,x
3,black,y
4,brown,xv
4,brown,x v
4,brown, x v
I have to eliminate the duplicate rows on the basis of item.
The final output will be
1,red ,abc (6 Replies)
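A common way to keep only the first row seen for each item value is a one-pass awk; a sketch using the sample rows from the question (replace the printf with your real file):

```shell
# seen[$1]++ is 0 (false) the first time a key appears in field 1,
# so awk's default print action fires exactly once per item.
printf '1,red ,abc\n1,red , a b c\n2,blue,x\n3,black,y\n4,brown,xv\n4,brown,x v\n' |
awk -F, '!seen[$1]++'
```

This preserves input order and keeps the first occurrence of each item; to keep the last occurrence instead, the file would need a second pass or a reverse.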
Hi,
I have a file with these records
abc
xyz
xyz
pqr
uvw
cde
cde
In my o/p file, I want only the non-duplicate rows to be shown.
o/p:
abc
pqr
uvw
Any suggestions how to do this?
Thanks for the help.
rs (2 Replies)
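If the output order doesn't matter, `sort | uniq -u` prints exactly the lines that occur once; a sketch using the sample records from the question:

```shell
# uniq compares adjacent lines only, so sort first;
# -u then emits only the lines that are NOT repeated.
printf 'abc\nxyz\nxyz\npqr\nuvw\ncde\ncde\n' | sort | uniq -u
```

To preserve the original order instead, a two-pass awk over the same file works: `awk 'NR==FNR{c[$0]++;next} c[$0]==1' records.txt records.txt` (records.txt is an assumed file name).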
I have 2 files
"File 1" is delimited by ";" and "File 2" is delimited by "|".
File 1 below (3 records shown):
Doc1;03/01/2012;New York;6 Main Street;Mr. Smith 1;Mr. Jones
Doc2;03/01/2012;Syracuse;876 Broadway;John Davis;Barbara Lull
Doc3;03/01/2012;Buffalo;779 Old Windy Road;Charles... (2 Replies)
Hi,
I am working on a script that would remove records or lines in a flat file. The only difference between the lines is the "NOT NULL" keyword. Please see below an example of the input file.
INPUT FILE:>
CREATE a
(
TRIAL_CLIENT NOT NULL VARCHAR2(60),
TRIAL_FUND NOT NULL... (3 Replies)
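The request is ambiguous, but if the goal is to delete the NOT NULL keyword from each line (one reading of the question), a sed sketch on a sample line from the DDL:

```shell
# Remove the first " NOT NULL" on each line; append the g flag
# to the s command to remove every occurrence instead.
printf 'TRIAL_CLIENT NOT NULL VARCHAR2(60),\n' | sed 's/ NOT NULL//'
```

If the goal is instead to drop whole lines containing the keyword, `grep -v 'NOT NULL' file` is the usual tool.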
Gents,
Please, how can I get only the last 2 records for each repeated value in column 2?
input
1 1011
1 1011
1 1012
1 1012
1 5001
1 5001
1 5002
1 5002
1 5003
1 5003
1 7001
1 7001
1 7002
1 7002 (2 Replies)
Gents,
I have a file which contains duplicate records in column 1, but the values in column 2 are different.
3099753489 3
3099753489 5
3101954341 12
3101954341 14
3102153285 3
3102153285 5
3102153297 3
3102153297 5
I would like to get something like this:
output desired... (16 Replies)
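The desired output is truncated above; if the intent is to collapse each duplicated column-1 key onto one line with all of its column-2 values (one plausible reading), an awk sketch on the sample data:

```shell
# Group column-2 values under each distinct column-1 key,
# preserving the order in which keys first appear.
printf '3099753489 3\n3099753489 5\n3101954341 12\n3101954341 14\n' |
awk '!seen[$1]++ { order[++n] = $1 }
     { vals[$1] = vals[$1] " " $2 }
     END { for (i = 1; i <= n; i++) print order[i] vals[order[i]] }'
```

With a real file, replace the printf with `awk '...' file`.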
Gents,
Please give me some help.
file
--BAD STATUS NOT RESHOOTED--
*** VP 41255/51341 in sw 2973
*** VP 41679/51521 in sw 2973
*** VP 41687/51653 in sw 2973
*** VP 41719/51629 in sw 2976
--BAD COG NOT RESHOOTED--
*** VP 41689/51497 in sw 2974
*** VP 41699/51677 in sw 2974
*** VP... (18 Replies)
Discussion started by: jiam912
LEARN ABOUT OSX
uniq
UNIQ(1)                   BSD General Commands Manual                  UNIQ(1)
NAME
uniq -- report or filter out repeated lines in a file
SYNOPSIS
uniq [-c | -d | -u] [-i] [-f num] [-s chars] [input_file [output_file]]
DESCRIPTION
The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file. If
input_file is a single dash ('-') or absent, the standard input is read. If output_file is absent, standard output is used for output. The
second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are
not adjacent, so it may be necessary to sort the files first.
The following options are available:
-c Precede each output line with the count of the number of times the line occurred in the input, followed by a single space.
-d Only output lines that are repeated in the input.
-f num Ignore the first num fields in each input line when doing comparisons. A field is a string of non-blank characters separated from
adjacent fields by blanks. Field numbers are one based, i.e., the first field is field one.
-s chars
Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f option, the
first chars characters after the first num fields will be ignored. Character numbers are one based, i.e., the first character is
character one.
-u Only output lines that are not repeated in the input.
-i Case insensitive comparison of lines.
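The -c, -d, and -u options above compose naturally with sort(1), since uniq only compares adjacent lines; a quick illustration:

```shell
# Non-adjacent duplicates must be sorted together first.
printf 'b\na\na\nb\n' | sort | uniq -c   # count of each line: 2 a, 2 b
printf 'a\na\nb\n' | uniq -d             # only repeated lines: a
printf 'a\na\nb\n' | uniq -u             # only unrepeated lines: b
```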
ENVIRONMENT
The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of uniq as described in environ(7).
EXIT STATUS
The uniq utility exits 0 on success, and >0 if an error occurs.
COMPATIBILITY
The historic +number and -number options have been deprecated but are still supported in this implementation.
SEE ALSO
sort(1)
STANDARDS
The uniq utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as amended by Cor. 1-2002.
HISTORY
A uniq command appeared in Version 3 AT&T UNIX.
BSD July 3, 2004 BSD