Extract duplicate fields in rows
Post 302147739 by Franklin52 on Wednesday 28th of November 2007 07:43:22 AM
Try:

Code:
sort -t ';' -k 2,2 file | awk -F ';' 'dat==$2{print $0}{dat=$2}'

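This works because the sort groups all rows with the same second ';'-separated field together, and the awk keeps the previous row's second field in dat and prints the current row whenever the two match, so each repeated value is printed from its second occurrence onward (the first row of a duplicate group is not shown). A quick run on made-up sample data, since the thread's actual input isn't reproduced here:

Code:
$ cat file
a;100;x
b;200;y
c;100;z
$ sort -t ';' -k 2,2 file | awk -F ';' 'dat==$2{print $0}{dat=$2}'
c;100;z
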
Regards
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

duplicate rows in a file

Hi all, can anyone please let me know if there is a way to find duplicate rows in a file? I have a file that has hundreds of numbers (one per row). I want to find the numbers that are repeated in the file, e.g. 123434 534 5575 4746767 347624 5575; I want 5575. Please help. (3 Replies)
Discussion started by: infyanurag
3 Replies
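
For a case like this, with one number per line, a minimal approach is to sort the file and let uniq report only the repeated lines (the file name here is made up):

Code:
sort numbers.txt | uniq -d

uniq -d prints one copy of each line that occurs more than once in its sorted input, so the sample above would print 5575.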

2. Shell Programming and Scripting

How to extract duplicate rows

I have searched the internet for duplicate row extraction. All I have seen is extracting the good rows or eliminating duplicate rows. How do I extract the duplicate rows themselves from a flat file in UNIX? I'm using Korn shell on HP-UX. For e.g. FlatFile.txt ======== 123:456:678 123:456:678 123:456:876... (5 Replies)
Discussion started by: bobbygsk
5 Replies
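
Assuming "duplicate rows" here means identical whole lines, a one-pass awk sketch that prints every repeat occurrence (the second and later copies of a line) would be:

Code:
# seen[$0]++ is 0 (false) the first time a line appears, so only later copies print
awk 'seen[$0]++' FlatFile.txt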

3. HP-UX

How to get Duplicate rows in a file

Hi all, I have written a shell script whose output file contains SQL query output. From that file, I want to extract the rows that have multiple entries (duplicate rows). For example, the output file looks like the following. ... (7 Replies)
Discussion started by: raghu.iv85
7 Replies
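
If the duplicates are again identical whole lines, a two-pass awk keeps every copy of each repeated row, which is handy when the duplicates need to be seen in full (the file name is hypothetical):

Code:
# pass 1 counts each line, pass 2 prints the lines whose count exceeds 1
awk 'NR==FNR { count[$0]++; next } count[$0] > 1' sqlout.txt sqlout.txt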

4. Shell Programming and Scripting

How to extract duplicate rows

Hi! I have a file as below: line1 line2 line2 line3 line3 line3 line4 line4 line4 line4 I would like to extract only the lines that appear exactly twice (not unique, triplicate or quadruplicate lines). The output would be as below: line2 line2 I would appreciate it if anyone can help. Thanks. (4 Replies)
Discussion started by: chromatin
4 Replies
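
Since only the lines that occur exactly twice are wanted (line2 line2 in the sample), the same two-pass counting idea with an exact test fits; assuming the data is in a file called file:

Code:
# pass 1 counts occurrences, pass 2 prints only the lines seen exactly twice
awk 'NR==FNR { c[$0]++; next } c[$0] == 2' file file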

5. Shell Programming and Scripting

Find duplicate based on 'n' fields and mark the duplicate as 'D'

Hi, in a file I have to mark duplicate records as 'D' and the latest record alone as 'C'. In the file below, I have to identify whether duplicate records exist based on Man_ID, Man_DT and Ship_ID, and mark the record with the latest Ship_DT as "C" and the others as "D" (I have to create... (7 Replies)
Discussion started by: machomaddy
7 Replies
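
The full record layout is cut off in the excerpt, but the general shape of a solution is: sort so that rows sharing the key are adjacent with the latest Ship_DT first, then let awk tag the first row of each key group as C and the rest as D. A sketch assuming comma-separated records with Man_ID, Man_DT, Ship_ID and Ship_DT in fields 1-4 (a hypothetical layout):

Code:
# newest Ship_DT first within each Man_ID,Man_DT,Ship_ID group
sort -t ',' -k1,3 -k4,4r file |
awk -F ',' '{ key = $1 FS $2 FS $3 }
            key == prev { print $0 ",D"; next }   # remaining rows of the group
            { print $0 ",C"; prev = key }'        # first row of the group = latest Ship_DT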

6. Shell Programming and Scripting

Extract fields from different rows.

Hi, I have data like below. SID=D6EB96CC0 HID=9C246D6 CSource=xya Cappe=1 Versionc=3670 MAR1=STL MARS2=STL REQ_BUFFER_ENCODING=UTF-8 REQ_BUFFER_ORIG_ENCODING=UTF-8 RESP_BODY_ENCODING=UTF-8 CON_ID=2713 I want to select CSource=xya (18 Replies)
Discussion started by: chetan.c
18 Replies
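
The request is truncated, but if the goal is simply to pull the value of a given key out of those key=value lines, awk with '=' as the field separator is enough (the file name is made up):

Code:
# print the value of every CSource= line
awk -F '=' '$1 == "CSource" { print $2 }' data.txt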

7. Shell Programming and Scripting

Delete duplicate rows

Hi, this is a follow-up to my earlier post: him mno klm 20 76 . + . klm_mango unix_00000001; alp fdc klm 123 456 . + . klm_mango unix_0000103; her tkr klm 415 439 . + . klm_mango unix_00001043; abc tvr klm 20 76 . + . klm_mango unix_00000001; abc def klm 83 84 . + . klm_mango... (5 Replies)
Discussion started by: jacobs.smith
5 Replies
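
If deleting duplicates means keeping only the first occurrence of each identical line, the classic one-pass awk idiom does it without sorting:

Code:
# a line is printed only the first time it is seen
awk '!seen[$0]++' file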

8. Shell Programming and Scripting

Extract and count number of Duplicate rows

Hi all, I need to extract duplicate rows from a file and write these bad records into another file, and I need a count of these bad records. I have a command: awk '{ s[$0]++ } END { for (i in s) { if (s[i] > 1) { print i } } }' ${TMP_DUPE_RECS} >> ${TMP_BAD_DATA_DUPE_RECS}... (5 Replies)
Discussion started by: Arun Mishra
5 Replies
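
Building on that corrected command, one way to both write the bad records to the second file and report how many there were is the sketch below, which reuses the variable names from the question (the counter n is illustrative):

Code:
# second and later copies of a line go to the bad-records file; n counts them
awk -v out="${TMP_BAD_DATA_DUPE_RECS}" \
    'seen[$0]++ { print > out; n++ }
     END { print n+0, "duplicate records" }' "${TMP_DUPE_RECS}"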

9. Shell Programming and Scripting

Extract duplicate rows with conditions

Gents, can you help please? Input file: 5490921425 1 7 1310342 54909214251 5490921425 2 1 1 54909214252 5491120937 1 1 3 54911209371 5491120937 3 1 1 54911209373 5491320785 1 ... (4 Replies)
Discussion started by: jiam912
4 Replies
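
The conditions are cut off in the excerpt, but if the intent is to keep the rows whose first column value occurs on more than one row (5490921425 and 5491120937 in the sample), a two-pass awk keyed on field 1 is a starting point (the file name is made up):

Code:
# keep rows whose first field appears more than once
awk 'NR==FNR { c[$1]++; next } c[$1] > 1' input.txt input.txt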

10. Shell Programming and Scripting

Extract and exclude rows based on duplicate values

Hello, I have a file like this: > cat examplefile ghi|NN603762|eee mno|NN607265|ttt pqr|NN613879|yyy stu|NN615002|uuu jkl|NN607265|rrr vwx|NN615002|iii yzA|NN618555|ooo def|NN190486|www BCD|NN628717|ppp abc|NN190486|qqq EFG|NN628717|aaa HIJ|NN628717|sss > I can sort the file by... (5 Replies)
Discussion started by: CHoggarth
5 Replies
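
The question is truncated, but reading it as "send rows whose second |-delimited field is duplicated to one file and the remaining rows to another", a two-pass awk keyed on field 2 would look like this (the output file names are made up):

Code:
# pass 1 counts the field-2 values, pass 2 routes each row by that count
awk -F '|' 'NR==FNR { c[$2]++; next }
            c[$2] > 1 { print > "dup_rows.txt"; next }
            { print > "uniq_rows.txt" }' examplefile examplefile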