05-03-2012
remove duplicate lines with condition
Hi to all,
Does anyone know if there's a way to remove duplicate lines, where two lines are considered duplicates only if their first and second columns are the same?
For example I have:
us2333 bbb 5
us2333 bbb 3
us2333 bbb 2
and I want to get
us2333 bbb 10
The thing is, I cannot remove the 3rd column and then use uniq -c,
as the 3rd column holds information which is needed. Besides, what I want is the sum
of the 3rd column (the 3rd column is itself the result of a previous uniq -c).
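Not an answer from the thread itself, but a minimal awk sketch of the idea: use the first two fields as an array key and accumulate the third field, then print each key with its sum.

```shell
# Group on columns 1 and 2 and sum column 3.
# The printf just stands in for the real input file.
printf 'us2333 bbb 5\nus2333 bbb 3\nus2333 bbb 2\n' |
awk '{ sum[$1 FS $2] += $3 } END { for (key in sum) print key, sum[key] }'
# prints: us2333 bbb 10
```

One caveat: awk's for-in loop visits keys in an unspecified order, so with more than one key pair the output may need a final sort.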
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
I have the following file content (3 fields per line):
23 888 10.0.0.1
dfh 787 10.0.0.2
dssf dgfas 10.0.0.3
dsgas dg 10.0.0.4
df dasa 10.0.0.5
df dag 10.0.0.5
dfd dfdas 10.0.0.5
dfd dfd 10.0.0.6
daf nfd 10.0.0.6
...
As can be seen, the third field is an IP address and the file is sorted on it, but... (3 Replies)
Discussion started by: fredao
2. UNIX for Dummies Questions & Answers
I have a log file "logreport" that contains several lines as seen below:
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but... (18 Replies)
Discussion started by: Nysif Steve
3. Shell Programming and Scripting
Hi,
I came to know that using awk '!x++' removes the duplicate lines. Can anyone please explain the above syntax. I want to understand how the above awk syntax removes the duplicates.
Thanks in advance,
sudvishw :confused: (7 Replies)
Discussion started by: sudvishw
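For reference, the duplicate-removing form of this idiom is usually written with an array keyed on the whole line, awk '!seen[$0]++': the counter is 0 (so the pattern is true and the line is printed) only the first time a given line appears. A bare '!x++' with a scalar x would print only the first line of the file. A quick sketch:

```shell
# !seen[$0]++ is true only on a line's first occurrence,
# so later duplicates are suppressed while order is preserved.
printf 'a\nb\na\nc\nb\n' | awk '!seen[$0]++'
# prints: a, b, c (one per line)
```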
4. Shell Programming and Scripting
Hi, I have a huge file which is about 50GB. There are many lines. The file format looks like:
21 rs885550 0 9887804 C C T C C C C C C C
21 rs210498 0 9928860 0 0 C C 0 0 0 0 0 0
21 rs303304 0 9941889 A A A A A A A A A A
22 rs303304 0 9941890 0 A A A A A A A A A
The question is that there are a few... (4 Replies)
Discussion started by: zhshqzyc
5. Shell Programming and Scripting
Hey guys, need some help to fix this script. I am trying to remove all the duplicate lines in this file.
I wrote the following script, but it does not work. What is the problem?
The output file should only contain five lines:
Later! (5 Replies)
Discussion started by: Ernst
6. Shell Programming and Scripting
Hi
I've been scratching my head over this for some time with no solution.
I have a file like this
1 bla bla 1
2 bla bla 2
4 bla bla 3
5 bla bla 1
6 bla bla 1
I want to remove consecutive occurrences of lines like bla bla 1, but the first column may be different.
Any ideas?? (23 Replies)
Discussion started by: jamie_123
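One possible sketch for that request (an assumption, not the thread's accepted answer): save each line, blank out the first field, and print only when the remainder differs from the previous line's remainder. Reassigning $1 makes awk rebuild $0 without the first column, which gives a cheap "compare ignoring column 1".

```shell
# Keep only the first line of each consecutive run that matches
# on everything except the first column.
printf '1 bla bla 1\n2 bla bla 2\n4 bla bla 3\n5 bla bla 1\n6 bla bla 1\n' |
awk '{ line = $0; $1 = "" } $0 != prev { print line } { prev = $0 }'
# prints the first four lines; "6 bla bla 1" is dropped
```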
7. UNIX for Dummies Questions & Answers
Hi
I need this output. Thanks.
Input:
TAZ
YET
FOO
FOO
VAK
TAZ
BAR
Output:
YET
VAK
BAR (10 Replies)
Discussion started by: tara123
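That input/output pair asks for lines that appear exactly once, in original order, which rules out sort | uniq -u. A two-pass awk sketch (my assumption about the intended approach): count every line on the first pass, then print only lines whose total count is 1.

```shell
# First pass (NR == FNR) counts each line; second pass prints
# lines whose overall count is exactly 1, preserving order.
tmp=$(mktemp)
printf 'TAZ\nYET\nFOO\nFOO\nVAK\nTAZ\nBAR\n' > "$tmp"
awk 'NR == FNR { count[$0]++; next } count[$0] == 1' "$tmp" "$tmp"
rm -f "$tmp"
# prints: YET, VAK, BAR (one per line)
```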
8. Shell Programming and Scripting
Hi,
I have a csv file which contains some millions of lines in it.
The first line (header) repeats at every 50000th line. I want to remove all the duplicate headers from the second occurrence onward (it should not remove the first line).
I don't want to use any pattern from the Header as I have some... (7 Replies)
Discussion started by: sudhakar T
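A sketch of one way to do that without matching any header pattern (an assumption, not the thread's answer): remember the first line verbatim and drop every later line that equals it exactly.

```shell
# Print line 1 and remember it; thereafter print only lines
# that differ from that remembered header.
printf 'id,name\n1,a\nid,name\n2,b\n' |
awk 'NR == 1 { header = $0; print; next } $0 != header'
# prints: id,name / 1,a / 2,b
```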
9. Shell Programming and Scripting
Hi Gents,
Please can you help me to get the desired output.
In the first column I have some duplicate records. The condition is to reject the duplicate records, keeping the last occurrence. But if the last occurrence is equal to value 14 or 98 in column 3 and... (2 Replies)
Discussion started by: jiam912
10. Shell Programming and Scripting
Hi All,
I am storing the result in the variable result_text using the code below.
result_text=$(printf "$result_text\t\n$name") The result_text then contains the text below, which has duplicate lines.
file and time for the interval 03:30 - 03:45
file and time for the interval 03:30 - 03:45 ... (4 Replies)
Discussion started by: nalu