Full Discussion: Sort, Uniq, Duplicates
Post 302117708 by matrixmadhan on Wednesday 16th of May 2007 05:32:09 AM
Quote:
This gets 25060008,0040,03, into the Duplicates file.
But I also want 84176527,0047,03, in the Duplicates file.

Basically I want the script to sort on the first 2 fields (delimited by comma), and if duplicates are found on the first 2 fields, I want those records written to the "Duplicates" file.

In the above sample of records only the third field, '03', is common,
and not the first or the second field.

How would you expect those to be treated as duplicates based on the first two fields?
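
For the case the poster describes, a minimal two-pass awk sketch (the input and output file names are assumptions): count each first-two-field key on the first pass, then write every record whose key occurs more than once to Duplicates.

    awk -F, 'NR==FNR { cnt[$1","$2]++; next } cnt[$1","$2] > 1' input.csv input.csv > Duplicates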
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

sort/uniq

I have a file containing: Fred Fred Fred Jim Fred Jim Jim. If sort is executed on the listed file, shouldn't the output be: Fred Fred Fred Fred Jim Jim Jim? (3 Replies)
Discussion started by: jimmyflip
3 Replies
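
For what it's worth, with one name per line sort does group the names exactly like that (a small reproduction; the file name names.txt is made up):

    printf '%s\n' Fred Fred Fred Jim Fred Jim Jim > names.txt
    sort names.txt              # Fred Fred Fred Fred Jim Jim Jim, one per line
    sort names.txt | uniq -c    # 4 Fred / 3 Jim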

2. UNIX for Dummies Questions & Answers

Help with Last,uniq, sort and cut

Using the last, uniq, sort and cut commands, determine how many times the different users have logged in. I know how to use the last command and cut command... I came up with last | cut -f1 -d" " | uniq. I don't know if this is right, can someone please help me... thanks (1 Reply)
Discussion started by: jay1228
1 Replies
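
A hedged refinement of that pipeline: uniq only collapses adjacent lines, so the usernames need to be sorted first, and uniq -c adds the per-user login count (the trailing "wtmp begins" line from last will show up as one extra entry).

    last | cut -f1 -d' ' | sort | uniq -c | sort -rn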

3. Shell Programming and Scripting

sort and uniq in perl

Does anyone have a quick and dirty way of performing a sort and uniq in Perl? I have an array with data like: this is bkupArr BOLADVICE_VN this is bkupArr MLT6800PROD2A this is bkupArr MLT6800PROD2A this is bkupArr BOLADVICE_VN_7YR this is bkupArr MLT6800PROD2A. I want to sort it... (4 Replies)
Discussion started by: reggiej
4 Replies
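
One quick-and-dirty possibility, sketched as a one-liner reading from a file (the name bkup.txt is an assumption): sort the lines, then keep only the first occurrence of each with a "seen" hash.

    perl -e 'my %seen; print grep { !$seen{$_}++ } sort <>' bkup.txt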

4. Shell Programming and Scripting

Removing duplicates [sort , uniq]

Hey Guys, I have a file which looks like this: Contig201#numbPA Contig1452#nmdynD6PA dm022p15.r#CG6461PA dm005e16.f#SpatPA IGU001_0015_A06.f#CG17593PA. I need to remove duplicates based on the characters matching up to '#'. For example, if we consider this... Contig201#numbPA... (4 Replies)
Discussion started by: sharatz83
4 Replies
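
A minimal sketch for that (the file name seqs.txt is assumed): treat everything before '#' as the key and keep only the first record for each key.

    awk -F'#' '!seen[$1]++' seqs.txt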

5. Shell Programming and Scripting

Help with Uniq and sort

The key is the first field; I want only unique records for the first field in the file. I want the output as ... or output as ... Appreciate help on this. (4 Replies)
Discussion started by: pinnacle
4 Replies
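
The samples did not survive here, but a common sketch for "one record per value of the first field" (whitespace-separated fields and the file name are assumptions) is:

    awk '!seen[$1]++' file       # keeps the first record seen for each key
    sort -u -k1,1 file           # alternative: any one record per key, sorted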

6. Shell Programming and Scripting

sort | uniq question

Hello, I have a large data file: 1234 8888 bbb 2745 8888 bbb 9489 8888 bbb 1234 8888 aaa 4838 8888 aaa 3977 8888 aaa I need to remove duplicate lines (where the first column is the duplicate). I have been using: sort file.txt | uniq -w4 > newfile.txt However, it seems to keep the... (11 Replies)
Discussion started by: palex
11 Replies
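
A hedged alternative that keys on the whole first column rather than a fixed four-character width, and does not require the duplicate keys to be adjacent (file names taken from the post):

    awk '!seen[$1]++' file.txt > newfile.txt   # keeps the first line for each first-column value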

7. Shell Programming and Scripting

Sort and uniq after comparison

Hi All, I have a text file with the format shown below. Some of the records are duplicated with the only exception being date (Field 15). I want to compare all duplicate records using subscriber number (field 7) and keep only those records with greater date. ... (1 Reply)
Discussion started by: nua7
1 Replies
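
The exact format did not survive in this preview, so purely as a sketch (assuming comma-separated fields, a date in field 15 that sorts correctly as text such as YYYYMMDD, and made-up file names): sort by subscriber, then by date descending, and keep the first line seen for each subscriber.

    sort -t, -k7,7 -k15,15r data.csv | awk -F, '!seen[$7]++' > latest.csv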

8. Shell Programming and Scripting

Sort uniq or awk

Hi again, I have files with the following contents: datetime,ip1,port1,ip2,port2,number. How would I find out how many times the ip1 field shows up in a particular file? Then how would I find out how many times ip1 and port2 show up together? Please mind the file may contain 100k lines. (8 Replies)
Discussion started by: LDHB2012
8 Replies
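
Counting occurrences is comfortable at 100k lines with cut, sort and uniq (the file name conns.csv is made up; field positions follow the layout above):

    cut -d, -f2 conns.csv   | sort | uniq -c | sort -rn    # how often each ip1 appears
    cut -d, -f2,5 conns.csv | sort | uniq -c | sort -rn    # how often each ip1,port2 pair appears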

9. Shell Programming and Scripting

Uniq or sort -u or similar only between { }

Hi! I am trying to remove doubled entries in a text file, but only between delimiters, as in the example below, and I don't know how to do that with sort or similar. input: { aaa aaa } { aaa aaa } output: { aaa } { (8 Replies)
Discussion started by: fugitivus
8 Replies
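
One way to sketch it in awk (assuming the braces sit on lines of their own, as in the example, and a made-up input file name): clear the "seen" set at every opening brace so duplicates are only suppressed within each { ... } block.

    awk '$1 == "{" { split("", seen) } !seen[$0]++' input.txt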

10. UNIX for Dummies Questions & Answers

Uniq and sort -u

Hello all, need to pick your brains. I have a 10Gb file where each row is a name, and I am expecting about 50 names in total, so there are a lot of repetitions in clusters. I want to do a sort -u file. Will it be considerably faster or slower to use a uniq before piping it to sort... (3 Replies)
Discussion started by: senhia83
3 Replies
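
Since the repeats arrive in clusters, collapsing adjacent duplicates with a plain uniq before sorting cuts down what sort has to hold, so it is usually worth timing both variants (the file name is made up):

    sort -u names.txt            # straightforward
    uniq names.txt | sort -u     # pre-collapse adjacent repeats, then sort the survivors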
MRTG-LOGFILE(1) 						       mrtg							   MRTG-LOGFILE(1)

NAME
       mrtg-logfile - description of the mrtg-2 logfile format

SYNOPSIS
       This document provides a description of the contents of the mrtg-2 logfile.

OVERVIEW
       The logfile consists of two main sections.

       The first line
              Stores the traffic counters from the most recent run of mrtg.

       The rest of the file
              Stores past traffic rate averages and maxima at increasing intervals.

       The first number on each line is a Unix timestamp. It represents the number of
       seconds since 1970.

DETAILS
   The first line
       The first line has 3 numbers which are:

       A (1st column)
              A timestamp of when MRTG last ran for this interface. The timestamp is the
              number of non-skip seconds passed since the standard UNIX "epoch" of
              midnight on the 1st of January 1970 GMT.

       B (2nd column)
              The "incoming bytes counter" value.

       C (3rd column)
              The "outgoing bytes counter" value.

   The rest of the file
       The second and remaining lines of the file contain 5 numbers which are:

       A (1st column)
              The Unix timestamp for the point in time the data on this line is relevant.
              Note that the interval between timestamps increases as you progress through
              the file. At first it is 5 minutes and at the end it is one day between two
              lines. This timestamp may be converted in OpenOffice Calc or MS Excel by
              using the formula =(x+y)/86400+DATE(1970;1;1) (depending on your locale you
              may have to use "," instead of ";"). You can also ask perl to help by typing

                  perl -e 'print scalar localtime(x)," "'

              x is the unix timestamp and y is the offset in seconds from UTC (Perl knows y).

       B (2nd column)
              The average incoming transfer rate in bytes per second. This is valid for the
              time between the A value of the current line and the A value of the previous
              line.

       C (3rd column)
              The average outgoing transfer rate in bytes per second since the previous
              measurement.

       D (4th column)
              The maximum incoming transfer rate in bytes per second for the current
              interval. This is calculated from all the updates which have occurred in the
              current interval. If the current interval is 1 hour, and updates have occurred
              every 5 minutes, it will be the biggest 5 minute transfer rate seen during
              the hour.

       E (5th column)
              The maximum outgoing transfer rate in bytes per second for the current
              interval.

AUTHOR
       Butch Kemper <kemper@bihs.net> and Tobias Oetiker <tobi@oetiker.ch>

2.17.4                                   2012-01-12                        MRTG-LOGFILE(1)
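
As a small illustration of the format described above, the first line of a log can be pulled apart in the shell and its timestamp converted exactly as the page suggests (the log file path is made up):

    read ts in_cnt out_cnt < /var/www/mrtg/router1.log      # timestamp + the two counter values
    perl -e 'print scalar localtime(shift), "\n"' "$ts"     # when mrtg last ran for this interface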