I have been trying to find all IDs for entries with duplicate names in the 2nd and 3rd columns, along with a count of how many times each name is duplicated.
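The post is cut off, but one way to sketch this — assuming whitespace-separated input with the ID in column 1 and the name spread across columns 2 and 3 (both the layout and the file name names.txt are assumptions) — is a single awk pass that groups IDs by name pair:

```shell
# Invented sample: ID in column 1, name in columns 2-3.
cat > names.txt <<'EOF'
101 john smith
102 mary jones
103 john smith
EOF

# Collect IDs per (col2, col3) name pair; report pairs seen more than once.
awk '{ key = $2 " " $3; ids[key] = ids[key] " " $1; cnt[key]++ }
     END { for (k in cnt) if (cnt[k] > 1) print k ": count=" cnt[k] ", ids=" ids[k] }' names.txt
```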
Hello,
My text file has input of the form
abc dft45.xml
ert rt653.xml
abc ert57.xml
I need to write a Perl or shell script to find duplicates in the first column and write them into a text file of the form...
abc dft45.xml
abc ert57.xml
Can someone help me, please? (5 Replies)
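A two-pass awk is one common way to do this: the first pass counts first-column values, the second prints every line whose key repeats. The file names input.txt and dups.txt are placeholders:

```shell
# Sample data mirroring the post.
cat > input.txt <<'EOF'
abc dft45.xml
ert rt653.xml
abc ert57.xml
EOF

# Pass 1 (NR==FNR) counts column-1 values; pass 2 prints lines
# whose column-1 value occurred more than once.
awk 'NR == FNR { cnt[$1]++; next } cnt[$1] > 1' input.txt input.txt > dups.txt
cat dups.txt
```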
Given a file such as this I need to remove the duplicates.
00060011 PAUL BOWSTEIN ad_waq3_921_20100826_010517.txt
00060011 PAUL BOWSTEIN ad_waq3_921_20100827_010528.txt
0624-01 RUT CORPORATION ad_sade3_10_20100827_010528.txt
0624-01 RUT CORPORATION ... (13 Replies)
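If "remove the duplicates" means keeping only the first line for each column-1 key (one possible reading; the post may want something else entirely), the classic `!seen[$1]++` idiom does it in one pass:

```shell
# Shortened sample of the posted data.
cat > report.txt <<'EOF'
00060011 PAUL BOWSTEIN ad_waq3_921_20100826_010517.txt
00060011 PAUL BOWSTEIN ad_waq3_921_20100827_010528.txt
0624-01 RUT CORPORATION ad_sade3_10_20100827_010528.txt
EOF

# seen[$1]++ is 0 (false) only the first time a key appears,
# so !seen[$1]++ prints exactly the first line per key.
awk '!seen[$1]++' report.txt
```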
Hi,
I need an awk script (or whatever shell construct) that would take data like below and get the max value of the 3rd column, when grouping by the 1st column.
clientname,day-of-month,max-users
-----------------------------------
client1,20120610,5
client2,20120610,2
client3,20120610,7... (3 Replies)
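A sketch of the group-by-max, under the assumption that the header and dashed line have already been stripped (or aren't present in the real data):

```shell
# Data rows only: clientname,day-of-month,max-users
cat > usage.csv <<'EOF'
client1,20120610,5
client2,20120610,2
client3,20120610,7
client1,20120611,9
EOF

# Keep the largest column-3 value seen for each column-1 key.
awk -F, '{ if (!($1 in max) || $3+0 > max[$1]) max[$1] = $3+0 }
         END { for (c in max) print c "," max[c] }' usage.csv | sort
```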
Hi,
I have a file (sorted by sort) with 8 tab delimited columns. The first column contains duplicated fields and I need to merge all these identical lines.
My input file:
comp100002 aaa bbb ccc ddd eee fff ggg
comp100003 aba aba aba aba aba aba aba
comp100003 fff fff fff fff fff fff fff... (5 Replies)
Hi, I have a file with +/- 13000 lines and 4 columns. I need to search the 3rd column for a word that begins with "SAP-" and move it to the next (4th) column, because the 3rd column needs to stay empty.
Thanks in advance.:)
89653 36891 OTR-60 SAP-2
89653 36892 OTR-10 SAP-2... (2 Replies)
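Assuming the rows to fix carry the SAP- word in column 3 (and simply lack a 4th column), and that tab-separated output is acceptable so the emptied column 3 remains visible, a sketch:

```shell
# Invented sample: first row has SAP-2 in column 3, second is already correct.
cat > parts.txt <<'EOF'
89653 36891 SAP-2
89653 36892 OTR-10 SAP-2
EOF

# Shift a SAP- word from column 3 to column 4, leaving column 3 empty;
# $1 = $1 forces awk to rebuild every record with the tab OFS.
awk 'BEGIN { OFS = "\t" }
     $3 ~ /^SAP-/ { $4 = $3; $3 = "" }
     { $1 = $1; print }' parts.txt
```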
input
"A","B","C,D","E","F"
"S","T","U,V","W","X"
"AA","BB","CC,DD","EEEE","FFF"
required output:
"A","B","C,D","C,D","F"
"S","T","U,V","U,V","X"
"AA","BB","CC,DD","CC,DD","FFF"
I tried using awk, but the double quotes are not preserved for every field. Any help to solve this is much... (5 Replies)
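Since every field is quoted, one trick is to split on the literal three-character sequence `","` — embedded commas inside quotes then survive, and the outermost quotes stay attached to the first and last fields. (This breaks if a quoted field itself contains `","`.)

```shell
cat > quoted.csv <<'EOF'
"A","B","C,D","E","F"
"S","T","U,V","W","X"
EOF

# FS and OFS are both the literal string ","  so the record
# reassembles with all quotes intact after $4 is overwritten.
awk 'BEGIN { FS = OFS = "\",\"" } { $4 = $3; print }' quoted.csv
```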
Hi Experts,
Please bear with me, I need help.
I am learning awk and am stuck on one issue.
First point: I want to sum up the values in columns 7, 9, 11, 13 and 15 if the rows in column 5 are duplicates. No action is to be taken for rows where the value in column 5 is unique.
Second point : For... (1 Reply)
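For the first point only (the second is cut off), a two-pass awk can sum the odd-numbered columns 7 through 15 for each duplicated column-5 key. The sample below is invented, with x as filler in the untouched columns:

```shell
# 15-column sample; column 5 is the key (x = filler).
cat > wide.txt <<'EOF'
a b c d K1 x 1 x 2 x 3 x 4 x 5
a b c d K1 x 10 x 20 x 30 x 40 x 50
a b c d K2 x 7 x 7 x 7 x 7 x 7
EOF

# Pass 1 counts rows per column-5 value; pass 2 sums columns
# 7, 9, 11, 13, 15 only for keys that occur more than once.
awk '
NR == FNR { cnt[$5]++; next }
cnt[$5] > 1 {
    for (i = 7; i <= 15; i += 2) sum[$5, i] += $i
    keys[$5] = 1
}
END {
    for (k in keys) {
        out = k
        for (i = 7; i <= 15; i += 2) out = out " " sum[k, i]
        print out
    }
}' wide.txt wide.txt
```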
Hello Team,
My source data (input) is like below
EPIC1 router EPIC2 Targetdefinition
Exp1 Expres rtr1 Router
SQL SrcQual Exp1 Expres
rtr1 Router EPIC1 Targetdefinition
My output should look like this:
SQL SrcQual Exp1 Expres
Exp1 Expres rtr1 Router
rtr1 Router EPIC1 Targetdefinition... (5 Replies)
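One reading of the requirement: column 3 of each line names the column-1 object of the line that should follow it, so the desired output is the chain walked from the object that never appears in column 3. A hedged sketch (mapping.txt is a placeholder name):

```shell
cat > mapping.txt <<'EOF'
EPIC1 router EPIC2 Targetdefinition
Exp1 Expres rtr1 Router
SQL SrcQual Exp1 Expres
rtr1 Router EPIC1 Targetdefinition
EOF

# Index each line by its column-1 object, record the column-3 successor,
# find the head (never a successor), then walk and print the chain.
awk '
{ line[$1] = $0; nxt[$1] = $3; child[$3] = 1 }
END {
    for (n in line) if (!(n in child)) head = n
    for (n = head; n in line; n = nxt[n]) print line[n]
}' mapping.txt
```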
Discussion started by: sekhar.lsb
LOGFILE(1) mrtg LOGFILE(1)

NAME
logfile - description of the mrtg-2 logfile format
SYNOPSIS
This document provides a description of the contents of the mrtg-2 logfile.
OVERVIEW
The logfile consists of two main sections. A very short one at the beginning:
The first Line
It stores the traffic counters from the most recent run of mrtg
The rest of the File
Stores past traffic rate averages and maxima at increasing intervals
The first number on each line is a unix time stamp. It represents the number of seconds since 1970.
DETAILS
The first Line
The first line has 3 numbers which are:
A (1st column)
A timestamp of when MRTG last ran for this interface. The timestamp is the number of non-skip seconds passed since the standard UNIX
"epoch" of midnight on 1st of January 1970 GMT.
B (2nd column)
The "incoming bytes counter" value.
C (3rd column)
The "outgoing bytes counter" value.
The rest of the File
The second and remaining lines of the file contain 5 numbers, which are:
A (1st column)
The Unix timestamp for the point in time the data on this line is relevant. Note that the interval between timestamps increases as you
progress through the file. At first it is 5 minutes, and at the end it is one day between two lines.
This timestamp may be converted in EXCEL by using the following formula:
=(x+y)/86400+DATE(1970,1,1)
you can also ask perl to help by typing
perl -e 'print scalar localtime(x),"\n"'
x is the unix timestamp and y is the offset in seconds from UTC. (Perl knows y).
B (2nd column)
The average incoming transfer rate in bytes per second. This is valid for the time between the A value of the current line and the A
value of the previous line.
C (3rd column)
The average outgoing transfer rate in bytes per second since the previous measurement.
D (4th column)
The maximum incoming transfer rate in bytes per second for the current interval. This is calculated from all the updates which have
occurred in the current interval. If the current interval is 1 hour, and updates have occurred every 5 minutes, it will be the biggest 5
minute transfer rate seen during the hour.
E (5th column)
The maximum outgoing transfer rate in bytes per second for the current interval.
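The layout above can be exercised with a tiny hand-made logfile (all numbers below are invented): the first line holds the raw counters, and every later line has the five columns A through E.

```shell
# Invented mrtg-2 style logfile: counter line, then A B C D E data lines
# (timestamp, avg in, avg out, max in, max out).
cat > mrtg.log <<'EOF'
946684800 123456 654321
946684500 100 200 150 250
946684200 90 180 120 240
EOF

# Skip the counter line and label the five columns of each data line.
awk 'NR > 1 { print "ts=" $1, "avg_in=" $2, "avg_out=" $3, "max_in=" $4, "max_out=" $5 }' mrtg.log
```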
AUTHOR
Butch Kemper <kemper@bihs.net> and Tobias Oetiker <oetiker@ee.ethz.ch>
3rd Berkeley Distribution 2.9.17 LOGFILE(1)