Need to remove duplicate lines that satisfy the condition below:
if error, server, network, dept and date are all the same, then keep the latest line and remove the older, earlier-timed duplicates.
Your code collects one unique line per time stamp ($NF is the last field on the line, i.e. the time stamp), not one per the criteria you listed.
I don't know what the FS=" *" part is supposed to do; the regular whitespace splitting that awk uses by default should work, and that FS is more or less the same thing anyway (not sure whether you have tabs in there or not).
The keys you want to use are $1 (error), $2 (server), $3 (network), $4 (dept), and $5 (date). You probably want to do the arithmetic normalization on the time field, not on the date.
I moved the END to the end (sic) purely for readability reasons; awk doesn't care much where in the script you put it.
So t contains the time stamp from $7 with the colons removed, and k is the combination of the fields you want to compare time stamps for (error, server, network, dept, date). If t is bigger than the old t you have for this k in m[k] (or it doesn't exist, meaning it's effectively zero), replace it, and remember the whole line in r[k]. Finally print all the lines in r.
Oh, the single number 1 after the closing brace is significant, too; it causes the header lines to be printed. If you don't want to print them, take it out. (It's a shorthand: the pattern 1 is true for every remaining line -- every line not already consumed by an earlier rule's next -- and the default action for a true pattern is to print the line.)
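Putting the pieces together, here is a minimal sketch of the script described above. The field layout ($1=error, $2=server, $3=network, $4=dept, $5=date, $7=time) and the sample data are assumptions based on this description, not the poster's actual file; adjust the field numbers to match your data.

```shell
#!/bin/sh
# Hypothetical sample input: one header line, then whitespace-separated
# records with the time stamp in field 7.
cat > /tmp/dedup_input.txt <<'EOF'
ERROR SERVER NETWORK DEPT DATE ID TIME
E1 srv1 net1 sales 2008-05-01 a1 09:15:00
E1 srv1 net1 sales 2008-05-01 a2 11:30:00
E2 srv2 net2 hr 2008-05-01 b1 08:00:00
EOF

result=$(awk '
NR > 1 {
    t = $7                  # time stamp
    gsub(/:/, "", t)        # 11:30:00 -> 113000, so it compares numerically
    k = $1 SUBSEP $2 SUBSEP $3 SUBSEP $4 SUBSEP $5   # composite key
    if (t + 0 > m[k] + 0) { # an undefined m[k] compares as zero
        m[k] = t
        r[k] = $0           # remember the newest line seen for this key
    }
    next
}
1                           # any line not handled above (the header) prints
END { for (k in r) print r[k] }
' /tmp/dedup_input.txt)

printf '%s\n' "$result"
```

Note that for (k in r) visits the keys in an unspecified order, so the deduplicated lines may come out in a different order than they appeared in the input; pipe through sort afterwards if that matters.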
Last edited by era; 05-06-2008 at 05:26 AM..
Reason: m[k] is effectively zero if it's not defined; single 1 prints header