I have a large set of files structured about the same way:
Code:
02:53:43 03/03/2014 r/w I/O per second KBytes per sec Svt ms IOSz KB
UniqID name Host Port Cur1 Avg Max Cur1 Avg Max Cur1 Avg Cur1 Avg Que..
0 Name1 Host1 2:5:3 r 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -..
0 Name1 Host1 2:5:3 w 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -..
0 Name1 Host1 2:5:3 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0..
1 Name2 Host1 3:5:3 r 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -..
1 Name2 Host1 3:5:3 w 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -..
1 Name2 Host1 3:5:3 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0..
2 Name3 Host2 2:5:3 r 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -..
2 Name3 Host2 2:5:3 w 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -..
02:54:13 03/03/2014 r/w I/O per second KBytes per sec Svt ms IOSz KB
UniqID name Host Port Cur1 Avg Max Cur1 Avg Max Cur1 Avg Cur1 Avg Que..
This goes on for about 6.7GB, and the other files are quite similar. The eventual goal is to chart each Name# against each of the column headers. I'll be slicing and dicing this in any number of ways: by t, by r, by w; possibly by Cur1, Cur2.
So I can CSV the files, but I'll be dealing with these daily. I'd like to chart them, then archive the source files.
I have a few options: I can awk each column and loop with bash, but this seems like a job for a perl script. I guess it might be prudent to start by splitting the files into 999 separate files and parsing each? Right now I just want to get away from manually grepping each t for each name, using paste to append the timestamp, and importing THAT into Excel just to generate a chart (however lovely it might be). As an investigatory tool, this needs to be pretty accurate.
So maybe what I need is to just keep the file whole, then append the timestamp of each sample to every line in that sample. That would save me the time of splitting and then pasting.
What do you smart guys think?
-k
Last edited by Don Cragun; 03-04-2014 at 09:49 PM..
Reason: Add CODE tags.
awk or perl should not matter... I would go with whatever is quickest for you. It seems you want to sum up all entries per name# (not clear on this). I would use awk to match on the timestamp, then summarize by similar name numbers, then write a new CSV type of file with the timestamp being part of each record. Something like
Code:
timestamp | name# | value | value | value | value | ....
Then you can combine everything into one gigantic file, so it won't matter how many input files you process. The savings here is the single file, where each line conveys where it came from.
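A minimal sketch of that idea, assuming the layout shown in the sample above (timestamp header line, repeated "UniqID ..." column-header lines, then data rows). The `sample.log` and `sample.csv` filenames are illustrative; point the awk at your real files instead. It carries each sample's timestamp forward and prefixes it to every data row, emitting one CSV record per line:

```shell
#!/bin/sh
# Build a small stand-in for one of the 6.7GB log files.
cat > sample.log <<'EOF'
02:53:43 03/03/2014 r/w I/O per second KBytes per sec Svt ms IOSz KB
UniqID name Host Port Cur1 Avg Max Cur1 Avg Max Cur1 Avg Cur1 Avg Que
0 Name1 Host1 2:5:3 r 0 0 0 0 0 0 0.00 0.00 0.0 0.0 -
0 Name1 Host1 2:5:3 t 1 2 3 4 5 6 0.00 0.00 0.0 0.0 0
02:54:13 03/03/2014 r/w I/O per second KBytes per sec Svt ms IOSz KB
UniqID name Host Port Cur1 Avg Max Cur1 Avg Max Cur1 Avg Cur1 Avg Que
1 Name2 Host1 3:5:3 t 7 8 9 0 0 0 0.00 0.00 0.0 0.0 0
EOF

awk '
  # A timestamp header looks like "02:53:43 03/03/2014 ..." -- remember it.
  /^[0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9]\// {
      ts = $2 " " $1            # date first, then time
      next
  }
  /^UniqID/ { next }            # skip the repeated column-header lines
  NF {                          # any non-empty line left is a data row
      printf "%s", ts
      for (i = 1; i <= NF; i++) printf ",%s", $i
      print ""
  }
' sample.log > sample.csv

cat sample.csv
```

Each output record then starts with `03/03/2014 02:53:43,0,Name1,Host1,2:5:3,r,...`, so you can concatenate the CSVs from all your daily files and still filter by name, by r/w/t, or by time window with a single grep or awk pass.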