Shell Programming and Scripting: Parsing a log file and creating a report script - post by RudiC, 16 August 2017
How about
Code:
awk -F"\.\.+"   '
BEGIN           {HD = "Device	Manufacturer	Machine Type and Model	FRU Number	Serial Number	Part Number"
                 for (MX=n=split (HD, HDArr, "\t"); n>0; n--)   {SRCH[HDArr[n]]                # column titles double as attribute labels to search for
                                                                 LNG[n] = length(HDArr[n])     # column widths for the report
                                                                }
                }

                                        # echo the lead-in lines (SYSTEM, MODEL, ...) of the report verbatim
/^(SYSTEM|MODEL|PROCESS|NUMBER)/

!LEADINDONE &&
!NF             {LEADINDONE    = 1
                 print
                 print HD
                 gsub (/[^\t]/, "-", HD)
                 print HD
                }
!LEADINDONE     {next
                }

                                        # a one-field line starts a new device stanza:
                                        # flush the previous device first (only if it collected any data)
NF == 1         {TMPL = RES[HDArr[4]] RES[HDArr[5]] RES[HDArr[6]]
                 if (TMPL != "") for (i=1; i<=MX; i++) printf "%-*s%s", LNG[i], RES[HDArr[i]]?RES[HDArr[i]]:"NA", (i == MX)?ORS:OFS
#                       else     delete CNT[TMPCNT]
                 split ("", RES)
                 split ($0, T, " ")
                 RES[HDArr[1]] = T[1]
                 TMPCNT = T[1]
                 sub (/[0-9]*$/, _, TMPCNT)
                 CNT[TMPCNT]++
                }

NF < 2          {next
                }

                {sub (/^ */, "", $1)                    # trim leading blanks from the attribute label
                }

$1 in SRCH      {RES[$1] = $NF                          # keep values whose label matches a report column
                }

                                        # flush the last device stanza, then print the per-class totals
END             {for (i=1; i<=MX; i++) printf "%-*s%s", LNG[i], RES[HDArr[i]]?RES[HDArr[i]]:"NA", (i == MX)?ORS:OFS
                 printf RS
                 for (c in CNT) print "Total", c, ":", CNT[c]
                }

' OFS="\t" file

Applied to your file, this yields:
SYSTEM: nb11cu51
MODEL, TYPE, and SN: IBM,9026-P70,01100699F
PROCESSOR TYPE: PowerPC_RS64-II
NUMBER OF PROCESSORS: 4

Device    Manufacturer    Machine Type and Model    FRU Number    Serial Number    Part Number
------    ------------    ----------------------    ----------    -------------    -----------
rmt0  	EXABYTE     	IBM-20GB              	59H4120   	60171713     	59H4117    
rmt1  	IBM         	03570C11              	NA        	0000000A6844 	NA         
ssa0  	IBM053      	NA                    	 34L5318  	S0237219     	 09L569B   
ssa1  	IBM053      	NA                    	 34L5388  	S0270187     	 09L5695   
rmt1  	IBM         	03570C11              	NA        	0000000A6844 	NA         
hdisk0	IBM         	DNES-309170W          	25L3101     	AJJ55889     	25L1861     
hdisk1	SEAGATE     	DPSS-309170N          	07N3675     	ZD11B560     	07N3721     
pdisk0	IBM         	DRVC09B               	NA        	680BA636SA   	34L8483     
pdisk01	MAC         	DRVC09A               	NA        	680BA636TT   	34L8483     

Total	rmt	:	3
Total	pdisk	:	2
Total	ssa	:	3
Total	pci	:	2
Total	hdisk	:	2
Total	tmscsi	:	2

The total counts come at the end because they are complete only after the whole file has been read. The reason pci and tmscsi counts are shown at all is that your input file does NOT stick to a reasonable structure; if you remove the # from the delete CNT line, classes whose stanzas carry no usable data are dropped from the totals, and the ssa count will disappear as well.
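For reference, the script assumes input shaped like AIX lscfg -v output: a few SYSTEM/MODEL/PROCESSOR lead-in lines, an empty line, then one stanza per device whose attribute labels are padded with runs of dots (which is what the -F"\.\.+" separator keys on). A rough sketch of that assumed layout, reconstructed from the report above rather than taken from your actual file (the description text after each device name is invented; the script only looks at the first word):
Code:
SYSTEM: nb11cu51
MODEL, TYPE, and SN: IBM,9026-P70,01100699F
PROCESSOR TYPE: PowerPC_RS64-II
NUMBER OF PROCESSORS: 4

rmt0  Available  20.0 GB 8mm Tape Drive
      Manufacturer................EXABYTE
      Machine Type and Model......IBM-20GB
      FRU Number..................59H4120
      Serial Number...............60171713
      Part Number.................59H4117

hdisk0  Available  16 Bit SCSI Disk Drive
      Manufacturer................IBM
      Machine Type and Model......DNES-309170W
      FRU Number..................25L3101
      Serial Number...............AJJ55889
      Part Number.................25L1861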
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Help with a shell script for creating a log file

I have a shell script that runs continuously on an AIX system. It is actually started from another shell script with the "ksh -x" command, and I just write the output to a log file. This causes the log files to be filled with mostly useless information. I would like to modify this script to... (2 Replies)
Discussion started by: heprox
2 Replies
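A minimal sketch of the usual fix for that one: drop the wholesale ksh -x trace and log only the messages you choose (path and message text are made up):
Code:
#!/bin/ksh
# write selected messages to a log instead of capturing a full "ksh -x" trace
LOG=/var/log/myjob.log                 # hypothetical log file
log() { print "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOG"; }

log "job started"
# ... the real work goes here ...
log "job finished"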

2. Shell Programming and Scripting

Shell script for parsing 300mb log file..

I am relatively new to shell scripting. I have written a script for parsing a big file. The logic is: apart from a lot of other useless stuff, there are many occurrences of <abc> and corresponding </abc> tags (all of them are properly closed). My requirement is to find a particular tag (say... (3 Replies)
Discussion started by: gurpreet470
3 Replies
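For the tag extraction described there, a small awk range sketch (tag name abc and file name are placeholders):
Code:
# print every <abc> ... </abc> block; everything else is skipped
awk '/<abc>/   {p = 1}
     p         {print}
     /<\/abc>/ {p = 0}' bigfile.log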

3. Shell Programming and Scripting

Help with script parsing a log file

I have a large log file. I want to first use grep to get the specific lines, then send them to awk to print out a specific column, and if the result is zero, do nothing. What I have so far is: LOGDIR=/usr/local/oracle/Transcription/log ERRDIR=/home/edixftp/errors #I want to be... (3 Replies)
Discussion started by: mevasquez
3 Replies
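The grep-then-awk-then-test flow asked about there can stay quite small; a sketch, where the search pattern, column number, and log name are assumptions:
Code:
#!/bin/sh
LOGDIR=/usr/local/oracle/Transcription/log
# pull column 5 from the matching lines; if nothing matched, do nothing at all
val=$(grep 'ORA-' "$LOGDIR"/transcription.log | awk '{print $5}')
[ -z "$val" ] && exit 0
printf '%s\n' "$val"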

4. UNIX for Dummies Questions & Answers

Script for parsing details in a log file to a seperate file

Hi Experts, I am new to scripting and need to do the following via Linux shell scripting. I have an application which produces a log file on each action of a particular piece of work with the application; as soon as the action is done, the log file would vanish or stop updating there, the... (2 Replies)
Discussion started by: pingnagan
2 Replies

5. Shell Programming and Scripting

Parsing of file for Report Generation (String parsing and splitting)

Hey guys, I have this file generated by me... I want to create some HTML output from it. The problem is that I am really confused about how to go about reading the file. The file is in the following format: TID1 Name1 ATime=xx AResult=yyy AExpected=yyy BTime=xx BResult=yyy... (8 Replies)
Discussion started by: umar.shaikh
8 Replies
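For that kind of whitespace-separated record, a bare-bones awk-to-HTML sketch (file name and table layout are guesses):
Code:
# wrap each input line in a table row, one cell per field
awk 'BEGIN {print "<table border=\"1\">"}
           {printf "<tr>"
            for (i = 1; i <= NF; i++) printf "<td>%s</td>", $i
            print "</tr>"}
     END   {print "</table>"}' results.txt > report.html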

6. UNIX for Dummies Questions & Answers

Creating a report from csv file.

Hi Gurus, I need your help in transforming a CSV file into something like a report format. My source file looks like the below: Date,ProdID,TimeID,LevelID 2010-08-31,200,M,1 2010-08-31,201,Q,2 2010-08-31,202,Y,1 2010-08-31,203,M,5 Output required is ... (9 Replies)
Discussion started by: naveen.kuppili
9 Replies

7. Shell Programming and Scripting

Script for Parsing Log File

Working on a script that takes an IP as input, parses the log, and writes to another file. A sample of the log is as follows: I need the script to be able to take an IP and print the data to an output file in the following format or something similar: Thanks for any help you can give me! (8 Replies)
Discussion started by: Winsarc
8 Replies

8. Shell Programming and Scripting

Parsing with Name value pair and creating a normalized file

I have a URL string as follows and I need to parse the name-value pairs into fields/rows: event_id date time payload 1329130951 20120214 22.30.40... (1 Reply)
Discussion started by: smee
1 Replies
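Splitting that kind of name=value string apart is a one-liner in awk; a sketch, with the sample pairs made up from the field names quoted above:
Code:
# turn a query-style string into one name/value pair per line
echo 'event_id=1329130951&date=20120214&time=22.30.40&payload=xyz' |
awk -F'&' '{for (i = 1; i <= NF; i++) {split($i, kv, "="); print kv[1], kv[2]}}'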

9. Shell Programming and Scripting

Shell script for creating log file and sending mail?

Hi, I am trying to create a shell script which will help me compare file names in two folders. There are multiple files stored in the two folders, and I want to compare them by name. If all the files are the same, then send a mail saying "all date is same"; if not, then create a log file which contains... (4 Replies)
Discussion started by: san_dy123
4 Replies
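A minimal sketch of the name-only folder comparison that thread describes (directories, log path, and mail address are placeholders):
Code:
#!/bin/ksh
DIR1=/path/one
DIR2=/path/two
LOG=/tmp/file_compare.log

# compare the file NAMES in the two folders, not their contents
ls "$DIR1" | sort > /tmp/list1.$$
ls "$DIR2" | sort > /tmp/list2.$$

if cmp -s /tmp/list1.$$ /tmp/list2.$$; then
    echo "all file names match" | mail -s "folders in sync" user@example.com
else
    diff /tmp/list1.$$ /tmp/list2.$$ > "$LOG"      # the differing names end up here
fi
rm -f /tmp/list1.$$ /tmp/list2.$$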

10. Shell Programming and Scripting

Issue with awk script parsing log file

Hello All, I am trying to parse a log file and I got this code from one of the helpful forum colleagues. However, I realised later that there is a problem with this awk script; being new to the awk world, I wanted to see if you guys can help me out. AWK script: awk '$1 ~ "^WRITER_" {p=1;next}... (18 Replies)
Discussion started by: Ariean
18 Replies