Note that in your sample input file shown in post #1 in this thread, you showed us two lines that seem to be in completely different formats. Please explain what the real format is for your input files.
Oh, I didn't see that... It was just an example of my CSV files; a layout problem, I guess. The correct layout:
In my script, I keep only columns 1, 2, 5, 6 and 7, thanks to awk.
Quote:
It is also unclear as to whether or not all of the input files will contain an entry for each LPARS value. If a record for a specific LPARS value is not included in a file, should that be treated as a "different" value causing a line to be printed? Or should a file be ignored when determining whether or not to print an LPARS value line if there is no entry for that LPARS value in that file?
There is a value for each LPARS. And even if a value is missing, it's not a problem. For example, if I have no value for the RAM, nothing will be displayed next to "RAM":
I just need to put the content of the CSV file next to its "key name" (LPARS, RAM, CPU 1 or CPU 2). So if there is no information, nothing will be displayed.
I don't think it is difficult, but I can't see the solution... I succeeded with my first script, but all I needed was to delete the duplicate lines... Now that I've managed to delete the duplicate lines, I just need to put the output into the right layout...
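For the column-extraction step mentioned above, here is a minimal sketch. It assumes comma-separated input; the file name data.csv is a placeholder, not the poster's real file:

```shell
# Keep only columns 1, 2, 5, 6 and 7 of a comma-separated file.
# data.csv is a placeholder file name for illustration.
awk -F',' 'BEGIN { OFS = "," } { print $1, $2, $5, $6, $7 }' data.csv
```

Setting OFS keeps the output comma-separated as well, so the result is still a valid CSV for the later layout step.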
I have the following file content (3 fields per line):
23 888 10.0.0.1
dfh 787 10.0.0.2
dssf dgfas 10.0.0.3
dsgas dg 10.0.0.4
df dasa 10.0.0.5
df dag 10.0.0.5
dfd dfdas 10.0.0.5
dfd dfd 10.0.0.6
daf nfd 10.0.0.6
...
As can be seen, the third field is an IP address and the file is sorted on it, but... (3 Replies)
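The question is cut off, but if the goal is to keep only the first line for each IP address (an assumption), the usual awk idiom keyed on the third field does it in one pass; hosts.txt is a placeholder name:

```shell
# Print only the first line seen for each value of field 3 (the IP address).
# hosts.txt is a placeholder file name.
awk '!seen[$3]++' hosts.txt
```

Because the array is keyed on $3, this works whether or not the file is sorted; sorting just means the duplicates are adjacent.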
I have a long file with more than one ns, www and mx record per line, like this.
I need the first ns record, the first www and the first mx from each line.
The records are separated with ";". I am trying awk scripting but not getting the solution.
... (4 Replies)
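A sketch of one way to do it, assuming the records on each line really are ";"-separated and that matching the substrings "ns", "www" and "mx" is enough to identify the record types (the file name zones.txt is a placeholder):

```shell
# For every line, print the first field containing "ns", the first
# containing "www" and the first containing "mx", joined with ";".
awk -F';' '{
    ns = www = mx = ""
    for (i = 1; i <= NF; i++) {
        if (ns  == "" && $i ~ /ns/)  ns  = $i
        if (www == "" && $i ~ /www/) www = $i
        if (mx  == "" && $i ~ /mx/)  mx  = $i
    }
    print ns ";" www ";" mx
}' zones.txt
```

The empty-string checks are what make each variable stick to the first match on the line.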
Hi,
I came to know that awk '!x++' removes duplicate lines. Can anyone please explain the above syntax? I want to understand how it removes the duplicates.
Thanks in advance,
sudvishw (7 Replies)
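For reference, the duplicate-removing one-liner is normally written with the whole line as the array index, i.e. awk '!x[$0]++'. awk treats the expression as a pattern: x[$0]++ evaluates to 0 (false) the first time a given line is seen and to a non-zero value afterwards, so the negated pattern is true only on the first occurrence, and the default action (print the line) fires only then:

```shell
# !seen[$0]++ is true only the first time a given line appears,
# so every duplicate after the first is suppressed.
printf 'a\nb\na\nb\nc\n' | awk '!seen[$0]++'
# prints a, b, c (each on its own line)
```

Note the order is preserved and no sorting is needed, unlike sort -u.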
Hi, I have a huge file of about 50GB with many lines. The file format looks like:
21 rs885550 0 9887804 C C T C C C C C C C
21 rs210498 0 9928860 0 0 C C 0 0 0 0 0 0
21 rs303304 0 9941889 A A A A A A A A A A
22 rs303304 0 9941890 0 A A A A A A A A A
The question is that there are a few... (4 Replies)
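The question is cut off, so this is only a guess at the intent: the sample shows rs303304 appearing twice (on chromosomes 21 and 22), so if the goal is to keep the first line for each marker ID in column 2, a sketch (genotypes.txt is a placeholder name):

```shell
# Keep the first line for each value of field 2 (the rs ID).
# genotypes.txt is a placeholder name. This reads the 50GB input
# only once, but the seen[] array grows with the number of
# distinct IDs, so memory use depends on that count.
awk '!seen[$2]++' genotypes.txt
```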
Hello again, I want to remove all duplicate blocks of XML code in a file. This is an example:
input:
<string-array name="threeItems">
<item>item1</item>
<item>item2</item>
<item>item3</item>
</string-array>
<string-array name="twoItems">
<item>item1</item>
<item>item2</item>... (19 Replies)
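One portable awk sketch, assuming every block ends with a </string-array> closing tag and that two blocks count as duplicates only when their text is byte-for-byte identical (input.xml is a placeholder name):

```shell
# Accumulate lines into a block until the closing tag, then print
# the block only if an identical block has not been seen before.
awk '
    { block = block $0 "\n" }
    /<\/string-array>/ {
        if (!seen[block]++) printf "%s", block
        block = ""
    }
' input.xml
```

If the blocks differ in whitespace or attribute order, this literal comparison will treat them as distinct; a real XML-aware tool would be needed for that case.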
Hi,
I have a file with data in it like:
UserString1
UserString2
UserString3
UserString4
UserString5
I need two entries for each line, so it reads like:
UserString1
UserString1
UserString2
UserString2
etc. Can someone help me with the awk command please?
Thanks (4 Replies)
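Printing each input line twice is a one-liner: the awk print statement with no argument prints $0, so calling it twice per line does it (the file name is a placeholder):

```shell
# Print every input line twice.
awk '{ print; print }' file
```

sed 'p' gives the same result, since sed prints each line once by default and the p command adds a second copy.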
Hi,
I am on a Solaris8 machine
If someone can help me adjust this awk one-liner (turning it into a real awk script) to get past this "event not found" error
...or
present Perl solution code that works with Perl 5.8 in the csh shell... that would be great.
******************
... (3 Replies)
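The "event not found" error comes from csh history expansion firing on the ! in the one-liner. One way around it (a sketch; the script and file names are placeholders) is to move the awk program into its own file, so the interactive shell never sees the bang:

```shell
# Store the one-liner in a script file; csh then has no "!" to expand.
cat > dedup.awk <<'EOF'
!seen[$0]++
EOF
awk -f dedup.awk infile
```

From an interactive csh prompt, escaping the bang with a backslash is the other common workaround.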
Hi All,
I am storing the result in the variable result_text using the code below.
result_text=$(printf "$result_text\t\n$name") The result_text contains the text below, which has duplicate lines.
file and time for the interval 03:30 - 03:45
file and time for the interval 03:30 - 03:45 ... (4 Replies)
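A sketch of removing the duplicate lines from the variable itself, assuming result_text holds plain newline-separated text (the sample value below is taken from the post):

```shell
# Filter the variable's lines through awk, keeping only the first
# occurrence of each line.
result_text='file and time for the interval 03:30 - 03:45
file and time for the interval 03:30 - 03:45'
result_text=$(printf '%s\n' "$result_text" | awk '!seen[$0]++')
```

Quoting "$result_text" in the printf is what preserves the embedded newlines on the way into awk.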