read a recovers the query string, and test=$(echo $a | cut -d'=' -f2) extracts the part I need. The raw query string is FRAME_NAME=MIAIBYE00; it is generated from a listbox in my index page, which contains the list of my FRAMES. I use the cut command to keep only the right-hand side of the =, so the variable $test holds just the frame name. I then keep only the lines that contain the query string.
OK. I know that the value stored in the shell variable test is used to filter the input. You still haven't clearly answered the question: Is the value stored in test a string that is an exact match for a $1 value in your input files? Assuming that it is, the awk test $1 == test would be a better test than using $0 ~ test. The $1 == test will only match exactly the value you want to match. The $0 ~ test will match the lines you do want, but could also match lines that you do not want.
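The difference between the two tests can be sketched with a small example (the file name and the second column are hypothetical, chosen only to show the pitfall):

```shell
# Sample data: frame names in column 1 (hypothetical values).
printf '%s\n' 'MIAIBYE00 dataA' 'MIAIBYE001 dataB' > /tmp/frames.txt

test=MIAIBYE00

# $0 ~ test: regex match anywhere on the line -- this also matches MIAIBYE001.
awk -v t="$test" '$0 ~ t' /tmp/frames.txt

# $1 == test: exact string comparison -- this matches only MIAIBYE00.
awk -v t="$test" '$1 == t' /tmp/frames.txt
```

Passing the shell variable in with -v also avoids quoting problems that come from embedding $test directly in the awk program text.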
Quote:
In my post #1, I made a screenshot of only three columns because I can't do more. The date comes from the filename, so yes, for this example there are only three columns, but since I have 276 CSV files and the date comes from the filename, in reality there are 276 columns. That's why there are only 3 columns here.
And like the screenshot, the lines from my CSV are just here as an example. In reality, I have 226442 lines. You understand that I can't post all of them as an example.
So, in a nutshell:
- I have many CSV files (276 CSV files -> 226442 lines)
- I use awk to keep only columns 1, 2, 5, 6 and 7. I would like to keep only the lines that are not identical, so I use if (!a[$0]++) to delete the duplicate lines. (By eliminating the duplicate lines, I also reduce the number of columns.)
- I would like to display this information in an HTML table, like this:
As in my first script, and as you can see in the example on the screenshot.
LPARS : the content of column 2 kept by the awk command
RAM : the content of column 5 kept by the awk command
CPU 1 : the content of column 6 kept by the awk command
CPU 2 : the content of column 7 kept by the awk command
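The column-selection and deduplication steps above can be sketched as follows (the ';' field separator and the sample values are assumptions, not taken from the real data):

```shell
# Hypothetical CSV input; adjust -F to the real field separator.
printf '%s\n' \
  'a;LPAR1;x;y;2048;4;8' \
  'a;LPAR1;x;y;2048;4;8' \
  'b;LPAR2;x;y;4096;2;4' > /tmp/sample.csv

# Keep columns 1,2,5,6,7 and print each resulting line only once.
awk -F';' '{ line = $1 FS $2 FS $5 FS $6 FS $7; if (!a[line]++) print line }' /tmp/sample.csv
```

Note that the dedup test is applied to the reduced line, not $0, so two input lines that differ only in the dropped columns still collapse into one.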
No, we cannot see that from your example in post #1. Your example in post #1 shows the output you would have gotten if you had run your script asking it to process three input files. It does not show us the output you want to get when you run your script with those same three input files. And your refusal to show us the output you want to get from those three sample input files makes many of your later statements ambiguous.
PLEASE look at your sample output in post #1 and show us exactly what output you want to produce. (DO NOT use XX to hide the data you want; use the data that is in the image in post #1.) I assume that there will be somewhere between two and seven lines of output and I would have thought that you want three columns of output, but maybe you only want two columns of output. If you are unwilling to do this simple task for me, I don't think I will be able to figure out what you are trying to do. The language barrier is making it difficult for me to determine what you are trying to do. I need to see the actual output you are trying to produce from the three files used in your example.
Quote:
There is no problem with that. It's just a simple request. If you don't have time to answer me or if you can't find a solution, never mind! I will keep looking for a solution on my own!
Have a nice day !
I want to help, but without a clear example of the output you are trying to produce I can't write the code you need.
I have the following file content (3 fields per line):
23 888 10.0.0.1
dfh 787 10.0.0.2
dssf dgfas 10.0.0.3
dsgas dg 10.0.0.4
df dasa 10.0.0.5
df dag 10.0.0.5
dfd dfdas 10.0.0.5
dfd dfd 10.0.0.6
daf nfd 10.0.0.6
...
as can be seen, the third field is an IP address and the file is sorted on it. but... (3 Replies)
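Assuming the goal is to keep only the first line seen for each IP address in field 3, a minimal sketch:

```shell
# Sample lines taken from the post above.
printf '%s\n' \
  'df dasa 10.0.0.5' \
  'df dag 10.0.0.5' \
  'dfd dfd 10.0.0.6' > /tmp/ips.txt

# !seen[$3]++ is true only the first time a given third field appears,
# so awk's default action (print) fires once per IP.
awk '!seen[$3]++' /tmp/ips.txt
```

Because the file is already sorted on field 3, an alternative would be comparing $3 against the previous line's value, which avoids holding every IP in memory.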
i have a long file with more than one ns, www and mx record per line.
i need the first ns record, the first www and the first mx from each line.
the records are separated with ';'. i am trying in awk scripting but not getting the solution.
... (4 Replies)
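A minimal sketch, assuming the records sit in one ';'-separated line and can be recognised by their leading ns/www/mx prefix (the host names below are made up):

```shell
# Hypothetical input line with repeated ns and mx records.
line='ns1.example.com;www.example.com;mx1.example.com;ns2.example.com;mx2.example.com'

echo "$line" | awk -F';' '{
    ns = www = mx = ""
    # Walk the fields left to right; remember only the first of each kind.
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^ns/  && ns  == "") ns  = $i
        if ($i ~ /^www/ && www == "") www = $i
        if ($i ~ /^mx/  && mx  == "") mx  = $i
    }
    print ns, www, mx
}'
```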
Hi,
I came to know that using awk '!x++' removes duplicate lines. Can anyone please explain the above syntax? I want to understand how it removes the duplicates.
Thanks in advance,
sudvishw :confused: (7 Replies)
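For what it's worth, the usual form of that idiom is awk '!x[$0]++' (with the array subscript); a bare '!x++' would print only the very first line of the file. A small demonstration:

```shell
printf '%s\n' A B A C B > /tmp/dups.txt

# x[$0] starts at 0 (false) the first time a line is seen, so !x[$0] is
# true and awk's default action (print) fires; the ++ then marks the line
# as seen, so every later copy evaluates to false and is skipped.
awk '!x[$0]++' /tmp/dups.txt
```

Unlike sort -u, this keeps the first occurrence of each line in its original position.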
Hi, I have a huge file which is about 50GB and has many lines. The file format looks like
21 rs885550 0 9887804 C C T C C C C C C C
21 rs210498 0 9928860 0 0 C C 0 0 0 0 0 0
21 rs303304 0 9941889 A A A A A A A A A A
22 rs303304 0 9941890 0 A A A A A A A A A
The question is that there are a few... (4 Replies)
Hello again, I want to remove all duplicate blocks of XML code in a file. This is an example:
input:
<string-array name="threeItems">
<item>item1</item>
<item>item2</item>
<item>item3</item>
</string-array>
<string-array name="twoItems">
<item>item1</item>
<item>item2</item>... (19 Replies)
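One possible sketch, assuming each block starts with a <string-array line and ends with a </string-array> line on its own, and blocks are not nested (the sample file below is made up to show a duplicate):

```shell
cat > /tmp/arrays.xml <<'EOF'
<string-array name="threeItems">
<item>item1</item>
<item>item2</item>
<item>item3</item>
</string-array>
<string-array name="threeItems">
<item>item1</item>
<item>item2</item>
<item>item3</item>
</string-array>
EOF

# Accumulate each <string-array>...</string-array> block into one string,
# then print the block only if that exact string has not been seen before.
awk '
/<string-array/   { block = "" }
                  { block = block $0 "\n" }
/<\/string-array>/ { if (!seen[block]++) printf "%s", block }
' /tmp/arrays.xml
```

This compares blocks byte for byte, so two blocks that differ only in whitespace or attribute order are treated as different.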
Hi,
I have a file with data in it like:
UserString1
UserString2
UserString3
UserString4
UserString5
I need two entries for each line so it reads like
UserString1
UserString1
UserString2
UserString2
etc. Can someone help me with the awk command please?
Thanks (4 Replies)
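A minimal sketch of the requested duplication (the file name is made up):

```shell
printf '%s\n' UserString1 UserString2 > /tmp/u.txt

# Print every input line twice.
awk '{ print; print }' /tmp/u.txt
```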
Hi,
I am on a Solaris8 machine
If someone can help me adjust this awk one-liner (turning it into a real awk script) to get past this "event not found" error,
...or
present Perl solution code that works with Perl 5.8 in the csh shell, that would be great.
******************
... (3 Replies)
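For the csh "event not found" problem: csh treats '!' on the command line as history substitution, so moving the awk program into its own file sidesteps the quoting issue entirely (the file names and the '!seen[$0]++' program here are illustrative assumptions, since the original one-liner is not shown):

```shell
# Put the awk program in a file so no '!' appears on the csh command line.
cat > /tmp/dedup.awk <<'EOF'
!seen[$0]++
EOF

printf '%s\n' a a b > /tmp/in.txt
awk -f /tmp/dedup.awk /tmp/in.txt
```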
Hi All,
I am storing the result in the variable result_text using the code below.
result_text=$(printf "$result_text\t\n$name") The result_text then holds the text below, which contains duplicate lines:
file and time for the interval 03:30 - 03:45
file and time for the interval 03:30 - 03:45 ... (4 Replies)
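One way to drop the duplicate lines while keeping the original order (the variable contents are taken from the post; the awk idiom is a standard one):

```shell
result_text='file and time for the interval 03:30 - 03:45
file and time for the interval 03:30 - 03:45'

# awk prints a line only the first time it appears, preserving the
# original order (unlike sort -u, which would also reorder the lines).
result_text=$(printf '%s\n' "$result_text" | awk '!seen[$0]++')
echo "$result_text"
```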