If we look very closely at the desired output shown in post #1 in this thread and do a LOT of reading between the lines, the awk program (with its printf output piped through an ex helper script) shown in post #3 is even further off the mark than RudiC noticed.
Like RudiC's code, the code shown in post #3 sums and prints every input field in the output. But the sample input contains 23 fields while the sample (desired) output contains only 18. Reading between the lines, and assuming that the header for the desired 1st output field was supposed to be AVG_POP1 instead of VG_POP1, we might guess that what is really wanted is to calculate averages only for the input fields whose headers start with POP. If we make that assumption, also assume that each output field header should be the corresponding input field header with the string AVG_ prepended, and further assume that the average should count only the data lines being averaged (as was done in RudiC's code but not in the code in post #3), we come much closer to what seems to be the desired output.
But then we also note that the code in post #3 uses a <tab> character as the input field separator, and there are no <tab>s in the sample input provided (so the code in post #3 only produces one output field).
So, if we modify the sample input data to be <tab> separated, and assume that the desired output should also be <tab> separated (rather than the sequences of <space>s shown in the desired output in post #1, and rather than the single <space> separators, with no terminating <newline> on the one partial line of output, produced by the script in post #3), we might try something more like:
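A minimal sketch along those lines (the original code block did not survive; the header and field names here are illustrative, and the input is assumed to be <tab> separated with a header line whose POP-prefixed fields are the ones to be averaged):

```shell
# Average only the fields whose headers start with "POP", print AVG_-prefixed
# headers, and separate output fields with <tab>s. Sample input is inlined
# here so the sketch is self-contained.
printf 'DATE\tPOP1\tPOP2\n2019-01-01\t10\t100\n2019-01-02\t20\t200\n' |
awk -F'\t' '
NR == 1 {                       # header line: locate the POP fields
	for (i = 1; i <= NF; i++)
		if ($i ~ /^POP/) {
			f[++n] = i
			printf("%s%9s", n > 1 ? "\t" : "", "AVG_" $i)
		}
	print ""
	next
}
{                               # data lines: accumulate sums
	for (j = 1; j <= n; j++)
		sum[j] += $f[j]
	cnt++                   # count only the data lines being averaged
}
END {                           # divide by the number of data lines, not NF
	for (j = 1; j <= n; j++)
		printf("%s%9.2f", j > 1 ? "\t" : "", sum[j] / cnt)
	print ""
}'
```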
Using %9.2f as the output format for the calculated values makes the field values line up with the output field headers. It isn't obvious whether this alignment was required by the sample output provided in post #1. The above code produces the output:
The values shown in red above (the AVG_POP6 and AVG_POP14 fields) differ by ±.01 from the desired sample output in post #1 (with fields aligned for comparison purposes); all of the other values exactly match the desired output shown on the last line of the above comparison.
If you strip out the values shown in orange in the last line of the output (corresponding to fields that have been deleted from the output in the other two sets of output shown), the values given by the code in post #3 are about 80% of the desired values (as I would expect, since that code calculates the average by dividing the sum of four numeric fields by five instead of by four).
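The 80% figure follows directly from the arithmetic: dividing a four-term sum by five yields exactly 4/5 of the true average, whatever the values are. A quick illustration with made-up numbers:

```shell
# Dividing a 4-term sum by 5 produces exactly 4/5 (80%) of the true average.
awk 'BEGIN {
	sum = 10 + 20 + 30 + 40          # four data values
	printf("correct: %.2f  buggy: %.2f  ratio: %.2f\n",
	       sum / 4, sum / 5, (sum / 5) / (sum / 4))
}'
```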
I hope this helps. It would certainly be a lot easier to come up with code like the above if the description of the problem matched the desired output more closely.
Last edited by Don Cragun; 02-13-2019 at 01:56 AM..
Reason: Fix font problem with <plus-or-minus> character.