Hi,
First, I searched the forum for this, but I could not find the right answer. (There were some similar threads, but I was not sure how to adapt their ideas.)
Anyway, I have a fairly natural problem: I am given several text files, all containing the same number of lines and the same number of columns. Interpreting the files as tables, I want to compute a result file in which each cell holds the average of the corresponding cells from my files.
There is one complication: a cell may contain either a number or the string "n/a", which means something like "not computed". When some files have "n/a" in a cell, the result file should contain the average of the non-"n/a" values and, separated by some unique symbol such as "~", the number of "n/a" values.
For example:
File 1
File 2
File 3
Now, I want to compute the following:
Resultfile
Of course, I could implement this in some high-level programming language, but having it as a script would make it much more convenient in my application.
I think this should be easy for experts in awk or similar tools. Unfortunately, I don't see an easy solution.
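One possible sketch in awk, assuming whitespace-separated columns. The three sample files and their contents below are made up (the example files in the post are not shown); replace them with the real inputs:

```shell
# Sketch: per-cell average across files, with "n/a" cells counted separately.
# The sample files are placeholder data for illustration only.
dir=$(mktemp -d)
printf '1 n/a\n2 4\n' > "$dir/file1"
printf '3 6\n2 4\n'   > "$dir/file2"
printf '5 n/a\n2 4\n' > "$dir/file3"

result=$(awk '
{
    for (i = 1; i <= NF; i++)
        if ($i == "n/a") na[FNR, i]++; else { sum[FNR, i] += $i; cnt[FNR, i]++ }
    if (FNR > rows) rows = FNR        # track table dimensions
    if (NF > cols)  cols = NF
}
END {
    for (r = 1; r <= rows; r++)
        for (c = 1; c <= cols; c++) {
            # average of the numeric values, or "n/a" if every file had n/a
            cell = (cnt[r, c] ? sum[r, c] / cnt[r, c] : "n/a")
            if (na[r, c]) cell = cell "~" na[r, c]    # append "~<n/a count>"
            printf "%s%s", cell, (c < cols ? " " : "\n")
        }
}' "$dir/file1" "$dir/file2" "$dir/file3")
printf '%s\n' "$result"
```

With the placeholder data this prints `3 6~2` for the first row (the average of 1, 3, 5; then the single value 6 with two "n/a" cells) and `2 4` for the second.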
My program runs without error. The problem I am having is that it isn't writing the field values under the column headers to file.txt; each of the column headers in file.txt has no data.
MEMSIZE SECOND SASFoundation Filename
The output in file.txt should show:
... (1 Reply)
Dear All,
I have to solve the following problem with multiple tab-separated text files, but I don't know how. Any help would be greatly appreciated. I have access to Linux Mint (but not as a professional).
I have multiple tab-delimited files with the following structure:
file1:
1 44
2 ... (5 Replies)
I have the following format of input from multiple files
File 1
24.01 -81.01 1.0
24.02 -81.02 5.0
24.03 -81.03 0.0
File 2
24.01 -81.01 2.0
24.02 -81.02 -5.0
24.03 -81.03 10.0
I need to scan through the files and when the first 2 columns match I... (18 Replies)
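The post is cut off before the operation on matching rows is stated, so the sketch below assumes the goal is to combine the third column (here: summing it) for rows whose first two columns match; swap in whatever operation is actually needed:

```shell
# Sketch: combine rows whose first two columns match across files.
# Summing the third column is an assumption; the sample data mirrors the post.
dir=$(mktemp -d)
printf '24.01 -81.01 1.0\n24.02 -81.02 5.0\n24.03 -81.03 0.0\n'  > "$dir/f1"
printf '24.01 -81.01 2.0\n24.02 -81.02 -5.0\n24.03 -81.03 10.0\n' > "$dir/f2"

result=$(awk '
{
    key = $1 FS $2                        # first two columns form the match key
    if (!(key in sum)) order[++n] = key   # remember first-seen order
    sum[key] += $3
}
END { for (i = 1; i <= n; i++) print order[i], sum[order[i]] }
' "$dir/f1" "$dir/f2")
printf '%s\n' "$result"
```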
I have a file containing multiple values. Some of them are pipe-separated and should be read as separate values, while others are single values; all of them need to be stored in variables.
I need to read this file, which is an input to my script.
Config.txt
file name, first path, second... (7 Replies)
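Since the exact Config.txt layout is cut off, the sketch below assumes a hypothetical comma-separated line whose last field holds pipe-separated values; adjust the field names to the real format:

```shell
# Sketch: read a comma-separated config line and split a pipe-separated field.
# The Config.txt layout here is a guess (the original description is cut off).
dir=$(mktemp -d)
printf 'myjob,/first/path,/second/path,a|b|c\n' > "$dir/Config.txt"

while IFS=, read -r name path1 path2 values; do
    old_ifs=$IFS
    IFS='|'
    set -- $values            # $1, $2, ... now hold the pipe-separated values
    IFS=$old_ifs
    summary="name=$name paths=$path1:$path2 nvals=$# first=$1"
done < "$dir/Config.txt"
printf '%s\n' "$summary"
```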
Hi,
I want to compute linearly interpolated values for my data using awk; any help is highly appreciated.
How do I apply the linear interpolation formula to my data in awk given the equation below:
x y
15 0
25 0.1633611
35 0.0741623
desired output: linear interpolation at... (4 Replies)
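The linear interpolation formula between two points (x1, y1) and (x2, y2) is y = y1 + (x - x1) * (y2 - y1) / (x2 - x1). A sketch in awk on the data above, assuming a target of x = 20 (the desired output in the post is cut off):

```shell
# Sketch: linearly interpolate y at a target x (x=20 is an assumed target).
dir=$(mktemp -d)
printf 'x y\n15 0\n25 0.1633611\n35 0.0741623\n' > "$dir/data.txt"

result=$(awk -v x=20 '
NR == 1 { next }                     # skip the "x y" header line
have && x1 <= x && x <= $1 {         # target lies between (x1,y1) and ($1,$2)
    printf "%.7f\n", y1 + (x - x1) * ($2 - y1) / ($1 - x1)
    exit
}
{ x1 = $1; y1 = $2; have = 1 }
' "$dir/data.txt")
printf '%s\n' "$result"
```

The script walks the table until it finds the pair of points bracketing the target x and applies the formula once.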
I have several sequential files named stat.1000, stat.1001, ..., stat.1020, with a format like this:
0.01 1 3822 4.97379915032e-14 4.96982253992e-09 0
0.01 3822 1 4.97379915032e-14 4.96982253992e-09 0
0.01 2 502 0.00993165137406 993.165137406 0
0.01 502 2 0.00993165137406 993.165137406 0... (6 Replies)
Hello there,
I found an elegant solution for computing average values from multiple text files:
awk '{for (i=1;i<=NF;i++){if ($i!~"n/a"){a[FNR,i]+=$i}else{b[FNR,i]++}}}END{for (i=1;i<=FNR;i++){for (j=1;j<=NF;j++){printf "%s%s", a[i,j]/(3-b[i,j]), ((b[i,j]>0)?"~"b[i,j]" ":" ")};printf "\n"}}' file1 file2 file3
I tried to modify... (2 Replies)
Hi
I'm sure there's a way to do this, but I ran out of caffeine/talent before getting the answer in a long-winded alternate way (don't ask ;) )
The task I was trying to do was to scan a directory of files and show only the files that contained all 3 values:
I940
5433309
2181
I tried many variations... (4 Replies)
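One plain-grep way to do this (the directory and file names below are placeholders): a file is printed only if all three grep tests succeed.

```shell
# Sketch: list only the files that contain all three values.
dir=$(mktemp -d)
printf 'I940\n5433309\n2181\n' > "$dir/match.txt"
printf 'I940\n2181\n'          > "$dir/partial.txt"

found=
for f in "$dir"/*; do
    if grep -q 'I940' "$f" && grep -q '5433309' "$f" && grep -q '2181' "$f"; then
        found="$found${found:+ }$(basename "$f")"   # passes all three tests
    fi
done
printf '%s\n' "$found"
```

If the values must match whole words rather than substrings, add `-w` (or `-Fx` for whole lines) to each grep.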
Hi,
I have 20 files, each with 50 lines of different values.
I would like to process each of the 50 lines in these 20 files one at a time and average the 3rd field ($3) across the 20 files. This will be written to an output file.
Instead of using join to generate whole... (8 Replies)
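A sketch of the per-line averaging of $3 in awk; two tiny sample files stand in for the real 20 (pass all 20 file names, or a glob, in practice):

```shell
# Sketch: line-by-line average of the 3rd field across all given files.
dir=$(mktemp -d)
printf 'a b 1\nc d 3\n' > "$dir/f1"
printf 'a b 5\nc d 5\n' > "$dir/f2"

result=$(awk '
{ sum[FNR] += $3; cnt[FNR]++; if (FNR > max) max = FNR }   # accumulate per line number
END { for (r = 1; r <= max; r++) print sum[r] / cnt[r] }   # one average per line
' "$dir/f1" "$dir/f2")
printf '%s\n' "$result"
```

Dividing by a per-line count (rather than a hard-coded 20) keeps the script correct even if one file is shorter than the others.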
Hi,
I got a lot of files looking like this:
1
0.5
6
Altogether there are around 1'000'000 lines in each of the roughly 100 files.
I want to compute the average for every line and write the result to a new file.
The averaging should start at a specific line, here for example at line... (10 Replies)
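A sketch in awk, assuming the starting line is passed in as a variable (start=2 below is a placeholder, since the post's actual value is cut off; the two sample files stand in for the ~100 real ones):

```shell
# Sketch: per-line average across many single-column files,
# starting at an assumed line number (start=2 is a placeholder).
dir=$(mktemp -d)
printf '1\n0.5\n6\n' > "$dir/f1"
printf '3\n1.5\n2\n' > "$dir/f2"

result=$(awk -v start=2 '
FNR >= start { sum[FNR] += $1; cnt[FNR]++; if (FNR > max) max = FNR }
END { for (r = start; r <= max; r++) print sum[r] / cnt[r] }
' "$dir/f1" "$dir/f2")
printf '%s\n' "$result"
```

Memory grows with the number of lines kept (here up to a million array entries), which is usually fine; if not, the files can be processed in line-range chunks.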