06-19-2008
Average in awk
Hi
I am looking for an awk script that can compute the average of all the fields for every 5 lines. The file looks like this:
A B C D E F G H I J K L M
1 18 13 14 12 14 13 11 12 12 15 15 15
2 17 17 13 13 13 12 12 11 12 14 15 14
3 16 16 12 12 12 11 11 12 11 16 14 13
4 15 15 11 11 11 12 11 12 11 15 14 16
5 14 14 10 12 11 12 11 13 12 14 16 16
6 13 13 11 11 13 11 10 12 12 14 15 15
7 12 12 27 24 12 12 11 11 15 16 15 14
8 12 11 26 23 11 13 11 12 14 15 15 14
9 12 11 25 22 11 12 11 11 13 14 15 14
10 15 12 24 21 11 12 10 13 13 15 16 14
I need to compute the average of the individual fields (A to M) for every group of 5 lines (lines 1-5, 6-10, etc.).
I greatly appreciate any help.
Thanks
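A sketch of one way to do this, assuming the first line is a header to be skipped and the data are whitespace-separated as in the sample (the filename data.txt is a placeholder):

```shell
# Print per-column averages for each block of 5 data lines.
awk 'NR > 1 {
    for (i = 1; i <= NF; i++) sum[i] += $i   # accumulate every column
    if (++n == 5) {                          # a block of 5 data lines is done
        for (i = 1; i <= NF; i++) {
            printf "%s%.2f", (i > 1 ? " " : ""), sum[i] / n
            sum[i] = 0                       # reset for the next block
        }
        print ""
        n = 0
    }
}' data.txt
```

One line of averages is printed per block; a trailing partial block (fewer than 5 lines) is silently ignored, which matches the sample data but may need an END rule for other files.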
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Suppose I have 500 files in a directory and I need to use awk to calculate the average of column 3 for each of the files, how would I do that? (6 Replies)
Discussion started by: grossgermany
2. Shell Programming and Scripting
Hi,
I have the data like this
$1 $2
1 12
2 13
3 14
4 12
5 12
6 12
7 13
8 14
9 12
10 12
I want to compute the average of $1 and $2 for every 5 lines (1-5 and 6-10).
Please help me with awk.
Thank you (4 Replies)
Discussion started by: saint2006
3. UNIX for Dummies Questions & Answers
Hi
I am looking for an awk script which can compute the average of the last column based on the date and time. The file looks like this:
site1,"2000-01-01 00:00:00", "2000-01-01 00:59:00",0.013
site2,"2000-02-01 01:00:00", "2000-02-01 01:59:00",0.035
site1,"2000-02-01 02:00:00", "2000-02-01... (15 Replies)
Discussion started by: kathy wang
4. Shell Programming and Scripting
Hi,
I have the following data in a file for example:
P1 XXXXXXX.1 YYYYYYY.1 ZZZ.1
P1 XXXXXXX.2 YYYYYYY.2 ZZZ.2
P1 XXXXXXX.3 YYYYYYY.3 ZZZ.3
P1 XXXXXXX.4 YYYYYYY.4 ZZZ.4
P1 XXXXXXX.5 YYYYYYY.5 ZZZ.5
P1 XXXXXXX.6 YYYYYYY.6 ZZZ.6
P1 XXXXXXX.7 YYYYYYY.7 ZZZ.7
P1 XXXXXXX.8 YYYYYYY.8 ZZZ.8
P2... (6 Replies)
Discussion started by: alex2005
5. Shell Programming and Scripting
Hi guys, I am not an expert in shell and I need help with an awk command. I have a file with values like
200 1 1
200 7 2
200 6 3
200 5 4
300 3 1
300 7 2
300 6 3
300 4 4
I need resulting file with averages of... (3 Replies)
Discussion started by: saif
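The post above is truncated, but assuming it asks for one average per distinct value in column 1, a minimal sketch (averaging column 2; data.txt is a placeholder name):

```shell
# Average of column 2 grouped by the key in column 1.
# Assumption: one output line per distinct $1 value.
awk '{ sum[$1] += $2; cnt[$1]++ }
END { for (k in sum) printf "%s %.2f\n", k, sum[k] / cnt[k] }' data.txt
```

Note that `for (k in sum)` visits keys in an unspecified order; pipe through `sort` if ordered output matters.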
6. Shell Programming and Scripting
I want to calculate the average line by line of some files with several lines in them; the files are identical in structure, I just want to average the 3rd column of those files.
Example file:
File 1
001 0.046 0.667267
001 0.047 0.672028
001 0.048 0.656025
001 0.049 ... (2 Replies)
Discussion started by: AriasFco
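A sketch for that line-by-line case, relying on the fact that FNR restarts at 1 for each input file, so sum[FNR] collects the i-th line of every file (file1, file2, file3 are placeholder names):

```shell
# Line-by-line average of column 3 across identically structured files.
# ARGC - 1 is the number of filenames on the command line.
awk '{ sum[FNR] += $3; if (FNR > max) max = FNR }
END { for (i = 1; i <= max; i++) printf "%.6f\n", sum[i] / (ARGC - 1) }' file1 file2 file3
```

This assumes every file has the same number of lines, as the post states.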
7. Shell Programming and Scripting
I need to find the average from a file like:
data => BW:123 M:30 RTD:0 1 0 1 0 0 1 1 1 1 0 0 1 1 0'
data => BW:123 N:30 RTD:0 1 0 1 0 0 1 1 1 1 0 0 1 1 0'
data => BW:123 N:30 RTD:0 1 0 1 0 0 1 1 1 1 0 0 1 1 0'
data => BW:123 N:30 RTD:0 1 0 1 0 0 1 1 1 1 0 0 1 1 0'
data => BW:123 N:30 RTD:0 1... (4 Replies)
Discussion started by: Slagle
8. Shell Programming and Scripting
I am trying to modify the awk below to include the gene name ($5) for each target and cannot seem to do so. Also, I'm not sure the calculation is right (average of all targets that are the same in $4, using the values in $7)? Thank you :).
awk '{if((NR>1)&&($4!=last)){printf("%s\t%f\t%s\n",... (1 Reply)
Discussion started by: cmccabe
9. Shell Programming and Scripting
In the below awk I am trying to combine all matching $4 into a single $5 (up to the -), and count the lines in $6 and average all values in $7. The awk is close but it seems to only be using the last line in the file and skipping all others. The posted input is a sample of the file that is over... (3 Replies)
Discussion started by: cmccabe
10. Shell Programming and Scripting
Hi, I'm using awk to try and get a moving average for the second column of numbers ($2) in the below example broken out by unique identifier in column 1 ($1) :
H1,1.2
H1,2.3
H1,5.5
H1,6.6
H1,8.7
H1,4.1
H1,6.4
H1,7.8
H1,9.6
H1,3.2
H5,50.1
H5,54.2
H5,58.8
H5,60.9
H5,54.3
H5,52.7... (8 Replies)
Discussion started by: theflamingmoe
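The excerpt above does not say how wide the moving window should be, so the sketch below assumes a 3-point window over the comma-separated sample (data.csv is a placeholder name):

```shell
# 3-point moving average of $2 per identifier in $1 (comma-separated input).
# Assumption: window size of 3; the post's excerpt doesn't specify one.
awk -F, '{
    n[$1]++
    v[$1, n[$1]] = $2   # remember each value, keyed by id and position
    if (n[$1] >= 3)
        printf "%s,%.2f\n", $1, (v[$1, n[$1]] + v[$1, n[$1]-1] + v[$1, n[$1]-2]) / 3
}' data.csv
```

Each identifier's window is tracked independently, so H1 and H5 never mix; nothing is printed for an id until it has 3 values.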
LEARN ABOUT DEBIAN
x_over
X_OVER(1gmt) Generic Mapping Tools X_OVER(1gmt)
NAME
x_over - Find and compute Cross-Over Errors
SYNOPSIS
x_over leg_1 [ leg_2 ] [ -A ] [ -C ] [ -L ] [ -V ] [ -Wtimegap ] [ -Gfact ] [ -Mfact ] [ -Tfact ] [ -Nnp_int ]
DESCRIPTION
x_over is used to inspect two cruises to see if they intersect, and if so report the time, position, discrepancies in gravity/magnetics/bathymetry, heading for each track segment, and the average values of the geophysical observables at the cross-over point. The names of the legs are passed on the command line. If they are identical or only one name is passed, then x_over looks for internal cross-overs. The optional parameters are:
-A Use an Akima spline to interpolate the geophysical field at the cross-over point.
-C Use a Natural Cubic spline function instead.
-L Use a linear interpolant [Default].
-W Do not compute cross-overs if the 2 nearest points are more than timegap minutes apart.
-G Scale gravity by fact [Default is 0.1 since gmt-files store gravity in g.u.]
-M Scale magnetic anomaly by fact [1.0].
-T Scale bathymetry by fact [1.0].
-N Specify how many points to use in the interpolation [Default is 6].
-V Selects verbose mode, which will send progress reports to stderr [Default runs "silently"]. Report the number of cross-overs for this pair of legs.
BEWARE
The COEs found are printed out to standard output in ASCII format. The first record contains the leg names and their start year, whereas subsequent records have the data for each COE encountered. The fields written out are lat, lon, time along track #1, time along track #2, x_gravity, x_magnetics, x_bathymetry, average gravity, average magnetics, average bathymetry, heading along track #1, and heading along track #2. Sign convention: If lega and legb are passed on the command line, then the COE value is Value (lega) - Value (legb). It is recommended that the Akima spline is used instead of the natural cubic spline, since it is less sensitive to outliers that tend to introduce wild oscillations in the interpolation.
SEE ALSO
GMT(1), x_system(1)
REFERENCES
Wessel, P. XOVER: A Cross-over Error Detector for Track Data, Computers & Geosciences, 15, 333-346.
GMT 4.5.7 15 Jul 2011 X_OVER(1gmt)