08-23-2008
awk offers arrays for precisely this type of problem: make each item to be averaged a key of the array, then print the results in the END block. If every key occurs in every file you can simply divide by the number of input files; otherwise you will need to collect both the sum and the count (divisor) for each key.
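A minimal sketch of that approach (the two small input files and the "key value" layout are assumptions for illustration):

```shell
# Hypothetical input files: one "key value" pair per line.
printf 'a 1\nb 2\n' > f1.txt
printf 'a 3\nc 5\n' > f2.txt

# Collect sum and count per key, then print each key's average at END.
# Keeping a per-key count works even when a key is missing from some files.
awk '{ sum[$1] += $2; cnt[$1]++ }
     END { for (k in sum) printf "%s %g\n", k, sum[k] / cnt[k] }' f1.txt f2.txt | sort
```

Here `b` and `c` each appear in only one file, so dividing by the number of files (2) would be wrong; dividing by the per-key count gives 2, 2 and 5.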
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi,
I got a lot of files looking like this:
1
0.5
6
Altogether there are around 1'000'000 lines in each of the around 100 files.
I want to build the average for every line, and write the result to a new file.
The averaging should start at a specific line, here for example at line... (10 Replies)
Discussion started by: chillmaster
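A minimal sketch of the per-line averaging this post asks about (two small files stand in for the ~100 real ones, and a start line of 2 is an assumption):

```shell
# Hypothetical inputs: one number per line, same line count in every file.
printf '1\n0.5\n6\n' > a.txt
printf '3\n1.5\n2\n' > b.txt

# Paste the files side by side, then average each row.
# Averaging starts at line 2 here (the assumed start line).
paste a.txt b.txt | awk 'NR >= 2 { s = 0; for (i = 1; i <= NF; i++) s += $i; print s / NF }'
```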
2. Shell Programming and Scripting
Hi,
first, I have searched in the forum for this, but I could not find the right answer. (There were some similar threads, but I was not sure how to adapt the ideas.)
Anyway, I have a quite natural problem: Given are several text files. All files contain the same number of lines and the same... (3 Replies)
Discussion started by: rbredereck
3. UNIX for Dummies Questions & Answers
Hey all, I am relatively poor at programming and unfortunately don't have time to read about programming at this current moment.
I wanted to be able to run a simple command to read a column of numbers in a file and give me the average of those numbers. In addition if I could specify the... (2 Replies)
Discussion started by: Leonidsg
4. UNIX for Dummies Questions & Answers
Hi,
I have a space delimited text file that looks like the following:
Aa 100 200
Bb 300 100
Cc X 500
Dd 600 X
Basically, I want to take the average of columns 2 and 3 and print it in column 4. However if there is an X in either column 2 or 3, I want to print the non-X value. Therefore... (11 Replies)
Discussion started by: evelibertine
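A sketch of the column averaging with the X rule this post describes (the data file is recreated from the sample in the post):

```shell
# Sample data from the post: column 2 or 3 may be the literal string X.
cat > data.txt <<'EOF'
Aa 100 200
Bb 300 100
Cc X 500
Dd 600 X
EOF

# Column 4 is the average of columns 2 and 3; if either is X,
# pass through the non-X value instead.
awk '{ if ($2 == "X")      v = $3
       else if ($3 == "X") v = $2
       else                v = ($2 + $3) / 2
       print $0, v }' data.txt
```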
5. Shell Programming and Scripting
I have several sequential files with name stat.1000, stat.1001....to stat.1020 with a format like this
0.01 1 3822 4.97379915032e-14 4.96982253992e-09 0
0.01 3822 1 4.97379915032e-14 4.96982253992e-09 0
0.01 2 502 0.00993165137406 993.165137406 0
0.01 502 2 0.00993165137406 993.165137406 0... (6 Replies)
Discussion started by: kayak
6. Shell Programming and Scripting
Hi,
I am a newbie!!!
I want to develop a script for deleting files older than x days from multiple paths. Now I could reach upto this piece of code which deletes files older than x days from a particular path. How do I enhance it to have an input from a .txt file or a .dat file? For eg:... (12 Replies)
Discussion started by: jhilmil
7. Shell Programming and Scripting
Hi all,
I think the result I’m getting is wrong while using the following awk command,
colval=$(awk 'FNR>1 && NR==FNR{a=$4;next;} FNR>1 {a+=$4; print $2"\t"a/3}' filename_f.tsv filename_f2.tsv filename_f3.tsv)
echo $colval >> Result.tsv
it’s doing the condition 2 times, first result... (5 Replies)
Discussion started by: Shenbaga.d
8. Red Hat
Hi,
My Linux server has been taking a long time to calculate large directory sizes.
* I am accessing the server through ssh
* commands
# du -sh *
# du -sh * | sort -n | grep G
Please guide me to a fast way to find large directories under the / partition.
Thanks (8 Replies)
Discussion started by: Nats
9. Shell Programming and Scripting
Hi All,
Thank you for reading through my post and helping me figure out how I would be able to perform this task.
For example: I have a list of continuous output collected into a file in the format as seen below:
Date...........Time........C....A......... B
==========================... (5 Replies)
Discussion started by: terrychen
10. Shell Programming and Scripting
I have the following format of input from multiple files
File 1
24.01 -81.01 1.0
24.02 -81.02 5.0
24.03 -81.03 0.0
File 2
24.01 -81.01 2.0
24.02 -81.02 -5.0
24.03 -81.03 10.0
I need to scan through the files and when the first 2 columns match I... (18 Replies)
Discussion started by: ncwxpanther
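The request is truncated, but assuming the goal is to average column 3 across files whenever columns 1 and 2 match, one awk sketch (file contents taken from the post):

```shell
# The two sample files from the post.
printf '24.01 -81.01 1.0\n24.02 -81.02 5.0\n24.03 -81.03 0.0\n' > file1
printf '24.01 -81.01 2.0\n24.02 -81.02 -5.0\n24.03 -81.03 10.0\n' > file2

# Key on columns 1-2, accumulate column 3, and print per-key averages.
awk '{ key = $1 " " $2; sum[key] += $3; cnt[key]++ }
     END { for (k in sum) printf "%s %g\n", k, sum[k] / cnt[k] }' file1 file2 | sort
```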
LEARN ABOUT REDHAT
logfile
LOGFILE(1) mrtg LOGFILE(1)
NAME
logfile - description of the mrtg-2 logfile format
SYNOPSIS
This document provides a description of the contents of the mrtg-2 logfile.
OVERVIEW
The logfile consists of two main sections. A very short one at the beginning:
The first Line
It stores the traffic counters from the most recent run of mrtg
The rest of the File
Stores past traffic rate averages and maxima at increasing intervals
The first number on each line is a unix time stamp. It represents the number of seconds since 1970.
DETAILS
The first Line
The first line has 3 numbers which are:
A (1st column)
A timestamp of when MRTG last ran for this interface. The timestamp is the number of non-skip seconds passed since the standard UNIX
"epoch" of midnight on 1st of January 1970 GMT.
B (2nd column)
The "incoming bytes counter" value.
C (3rd column)
The "outgoing bytes counter" value.
The rest of the File
The second and remaining lines of the file contain 5 numbers, which are:
A (1st column)
The Unix timestamp for the point in time the data on this line is relevant. Note that the interval between timestamps increases as you
progress through the file. At first it is 5 minutes, and at the end it is one day between two lines.
This timestamp may be converted in EXCEL by using the following formula:
=(x+y)/86400+DATE(1970,1,1)
you can also ask perl to help by typing
perl -e 'print scalar localtime(x),"\n"'
x is the unix timestamp and y is the offset in seconds from UTC. (Perl knows y).
B (2nd column)
The average incoming transfer rate in bytes per second. This is valid for the time between the A value of the current line and the A
value of the previous line.
C (3rd column)
The average outgoing transfer rate in bytes per second since the previous measurement.
D (4th column)
The maximum incoming transfer rate in bytes per second for the current interval. This is calculated from all the updates which have
occurred in the current interval. If the current interval is 1 hour, and updates have occurred every 5 minutes, it will be the biggest
5-minute transfer rate seen during the hour.
E (5th column)
The maximum outgoing transfer rate in bytes per second for the current interval.
AUTHOR
Butch Kemper <kemper@bihs.net> and Tobias Oetiker <oetiker@ee.ethz.ch>
3rd Berkeley Distribution 2.9.17 LOGFILE(1)