07-12-2007
I think the more appropriate question is: is it misguided to use awk in this scenario? Is there a better way?
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi I have fakebook.csv as following:
F1(current date) F2(popularity) F3(name of book) F4(release date of book)
2006-06-21,6860,"Harry Potter",2006-12-31
2006-06-22,,"Harry Potter",2006-12-31
2006-06-23,7120,"Harry Potter",2006-12-31
2006-06-24,,"Harry Potter",2006-12-31... (0 Replies)
Discussion started by: onthetopo
2. Shell Programming and Scripting
Hi,
I have two time series data (below) merged into a file.
t1 and t2 are in unit of second
I want to calculate the average of V1 every second and count how many times "1" occurs in V2 within each second.
Input File:
t1 V1 t2 V2
10.000000... (5 Replies)
Discussion started by: nica
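A minimal sketch of the per-second aggregation described above, assuming whitespace-separated columns t1 V1 t2 V4 with one header line, and that "within a second" means grouping by the integer part of each timestamp (the input file name is a placeholder):

```shell
awk 'NR > 1 {
    s1 = int($1); sum[s1] += $2; n[s1]++      # accumulate V1 per whole second of t1
    s2 = int($3); if ($4 == 1) ones[s2]++     # count V2 == 1 per whole second of t2
}
END {
    for (s in sum)  printf "t1=%d avg(V1)=%.6f\n", s, sum[s] / n[s]
    for (s in ones) printf "t2=%d count(V2==1)=%d\n", s, ones[s]
}' input.txt
```

Note that awk's `for (key in array)` iterates in an unspecified order, so pipe the result through `sort -n` if ordered output matters.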
3. Shell Programming and Scripting
Hello,
Let's assume I have 100 files FILE_${m} (0<m<101). Each of them contains 100 lines and 10 columns.
I'd like to get in a file called "result" the average value of column 3, ONLY between lines 11 and 17, in order to plot that average as a function of the parameter m.
So far I can compute... (6 Replies)
Discussion started by: DMini
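One way to sketch this, taking the FILE_${m} naming and the 11-17 line window from the post and printing one "m average" pair per file so the result can be plotted against m:

```shell
for m in $(seq 1 100); do
    # average column 3 over lines 11..17 of FILE_${m}, prefixed with m
    awk -v m="$m" 'NR >= 11 && NR <= 17 { sum += $3; n++ }
                   END { if (n) print m, sum / n }' "FILE_${m}"
done > result
```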
4. Shell Programming and Scripting
Hi,
I have the following data in a file for example:
P1 XXXXXXX.1 YYYYYYY.1 ZZZ.1
P1 XXXXXXX.2 YYYYYYY.2 ZZZ.2
P1 XXXXXXX.3 YYYYYYY.3 ZZZ.3
P1 XXXXXXX.4 YYYYYYY.4 ZZZ.4
P1 XXXXXXX.5 YYYYYYY.5 ZZZ.5
P1 XXXXXXX.6 YYYYYYY.6 ZZZ.6
P1 XXXXXXX.7 YYYYYYY.7 ZZZ.7
P1 XXXXXXX.8 YYYYYYY.8 ZZZ.8
P2... (6 Replies)
Discussion started by: alex2005
5. Shell Programming and Scripting
Hi,
I'm new to shell programming, can anyone help me on this? I want to do following operations -
1. Average salary for each country
2. Total salary for each city
and data that looks like -
salary country city
10000 zzz BN
25000 zzz BN
30000 zzz BN
10000 yyy ZN
15000 yyy ZN
... (3 Replies)
Discussion started by: shell123
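Both aggregations can be done in one pass; a minimal sketch, assuming the header line shown in the post and whitespace-separated columns (the file name is a placeholder):

```shell
awk 'NR > 1 {
    csum[$2] += $1; cn[$2]++   # running sum and count per country
    tsum[$3] += $1             # running total per city
}
END {
    for (c in csum) printf "avg %s %.2f\n", c, csum[c] / cn[c]
    for (t in tsum) printf "total %s %d\n", t, tsum[t]
}' salaries.txt
```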
6. Shell Programming and Scripting
I want to calculate the line-by-line average of some files with several lines each; the files have identical structure, and I just want to average the 3rd columns of those files.
Example file:
File 1
001 0.046 0.667267
001 0.047 0.672028
001 0.048 0.656025
001 0.049 ... (2 Replies)
Discussion started by: AriasFco
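A minimal sketch for this line-by-line average: sum column 3 at each line position across every file named on the command line, then divide by the number of files. It assumes all files have the same line count and that columns 1 and 2 (kept from the last file read) are identical across files, as the post says; the file names are placeholders.

```shell
awk '{ sum[FNR] += $3; keep[FNR] = $1 " " $2 }   # FNR restarts per file
     END { for (i = 1; i <= FNR; i++)
               printf "%s %.6f\n", keep[i], sum[i] / (ARGC - 1) }' file1 file2 file3
```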
7. Shell Programming and Scripting
Hi,
I would like to calculate the average of column 'y' based on the value of column 'pos'.
For example, here is file1
id pos y c
11 1 220 aa
11 4333 207 f
11 5333 112 ee
11 11116 305 e
11 11117 310 r
11 22228 781 gg
11 ... (2 Replies)
Discussion started by: jackken007
8. UNIX for Dummies Questions & Answers
Hi,
I am searching for an awk script that computes the mean values of the $2 column, grouped by the values in the $1 column. It should also drop the now-redundant lines after computing...
An example (for some reason I cant use the code tag button):
cat list.txt
1 10
1 30
1 20... (2 Replies)
Discussion started by: bjoern456
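This is the standard grouped-mean idiom in awk, sketched below; because it accumulates into arrays and only prints in END, the original per-value lines never appear in the output, which also covers the "delete the unnecessary lines" requirement:

```shell
awk '{ sum[$1] += $2; n[$1]++ }                       # accumulate per $1 key
     END { for (k in sum) print k, sum[k] / n[k] }' list.txt
```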
9. Shell Programming and Scripting
Hello dears,
I have a log file with records like the one below and want to get an average of one column for the records matching one specific keyword.
2015-02-07 08:15:28 10.102.51.100 10.112.55.101 "kevin.c" POST ... (2 Replies)
Discussion started by: Newman
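A hedged sketch of the keyword-filtered average: the post is truncated before naming the column to average, so both the keyword ("kevin.c", taken from the sample line) and the column number (7, purely hypothetical) are assumptions passed in as variables, and the log file name is a placeholder:

```shell
awk -v key='kevin.c' -v col=7 '
    index($0, key) { sum += $col; n++ }   # literal substring match, no regex
    END { if (n) printf "avg=%.2f over %d matching lines\n", sum / n, n }' access.log
```

`index()` is used instead of a regex match so that dots in the keyword are matched literally.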
10. Shell Programming and Scripting
I have the need to match the first two columns and, when they match, calculate the percent of average for the third column. The following awk script does not give me the expected results.
awk 'NR==FNR {T=$3; next} $1,$2 in T {P=T/$3*100; printf "%s %s %.0f\n", $1, $2, (P>=0)?P:-P}' diff.file... (1 Reply)
Discussion started by: ncwxpanther
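A hedged correction of the script above: it fails because `T=$3` is a scalar that keeps only the last value, and `$1,$2 in T` is not valid membership syntax. Storing into an array keyed on ($1,$2) and testing with parenthesized `($1,$2) in T` behaves as (I believe) intended; the second file name was truncated in the post, so "other.file" is a placeholder:

```shell
awk 'NR == FNR { T[$1,$2] = $3; next }                 # first file: remember $3 per key
     ($1,$2) in T { P = T[$1,$2] / $3 * 100            # second file: percent of stored value
                    printf "%s %s %.0f\n", $1, $2, (P >= 0) ? P : -P }' diff.file other.file
```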
LEARN ABOUT REDHAT
logfile
LOGFILE(1) mrtg LOGFILE(1)
NAME
logfile - description of the mrtg-2 logfile format
SYNOPSIS
This document provides a description of the contents of the mrtg-2 logfile.
OVERVIEW
The logfile consists of two main sections. A very short one at the beginning:
The first Line
It stores the traffic counters from the most recent run of mrtg
The rest of the File
Stores past traffic rate averages and maxima at increasing intervals
The first number on each line is a unix time stamp. It represents the number of seconds since 1970.
DETAILS
The first Line
The first line has 3 numbers which are:
A (1st column)
A timestamp of when MRTG last ran for this interface. The timestamp is the number of non-skip seconds passed since the standard UNIX
"epoch" of midnight on 1st of January 1970 GMT.
B (2nd column)
The "incoming bytes counter" value.
C (3rd column)
The "outgoing bytes counter" value.
The rest of the File
The second and remaining lines of the file contain 5 numbers, which are:
A (1st column)
The Unix timestamp for the point in time the data on this line is relevant. Note that the interval between timestamps increases as you
progress through the file. At first it is 5 minutes and at the end it is one day between two lines.
This timestamp may be converted in EXCEL by using the following formula:
=(x+y)/86400+DATE(1970,1,1)
you can also ask perl to help by typing
perl -e 'print scalar localtime(x),"\n"'
x is the unix timestamp and y is the offset in seconds from UTC. (Perl knows y).
B (2nd column)
The average incoming transfer rate in bytes per second. This is valid for the time between the A value of the current line and the A
value of the previous line.
C (3rd column)
The average outgoing transfer rate in bytes per second since the previous measurement.
D (4th column)
The maximum incoming transfer rate in bytes per second for the current interval. This is calculated from all the updates which have
occurred in the current interval. If the current interval is 1 hour, and updates have occurred every 5 minutes, it will be the biggest 5
minute transfer rate seen during the hour.
E (5th column)
The maximum outgoing transfer rate in bytes per second for the current interval.
AUTHOR
Butch Kemper <kemper@bihs.net> and Tobias Oetiker <oetiker@ee.ethz.ch>
3rd Berkeley Distribution 2.9.17 LOGFILE(1)