Help in extracting multiple files and taking average at same time
Posted by ahjiefreak on 08-23-2008

Hi,

I have 20 files, each containing 50 lines with different values.

I would like to process the 50 lines one at a time and, for each line, average the 3rd field ($3) across the 20 files, writing the results to an output file.

Instead of using join to generate a whole bunch of redundant intermediate files and then computing the average, I'm looking for a better way to do this directly.

E.g.

apple.txt
tool1 2.00 4 30.20
tool2 3.00 5 40.22
tool3 2.00 6 45.32
....
tool50 ...........


orange.txt
tool1 1.00 2 30.20
tool2 4.00 3 40.22
tool3 6.00 4 45.32
...
tool50 ...

bar.txt
tool1 2.10 1 30.20
tool2 3.04 4 40.22
tool3 2.02 5 45.32
...
tool50 .....

and so on for the remaining 17 files, each with a different name.

The output would be:
tool1 (4+2+1+....)/20
tool2 (5+3+4+...)/20
tool3 (6+4+5+...)/20
....
tool50....

Please advise. Thanks.
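
One way to do this in a single awk pass with no intermediate files: a minimal sketch, assuming all 20 files list the same 50 tools in the same order (the three file names shown stand in for all 20; with all files on the command line, ARGC - 1 is the file count, so the divisor is not hard-coded):

    awk '
        { sum[FNR] += $3; name[FNR] = $1 }    # accumulate field 3 by line position
        END {
            for (i = 1; i <= FNR; i++)        # in END, FNR holds the line count of the last file
                printf "%s %g\n", name[i], sum[i] / (ARGC - 1)
        }
    ' apple.txt orange.txt bar.txt > averages.txt

If the tools can appear in a different order in different files, index the arrays by $1 instead of FNR and sort the output afterwards.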
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Average of elements throught multiple files

Hi, I have a lot of files looking like this:
1
0.5
6
Altogether there are around 1'000'000 lines in each of the roughly 100 files. I want to build the average for every line and write the result to a new file. The averaging should start at a specific line, here for example at line... (10 Replies)
Discussion started by: chillmaster
10 Replies
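
A sketch along the same lines for this discussion, assuming every file has a single column of values and the same number of lines (the start line 3 and the data.* glob are illustrative):

    awk -v start=3 '
        FNR >= start { sum[FNR] += $1; cnt[FNR]++ }   # accumulate column 1 by line position
        END {
            for (i = start; i <= FNR; i++)
                print sum[i] / cnt[i]
        }
    ' data.* > line_averages.txt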

2. Shell Programming and Scripting

Computing average values from multiple text files

Hi, first, I have searched in the forum for this, but I could not find the right answer. (There were some similar threads, but I was not sure how to adapt the ideas.) Anyway, I have a quite natural problem: Given are several text files. All files contain the same number of lines and the same... (3 Replies)
Discussion started by: rbredereck
3 Replies

3. UNIX for Dummies Questions & Answers

Taking a average of a column of numbers

Hey all, I am relatively poor at programming and unfortunately don't have time to study it at the moment. I want to be able to run a simple command that reads a column of numbers in a file and gives me the average of those numbers. In addition, if I could specify the... (2 Replies)
Discussion started by: Leonidsg
2 Replies
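
For the single-file case in this discussion, a minimal awk one-liner does it (the column number and file name are placeholders):

    awk -v col=1 '{ sum += $col; n++ } END { if (n) print sum / n }' numbers.txt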

4. UNIX for Dummies Questions & Answers

Taking the average of two columns and printing it on a new column

Hi, I have a space-delimited text file that looks like the following:
Aa 100 200
Bb 300 100
Cc X 500
Dd 600 X
Basically, I want to take the average of columns 2 and 3 and print it in column 4. However, if there is an X in either column 2 or 3, I want to print the non-X value. Therefore... (11 Replies)
Discussion started by: evelibertine
11 Replies
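
A sketch of the rule described in this discussion (input.txt is illustrative; the case where both columns are X is not specified and not handled):

    awk '{
        if ($2 == "X")      $4 = $3              # keep the non-X value
        else if ($3 == "X") $4 = $2
        else                $4 = ($2 + $3) / 2   # average of columns 2 and 3
        print
    }' input.txt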

5. Shell Programming and Scripting

Average of a column in multiple files

I have several sequential files named stat.1000, stat.1001, ... stat.1020, with a format like this:
0.01 1 3822 4.97379915032e-14 4.96982253992e-09 0
0.01 3822 1 4.97379915032e-14 4.96982253992e-09 0
0.01 2 502 0.00993165137406 993.165137406 0
0.01 502 2 0.00993165137406 993.165137406 0... (6 Replies)
Discussion started by: kayak
6 Replies

6. Shell Programming and Scripting

Script to delete files older than x days and also taking an input for multiple paths

Hi, I am a newbie!!! I want to develop a script for deleting files older than x days from multiple paths. So far I have a piece of code which deletes files older than x days from a particular path. How do I enhance it to take its input from a .txt file or a .dat file? For eg:... (12 Replies)
Discussion started by: jhilmil
12 Replies
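
For this discussion, one common pattern is to loop over a list of paths kept in a text file (paths.txt and the 30-day cutoff are illustrative; replace rm -f with echo for a dry run before letting it delete anything):

    while IFS= read -r dir; do
        find "$dir" -type f -mtime +30 -exec rm -f {} +
    done < paths.txt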

7. Shell Programming and Scripting

Cannot get the correct ans. Using awk in taking average

Hi all, I think the result I'm getting is wrong while using the following awk command:
colval=$(awk 'FNR>1 && NR==FNR{a[$2]=$4;next;} FNR>1 {a[$2]+=$4; print $2"\t"a[$2]/3}' filename_f.tsv filename_f2.tsv filename_f3.tsv)
echo $colval >> Result.tsv
It's doing the condition 2 times, first result... (5 Replies)
Discussion started by: Shenbaga.d
5 Replies

8. Red Hat

Du -sh command taking time to calculate the big size files

Hi, my Linux server has been taking a long time to calculate the sizes of big directories. I am accessing the server through ssh and running commands like du -sh * and du -sh * | sort -n | grep G. Please guide me to a faster way to find the big directories under the / partition. Thanks (8 Replies)
Discussion started by: Nats
8 Replies
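
For this discussion, limiting du to one directory level (and one filesystem) is usually much faster than summing every file twice; a sketch assuming GNU du and GNU sort:

    du -xh --max-depth=1 / 2>/dev/null | sort -hr | head -20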

9. Shell Programming and Scripting

Average of multiple time-stamped data every half hour

Hi All, Thank you for reading through my post and helping me figure out how I would be able to perform this task. For example: I have a list of continuous output collected into a file in the format as seen below: Date...........Time........C....A......... B ==========================... (5 Replies)
Discussion started by: terrychen
5 Replies

10. Shell Programming and Scripting

Match first two columns and average third from multiple files

I have the following format of input from multiple files:
File 1
24.01 -81.01 1.0
24.02 -81.02 5.0
24.03 -81.03 0.0
File 2
24.01 -81.01 2.0
24.02 -81.02 -5.0
24.03 -81.03 10.0
I need to scan through the files and when the first 2 columns match I... (18 Replies)
Discussion started by: ncwxpanther
18 Replies
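
A sketch for this last discussion, keying on the first two columns (output order is arbitrary; pipe through sort if order matters):

    awk '{ key = $1 FS $2                 # first two columns form the match key
           sum[key] += $3; cnt[key]++ }
         END { for (k in sum) print k, sum[k] / cnt[k] }
    ' file1 file2 > averages.txt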