Calculating average for every Nth line in the Nth column (Post 302622511 by zaxxon, 12 April 2012)
I guess 1944 etc. is the important identifier? The pattern -01, -02, -03 is used only to filter the relevant lines:
Code:
awk -F"[,-]" '/-0[123],/ {a[$1]+=$NF; c[$1]++} END{for(e in a)print e", "a[e]/c[e]}' infile
1945, 8.6
1946, 24.3
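
In case the one-liner is hard to follow, here is a commented expansion. The input layout is an assumption inferred from the sample output: lines like 1945-01, 3.2 with a year-month key up front and the value in the last field; the file name infile is taken from the original command.
Code:
awk -F"[,-]" '               # split fields on both , and -
/-0[123],/ {                 # keep only months 01, 02 and 03
    sum[$1]  += $NF          # $1 is the year, $NF the value on that line
    count[$1]++              # matching lines seen per year
}
END {
    for (year in sum)        # one averaged line per year, e.g. 1945, 8.6
        print year ", " sum[year] / count[year]
}' infile

Any POSIX awk should produce the same per-year averages as the one-liner above.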
