Awk based script to find the median of all individual columns in a data file
Post 302653137 by binlib, Friday 8th of June 2012, 12:34 PM
With gawk version 4 (which adds true multidimensional arrays and allows asort() on a subarray):
Code:
gawk -F, '
# needs gawk >= 4.0: true multidimensional arrays and asort() on subarrays
{
  # stash column i of this row: a[i][NR] is row NR of column i
  for (i = 1; i <= NF; ++i)
    a[i][NR] = $i
  if (NF > nf) nf = NF        # remember the column count for END
}
END {
  for (i = 1; i <= nf; ++i) {
    asort(a[i])               # sort the values of column i; indices become 1..NR
    # odd row count: middle value; even: mean of the two middle values
    printf(" %.1f", NR % 2 ? a[i][(NR+1)/2] : (a[i][NR/2] + a[i][NR/2+1]) / 2)
  }
  print ""
}'
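
For reference, a quick run on a small sample; the file name sample.csv is just an illustration:
Code:
$ cat sample.csv
1,10,100
2,20,200
3,30,300
4,40,400
$ gawk -F, '...' sample.csv        # the script above
 2.5 25.0 250.0

Each column here has an even number of rows, so every median is the mean of the two middle values.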

 

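If gawk 4 is not available (mawk and BSD awk, for example, lack asort() and arrays of arrays), the same idea works in plain POSIX awk with a flattened array and a hand-rolled sort. A minimal sketch, assuming a well-formed numeric CSV; file.csv is a placeholder:
Code:
awk -F, '
{
  for (i = 1; i <= NF; ++i)
    a[i, NR] = $i               # flattened "two-dimensional" array
  if (NF > nf) nf = NF          # remember the widest column count
}
END {
  for (i = 1; i <= nf; ++i) {
    # insertion sort of column i into b[1..NR]; O(NR^2) per column,
    # which is fine for files small enough to hold in memory anyway
    for (r = 1; r <= NR; ++r) {
      v = a[i, r] + 0           # force numeric comparison
      for (j = r; j > 1 && b[j-1] > v; --j)
        b[j] = b[j-1]
      b[j] = v
    }
    printf(" %.1f", NR % 2 ? b[(NR+1)/2] : (b[NR/2] + b[NR/2+1]) / 2)
  }
  print ""
}' file.csv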
10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

awk based script to print the "mode(statistics term)" for each column in a data file

Hi All, Thanks all for the continued support so far. Today, I need to find the most frequently occurring string/number (also called the mode in statistics terminology) for each column in a data file (.csv type). For one column of data (1.txt) like below Sample 1 2 2 3 4 1 1 1 2 I can find the mode... (6 Replies)
Discussion started by: ks_reddy
6 Replies

2. Shell Programming and Scripting

awk based script to find the average of all the columns in a data file

Hi All, I need a modification of the code below (found in another post, https://www.unix.com/shell-programming-scripting/27161-script-generate-average-values.html) to find the average values for all the columns (but only for specific rows) and print the averages side by side. I have... (4 Replies)
Discussion started by: ks_reddy
4 Replies

3. Shell Programming and Scripting

awk based script to ignore all columns from a file which contains character strings

Hello All, I have a .CSV file where I expect all numeric data in all columns other than the column headers. But sometimes I get files (the result of statistics computation by other people) like the sample data below: SNO,Data1,Data2,Data3 1,2,3,4 2,3,4,SOME STRING 3,4,Inf,5 4,5,4,4 I... (9 Replies)
Discussion started by: ks_reddy
9 Replies

4. Shell Programming and Scripting

Help with awk replacing identical columns based on another file

Hello, I am using awk on Ubuntu 12.04. I have a file like the following with three fields and 44706 rows. F1 A A F2 G G F3 A T I have another file like this: AL_1 F1 A A AL_2 F1 A T AL_3 F1 A A AL_1 F2 G G AL_2 F2 G A AL_3 F2 G G BO_1 F1 A A BO_2 F1 A T... (6 Replies)
Discussion started by: Homa
6 Replies

5. Shell Programming and Scripting

awk script to split file into multiple files based on many columns

So I have a space-delimited file that I'd like to split into multiple files based on multiple column values. This is what my data looks like: 1bc9A02 1 10 1000 FTDLNLVQALRQFLWSFRLPGEAQKIDRMMEAFAQRYCQCNNGVFQSTDTCYVLSFAIIMLNTSLHNPNVKDKPTVERFIAMNRGINDGGDLPEELLRNLYESIKNEPFKIPELEHHHHHH 1ku1A02 1 10... (9 Replies)
Discussion started by: viored
9 Replies

6. UNIX for Dummies Questions & Answers

Median calculator based on id match

I am trying to calculate the median of a column of numbers when they match an ID in a different column. The input file has 3 columns. The column that has the ID is column 1 and the column with the values I'd like to find the median of is column 3. The file does not need to be sorted. What I... (9 Replies)
Discussion started by: verse123
9 Replies

7. Shell Programming and Scripting

Find columns in a file based on header and print to new file

Hello, I have to fish out some specific columns from a file based on the header value. I have the list of columns I need in a different file. I thought I could read in the list of headers I need, # file with header names of required columns in required order headers_file=$2 # read contents... (11 Replies)
Discussion started by: LMHmedchem
11 Replies

8. Shell Programming and Scripting

In Perl script: need to read data from one file and generate multiple files based on the data

We have data that looks like the below in a log file. I want to generate files based on the string between two hash (#) symbols, like below Source: #ext1#test1.tale2 drop #ext1#test11.tale21 drop #ext1#test123.tale21 drop #ext2#test1.tale21 drop #ext2#test12.tale21 drop #ext3#test11.tale21 drop... (5 Replies)
Discussion started by: Sanjeev G
5 Replies

9. Shell Programming and Scripting

awk script to find data in three file and perform replace operation

Have three files. Any other approach with regard to file concatenation, splitting, etc. is appreciated. If column55 (billngtype) of file1 contains YMNC or YPBC, then pick the value of column13 (documentnumber). Now find this documentnumber in column1 (Billdoc) of file2 and grep the corresponding... (4 Replies)
Discussion started by: as7951
4 Replies

10. UNIX for Advanced & Expert Users

Need Optimization shell/awk script to aggregate (sum) for all the columns of Huge data file

Optimization of a shell/awk script to aggregate (sum) all the columns of a huge data file. File delimiter is "|". Need the sum of all columns, with column number: aggregation (summation) for each column. The file does not have a header. Like below - Column 1 "Total Column 2 : "Total ... ...... (2 Replies)
Discussion started by: kartikirans
2 Replies