Summing column value - using PERL
Posted by ganapati, 24 February 2010

Woohoo! I'm thrilled.
Tons of thanks to abubacker; it works exactly as expected.

One more feature for this code: is it possible to handle multiple input files with different dates, as below? (A rough sketch of one approach follows the expected output.)

Code:
input file name: input_20100221.csv
20100221, abc_1, 200
20100221, abc_4, 300
20100221, opq_3, 200
20100221, abc_5, 200
20100221, xyz_1, 500
20100221, abc_2, 500
20100221, abc_3, 100
20100221, xyz_2, 700
20100221, opq_2, 300
20100221, xyz_3, 100
20100221, opq_1, 200

input file name: input_20100222.csv
20100222, abc_1, 100
20100222, abc_4, 200
20100222, opq_3, 200
20100222, abc_5, 200
20100222, xyz_1, 100
20100222, abc_2, 200
20100222, abc_3, 100
20100222, xyz_2, 200
20100222, opq_2, 800
20100222, xyz_3, 600
20100222, opq_1, 700

input file name: input_20100224.csv
20100224, abc_1, 600
20100224, abc_4, 400
20100224, opq_3, 200
20100224, abc_5, 300
20100224, xyz_1, 300
20100224, abc_2, 200
20100224, abc_3, 700
20100224, xyz_2, 200
20100224, opq_2, 200
20100224, xyz_3, 900
20100224, opq_1, 800

Output should be exactly as below:

Code:
Rundate,  abc,   opq, xyz  ### header, printed only once
20100221, 1300,  700, 1300
20100222,  800, 1700,  900
20100224, 2200, 1200, 1400

Cheers ~~
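
Not abubacker's original script, but a minimal Perl sketch of one way the multi-file case could be handled, assuming the daily files are passed on the command line (e.g. input_*.csv) and that the group is the part of the second field before the underscore. The script name sum_by_group.pl is just a placeholder:

Code:
#!/usr/bin/perl
# sum_by_group.pl - sum the third column per run date and per group prefix
# (abc_1 -> abc), reading every file named on the command line.
use strict;
use warnings;

my %sum;    # $sum{date}{group} = running total

while (<>) {
    chomp;
    my ($date, $name, $value) = split /\s*,\s*/;
    next unless defined $value && $value =~ /^\d+$/;   # skip blank/header lines
    my ($group) = $name =~ /^([^_]+)/;                 # abc_1 -> abc
    $sum{$date}{$group} += $value;
}

# collect every group seen across all dates so the header is complete
my %seen;
$seen{$_} = 1 for map { keys %$_ } values %sum;
my @groups = sort keys %seen;

print join(", ", "Rundate", @groups), "\n";            # header, printed only once
for my $date (sort keys %sum) {
    print join(", ", $date, map { $sum{$date}{$_} || 0 } @groups), "\n";
}

Run it as, for example, perl sum_by_group.pl input_*.csv; the diamond operator (<>) reads each file named on the command line in turn, so handling another day's file just means adding it to the argument list.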
 
