Column sum group by uniq records
Posted by sandeep13 on Monday, 16 February 2009, 08:06 AM

Hi Franklin,

Thanks a lot. It works using nawk:

/usr/bin/nawk 'BEGIN{FS=OFS=";"}NR==1{print;next}{a[$1";"$2]+=$3}END{for(i in a)print i, a[i]}' file
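
For anyone who finds this thread later, here is the same program spread over several lines with comments. It is only a readability sketch of the one-liner above; the field separator, the PORT;ID composite key and the END report are unchanged.

Code:
/usr/bin/nawk '
BEGIN { FS = OFS = ";" }         # split and rejoin fields on ";"
NR == 1 { print; next }          # pass the header line through unchanged
{ a[$1 ";" $2] += $3 }           # accumulate the TOTAL column per PORT;ID pair
END {
    for (i in a)                 # one line per distinct PORT;ID pair
        print i, a[i]            # OFS inserts the ";" between the key and the sum
}' file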

Much appreciated.

Cheers,
Sandeep


Quote:
Originally Posted by Franklin52
This is what I get:

Code:
$ cat file
PORT; ID; TOTAL
port1;p1;100000
port2;p2;5000
port1;p1;500
$
$
$ awk 'BEGIN{FS=OFS=";"}                              
NR==1{print;next}
{a[$1";"$2]+=$3}
END{for(i in a)print i, a[i]}' file
PORT; ID; TOTAL
port2;p2;5000
port1;p1;100500
$
$

Regards
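
A small follow-up note on the output shown in the quote above: awk's for (i in a) loop walks the array in an unspecified order, which is why port2 comes out before port1 even though port1 appears first in the file. If the summed lines should come back sorted on the PORT field, one way is to keep the header in place and pipe the rest through sort. A minimal sketch, assuming a POSIX shell; the read/printf/sort grouping and the -k1,1 sort key are my own choice, not something from the thread:

Code:
/usr/bin/nawk 'BEGIN{FS=OFS=";"}NR==1{print;next}{a[$1";"$2]+=$3}END{for(i in a)print i, a[i]}' file |
{ IFS= read -r header; printf '%s\n' "$header"; sort -t ';' -k1,1; }

With the sample file above this prints the PORT; ID; TOTAL header first, then port1;p1;100500 followed by port2;p2;5000.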
 
