Like RudiC, I find the statement in the updated post #1:
Quote:
The value which is present in column13 should not be present in column16.
to be confusing; it makes no sense as written. It might make sense if the "column13" in that statement were replaced by "column 2".
Like RudiC, I find it strange that an output field that is supposed to be a sum of one or more input fields is sometimes shown to have a sum that is an empty field. (I would expect a sum of one or more empty or non-empty fields to have a numeric value. But the desired output shown in post #1 has some empty fields that are supposed to be sums.)
Here is an alternative to the code RudiC suggested in post #12:
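(The code block from that post is not preserved in this archived copy. As a rough sketch only, not the original, an order-preserving awk approach matching the behavior described below might look like the following. It sums fields 7, 9, 11, 13 and 15 across adjacent records sharing the ID in field 5, per the question this thread answers; the comma field separator and the file name inputfile are assumptions.)

```shell
awk '
BEGIN {
    FS = OFS = ","                       # assumed field separator
    nsum = split("7 9 11 13 15", sumcol, " ")
}
# Print the record accumulated for the previous group, if any.
function dump(    i, line) {
    if (!have) return
    for (i = 1; i <= nsum; i++)
        f[sumcol[i]] = tot[sumcol[i]]
    line = f[1]
    for (i = 2; i <= nf; i++)
        line = line OFS f[i]
    print line
    have = 0
}
$5 != prev { dump() }                    # new ID in field 5: flush the old group
{
    prev = $5
    if (!have) {                         # first record of a new group:
        nf = NF                          # remember its layout and keep
        for (i = 1; i <= NF; i++)        # its non-summed fields as-is
            f[i] = $i
        for (i = 1; i <= nsum; i++)
            tot[sumcol[i]] = 0
        have = 1
    }
    for (i = 1; i <= nsum; i++)          # empty fields add 0, so sums of
        tot[sumcol[i]] += $(sumcol[i])   # all-empty columns print as 0
}
END { dump() }
' inputfile
```

Because every summed field is forced through numeric addition, a group whose summed inputs are all empty prints 0 rather than an empty field.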
The output produced by the code RudiC suggested in post #12 will have its records in random order. His code will also combine input records with the same ID in field #5 whether or not those records are adjacent in the input file. And, for summed fields whose inputs are all empty, the output will be an empty field.
The output produced by the code I suggested above will have its records in the same order as they appear in the input file. But, it will only combine input records with the same ID in field #5 if those records are adjacent in the input file. For summed fields whose inputs are all empty, the output will be zero.
With either of these suggestions, if you want to run the code on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
With the input currently in post #1, my suggestion above produces the output:
instead of the output that was requested:
I see no reason why spaces have been added to the start of the 1st line of output, nor why an empty 17th field has been added to the 2nd line of the desired output. As you can see, the code I provided does not reproduce either of these requested but unexplained anomalies.
I have some data that is something like this:
item: onhand counted location
ITEM0001 1 0 a1
ITEM0001 0 1 a2
ITEM0002 5 0 b5
ITEM0002 0 6 c1
I want to sum up... (6 Replies)
i have a file - it will be in sorted order on column 1
abc 0 1
abc 2 3
abc 3 5
def 1 7
def 0 1
--------
i'd like (awk maybe?) to get the results (any ideas)???
abc 5 9
def 1 8 (2 Replies)
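Since that file is already sorted on column 1, a single awk pass is enough. A minimal sketch, assuming whitespace-separated data with no separator lines like the -------- shown above:

```shell
awk '
NR > 1 && $1 != key { print key, s2, s3; s2 = s3 = 0 }  # key changed: emit totals
{ key = $1; s2 += $2; s3 += $3 }                        # accumulate columns 2 and 3
END { if (NR) print key, s2, s3 }                       # flush the last group
' file
```

Each time column 1 changes, the totals for the previous key are printed and reset; the END rule flushes the final group.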
I have a file which has 12 columns and values like this
1,2,3,4,5
a,b,c,d,e
b,c,a,e,f
a,b,e,a,h
If you see, the first column has duplicate values. I need to identify (print to console) the duplicate value (which is 'a') and also remove the duplicate values like below. I could be in two... (5 Replies)
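A rough one-pass sketch of that idea, keeping only the first row for each column-1 value and reporting repeats. Sending the report to /dev/stderr (supported by gawk, nawk and the BWK awk, an assumption for other awks) keeps the deduplicated rows on standard output so they can be redirected cleanly; the file name is a placeholder:

```shell
awk -F, '
seen[$1]++ { print "duplicate value: " $1 > "/dev/stderr"; next }  # repeat: report, drop
{ print }                                                          # first occurrence: keep
' file
```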
I have a text file with the below values, viz. multiple rows with the same values in columns 3, 4 and 5, which need to be considered duplicates. For all such cases, the rows from the second occurrence onwards should be modified in a way that their values in the first two columns are replaced with values as in the first... (4 Replies)
Hello,
I am new to the Linux environment. I am working on a Linux script which should send an automatic email based on a specific condition in a log file. Below is the sample log file
Name m/c usage
abc xxx 10
abc xxx 20
abc xxx 5
xyz ... (6 Replies)
Hi,
I have a similar input format-
A_1 2
B_0 4
A_1 1
B_2 5
A_4 1
and am looking to print in this output format with headers. Can you suggest an awk approach? awk, because I am already doing some pattern matching from a parent file to print column 1 of my input using awk. Thanks!
letter number_of_letters... (5 Replies)
I have a file (let say file B) like this:
File B:
A1 3 5
A1 7 9
A2 2 5
A3 1 3
The first column defines a filename and the other two define a range in that specific file. In the same directory, I have also three more files (File A1, A2 and A3). Here are 10 sample lines... (3 Replies)
I need to sum values in a text file in case duplicate rows are present with the same name and different values. Below is an example of the data in the file I have and the format I need.
Data in text file
20170308
PM,U,2
PM,U,113
PM,I,123
DA,U,135
DA,I,113
DA,I,1
20170309
PM,U,2
PM,U,1
PM,I,123
PM,I,1... (3 Replies)
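The requested format is cut off above, but assuming each date line starts a new block and rows within a block sharing the first two comma-separated fields should have their third field summed (in first-seen order), a sketch might be:

```shell
awk -F, '
NF == 1 { flush(); print; next }    # a date line starts a new block
{
    key = $1 FS $2                  # group on the first two fields
    if (!(key in sum))
        order[++n] = key            # remember first-seen order
    sum[key] += $3
}
# Emit the totals for the current block and reset.
function flush(    i) {
    for (i = 1; i <= n; i++) {
        print order[i] FS sum[order[i]]
        delete sum[order[i]]
    }
    n = 0
}
END { flush() }
' file
```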
Hi Experts,
Please bear with me, I need help.
I am learning awk and am stuck on one issue.
First point: I want to sum up the values in columns 7, 9, 11, 13 and 15 if the rows in column 5 are duplicates. No action is to be taken for rows where the value in column 5 is unique.
Second point : For... (1 Reply)
I have a file abc.csv, from which I need column 24 (PurchaseOrder_TotalCost) to get the sum_of_amounts, with date and row count, into another file, say output.csv
abc.csv-
UTF-8,,,,,,,,,,,,,,,,,,,,,,,,,
... (6 Replies)