Post 302695523 by asyed in Shell Programming and Scripting, 09-03-2012 at 05:18 AM
Replace duplicate columns with values from first occurrence

I have a text file in which multiple rows share the same values in columns 3, 4 and 5; such rows need to be treated as duplicates. For all such cases, the rows from the second occurrence onwards should be modified so that their values in the first two columns are replaced with the values from the first occurrence. See the sample input and output copied below.

Input file:
Code:
AAA,AAA,1,2,99,981,901,11,14
AAA,AAA,1,2,99,987,907,17,13
AAA,AAA,1,2,99,987,902,12,15
ABC,ABC,1,2,99,983,903,13,16
ABC,ABC,1,2,99,984,904,14,17
ABC,ABC,1,2,99,985,905,15,18
ABC,ABC,1,2,99,986,906,16,19

Output file:
Code:
AAA,AAA,1,2,99,981,901,11,14
AAA,AAA,1,2,99,987,907,17,13
AAA,AAA,1,2,99,987,902,12,15
AAA,AAA,1,2,99,983,903,13,16
AAA,AAA,1,2,99,984,904,14,17
AAA,AAA,1,2,99,985,905,15,18
AAA,AAA,1,2,99,986,906,16,19

Can you help me with an awk command to achieve this?

Last edited by Franklin52; 09-03-2012 at 06:25 AM. Reason: Please use code tags for data and code samples
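
For reference, a minimal awk sketch along these lines should produce the output shown above, assuming the file is strictly comma-separated and that columns 3, 4 and 5 alone define a duplicate; the names infile and outfile are placeholders:

Code:
awk 'BEGIN { FS = OFS = "," }
{ key = $3 FS $4 FS $5 }                  # columns 3-5 identify a duplicate group
!(key in col1) {                          # first occurrence of the group
    col1[key] = $1
    col2[key] = $2
}
{
    $1 = col1[key]                        # rewrite columns 1 and 2 from the first occurrence
    $2 = col2[key]
    print
}' infile > outfile

The second rule only fires on the first row of each (col3,col4,col5) group, so every later row in that group inherits columns 1 and 2 from it.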
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Search and replace to first occurrence of string

Hi all, I have a very large ';'-delimited file. In vi I would like to replace: CSACT_DY;AVG_UEACT1;uesPerActiveLinkSetSize_1;#;A CSACT_DY;AVG_UEACT2;uesPerActiveLinkSetSize_2;#;A CSACT_DY;AVG_UEACT3;uesPerActiveLinkSetSize_3;#;A with: CSACT_DY;AVG_UEACT1;Average... (7 Replies)
Discussion started by: gilmord
7 Replies

2. Shell Programming and Scripting

Replace second occurrence only

HPUX /bin/sh (posix) I have a file as such cat dog mouse deer elk rabbit mouse rat pig I would like to replace the second occurrence of mouse in this file with mouse2. The rest of the file has to stay exactly as is. I'm not sure exactly where mouse might be (could be first,second,third... (5 Replies)
Discussion started by: lyoncc
5 Replies
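
For what it's worth, a hedged sketch for the task described in that thread, assuming whitespace-separated words and that only the second occurrence of the literal word mouse should change; file is a placeholder name:

Code:
awk '{
    for (i = 1; i <= NF; i++)
        if ($i == "mouse" && ++seen == 2)   # count occurrences across the whole file
            $i = "mouse2"
    print
}' file

Note that assigning to a field rebuilds the line with single spaces between fields, which only matters if a line holds more than one word.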

3. Shell Programming and Scripting

Find and replace duplicate column values in a row

I have a file which has 12 columns and values like this 1,2,3,4,5 a,b,c,d,e b,c,a,e,f a,b,e,a,h if you see the first column has duplicate values, I need to identify (print it to console) the duplicate value (which is 'a') and also remove duplicate values like below. I could be in two... (5 Replies)
Discussion started by: nuthalapati
5 Replies
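
The thread is truncated here, but as an illustration only, reporting and dropping within-row duplicates in a comma-separated file might look something like this; the file name is a placeholder and the duplicate report goes to stderr:

Code:
awk 'BEGIN { FS = OFS = "," }
{
    split("", seen)                               # reset the per-row lookup table
    out = ""
    for (i = 1; i <= NF; i++) {
        if ($i in seen) {
            print "duplicate on line " NR ": " $i > "/dev/stderr"
            continue                              # drop repeated values from the row
        }
        seen[$i] = 1
        out = (out == "" ? $i : out OFS $i)
    }
    print out
}' file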

4. UNIX for Dummies Questions & Answers

How to replace particular occurrence of character in between a delimiter?

Hi, Hi, I have a file with following format 1|" "text " around " |" fire "guest"|" " 2| "xyz"" | "no guest"|"3" 3| """ test3""| "one" guest"|"4" My requirement is to replace all occurrences of " to ' which are occurring between |" "|delimiter so my output should look like this 1|"... (3 Replies)
Discussion started by: H_bansal
3 Replies

5. Programming

awk to count occurrence of strings and loop for multiple columns

Hi all, If i would like to process a file input as below: col1 col2 col3 ...col100 1 A C E A ... 3 D E G A 5 T T A A 6 D C A G how can i perform a for loop to count the occurences of letters in each column? (just like uniq -c ) in every column. on top of that, i would also like... (8 Replies)
Discussion started by: iling14
8 Replies
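
A rough per-column tally in awk, assuming whitespace-separated columns and a single header row; file is a placeholder name and the output order is unspecified:

Code:
awk 'NR == 1 { for (i = 1; i <= NF; i++) name[i] = $i; next }   # remember header names
{
    for (i = 1; i <= NF; i++)
        count[i SUBSEP $i]++                                    # tally each value per column
}
END {
    for (k in count) {
        split(k, p, SUBSEP)
        print name[p[1]], p[2], count[k]
    }
}' file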

6. Shell Programming and Scripting

Replace values in columns

I have the following input format: AA00000712000 -0.17 0.90 -1.04 -0.37 -1.45 -1.13 -0.22 -0.34 -0.55 2.37 0.67 -0.48 -0.48 AA00000712001 0.15 0.03 0.47 0.62 2.04 1.14 0.29 -0.81 0.85 0.53 1.00 -0.10 -0.48 BB00000712000 1.32 -0.47 0.44 0.00 0.98 ... (4 Replies)
Discussion started by: ncwxpanther
4 Replies

7. Shell Programming and Scripting

Multiple columns replace with null values.

I am trying to replace the particular columns (col3,col5,col20,col44,col55,col56,col59,col60,col61,col62,col74,col75,col88,col90,col91,col93,col94,col95) with empty values. Input file Col1,col2,col3,col4,col5------,col100 1,2,3,4,5,---------,100 3,4,5,6,7,---------,300 Output : ... (3 Replies)
Discussion started by: onesuri
3 Replies
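
A sketch of one way to blank out that fixed list of columns in a comma-separated file, assuming the header line should pass through untouched; input.csv is a placeholder name:

Code:
awk 'BEGIN {
    FS = OFS = ","
    n = split("3 5 20 44 55 56 59 60 61 62 74 75 88 90 91 93 94 95", t, " ")
    for (i = 1; i <= n; i++) blank[t[i]] = 1   # column numbers to empty out
}
NR == 1 { print; next }                        # keep the header line unchanged
{
    for (i = 1; i <= NF; i++)
        if (i in blank) $i = ""
    print
}' input.csv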

8. Shell Programming and Scripting

Find duplicate values in specific column and delete all the duplicate values

Dear folks I have a map file of around 54K lines and some of the values in the second column have the same value and I want to find them and delete all of the same values. I looked over duplicate commands but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
Discussion started by: sajmar
4 Replies
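
A small two-pass awk sketch for this one, assuming whitespace-separated columns and that every row whose second-column value occurs more than once should be removed; mapfile and cleaned are placeholder names:

Code:
awk 'NR == FNR { cnt[$2]++; next }    # first pass: count each value in column 2
cnt[$2] == 1                          # second pass: keep rows whose value occurs exactly once
' mapfile mapfile > cleaned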

9. Shell Programming and Scripting

Do replace operation and awk to sum multiple columns if another column has duplicate values

Hi Experts, Please bear with me, i need help I am learning AWk and stuck up in one issue. First point : I want to sum up column value for column 7, 9, 11,13 and column15 if rows in column 5 are duplicates.No action to be taken for rows where value in column 5 is unique. Second point : For... (12 Replies)
Discussion started by: as7951
12 Replies

10. Shell Programming and Scripting

Match value in two files and replace values in selected columns

The purpose is to check if values for column 3 and 4 in file1 match with column 1 in file2. If any value match do: 1) Replace values in file2 for column 2 and 3 using the information of file1 columns 5 and 6 2) Replace string ($1,1,5) and string ($1,6,5) in file2 with values of columns 7... (8 Replies)
Discussion started by: jiam912
8 Replies
DBIx::Class::Helper::Schema::LintContents(3pm)          User Contributed Perl Documentation

NAME
    DBIx::Class::Helper::Schema::LintContents - Check that the data in your database matches your constraints

VERSION
    version 2.013002

SYNOPSIS
    package MyApp::Schema;
    use parent 'DBIx::Class::Schema';
    __PACKAGE__->load_components('Helper::Schema::LintContents');
    1;

    And later, somewhere else:

    say "Incorrectly Null Users:";
    for ($schema->null_check_source_auto('User')->all) { say '* ' . $_->id }

    say "Duplicate Users:";
    my $duplicates = $schema->dup_check_source_auto('User');
    for (keys %$duplicates) {
        say "Constraint: $_";
        for ($duplicates->{$_}->all) { say '* ' . $_->id }
    }

    say "Users with invalid FK's:";
    my $invalid_fks = $schema->fk_check_source_auto('User');
    for (keys %$invalid_fks) {
        say "Rel: $_";
        for ($invalid_fks->{$_}->all) { say '* ' . $_->id }
    }

DESCRIPTION
    Some people think that constraints make their databases slower. As silly as that is, I have been in a similar situation! I'm here to help you, dear developers! Basically this is a suite of methods that allow you to find violated "constraints." To be clear, the constraints I mean are the ones you tell DBIx::Class about; real constraints are fairly sure to be followed.

METHODS
  fk_check_source
    my $busted = $schema->fk_check_source(
        'User', 'Group', { group_id => 'id' },
    );

    "fk_check_source" takes three arguments: the first is the "from" source moniker of a relationship, the second is the "to" source or source moniker of a relationship, and the final argument is a hash reference representing the columns of the relationship. The return value is a resultset of "from" rows that do not have a corresponding "to" row. The example above would return a resultset of "User" rows whose "group_id" points to a "Group" that does not exist.

  fk_check_source_auto
    my $broken = $schema->fk_check_source_auto('User');

    "fk_check_source_auto" takes a single argument: the source to check. It checks all the foreign key (that is, "belongs_to") relationships for missing "foreign" rows. The return value is a hashref whose keys are the relationship names and whose values are resultsets of the respective violated relationship.

  dup_check_source
    my $smashed = $schema->dup_check_source(
        'Group', ['id']
    );

    "dup_check_source" takes two arguments: the first is the source moniker to be checked, and the second is an arrayref of columns that should be unique. The return value is a resultset of rows that duplicate the passed columns. With the example above, the resultset would return all groups that are "duplicates" of other groups based on "id".

  dup_check_source_auto
    my $ruined = $schema->dup_check_source_auto('Group');

    "dup_check_source_auto" takes a single argument: the name of the result source in which to check for duplicates. It returns a hashref whose keys are the names of the unique constraints checked and whose values are resultsets of the respective duplicate rows.

  null_check_source
    my $blarg = $schema->null_check_source('Group', ['id']);

    "null_check_source" takes two arguments: the first is the name of the source to check, and the second is an arrayref of columns that should contain no nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be.

  null_check_source_auto
    my $wrecked = $schema->null_check_source_auto('Group');

    "null_check_source_auto" takes a single argument: the name of the result source in which to check for nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be. This method automatically uses the configured columns that have "is_nullable" set to false.

AUTHOR
    Arthur Axel "fREW" Schmidt <frioux+cpan@gmail.com>

COPYRIGHT AND LICENSE
    This software is copyright (c) 2012 by Arthur Axel "fREW" Schmidt. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

perl v5.14.2                              2012-06-18              DBIx::Class::Helper::Schema::LintContents(3pm)