UNIX for Dummies Questions & Answers: Remove duplicates and keep them in a separate file
Post 302859875 by pamu on Friday, 4 October 2013, 05:28 AM
Try this; it keeps only the first line for each distinct value in field 13:

Code:
awk '!A[$13]++' file
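
Since the thread title asks to keep the duplicates in a separate file as well, here is a minimal variant as an untested sketch; the output names unique.txt and dups.txt are placeholders, and field 13 is assumed to be the key:

Code:
awk '!A[$13]++ { print > "unique.txt"; next }   # first occurrence of this field-13 value
               { print > "dups.txt" }           # every repeat goes to the duplicates file
' file

The next skips the second rule for first occurrences, so each input line lands in exactly one of the two files.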

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Remove duplicates from File from specific location

How can I remove duplicate lines from a file? For example: sample123456Sample testing123456testing XXXXX131323XXXXX YYYYY423432YYYYY fsdfdsf123456gsdfdsd. All duplicates in columns 6-12 must be deleted. I want to keep the first row; if the same value appears again in the given range I want to... (1 Reply)
Discussion started by: gopikgunda
1 Reply
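
A possible one-liner for this, as a hedged sketch (assuming "columns 6-12" means character positions 6 through 12, i.e. a 7-character substring, and that the first matching row should be kept):

Code:
awk '!seen[substr($0, 6, 7)]++' file   # keep a line only the first time characters 6-12 appear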

2. Shell Programming and Scripting

remove duplicates within a block in a file..help required

Hi, I have a file in the following format: name-a age-12 address-123 age-12 phone-22222 ============ name-ab age-11 address-123 age-11 phone-222223 ============= name-abc age-12 address-1234 age-12 phone-2222223 ============= (2 Replies)
Discussion started by: nipun_garg
2 Replies

3. Shell Programming and Scripting

Remove duplicates from end of file

I/p: A B C A C. O/p: B A C. From the input file it should remove duplicates from the end without changing the order. (5 Replies)
Discussion started by: lavnayas
5 Replies
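
A common way to keep the last occurrence of each line instead of the first, without changing the relative order, is to reverse the file, deduplicate, and reverse back (a sketch; tac is the GNU line-reverser, and tail -r does the same on BSD systems):

Code:
tac file | awk '!seen[$0]++' | tac   # first occurrence in the reversed stream = last occurrence in the original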

4. Shell Programming and Scripting

Remove duplicates from a file

Hi, I need to remove duplicates from a file. The file will be like this: 0003 10101 20100120 abcdefghi 0003 10101 20100121 abcdefghi 0003 10101 20100122 abcdefghi 0003 10102 20100120 abcdefghi 0003 10103 20100120 abcdefghi 0003 10103 20100121 abcdefghi Here, if the first column and... (6 Replies)
Discussion started by: gpaulose
6 Replies

5. Shell Programming and Scripting

Search based on 1,2,4,5 columns and remove duplicates in the same file.

Hi, I am unable to find the duplicates in a file based on the 1st, 2nd, 4th and 5th columns, and to remove those duplicates in the same file. Source filename: Filename.csv "1","ccc","information","5000","temp","concept","new" "1","ddd","information","6000","temp","concept","new"... (2 Replies)
Discussion started by: onesuri
2 Replies
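
A sketch for this kind of multi-column key (assuming plain comma-separated fields with no embedded commas; dedup.csv is a placeholder for the output):

Code:
awk -F',' '!seen[$1 FS $2 FS $4 FS $5]++' Filename.csv > dedup.csv   # key = columns 1, 2, 4, 5
# then: mv dedup.csv Filename.csv   # awk cannot safely overwrite its own input in place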

6. Shell Programming and Scripting

How to remove duplicates from the .dat file

All, I have a file, 1181CUSTOMER-L061411_003500.dat.Z, with duplicate records in it. bash-2.05$ zcat 1181CUSTOMER-L061411_003500.dat.Z|grep "90876251S" 90876251S|ABG, AN ADAYANA COMPANY|3550 DEPAUW BLVD|||US|IN|INDIANAPOLIS||DAL|46268||||||GEN|||||||USD|||ABG, AN ADAYANA... (3 Replies)
Discussion started by: Oracle_User
3 Replies

7. Shell Programming and Scripting

Remove duplicates within row and separate column

Hi all, I have the following kind of input file: ESR1 PA156 leflunomide PA450192 leflunomide CHST3 PA26503 docetaxel Pa4586; thalidomide Pa34958; decetaxel docetaxel docetaxel I want to remove the duplicates, and I want to separate anything before and after a PAxxxx entry into columns or... (1 Reply)
Discussion started by: manigrover
1 Reply

8. UNIX for Dummies Questions & Answers

Remove duplicates from a file

Can you tell me how to remove duplicate records from a file? (11 Replies)
Discussion started by: saga20
11 Replies
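
The two standard answers, for reference; the first keeps the original line order, the second sorts the output:

Code:
awk '!seen[$0]++' file   # print each distinct line the first time it appears
sort -u file             # sorted output, one copy of each line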

9. Shell Programming and Scripting

To remove duplicates from a pipe-delimited file

Hi, can someone please help me remove duplicates from a pipe-delimited file, based on the first two columns? 123|asdf|sfsd|qwrer 431|yui|qwer|opws 123|asdf|pol|njio Here my first record and last record are duplicates. As per my requirement, I want all the latest records in one file. I want the... (12 Replies)
Discussion started by: ginrkf
12 Replies
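
If "latest" means the last occurrence of each key wins, a hedged two-pass sketch (field separator |, key built from the first two columns; the file is read twice):

Code:
awk -F'|' 'NR == FNR { last[$1 FS $2] = FNR; next }   # pass 1: remember each key's final line number
           FNR == last[$1 FS $2]                      # pass 2: print only those lines
' file file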

10. UNIX for Advanced & Expert Users

Remove duplicates in flat file

Hi all, I have an issue while loading a flat file into the DB: it is taking a long time. When I analyzed it, I found that there are duplicate entries in the flat file. There are 2 types of duplicate entry: 1) the entire row is a duplicate (I can use sort | uniq to remove those); 2) the... (4 Replies)
Discussion started by: samjoshuab
4 Replies
DBIx::Class::Helper::Schema::LintContents(3pm)		User Contributed Perl Documentation	    DBIx::Class::Helper::Schema::LintContents(3pm)

NAME
    DBIx::Class::Helper::Schema::LintContents - Check that the data in your database match your constraints

VERSION
    version 2.013002

SYNOPSIS
    package MyApp::Schema;

    use parent 'DBIx::Class::Schema';

    __PACKAGE__->load_components('Helper::Schema::LintContents');

    1;

And later, somewhere else:

    say "Incorrectly Null Users:";
    for ($schema->null_check_source_auto('User')->all) {
        say '* ' . $_->id
    }

    say "Duplicate Users:";
    my $duplicates = $schema->dup_check_source_auto('User');
    for (keys %$duplicates) {
        say "Constraint: $_";
        for ($duplicates->{$_}->all) {
            say '* ' . $_->id
        }
    }

    say "Users with invalid FK's:";
    my $invalid_fks = $schema->fk_check_source_auto('User');
    for (keys %$invalid_fks) {
        say "Rel: $_";
        for ($invalid_fks->{$_}->all) {
            say '* ' . $_->id
        }
    }

DESCRIPTION
Some people think that constraints make their databases slower. As silly as that is, I have been in a similar situation! I'm here to help you, dear developers!

Basically, this is a suite of methods that allow you to find violated "constraints." To be clear, the constraints I mean are the ones you tell DBIx::Class about; real database constraints are fairly sure to be followed.

METHODS
fk_check_source

    my $busted = $schema->fk_check_source(
        'User', 'Group', { group_id => 'id' },
    );

"fk_check_source" takes three arguments: the first is the "from" source moniker of a relationship; the second is the "to" source or source moniker of a relationship; the final argument is a hash reference representing the columns of the relationship. The return value is a resultset of rows from the "from" source that do not have a corresponding "to" row. To be clear, the example given above would return a resultset of "User" rows that have a "group_id" pointing to a "Group" that does not exist.

fk_check_source_auto

    my $broken = $schema->fk_check_source_auto('User');

"fk_check_source_auto" takes a single argument: the source to check. It will check all of the foreign key (that is, "belongs_to") relationships for missing "foreign" rows. The return value will be a hashref where the keys are the relationship names and the values are resultsets of the respective violated relationship.

dup_check_source

    my $smashed = $schema->dup_check_source(
        'Group', ['id']
    );

"dup_check_source" takes two arguments: the first is the source moniker to be checked; the second is an arrayref of columns that should be unique. The return value is a resultset of rows from the source that duplicate the passed columns. So with the example above, the resultset would return all groups that are duplicates of other groups based on "id".

dup_check_source_auto

    my $ruined = $schema->dup_check_source_auto('Group');

"dup_check_source_auto" takes a single argument: the name of the result source in which to check for duplicates. It will return a hashref where the keys are the names of the unique constraints to be checked. The values will be resultsets of the respective duplicate rows.

null_check_source

    my $blarg = $schema->null_check_source('Group', ['id']);

"null_check_source" takes two arguments: the first is the name of the source to check; the second is an arrayref of columns that should contain no nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be.

null_check_source_auto

    my $wrecked = $schema->null_check_source_auto('Group');

"null_check_source_auto" takes a single argument: the name of the result source in which to check for nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be. This method automatically uses the configured columns that have "is_nullable" set to false.

AUTHOR
    Arthur Axel "fREW" Schmidt <frioux+cpan@gmail.com>

COPYRIGHT AND LICENSE
    This software is copyright (c) 2012 by Arthur Axel "fREW" Schmidt.

    This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

perl v5.14.2                    2012-06-18                    DBIx::Class::Helper::Schema::LintContents(3pm)