Full Discussion: Removing duplicates
Post 83271 by Perderabo in Shell Programming and Scripting, Tuesday 13th of September 2005, 11:56:37 AM
Try:
sort -mu -k1,1 < datafile
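Here -m merges input that is already sorted (without re-sorting it), -u keeps only the first line for each key, and -k1,1 limits the sort key to the first field. A quick illustration, using a made-up whitespace-delimited datafile that is already sorted on field 1, which is what -m relies on:

$ cat datafile
alice 10
alice 20
bob 5

$ sort -mu -k1,1 < datafile
alice 10
bob 5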
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

removing duplicates and sort -k

Hello experts, I am trying to remove all lines in a CSV file where the 2nd column is a duplicate. I am trying to use sort with the key parameter: sort -u -k 2,2 File.csv > Output.csv. File.csv: File Name|Document Name|Document Title|Organization Word Doc 1.doc|Word Document|Sample...
Discussion started by: orahi001
3 Replies
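sort -u -k 2,2 will do the dedup but also reorders the file on column 2; if the original line order matters, a small awk filter keeps only the first line seen for each value of the 2nd field. A sketch, assuming the file is pipe-delimited as shown:

awk -F'|' '!seen[$2]++' File.csv > Output.csv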

2. Shell Programming and Scripting

removing duplicates

Hi, I have a file that is a list of people & their credentials that I receive frequently. The issue is that when I cat this list, duplicate entries exist & are NOT CONSECUTIVE (i.e. uniq -1 may not work here). I'm trying to write a script that will remove duplicate entries; the script can...
Discussion started by: stevie_velvet
5 Replies
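Since uniq only collapses consecutive duplicates, the usual alternatives are to sort first, or to use awk, which drops repeated lines wherever they occur while preserving the original order. A sketch, with file standing in for the actual credentials list:

awk '!seen[$0]++' file > file.clean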

3. Shell Programming and Scripting

Removing duplicates

Hi, I have a file in the below format: test test (10) to to (25) see see (45) and I need the output in the format of: test 10 to 25 see 45. Can someone help me?
Discussion started by: imdadulla
6 Replies
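One way to produce that output, assuming exactly three space-separated fields per line as shown: print the first field, then the third field with its parentheses stripped.

awk '{ gsub(/[()]/, "", $3); print $1, $3 }' file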

4. UNIX for Advanced & Expert Users

removing duplicates.

Hi All, in UNIX we have a file where we have to remove the duplicates based on one specific column. Can anybody tell me the command? ex: file1: id,name 1,ww 2,qwq 2,asas 3,asa 4,asas 4,asas o/p: 1,ww 2,qwq 3,asa
Discussion started by: raju4u
7 Replies
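A sketch for this one, assuming comma-separated fields with the id in column 1 and keeping the first line seen for each id (note this would also keep one 4,asas line):

awk -F, '!seen[$1]++' file1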

5. Shell Programming and Scripting

Removing duplicates

I have a test file with the following 2 columns: Col 1 | Col 2 T1 | 1 <= remove T5 | 1 T4 | 2 T1 | 3 T3 | 3 T4 | 1 <= remove T1 | 2 <= remove T3 ...
Discussion started by: gctex
7 Replies
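If the rows marked for removal are those that do not carry the largest Col 2 value for their Col 1 key (which matches the sample), a two-pass awk sketch, assuming fields separated by | with optional surrounding spaces:

awk -F' *[|] *' 'NR==FNR { if ($2 > max[$1]) max[$1] = $2; next } $2 == max[$1]' file file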

6. Emergency UNIX and Linux Support

Removing all the duplicates

I want to remove all the duplicates in a file; I don't want even a single entry to remain. For the input data: 12345|12|34 12345|13|23 3456|12|90 15670|12|13 12345|10|14 3456|12|13 I need the below data in one file: 15670|12|13 and the below data in another file
Discussion started by: pandeesh
9 Replies
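Assuming a duplicate here means the first |-delimited field occurs more than once, a two-pass awk sketch that reads the file twice, counts each key, then routes every line to one of two hypothetical output files:

awk -F'|' 'NR==FNR { cnt[$1]++; next } { print > (cnt[$1] == 1 ? "uniq.txt" : "dups.txt") }' input input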

7. Shell Programming and Scripting

Help in removing duplicates

I have an input file abc.txt with info like: abcd rateuse inklite robet rateuse abcd I need to remove the duplicates (e.g. abcd, rateuse) from the file and place the contents back in the same file abc.txt; if needed they can be placed in another file. Can anyone help me with this? :(
Discussion started by: rkrish
4 Replies
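A sketch that keeps the first occurrence of each line, preserves the original order, and then writes the result back over abc.txt via a temporary file:

awk '!seen[$0]++' abc.txt > abc.tmp && mv abc.tmp abc.txt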

8. UNIX for Dummies Questions & Answers

Removing duplicates from a file

Hi All, I am merging files coming from 2 different systems, and while doing that I am getting duplicate entries in the merged file: I,01,000131,764,2,4.00 I,01,000131,765,2,4.00 I,01,000131,772,2,4.00 I,01,000131,773,2,4.00 I,01,000168,762,2,2.00 I,01,000168,763,2,2.00...
Discussion started by: Sri3001
5 Replies
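If the duplicated records are byte-identical lines and their order in the merged file does not matter, sort -u is enough; otherwise the order-preserving awk filter applies. A sketch, with merged.txt standing in for the merged file:

sort -u merged.txt > merged.dedup.txt
# or, keeping the original order:
awk '!seen[$0]++' merged.txt > merged.dedup.txt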

9. Shell Programming and Scripting

Removing duplicates except the last occurrence

Hi All, I have a file like the one below: @DB_FCTS\src\Data\Scripts\Delete_CU_OM_BIL_PRT_STMT_TYP.sql @DB_FCTS\src\Data\Scripts\Delete_CDP_BILL_LBL_MSG.sql @DB_FCTS\src\Data\Scripts\Delete_OM_BIDDR.sql @DB_FCTS\src\Data\Scripts\Insert_CU_OM_LBL_MSG.sql...
Discussion started by: mechvijays
11 Replies
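A common trick for keeping the last occurrence rather than the first: reverse the file, keep the first occurrence of each line, and reverse again. A sketch (tac is GNU coreutils; some BSDs spell it tail -r):

tac file | awk '!seen[$0]++' | tac > file.dedup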

10. Shell Programming and Scripting

Removing duplicates from new file

I have two files, and I want to remove/delete all the duplicate lines in file2 (viz. unix, unix2, unix3).
Discussion started by: sagar_1986
2 Replies
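If the goal is to delete from file2 every line that also appears in file1, grep can do the filtering with fixed-string (-F), whole-line (-x) matches read from file1 (-f). A sketch:

grep -vxFf file1 file2 > file2.clean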

NAME
DBIx::Class::Helper::Schema::LintContents - Check that the data in your database matches your constraints

VERSION

version 2.013002

SYNOPSIS

    package MyApp::Schema;

    use parent 'DBIx::Class::Schema';

    __PACKAGE__->load_components('Helper::Schema::LintContents');

    1;

And later, somewhere else:

    say "Incorrectly Null Users:";
    for ($schema->null_check_source_auto('User')->all) {
       say '* ' . $_->id;
    }

    say "Duplicate Users:";
    my $duplicates = $schema->dup_check_source_auto('User');
    for (keys %$duplicates) {
       say "Constraint: $_";
       for ($duplicates->{$_}->all) {
          say '* ' . $_->id;
       }
    }

    say "Users with invalid FK's:";
    my $invalid_fks = $schema->fk_check_source_auto('User');
    for (keys %$invalid_fks) {
       say "Rel: $_";
       for ($invalid_fks->{$_}->all) {
          say '* ' . $_->id;
       }
    }

DESCRIPTION

Some people think that constraints make their databases slower. As silly as that is, I have been in a similar situation! I'm here to help you, dear developers! Basically this is a suite of methods that allow you to find violated "constraints." To be clear, the constraints I mean are the ones you tell DBIx::Class about; real constraints are fairly sure to be followed.

METHODS

fk_check_source

    my $busted = $schema->fk_check_source(
       'User', 'Group', { group_id => 'id' },
    );

"fk_check_source" takes three arguments: the first is the from source moniker of a relationship; the second is the to source or source moniker of a relationship; the final argument is a hash reference representing the columns of the relationship. The return value is a resultset of rows from the from source that do not have a corresponding to row. To be clear, the example given above would return a resultset of "User" rows that have a "group_id" pointing to a "Group" that does not exist.

fk_check_source_auto

    my $broken = $schema->fk_check_source_auto('User');

"fk_check_source_auto" takes a single argument: the source to check. It will check all the foreign key (that is, "belongs_to") relationships for missing... "foreign" rows. The return value will be a hashref where the keys are the relationship names and the values are resultsets of the respective violated relationship.

dup_check_source

    my $smashed = $schema->dup_check_source(
       'Group', ['id'],
    );

"dup_check_source" takes two arguments: the first is the source moniker to be checked; the second is an arrayref of columns that "should be" unique. The return value is a resultset of rows from the source that duplicate the passed columns. So with the example above, the resultset would return all groups that are "duplicates" of other groups based on "id".

dup_check_source_auto

    my $ruined = $schema->dup_check_source_auto('Group');

"dup_check_source_auto" takes a single argument, which is the name of the result source in which to check for duplicates. It will return a hashref where the keys are the names of the unique constraints to be checked. The values will be resultsets of the respective duplicate rows.

null_check_source

    my $blarg = $schema->null_check_source('Group', ['id']);

"null_check_source" takes two arguments: the first is the name of the source to check; the second is an arrayref of columns that should contain no nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be.

null_check_source_auto

    my $wrecked = $schema->null_check_source_auto('Group');

"null_check_source_auto" takes a single argument, which is the name of the result source in which to check for nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be. This method automatically uses the configured columns that have "is_nullable" set to false.

AUTHOR

Arthur Axel "fREW" Schmidt <frioux+cpan@gmail.com>

COPYRIGHT AND LICENSE

This software is copyright (c) 2012 by Arthur Axel "fREW" Schmidt.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.