DBIx::Class::Helper::Schema::LintContents(3pm) User Contributed Perl Documentation DBIx::Class::Helper::Schema::LintContents(3pm)
NAME
       DBIx::Class::Helper::Schema::LintContents - Check that the data in your database matches your constraints
VERSION
version 2.013002
SYNOPSIS
package MyApp::Schema;
use parent 'DBIx::Class::Schema';
__PACKAGE__->load_components('Helper::Schema::LintContents');
1;
And later, somewhere else:
say "Incorrectly Null Users:";
for ($schema->null_check_source_auto('User')->all) {
say '* ' . $_->id
}
say "Duplicate Users:";
my $duplicates = $schema->dup_check_source_auto('User');
for (keys %$duplicates) {
say "Constraint: $_";
for ($duplicates->{$_}->all) {
say '* ' . $_->id
}
}
say "Users with invalid FK's:";
my $invalid_fks = $schema->fk_check_source_auto('User');
for (keys %$invalid_fks) {
say "Rel: $_";
for ($invalid_fks->{$_}->all) {
say '* ' . $_->id
}
}
DESCRIPTION
       Some people think that constraints make their databases slower. As silly as that is, I have been in a similar situation! I'm here to help
       you, dear developers! Basically this is a suite of methods that allow you to find violated "constraints." To be clear, the constraints I
       mean are the ones you tell DBIx::Class about; real database-level constraints are fairly sure to be followed already.
METHODS
fk_check_source
my $busted = $schema->fk_check_source(
'User',
'Group',
{ group_id => 'id' },
);
       "fk_check_source" takes three arguments: the first is the from source moniker of a relationship; the second is the to source moniker of
       the relationship; the final argument is a hash reference mapping the columns of the relationship. The return value is a resultset of rows
       from the from source that have no corresponding to row. To be clear, the example given above would return a resultset of "User" rows
       whose "group_id" points to a "Group" that does not exist.
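       Conceptually, the check above is an anti-join. A rough SQL sketch of what it finds, using the "User"/"Group" tables and "group_id"/"id"
       columns from the example (the SQL actually generated by DBIx::Class may differ):

       ```sql
       -- User rows whose group_id points at no existing Group row
       SELECT u.*
       FROM "user" u
       LEFT JOIN "group" g ON u.group_id = g.id
       WHERE u.group_id IS NOT NULL
         AND g.id IS NULL;
       ```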
fk_check_source_auto
my $broken = $schema->fk_check_source_auto('User');
"fk_check_source_auto" takes a single argument: the source to check. It will check all the foreign key (that is, "belongs_to")
relationships for missing... "foreign" rows. The return value will be a hashref where the keys are the relationship name and the values
are resultsets of the respective violated relationship.
   dup_check_source
        my $smashed = $schema->dup_check_source( 'Group', ['id'] );

       "dup_check_source" takes two arguments: the first is the source moniker to be checked; the second is an arrayref of columns that "should
       be" unique. The return value is a resultset of rows from the source that duplicate the passed columns. So the example above would return
       all groups that are "duplicates" of other groups based on "id".
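       In SQL terms, a duplicate check like the one above amounts to grouping on the supposedly unique columns and keeping groups with more
       than one row. A rough sketch, reusing the hypothetical "Group" table from the example:

       ```sql
       -- Group rows sharing an "id" value with at least one other row
       SELECT g.*
       FROM "group" g
       JOIN (
         SELECT id
         FROM "group"
         GROUP BY id
         HAVING COUNT(*) > 1
       ) dups ON g.id = dups.id;
       ```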
dup_check_source_auto
my $ruined = $schema->dup_check_source_auto('Group');
       "dup_check_source_auto" takes a single argument, which is the name of the resultsource in which to check for duplicates. It will return a
       hashref where the keys are the names of the unique constraints to be checked. The values will be resultsets of the respective duplicate
       rows.
null_check_source
my $blarg = $schema->null_check_source('Group', ['id']);
       "null_check_source" takes two arguments: the first is the name of the source to check; the second is an arrayref of columns that should
       contain no nulls. The return value is simply a resultset of rows that contain nulls where they shouldn't be.
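       The null check is the simplest of the three; for the example above it boils down to a sketch like this (again against the hypothetical
       "Group" table, with one WHERE clause per listed column):

       ```sql
       -- Group rows with a NULL in a column that should not allow it
       SELECT *
       FROM "group"
       WHERE id IS NULL;
       ```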
null_check_source_auto
my $wrecked = $schema->null_check_source_auto('Group');
"null_check_source_auto" takes a single argument, which is the name of the resultsource in which to check for nulls. The return value is
simply a resultset of rows that contain nulls where they shouldn't be. This method automatically uses the configured columns that have
"is_nullable" set to false.
AUTHOR
Arthur Axel "fREW" Schmidt <frioux+cpan@gmail.com>
COPYRIGHT AND LICENSE
This software is copyright (c) 2012 by Arthur Axel "fREW" Schmidt.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
perl v5.14.2 2012-06-18 DBIx::Class::Helper::Schema::LintContents(3pm)