Perl: filtering lines based on duplicate values in a column

Hi, I have a file like the one below. I need to eliminate every line whose first-column value is repeated 10 times.
Code:
13 18 1 + chromosome 1, 122638287 AGAGTATGGTCGCGGTTG
13 18 1 + chromosome 1, 128904080 AGAGTATGGTCGCGGTTG
13 18 1 - chromosome 14, 13627938 CAACCGCGACCATACTCT
13 18 1 + chromosome 1, 187172197 AGAGTATGGTCGCGGTTG
13 18 1 - chromosome X, 38407155 CAACCGCGACCATACTCT
13 18 1 + chromosome 9, 13503259 AGAGTATGGTCGCGGTTG
13 18 1 - chromosome 2, 105480832 CAACCGCGACCATACTCT
13 18 1 + chromosome 9, 49045535 AGAGTATGGTCGCGGTTG
13 18 1 + chromosome 1, 178729626 AGAGTATGGTCGCGGTTG
13 18 1 - chromosome X, 55081462 CAACCGCGACCATACTCT
9 17 2 + chromosome 10, 101398385 GCCAGTTCTACAGTCCG
9 17 2 - chromosome 3, 103818009 CGGACTGTAGAACTGGC
9 17 2 - chromosome 16, 94552245 CGGACTGTAGAACTGGC
4 18 1 - chromosome 18, 70056996 TACCCAACAACACATAGT

The value 13 in the first column is repeated 10 times across consecutive lines, so all of those lines should be removed from the output.

So the desired output would be:
Code:
9 17 2 + chromosome 10, 101398385 GCCAGTTCTACAGTCCG
9 17 2 - chromosome 3, 103818009 CGGACTGTAGAACTGGC
9 17 2 - chromosome 16, 94552245 CGGACTGTAGAACTGGC
4 18 1 - chromosome 18, 70056996 TACCCAACAACACATAGT

Thank you very much in advance. If possible, a solution in Perl would be much appreciated.
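
Not part of the original post, but a minimal two-pass Perl sketch along these lines might look as follows. It assumes whitespace-separated columns and treats 10 or more occurrences of the first-column value as the cutoff; adjust the comparison if exactly 10 is meant. The script name and usage are hypothetical.
Code:
#!/usr/bin/perl
# Sketch: drop lines whose first-column value occurs 10 or more times.
use strict;
use warnings;

my $file = shift or die "Usage: $0 input_file\n";

# Pass 1: count how many times each first-column value appears.
my %count;
open my $fh, '<', $file or die "Cannot open $file: $!";
while (<$fh>) {
    my ($key) = split;          # first whitespace-separated field
    $count{$key}++ if defined $key;
}
close $fh;

# Pass 2: print only lines whose first-column value was seen fewer than 10 times.
open $fh, '<', $file or die "Cannot open $file: $!";
while (<$fh>) {
    my ($key) = split;
    print if defined $key && $count{$key} < 10;
}
close $fh;

It could then be run as something like: perl filter_dups.pl input.txt > filtered.txt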
 
