Top Forums > Shell Programming and Scripting > To search duplicate sequence in file
Posted by ashfaque on Thursday, 24 July 2014, 08:35 AM
If a sequence number appears more than once, I want to display those records.
The file contains, for example:
Code:
4762574 6zE:MTOL ORDER_PRIORITY='30600673004762574' ORDER_IDENTIFIER='1149247244' MM_BID_VOLUME='240' MM_BID_PRICE='307.3000' m='GBX'
4762575 6zE:MTOL ORDER_PRIORITY='30600673004762575' ORDER_IDENTIFIER='864497562' MM_BID_VOLUME='112' MM_BID_PRICE='306.6000' m='GBX'
4762576 6zE:MTOL ORDER_IDENTIFIER='3716763318' MM_ASK_VOLUME='0'
4762577 6zE:QSCD ORDER_IDENTIFIER='2689916788' ORDER_PRIORITY='30598338004751402' MM_ASK_VOLUME='1000'
4762578 6zE:CHGL ORDER_PRIORITY='30600674004762578' ORDER_IDENTIFIER='2410789065' MM_ASK_VOLUME='152' MM_ASK_PRICE='201.7500' m='GBX'
4762579 6zE:ASMLA ORDER_IDENTIFIER='2071245638' MM_ASK_VOLUME='0'
4762580 6zE:ASMLA ORDER_PRIORITY='30600674004762580' ORDER_IDENTIFIER='2071245638' MM_ASK_VOLUME='93' MM_ASK_PRICE='63.3800' m='EUR'
4762581 6zE:CHGL ORDER_PRIORITY='30600674004762581' ORDER_IDENTIFIER='4172712031' MM_ASK_VOLUME='435' MM_ASK_PRICE='201.7500' m='GBX'
4762583 6zE:CHGL ORDER_PRIORITY='30600674004762582' ORDER_IDENTIFIER='1639963109' MM_ASK_VOLUME='300' MM_ASK_PRICE='202.0000' m='GBX'
4762583 6zE:ADNL ORDER_PRIORITY='30600674004762583' ORDER_IDENTIFIER='381217139' MM_ASK_VOLUME='120' MM_ASK_PRICE='450.3000' m='GBX'
4762584 6zE:QSCD ORDER_PRIORITY='30600674004762584' ORDER_IDENTIFIER='751440358' MM_ASK_VOLUME='1000' MM_ASK_PRICE='3.2070' m='EUR'

I want only those records whose sequence number appears more than once:

Code:
4762583 6zE:CHGL ORDER_PRIORITY='30600674004762582'  ORDER_IDENTIFIER='1639963109' MM_ASK_VOLUME='300'  MM_ASK_PRICE='202.0000' m='GBX'
4762583 6zE:ADNL ORDER_PRIORITY='30600674004762583' ORDER_IDENTIFIER='381217139' MM_ASK_VOLUME='120'

Thanks
Ashfaque
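No solution was posted in the excerpt above, but one way to get those records is a two-pass awk (a sketch, assuming the sequence number is always the first whitespace-separated field):

```shell
#!/bin/sh
# Pass 1 (NR==FNR): count how often each sequence number (field 1) occurs.
# Pass 2: print every line whose sequence number occurred more than once.
# "file" is a placeholder for the actual data file name.
awk 'NR==FNR { count[$1]++; next } count[$1] > 1' file file
```

Reading the file twice keeps the output in the original order without buffering the whole file in memory. If you only need the duplicated sequence numbers themselves rather than the full records, `awk '{print $1}' file | sort | uniq -d` would also work.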




 
