If a sequence number appears more than once, I want to display such records.
The file contains the following,
e.g.
I want only those records whose sequence number appears more than once.
Thanks
Ashfaque
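For reference, a minimal two-pass awk sketch, assuming the sequence number is the first field (the sample file is not shown, so the field position is an assumption):
# first pass counts each sequence number; second pass prints records seen more than once
awk 'NR==FNR { count[$1]++; next } count[$1] > 1' file file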
Moderator's Comments:
If you continue to ignore the requirement to use code tags, you will collect infraction points and will eventually be banned.
Last edited by zaxxon; 07-24-2014 at 09:38 AM. Reason: code tags
I wanted to see if there is any duplicate of a specific command in the command search path. The following code will list all copies of "openssl" in the command search path.
find `printenv PATH | sed "s/:/ /g"` -maxdepth 1 -name openssl
However, the above code would fail if the search path... (9 Replies)
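A more robust alternative is to split PATH on ':' and test each directory individually; a sketch, still looking for openssl as above:
# avoids expanding the whole PATH as one unquoted word, so directories
# containing spaces are handled correctly
printf '%s\n' "$PATH" | tr ':' '\n' | while IFS= read -r dir; do
  [ -x "$dir/openssl" ] && printf '%s\n' "$dir/openssl"
done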
I have this shell script that will FTP a file from a directory on Windows to UNIX. It gets the name of the file stored in the variable $UpLoadFileName, then puts it in the local directory LocalDir="${MPATH}/xxxxx/dat_files". That part seems to be working, but then I need to take that file and rename it. I am using... (3 Replies)
Hi
May I ask if anyone knows of a package that will search a directory recursively and determine duplicate files according to filename, modification date, or any other attribute that establishes duplicity?
If there is none, where should I start, or which shell scripting commands... (11 Replies)
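As a starting point, here is a minimal sketch that flags duplicates by content checksum rather than by name or date, assuming GNU md5sum and uniq are available (the directory is hypothetical):
# checksum every regular file, sort, then print groups sharing the same checksum
find /some/dir -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate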
Dear All,
I have file with 4 columns:
1 AA 0 21
2 BB 0 31
3 AA 0 21
4 CC 0 41
I would like to find duplicate records based on column 2 and replace the 4th column of each duplicate with a new value, so that the output will be:
1 AA 0 21
2 BB 0 31
3 AA 0 -21
4 CC 0 41
Any suggestions... (3 Replies)
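A one-line awk sketch that produces exactly that output, assuming the "new value" is the negated original (as the sample suggests):
# negate field 4 whenever field 2 has already been seen; print every line
awk '{ if (seen[$2]++) $4 = -$4 } 1' file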
Hi all,
I have a file like this
ID 3BP5L_HUMAN Reviewed; 393 AA.
AC Q7L8J4; Q96FI5; Q9BQH8; Q9C0E3;
DT 05-FEB-2008, integrated into UniProtKB/Swiss-Prot.
DT 05-JUL-2004, sequence version 1.
DT 05-SEP-2012, entry version 71.
FT COILED 59 140 ... (1 Reply)
Hello,
I have 10 fasta files with sequenced-read information, with read sizes from 15 to 35. I have combined the reads, collapsed them into unique reads, and filtered for unique reads 18 to 26 bp long. Now I want to count how often each unique read appears in all the fasta files and make a table... (5 Replies)
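One way to build such a table is to count each read per file and assemble the columns afterwards; a sketch, assuming plain single-line FASTA records (the file names are hypothetical):
# for every FASTA file, count how often each sequence line occurs;
# prints: read <TAB> filename <TAB> count
for f in sample*.fa; do
  awk -v f="$f" '!/^>/ { c[$0]++ } END { for (r in c) print r "\t" f "\t" c[r] }' "$f"
done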
Hi,
I've written a script to search a log file for an Oracle ORA- error, print that line and the .trc file associated with it, as well as the dateline of when I assume the error occurred. In most cases it is the first dateline preceding the error.
Unfortunately, this is not a foolproof script.... (2 Replies)
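One way to make the dateline lookup explicit is to remember the most recent dateline while scanning; a sketch, assuming alert-log-style datelines such as "Mon Jul 21 ..." (the log name is hypothetical):
# remember the last dateline seen; print it together with each ORA- line
awk '/^[A-Z][a-z][a-z] [A-Z][a-z][a-z] / { ts = $0 } /ORA-/ { print ts; print }' alert.log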
Hi,
I have one file with one column and several hundred entries
File1:
NA1
NA2
NA3
And now I need to run a command within a mapping aligner tool to insert these sample names into a sequence alignment file (SAM) so that they look like this:
@RG ID:Library1 SM:NA1 PL:Illumina ... (7 Replies)
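If the aligner is bwa mem (an assumption; its -R option injects an @RG line into the SAM header), the sample names can be fed in from File1 like this, with the reference and FASTQ names purely illustrative:
# run the aligner once per sample, tagging each SAM file with its read group
while IFS= read -r sample; do
  bwa mem -R "@RG\tID:Library1\tSM:${sample}\tPL:Illumina" ref.fa "${sample}.fastq" > "${sample}.sam"
done < File1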
Hi,
I have a file that contains multiple records for the same database.
I need to find the maximum size for each database. At the moment, I am doing it as below:
Sample generated file to parse is as below. With the caret (^) delimiter, field 1 is the database name, 2 is the database ID and... (3 Replies)
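A per-database maximum can also be tracked in a single awk pass; a sketch, assuming the size is in field 3 (the excerpt cuts off before naming the size column):
# keep the largest size seen per database name (field 1), caret-delimited
awk -F'^' '$3 > max[$1] { max[$1] = $3 } END { for (db in max) print db, max[db] }' file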
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is really only useful together with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
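The figures in the first run are internally consistent, and the arithmetic can be checked directly (a quick sketch, not part of bup):
# 160 SHA-1 bits minus 40 matching bits leaves 120 bits of margin;
# at 1.94 bits per doubling that is 120/1.94 = 61.86 doublings, i.e. the
# repository could grow roughly 2^61.86 times larger, matching the
# 4.19338e+18 shown above up to rounding (1.94 is itself rounded)
awk 'BEGIN { margin = 160 - 40; d = margin / 1.94; printf "%d bits, %.2f doublings, %.5e times\n", margin, d, 2^d }'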
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.