Thank you Ravinder for your response. Sorry if my question is not clear.
Condition 1 - The unique values of column one are 3330690 and 0640829.
Condition 2 - The column-one value 3330690 is associated with 2 distinct values of column 2, namely 373846 and 373847. The column-one value 0640829 is associated with a single value of column 2, which is 459725.
Hence the output is expected as below:
Code:
2 3330690
1 459725
Hope this clarifies.
---------- Post updated at 08:48 PM ---------- Previous update was at 06:11 PM ----------
Thank you RudiC. This worked perfectly. Now I am trying to understand this piece of code.
Can you please help explain the code?
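For reference, RudiC's actual code is not quoted in this excerpt, so here is only a minimal sketch of the usual awk technique for this kind of count: it prints each column-one value with its number of distinct column-two values (which field gets echoed back next to the count is a formatting detail).
Code:
awk '!seen[$1, $2]++ { cnt[$1]++ }          # count each distinct (col1,col2) pair once
     END { for (k in cnt) print cnt[k], k }' file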
Hi All,
I have a file with 3 columns (string string integer):
a b 1
x y 2
p k 5
y y 4
.....
.....
Question:
I want to get the unique values of column 2, sorted on column 2, and the sum of the 3rd column over the corresponding rows; e.g. the above file should return the... (6 Replies)
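One common sketch for this, assuming whitespace-separated columns: sum column 3 into an array keyed by column 2, then sort the result.
Code:
awk '{ sum[$2] += $3 } END { for (k in sum) print k, sum[k] }' file | sort -k1,1
For the sample above this would print "b 1", "k 5", "y 6", one pair per line.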
Hi all, I need to search for some patterns in a file by month and then count unique records.
D11
G11
R11 -------> Pattern available in file
S11
For Jan, columns $1 to $5 contain some records among which I want to find the unique ones.
For this purpose I have written a script like the one below:
awk '/Jan/ ||... (4 Replies)
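The script above is cut off, but one way to read the requirement is: for lines matching the month, count each distinct combination of fields $1-$5 once. A sketch along those lines:
Code:
awk '/Jan/ { if (!seen[$1, $2, $3, $4, $5]++) unique++ }   # tally each distinct $1-$5 combination once
     END { print unique + 0 }' file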
Hi, I have tab-delimited data similar to the following:
dot is-big 2
dot is-round 3
dot is-gray 4
cat is-big 3
hot in-summer 5
I want to count the frequency of each individual "unique" value in the 1st column. Thus, the desired output would be as follows:
dot 3
cat 1
hot 1
is... (5 Replies)
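A classic awk idiom for this frequency count (a sketch; the output order is arbitrary unless you pipe it through sort):
Code:
awk '{ cnt[$1]++ } END { for (v in cnt) print v, cnt[v] }' file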
Hello, I'm trying to use awk but am new to this. I have a file like this:
Bob is a good boy
Bob is a strange person
Bob is a good dancer
Jane can party
Jane is a good girl
Jane is batty
I'd like to get this:
Bob is a good boy
is a strange person
is a good dancer
Jane... (4 Replies)
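A sketch of one way to do this in awk (the thread's actual solution isn't shown here): remember the first field and blank it out on consecutive repeats.
Code:
awk '{
    if ($1 == prev)
        sub(/^[^[:blank:]]+[[:blank:]]+/, "")   # strip the repeated leading word
    else
        prev = $1                               # remember the new leading value
    print
}' file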
I am trying to sort, then uniq by the 1st column, and report this 4-column tab-delimited table, e.g.
chr10:112174128 rs2255141 2E-10 Cholesterol, total
chr10:112174128 rs2255141 7E-16 LDL
chr10:17218291 rs10904908 3E-11 HDL Cholesterol
chr10:17218291 rs970548 8E-9 TG... (4 Replies)
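One common idiom for this (a sketch; $'\t' is bash/ksh syntax for a literal tab): sort on column 1, then keep the first row seen for each distinct column-1 value.
Code:
sort -t $'\t' -k1,1 file | awk -F '\t' '!seen[$1]++'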
Background:
I have a file of thousands of potential SSR primers from Batch Primer 3.
I can't use primers that will contain the same sequence ID or sequence as another primer.
I have some basic shell scripting skills, but not enough to handle this.
What you need to know:
I need to remove the... (1 Reply)
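The requirement is cut off above, but for the general task of dropping every record whose ID or sequence occurs more than once, a two-pass awk sketch could look like this (the column positions are pure assumptions for illustration: ID in $1, sequence in $2):
Code:
awk 'NR == FNR { id[$1]++; seq[$2]++; next }   # pass 1: count IDs and sequences
     id[$1] == 1 && seq[$2] == 1               # pass 2: keep rows whose ID and sequence are both unique
' file file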
Hi All,
Does anyone have any suggestions/examples of how I could show only lines where the first field is not duplicated? If the first field is listed more than once, it shouldn't be shown even if the other columns make the line unique.
Example file :
876,RIBDA,EC2
876,RIBDH,EX7
877,RIBDF,E28... (4 Replies)
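A two-pass awk sketch for this comma-separated input: the first pass counts first fields, the second prints only lines whose first field occurs exactly once (here, only the 877 line).
Code:
awk -F, 'NR == FNR { cnt[$1]++; next } cnt[$1] == 1' file file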
Hello,
I am trying to count unique rows in my file based on 4 columns (2-5) and to output their frequency in a sixth column. My file is tab-delimited.
My input file looks like this:
Column1 Column2 Column3 Column4 Column5
1.1 100 100 a b
1.1 100 100 a c
1.2 200 205 a d
1.3 300 301 a y
1.3 300... (6 Replies)
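A two-pass awk sketch for tab-delimited input (assuming no special header handling is needed): the first pass tallies each (column 2..5) combination, the second appends that tally as a sixth column.
Code:
awk -F '\t' -v OFS='\t' '
    NR == FNR { cnt[$2, $3, $4, $5]++; next }   # pass 1: tally each column 2-5 combination
    { print $0, cnt[$2, $3, $4, $5] }           # pass 2: append the tally as column 6
' file file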
What is an efficient way of counting the number of unique values in a 400-column by 1000-row array and outputting the counts per column, assuming the unique values in the array are:
A, B, C, D
In other words, the output should look like:
Value COL1 COL2 COL3
A 50 51 52... (16 Replies)
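One sketch in awk, assuming whitespace-separated columns and the fixed value set A, B, C, D (note that in the END block NF still holds the field count of the last row read):
Code:
awk '
    { for (c = 1; c <= NF; c++) cnt[$c, c]++ }      # tally each value per column
    END {
        ncols = NF                                  # NF from the last record read
        n = split("A B C D", vals, " ")
        printf "Value"
        for (c = 1; c <= ncols; c++) printf "\tCOL%d", c
        print ""
        for (i = 1; i <= n; i++) {
            printf "%s", vals[i]
            for (c = 1; c <= ncols; c++) printf "\t%d", cnt[vals[i], c] + 0
            print ""
        }
    }
' file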
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown- bup-margin(1)