If you look at the entries carefully, no entry (row) is completely duplicated, so an awk solution based on finding duplicate rows would return an empty result for your requested input.
Here is an example for finding the unique values; hope it may help.
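A minimal sketch ("file.csv" is a placeholder name): the classic awk idiom below prints each row only the first time it is seen, which leaves exactly the unique rows.
awk '!seen[$0]++' file.csv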
Hi all,
I have a huge csv file with the following format of data,
Num SNPs, 549997
Total SNPs,555352
Num Samples, 157
SNP, SampleID, Allele1, Allele2
A001,AB1,A,A
A002,AB1,A,A
A003,AB1,A,A
...
...
...
I would like to write out a list of the unique SNPs (column 1). Could you... (3 Replies)
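A sketch for this one, assuming the four metadata/header lines shown above really are the first four lines of the file and that "data.csv" is the file name: skip those lines, then print each SNP (column 1) the first time it occurs.
awk -F',' 'NR > 4 && !seen[$1]++ {print $1}' data.csv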
I need to take the second column of a .csv file and count the number of instances of each unique value in that same second column. I'd like the output to be value,count sorted by most instances. Thanks for any guidance!
Data example:
317476,317756,0
816063,318861,0
313123,319091,0... (4 Replies)
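A sketch, assuming the file is named data.csv and has no header line: awk tallies each distinct value in column 2, and sort orders the output by count, highest first.
awk -F',' '{count[$2]++} END {for (v in count) print v "," count[v]}' data.csv | sort -t',' -k2,2nr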
Hi,
I have a file with the following columns:
361459 447394 CHL1
290282 290282 CHL1
361459 447394 CHL1
361459 447394 CHL1
178352861 178363529 AGA
178352861 178363529 AGA
178363657 178363657 AGA
Essentially, using CHL1 as an example. For any line that has CHL1 in... (2 Replies)
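The request is cut off above, so the intent here is a guess: if the goal is simply to drop repeated lines so that each start/end/gene combination appears once, the usual dedup idiom applies ("genes.txt" is a placeholder name).
awk '!seen[$0]++' genes.txt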
Hello,
I need some sort of way to extract every date contained in a file, and count how many of those dates there are.
Here are the specifics:
The date format I'm looking for is mm/dd/yyyy
I only need to look after line 45 in the file (that's where the data begins)
The columns of... (2 Replies)
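A sketch under the stated constraints ("report.txt" is a placeholder name): tail -n +46 starts output at line 46, i.e., after line 45; grep -oE (the -o flag, available in GNU and BSD grep, prints each match on its own line) pulls out every mm/dd/yyyy pattern; sort | uniq -c then counts each distinct date.
tail -n +46 report.txt | grep -oE '[0-9]{2}/[0-9]{2}/[0-9]{4}' | sort | uniq -c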
How can I delete a column from a CSV file whose comma-separated values include strings enclosed in double quotes with commas inside them? I have a file 44.csv with 4 lines, including the header, in the format below:
column1, column2, column3, column 4, column5, column6
12,455,"string with... (6 Replies)
Hi all,
I am new to shell scripting and need your help writing a script.
I need to write a shell script to extract data from a .csv file where columns are ',' separated.
The file has 5 columns, say column 1, column 2, ..., column 5, as below along with their values... (3 Replies)
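The columns wanted are truncated above; as a generic starting point, and assuming plain comma-separated values with no quoted or embedded commas, cut can select columns by position (columns 1 and 3 here are example choices).
cut -d',' -f1,3 file.csv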
I have a .CSV file with the below format:
"column 1","column 2","column 3","column 4","column 5","column 6","column 7","column 8","column 9","column 10
"12310","42324564756","a simple string with a , comma","string with or, without commas","string 1","USD","12","70%","08/01/2013",""... (2 Replies)
input.csv:
Field1,Field2,Field3,Field4,Field4
abc ,123 ,xyz ,000 ,pqr
mno ,123 ,dfr ,111 ,bbb
output:
Field2,Field4
123 ,000
123 ,111
How can I fetch the values of Field4 where Field2='123'?
I don't want to fetch the values based on column position. Instead, I want to... (10 Replies)
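A sketch in awk that resolves columns by header name instead of position (note the sample header lists Field4 twice; this mapping keeps the first occurrence, and the +0 numeric comparison tolerates the trailing spaces visible in the sample rows).
awk -F',' '
NR == 1 {
    for (i = 1; i <= NF; i++)
        if (!($i in col)) col[$i] = i   # map each header name to its first column
    print "Field2" FS "Field4"
    next
}
$(col["Field2"]) + 0 == 123 { print $(col["Field2"]) FS $(col["Field4"]) }
' input.csv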
Hi Folks,
I have the feed file below, named abc1.txt; it has a title line, and below it are the respective values in the rows. It is a completely pipe-delimited file.
... (4 Replies)
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.