Hi all, this is a UNIX question.
I have a large flat file with millions of records.
col1|col2|col3
1|a|b
2|c|d
3|e|f
3|g|h
footer****
I am supposed to calculate the sum of col1 (1+2+3+3=9), the count of col1 (1,2,3,3, i.e. 4), and the distinct count of col1 (1,2,3, i.e. 3).
I would like it if you avoid... (4 Replies)
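A minimal one-pass awk sketch for this, assuming the header line and the footer**** line are the only rows whose first field is non-numeric (file is a placeholder name):

  awk -F'|' '$1 ~ /^[0-9]+$/ { sum += $1; cnt++; if (!seen[$1]++) uniq++ }
             END { print "sum=" sum, "count=" cnt, "distinct=" uniq }' file

On the sample above this prints sum=9 count=4 distinct=3.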
Hi,
I have to search for and count unique occurrences of the DE numbers shown in bold below in a file which has content like this:
Proc Tran F-BUY
Item Tkey Q5JV
Item Tsid JTIZ9
Item Tdat 20091001
Item Tset 20091001
Item Tbkr 5
Item Tshs 2
Item Tprc 897.0
Item Tcom 2000.0
Item Tcm1 20091001... (6 Replies)
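The bold markup did not survive this plain-text dump, so it is not clear which values are the DE numbers; assuming for illustration that they are the values after the Tkey tag, a hedged sketch is:

  awk '$2 == "Tkey" && !seen[$3]++ { c++ } END { print c + 0 }' file

Swap "Tkey" for whichever tag actually carries the DE number.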
Dear all,
I have been trying to print an entire field if the first line of that field matches.
For example, my input looks something like this.
aaa ddd zzz
123 987 126
24 0.650 985
354 9864 0.32
0.333 4324 000
I am looking for a pattern,... (5 Replies)
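If "field" here means a whole column selected by its header, a sketch along these lines may be what is wanted (the pattern ddd is only a placeholder):

  awk -v pat="ddd" 'NR == 1 { for (i = 1; i <= NF; i++) if ($i ~ pat) col = i }
                    col    { print $col }' file

This prints the matching column, header line included.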
I have an input.txt file which has 3 fields separated by a comma:
place, os, and timediff in seconds
tampa,win7, 2575
tampa,win7, 157619
tampa,win7, 3352
dallas,vista,604799
greenbay,winxp, 14400
greenbay,win7 , 518400
san jose,winxp, 228121
san jose,winxp, 70853
san jose,winxp, 193514... (5 Replies)
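The actual requirement is cut off above; assuming the goal is a total of the timediff values per place/OS pair, a sketch that also strips the stray spaces around the second field:

  awk -F',' '{ gsub(/^[ \t]+|[ \t]+$/, "", $2); sum[$1 "," $2] += $3 }
             END { for (k in sum) print k "," sum[k] }' input.txt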
Hi, I am trying to carry out a sum on a file (totals.txt).
The file looks like:
So far I have this command,
which returns 20610.
However, I want it to return 000000206100.
Any help would be great.
thanks! (6 Replies)
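printf in awk's END block handles the zero padding; whether the trailing extra digit in 000000206100 reflects one implied decimal place (hence the * 10 below) is an assumption, since the command and sample file are not shown:

  awk '{ s += $1 } END { printf "%012d\n", s * 10 }' totals.txt

If plain zero padding to 12 digits is all that is needed, drop the * 10.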
Hello,
I'm a complete newbie to coding; please help with this problem.
I have multiple files in a directory; I have to loop through the contents of each file and extract the number of unique isoforms in that file. Each file is tab delimited, and only the line with the first parent (column 3)... (1 Reply)
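A minimal shell loop under that reading, assuming the isoform/parent identifier is tab-delimited column 3 (the snippet is truncated, so the column choice is a guess):

  for f in ./*; do
      n=$(awk -F'\t' '!seen[$3]++ { c++ } END { print c + 0 }' "$f")
      printf '%s\t%s\n' "$f" "$n"
  done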
I'm looking for an awk script that will take the unique values in column 5, then print and count the unique values in column 6.
CA001011500 11111 11111 -9999 201301 AAA
CA001012040 11111 11111 -9999 201301 AAA
CA001012573 11111 11111 -9999 201301 BBB
CA001012710 11111 11111 -9999 201301... (4 Replies)
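One reading of this: for each unique value in column 5, count how many distinct column-6 values occur with it. A sketch under that assumption:

  awk '!seen[$5 SUBSEP $6]++ { cnt[$5]++ } END { for (k in cnt) print k, cnt[k] }' file

On the visible rows this reports 201301 with 2 distinct values (AAA and BBB).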
When I use the awk below to count the unique lines in $4, it seems to work on this input. The answer is 3 because $4 is unique only 3 times across all the entries. However, when I use the same on actual data I get 56,536, and I know the answer should be 56,548. My question is: is there a better way to... (8 Replies)
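The usual idiom is the first line below; a 12-record gap on real data is often stray whitespace or case differences hiding inside $4, so normalizing before counting (an assumption about the data) is worth a try:

  awk '!seen[$4]++ { c++ } END { print c }' file
  awk '{ v = tolower($4); gsub(/^[ \t]+|[ \t]+$/, "", v); if (!seen[v]++) c++ }
       END { print c }' file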
I cannot seem to get what should be a simple awk one-liner to work correctly and cannot figure out why. I would like to use patterns from a specific field in one file as regex to search for matching strings in the entire line ($0) of another file.
I would like to output the lines of File2 which... (1 Reply)
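A common two-file awk pattern for this, assuming the regexes sit in field 1 of File1:

  awk 'NR == FNR { pat[$1]; next }                  # pass 1: collect patterns
       { for (p in pat) if ($0 ~ p) { print; next } }' File1 File2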
LEARN ABOUT DEBIAN
bup-margin(1)                         General Commands Manual                         bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
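bup computes this internally, but the idea is easy to reproduce: after sorting the object ids, the longest shared prefix over all pairs must occur between some adjacent pair. An illustrative sketch (not bup's actual code) over a file of hex ids, one per line:

  sort ids.txt | awk '
      function hex2bits(h,    i, c, b) {      # expand hex digits to a bit string
          for (i = 1; i <= length(h); i++) {
              c = index("0123456789abcdef", tolower(substr(h, i, 1))) - 1
              b = b sprintf("%d%d%d%d", int(c/8) % 2, int(c/4) % 2, int(c/2) % 2, c % 2)
          }
          return b
      }
      NR > 1 {                                # compare each id with its sorted neighbor
          a = hex2bits(prev); t = hex2bits($1)
          n = 0
          while (n < length(a) && substr(a, n + 1, 1) == substr(t, n + 1, 1)) n++
          if (n > max) max = n
      }
      { prev = $1 }
      END { print max + 0 " matching prefix bits" }'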
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-                                                                          bup-margin(1)