I have an m x n matrix written out to a file, say like this:
1,2,3,4,5,6
2,6,3,10,34,67
1,45,6,7,8,8
I want to calculate the column averages in the minimum amount of code or processing possible. I would have liked to use my favorite tool, awk, but since it processes row-wise, getting the... (5 Replies)
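A minimal awk sketch for this, assuming a comma-separated file (the file name matrix.csv is hypothetical): accumulate a running total per column, then divide by the row count in the END block.

```shell
# Hypothetical file name; contents are the sample matrix from the post.
cat > matrix.csv <<'EOF'
1,2,3,4,5,6
2,6,3,10,34,67
1,45,6,7,8,8
EOF

# Sum each column across all rows, then divide by NR (the row count).
awk -F',' '{
    for (i = 1; i <= NF; i++) sum[i] += $i
}
END {
    for (i = 1; i <= NF; i++)
        printf "%s%g", (i > 1 ? "," : ""), sum[i] / NR
    print ""
}' matrix.csv
# → 1.33333,17.6667,4,7,15.6667,27
```

This assumes every row has the same number of fields; ragged rows would need an extra check on NF.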
hi
I have a file which has the following entries:
1501,AAA,2.00
1525,AAA,2.00
1501,AAA,2.00
1525,AAA,2.00
1501,AAA,3.00
1525,AAA,3.00
1525,AAA,3.00
1501,AAA,3.00
1501,AAA,3.00
I want to have the output column-wise, like:
1501,AAA,13
1525,AAA,10
here 13 comes as the sum of the last column values... (6 Replies)
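A short awk sketch for this kind of group sum, assuming the group key is the first two fields and the file is named data.txt (hypothetical name):

```shell
cat > data.txt <<'EOF'
1501,AAA,2.00
1525,AAA,2.00
1501,AAA,2.00
1525,AAA,2.00
1501,AAA,3.00
1525,AAA,3.00
1525,AAA,3.00
1501,AAA,3.00
1501,AAA,3.00
EOF

# Key on fields 1 and 2, accumulate field 3; sort because
# awk's for-in iteration order is unspecified.
awk -F',' '{sum[$1 FS $2] += $3} END {for (k in sum) print k FS sum[k]}' data.txt | sort
# → 1501,AAA,13
#   1525,AAA,10
```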
Hi,
I have two files. I want to add the fifth column from both files and redirect the result to a third file.
Both files have the same records except for the fifth field; each record should be written to the new file with its fifth field set to the sum of the fifth fields from both files.
for... (2 Replies)
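One possible sketch, assuming comma-delimited files with identical records in the same order (the file names file1, file2, and file3 and their contents are hypothetical): load the fifth field of the second file into an array keyed by line number, then add it in while printing the first file.

```shell
# Hypothetical sample files sharing fields 1-4, differing only in field 5.
printf 'a,b,c,d,10\ne,f,g,h,20\n' > file1
printf 'a,b,c,d,5\ne,f,g,h,7\n'  > file2

# First pass (NR==FNR): remember file2's fifth field per line number.
# Second pass: add it to file1's fifth field; assigning $5 rebuilds
# the record using OFS.
awk -F',' 'NR == FNR {f5[FNR] = $5; next} {$5 += f5[FNR]; print}' OFS=',' file2 file1 > file3
cat file3
# → a,b,c,d,15
#   e,f,g,h,27
```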
Hi guys,
I want to delete files from June 13 to June 30; can anyone tell me the syntax to remove them with the rm command? I have hundreds of core files in my /var directory, so I want to clear last month's core files. Thanks in advance. :)) (2 Replies)
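A hedged sketch using GNU find rather than rm directly, assuming GNU findutils (-newermt is a GNU extension) and that the files match core*; the year is an assumption, so adjust it to the one you mean. Review the printed list before deleting anything.

```shell
# Select core files last modified from June 13 up to (not including) July 1.
# -print first, to review the list safely.
find /var -name 'core*' -type f -newermt '2024-06-13' ! -newermt '2024-07-01' -print

# Once the list looks right, delete with:
# find /var -name 'core*' -type f -newermt '2024-06-13' ! -newermt '2024-07-01' -delete
```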
Hello All ,
I have a problem with summing a column by group.
Input File -
COL_1,COL_2,COL_3,COL_4,COL_5,COL_6,COL_7,COL_8,COL_9,COL_10,COL_11
3010,21,1923D ,6,0,0.26,0,0.26,-0.26,1,200807
3010,21,192BI ,6,24558.97,1943.94,0,1943.94,22615.03,1,200807
3010,21,192BI... (8 Replies)
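A sketch under the assumption that the group key is COL_1 and COL_2 and that COL_5 is the value to sum; the real grouping may differ, since the post is truncated. Only the two complete sample rows from the post are used.

```shell
cat > input.csv <<'EOF'
COL_1,COL_2,COL_3,COL_4,COL_5,COL_6,COL_7,COL_8,COL_9,COL_10,COL_11
3010,21,1923D ,6,0,0.26,0,0.26,-0.26,1,200807
3010,21,192BI ,6,24558.97,1943.94,0,1943.94,22615.03,1,200807
EOF

# Skip the header (NR > 1), key on COL_1,COL_2, and sum COL_5;
# printf "%.2f" keeps the two decimal places of the input.
awk -F',' 'NR > 1 {sum[$1 FS $2] += $5} END {for (k in sum) printf "%s,%.2f\n", k, sum[k]}' input.csv
# → 3010,21,24558.97
```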
Hi, I have pasted sample data below in data.txt.
Please suggest any way out, as the 3rd field is
cat data.txt
22:37:34 STARTING abc
22:37:40 FAILURE sadn
00:06:42 STARTING asd
00:06:51 FAILURE ad
02:06:38 STARTING acs
02:06:46 FAILURE cz
04:06:35 STARTING xzc... (1 Reply)
The code below works fine as long as none of the columns contains a pipe in its content; if any column's content contains a pipe, the value shifts into the next column.
I want my code to work even when a column contains a pipe in addition to the delimiter.
NOTE : If there is a pipe in the content... (6 Replies)
I have a file which needs to be summed up using the date column.
I/P:
2017/01/01 a 10
2017/01/01 b 20
2017/01/01 c 40
2017/01/01 a 60
2017/01/01 b 50
2017/01/01 c 40
2017/01/01 a 20
2017/01/01 b 30
2017/01/01 c 40
2017/02/01 a 10
2017/02/01 b 20
2017/02/01 c 30
2017/02/01 a 10... (6 Replies)
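A possible sketch, assuming the total is wanted per date and per key letter (fields 1 and 2); only the complete sample lines from the post are used, since the last line is truncated.

```shell
cat > input.txt <<'EOF'
2017/01/01 a 10
2017/01/01 b 20
2017/01/01 c 40
2017/01/01 a 60
2017/01/01 b 50
2017/01/01 c 40
2017/01/01 a 20
2017/01/01 b 30
2017/01/01 c 40
2017/02/01 a 10
2017/02/01 b 20
2017/02/01 c 30
EOF

# Key on date + letter, accumulate field 3; sort for stable output order.
awk '{sum[$1 " " $2] += $3} END {for (k in sum) print k, sum[k]}' input.txt | sort
# → 2017/01/01 a 90
#   2017/01/01 b 100
#   2017/01/01 c 120
#   2017/02/01 a 10
#   2017/02/01 b 20
#   2017/02/01 c 30
```

If the total should instead be per date only, drop `" " $2` from the key.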
Discussion started by: Booo
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                          General Commands Manual                          bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-bup-margin(1)