My code below is supposed to find which company had the most business and then print the appropriate fields from another file, which are the company's ID number and name. I can loop through with awk and display the total amount of business for each company, but I need help in only printing out the... (1 Reply)
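A rough two-file awk sketch of that kind of lookup (the file names and column layout here are assumptions: totals.txt holds "company-ID amount" per transaction, companies.txt holds "company-ID name"):
awk 'NR == FNR { total[$1] += $2; next }              # first file: sum business per company ID
     { name[$1] = $2 }                                # second file: map company ID to name
     END { for (id in total)
               if (total[id] > best) { best = total[id]; bestid = id }   # assumes positive amounts
           print bestid, name[bestid], best }' totals.txt companies.txt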
Hi,
I have this input file called ttbitnres (which is concatenated and sorted):-
8 0.4444 213
10 0.5555 342
11 0.5555 321
12 0.5555 231
13 0.4444 400
My code is at :-
#!/bin/bash
echo -e Version "\t" Number of Pass "\t" Number of Fail "\t" Rank Position "\t" Min "\t" Max... (1 Reply)
Hello all,
this should really be easy for you... I need AWK to print column maxima for each column of such input:
Input:
1 2 3 1
2 1 1 3
2 1 1 2
Output should be:
2 2 3 3
This does the sum, but i need max instead:
{ for(i=1; i<=NF; i++)
      sum[i] += $i }
END {for(i=1; i in sum;... (3 Replies)
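One way to get per-column maxima instead, sketched under the assumption that every row has the same number of fields:
awk '{ for (i = 1; i <= NF; i++)
           if (FNR == 1 || $i > max[i]) max[i] = $i }    # seed from the first row, then keep the larger value
     END { for (i = 1; i <= NF; i++)
               printf "%s%s", max[i], (i < NF ? OFS : ORS) }' input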
Hi
I have a list of 2000 records with multiple entries and I want to get the max size for each entry
ABC 1
ABC 2
ABC 3
ABC 4
DEF 1
DEF 2
DEF 2
DEF 2
DEF 2
... (9 Replies)
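A minimal sketch for that grouping (assuming two whitespace-separated columns and positive sizes; awk's for-in order is arbitrary, hence the sort at the end):
awk '$2 > max[$1] { max[$1] = $2 }                    # keep the largest value seen per key
     END { for (k in max) print k, max[k] }' file | sort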
Hi,
I have files named as
energy.dat.1
energy.dat.2
energy.dat.3
...
energy.dat.2342
I would like to find the file with the largest number in the filename (e.g. energy.dat.2342) and open it.
Would you please share your expertise in writing the script?
Thanks in advance. (8 Replies)
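One possible approach, assuming the energy.dat.* files sit in the current directory and the trailing number is the third dot-separated field:
last=$(printf '%s\n' energy.dat.* | sort -t. -k3,3n | tail -1)   # numeric sort on the suffix
less "$last"                                                     # or vi "$last", gnuplot, etc.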
I have a tab-delimited file of the following format
a_1 rt
a_1 st_2
a_1 st_3
a_2 bt_2
a_2 st_er
b_2 st_2
b_2 st_32
S_1 rt_8
S_1 rt_64
I want to condense the above file by grouping it on the first column, like below.
a_1 rt st_2 st_3
a_2 bt_2 st_er
b_2 st_2... (1 Reply)
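A sketch that collapses the file that way while preserving the order in which keys first appear (it assumes the input really is tab-delimited):
awk 'BEGIN { FS = OFS = "\t" }
     $1 in row { row[$1] = row[$1] OFS $2; next }    # key seen before: append this value to its line
     { order[++n] = $1; row[$1] = $2 }               # new key: remember its order, start its line
     END { for (i = 1; i <= n; i++) print order[i], row[order[i]] }' file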
Hi,
I need an awk script (or whatever shell construct) that would take data like below and get the max value of the 3rd column when grouping by the 1st column.
clientname,day-of-month,max-users
-----------------------------------
client1,20120610,5
client2,20120610,2
client3,20120610,7... (3 Replies)
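A possible awk sketch, assuming comma-separated input and that the first two lines are the header and the dashed separator:
awk -F, 'NR > 2 { if (!($1 in max) || $3 > max[$1]) max[$1] = $3 }   # track the largest column-3 value per client
         END { for (k in max) print k "," max[k] }' file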
I have the following data set. I would like to look at column 3 and only use the rows which have the max value for column 3.
Can we use awk or sed to achieve this?
10 2 10 100
11 2 20 100
12 2 30 100
13 2 30 100
14 ... (7 Replies)
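A two-pass awk sketch can do that: it reads the file twice, first to find the column-3 maximum and then to print every row that carries it (assumes whitespace-separated, positive numeric values in column 3):
awk 'NR == FNR { if ($3 > m) m = $3; next }   # pass 1: find the maximum of column 3
     $3 == m                                  # pass 2: print the rows that carry it
    ' file file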
Hi All,
I need help...
I have some logs below:
### {"request_id":"e8395eb0-a8bd-11e9-b77b-d507ea5312aa","message":"when inquiry paybill 628524871 prevalidation cause : Invalid Transaction"}
### {"request_id":"043f2310-a8be-11e9-b57b-f9c7344998d7","message":"when inquiry paybill 62821615... (2 Replies)
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
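Outside bup, the same measurement can be approximated on any list of hex object IDs: after sorting, the longest shared prefix over all pairs always occurs between adjacent lines, so a single pass is enough. A rough sketch, counting whole hex digits (multiples of 4 bits); hashes.txt is a hypothetical file with one SHA-1 hex string per line:
LC_ALL=C sort hashes.txt |
awk 'NR > 1 { n = 0
              while (n < length($0) && substr($0, n + 1, 1) == substr(prev, n + 1, 1)) n++
              if (n > best) best = n }               # longest run of matching leading hex digits so far
     { prev = $0 }
     END { print best * 4, "matching prefix bits (to the nearest hex digit)" }'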
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.