Hi,
I am trying to remove trailing whitespace using this awk command:
nawk -F '|' '/^TR/{t = $4 }/^LN/{gsub(/ */,"");printf "%s|%s\n", t, $0 }' $i>>catman_852_files.txt
My delimiter is '|'.
There are some description fields that are being truncated. I don't want to remove spaces... (1 Reply)
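A likely cause (a sketch on made-up data, not the poster's file): `gsub(/ */,"")` deletes every run of spaces in the line, which is what truncates the description fields. Anchoring the pattern to the end of each field removes only the trailing blanks:

```shell
# Sketch: trim only trailing blanks from each '|'-separated field;
# an unanchored gsub(/ */,"") deletes every space and truncates text.
printf '%s\n' 'desc with spaces   |next ' |
awk 'BEGIN { FS = OFS = "|" } {
  for (i = 1; i <= NF; i++) sub(/ +$/, "", $i)  # anchor to end of field
  $1 = $1                                       # force rebuild of $0 with OFS
  print
}'
```

The `$1 = $1` assignment forces awk to reassemble `$0` with the output separator even when no field changed, so every line is emitted consistently.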
Hi All,
I need to modify a script to remove spaces from a csv file.
The csv file is delimited by the '~' character and I need to remove the spaces which appear before this character.
i.e.
Sample input:
LQ001 SWAT 11767727 ~9104 ~001 ~NIRSWA TEST 18 ~2 ~Standard Test ~0011
Desired... (5 Replies)
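One straightforward approach (a sketch against the sample line above): delete the run of spaces sitting immediately before each '~' with sed:

```shell
# Sketch: remove the spaces that appear just before each '~' separator.
printf '%s\n' 'LQ001 SWAT 11767727 ~9104 ~001 ~NIRSWA TEST 18 ~2' |
sed 's/ *~/~/g'
```

Spaces elsewhere in the fields (e.g. inside "NIRSWA TEST 18") are left intact, since only runs adjacent to '~' match.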
I have 5 columns in a sample txt file, wherein I have to create a report based upon the 1st, 3rd and 5th columns.
I have ':' in the first and third columns, but I want to retain the colon of the fifth column and remove the colon of the first column.
The 5th column contains a string message (for example,... (7 Replies)
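A possible sketch, assuming whitespace-separated columns and a hypothetical sample line (the thread is truncated, so whether the third column's colon should also go is unclear): strip the colon from field 1 and print fields 1, 3 and 5 as-is otherwise.

```shell
# Sketch (hypothetical sample line): report fields 1, 3 and 5,
# removing the colon from field 1 while leaving field 5 untouched.
printf '%s\n' 'name: A desc: B msg:10:45:02' |
awk '{ sub(/:/, "", $1); print $1, $3, $5 }'
```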
Hi All,
I am trying to list the various dates for which the file is available in a directory using the command below (and subsequently passing the command output to a loop).
Command:
ls dir|grep 'filename'|cut -d '_' -f1|cut -c1-8|tr '\n' ','
However, it is giving me an extra comma... (6 Replies)
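The extra comma appears because `tr '\n' ','` also converts the final newline. One fix (a sketch on sample dates): join the lines with `paste -s -d,`, which places separators only between lines:

```shell
# Sketch: join lines with commas without the trailing one that
# tr '\n' ',' leaves behind (it also converts the final newline).
printf '%s\n' 20240101 20240102 20240103 | paste -s -d, -
```

Alternatively, keep the original pipeline and append `| sed 's/,$//'` to strip the last comma.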
I have an input file like this:
551|552|553|554|555|556|557|558|559|560
I need any one field to be blank, e.g.
551|552|553||555|556|557|558|559|560
My shell is csh. (1 Reply)
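A sketch of one way to do this: blank out field 4 with awk. The interactive shell being csh does not matter, since the field logic runs inside awk:

```shell
# Sketch: empty field 4 of a '|'-delimited line, keeping the delimiters.
echo '551|552|553|554|555|556|557|558|559|560' |
awk 'BEGIN { FS = OFS = "|" } { $4 = ""; print }'
```

Assigning to a field rebuilds `$0` with `OFS`, so the output keeps all ten positions with the fourth one empty.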
Hi,
I have a variable-length text file with no delimiter, with the following schema:
Column Name Data length
Firstname 5
Lastname 5
age 3
phoneno1 10
phoneno2 10
phoneno3 10
sample data - ... (16 Replies)
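With no delimiter, the record can be sliced by the stated column widths (5, 5, 3, 10, 10, 10). A sketch using `substr` on a made-up sample line:

```shell
# Sketch: split a fixed-width record at offsets 1,6,11,14,24,34
# (widths 5,5,3,10,10,10) and emit it '|'-delimited; sample data is invented.
printf '%s\n' 'John Doe  030555111222255533344445556667777' |
awk '{
  printf "%s|%s|%s|%s|%s|%s\n",
    substr($0, 1, 5), substr($0, 6, 5), substr($0, 11, 3),
    substr($0, 14, 10), substr($0, 24, 10), substr($0, 34, 10)
}'
```

GNU awk also offers `FIELDWIDTHS = "5 5 3 10 10 10"` for the same job, but `substr` is portable to nawk.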
Hi ,
I have a file like this:
aaa|bbbb|cccc|dddd|fff|dsaaFFDFD|
Adsads|sas|sa|as|asa|saddas|dsasd|sdad|
dsas|dss|sss|sss|ddd|dssd|rrr|fddf|
www|fff|refd|dads|fsdf|00sd|
5fgdg|dfs00|d55f|sfds55|445fsd|55ds|sdf|
So I do not have any fixed pattern, and I want to remove extra... (11 Replies)
Hello All,
we have some 10 files wherein we are using the ASCII NUL character as the separator, which is nothing but '^@', and we need to change it to a pipe-delimited file before loading into the database. Most of the data seems to be fine, but there are instances where this separator tends to appear in the middle of... (9 Replies)
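For the straightforward part of the conversion, a sketch: translate every NUL byte into '|' with `tr`. This handles the separators; NULs that legitimately appear inside the data would still need to be cleaned up beforehand.

```shell
# Sketch: convert ASCII NUL ('^@') separators to '|'; stray NULs
# embedded in field data would be converted too, so clean those first.
printf 'CS123\000ABC\000999' | tr '\000' '|'
```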
Hello All,
I have a pipe-delimited file; below is a sample of how the data looks:
CS123 | | || 5897 | QXCYN87876
As stated above, the delimited file sometimes contains only spaces as data fields, and sometimes there are extra spaces before/after numeric/character data fields. My requirement... (4 Replies)
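A sketch against the sample line above: trim the leading and trailing blanks of every '|'-delimited field, so that space-only fields come out genuinely empty and padded values are left clean:

```shell
# Sketch: strip leading/trailing blanks from each '|'-delimited field;
# fields that held only spaces become empty.
echo 'CS123 | | || 5897 | QXCYN87876' |
awk 'BEGIN { FS = OFS = "|" } {
  for (i = 1; i <= NF; i++) gsub(/^ +| +$/, "", $i)
  print
}'
```

The anchors `^` and `$` apply to each field string, so interior spaces in a field survive.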
First post, been browsing for 3 days and came out with nothing so far.
M3 C2 V5 D5 HH:FF A1-A2,A5-A6,A1-A2,A1-4 B4-B6,B2-B4,B4-B6,B1-B2
Output should be:
M3 C2 V5 D5 HH:FF A1-A2,A5-A6,A1-A4 B2-B4,B4-B6,B1-B2
In columns 6 and 7 there are strings of the form Ax-Ax and Bx-Bx respectively. Each string is... (9 Replies)
Discussion started by: enrikS
9 Replies
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
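The per-pair quantity described above can be sketched in shell/awk for a single pair of lowercase-hex digests (a toy illustration of what bup margin maximises over a whole repository, not how bup implements it):

```shell
# Sketch: count the leading bits two lowercase-hex digests share,
# i.e. the per-pair value that `bup margin` maximises across all objects.
shared_prefix_bits() {
  echo "$1 $2" | awk '{
    n = 0
    for (i = 1; i <= length($1) && i <= length($2); i++) {
      a = index("0123456789abcdef", substr($1, i, 1)) - 1
      b = index("0123456789abcdef", substr($2, i, 1)) - 1
      for (bit = 8; bit >= 1; bit = int(bit / 2)) {
        if (int(a / bit) % 2 != int(b / bit) % 2) { print n; exit }
        n++
      }
    }
    print n
  }'
}
shared_prefix_bits deadbeef deadc0de   # the digests share 17 leading bits
```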
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.