Greetings.
I am trying to write a shell script to make my life simpler, one with a number of practical uses. I want to take a standard text file and pull the nth word from each line, such as the first word from every line.
I'm struggling to see how each line can be... (5 Replies)
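A minimal sketch with awk, assuming words are separated by whitespace; the variable n and the name file.txt are placeholders:

  # print the nth whitespace-separated word of every line (here n=1)
  awk -v n=1 '{ print $n }' file.txt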
Hi everyone,
I have a file in which a word is repeated more than one time and I want to know how many times it is repeated.
For example, if the word 'guru' appears on 10 lines, I can get the output as:
cat filename | grep -c 'guru'
However, if the word is repeated more than once on a line, then how can I... (4 Replies)
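One common approach, assuming GNU grep: since -c counts matching lines rather than matches, print each match on its own line and count those:

  # count every occurrence of 'guru', including repeats on the same line
  grep -o 'guru' filename | wc -l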
Hi everybody:
Could anybody tell me how I can delete repeated rows from a file? For example, I have a file like this:
0.490 958.73 281.85 6.67985 0.002481
0.490 954.833 283.991 8.73019 0.002471
0.590 950.504 286.241 6.61451 0.002461
0.690 939.323 286.112 6.16451 0.00246
0.790... (8 Replies)
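A typical sketch; the awk one-liner keeps the first occurrence of each line and preserves the original order, while sort -u is fine if sorted output is acceptable (data.txt is a placeholder name):

  # drop duplicate lines, keeping the first occurrence in original order
  awk '!seen[$0]++' data.txt
  # or, if sorted output is acceptable:
  sort -u data.txt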
Hi All,
I want to delete a word from a file. How do I do that?
I have a file that contains the following information.
EntityName:alba00r1.mis.amat.com OverallStatus:Minor IfName:Gi1/0
EntityName:alba00r1.mis.amat.com ] OverallStatus:Normal IfName:Se0/0/0... (4 Replies)
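A minimal sed sketch; the word Minor and the filename status.txt are placeholders for whatever you need removed:

  # delete every occurrence of the word from each line
  sed 's/Minor//g' status.txt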
Hi,
I need to delete the repeated numbers in a file and then list the count of unique values. Can someone assist me?
file:
12345
12345
56345
12345
23896
Output needed:
12345
56345
23896
Total count:3
Thanks (2 Replies)
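A one-pass awk sketch that produces exactly that shape of output, assuming one number per line in nums.txt (a placeholder name):

  # print each number the first time it is seen, then the unique count
  awk '!seen[$0]++ { print; n++ } END { print "Total count:" n }' nums.txt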
Hi,
I need to extract data from a text file in which the data follows a pattern. I need to extract each repeated pattern and save it to a different file.
example:
input is:
ST*867*000352214
BPT*00*1000352214*090311
SE*1*1
ST*867*000352215
BPT*00*1000352214*090311
SE*1*2
... (5 Replies)
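A sketch assuming each record starts with an ST* line; every record is written to its own numbered file (chunk1.txt, chunk2.txt, ... are assumed names):

  # start a new output file at each ST* line and append lines to the current one
  awk '/^ST\*/ { n++ } { print > ("chunk" n ".txt") }' input.txt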
I'm looking for a command that will read a file of listing information and delete everything after a certain word is found. I may also need to search the file and delete everything before a certain word. The file contains fields of information like below, repeating for the entire file:
Name... (5 Replies)
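Two common sed idioms, assuming the word appears somewhere on a line (WORD and file are placeholders):

  # keep everything up to and including the first line containing WORD
  sed '/WORD/q' file
  # keep everything from the first line containing WORD to the end
  sed -n '/WORD/,$p' file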
Hi All,
I have a file with the data below. From each line I want to delete the last word. Could you please help me?
$INSTALL_HOME/lib/fm_voucher_pol.so
$INSTALL_HOME/source/sys/fm_apn_pol/fm_apn_pol_device_set_state.c
In the above two lines I want to delete fm_voucher_pol.so and... (5 Replies)
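If the last word is always the final /-separated component, a sed sketch (paths.txt is a placeholder; this also removes the trailing slash):

  # strip the final path component from each line
  sed 's|/[^/]*$||' paths.txt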
I have a text file where I need to find the string ST*850*.
This string is repeated several times in the file, so I need to know how many times it appears in the file. This is the text file:
ISA*00* *00* *08*925485USNR *ZZ*IMSALADDERSP... (13 Replies)
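Since * is special in grep's regex syntax, a fixed-string match is simpler; with GNU grep the -o form counts every occurrence rather than matching lines (file.txt is a placeholder):

  # lines containing the literal string ST*850*
  grep -F -c 'ST*850*' file.txt
  # total occurrences, counting repeats on the same line
  grep -F -o 'ST*850*' file.txt | wc -l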
Hi, below is the input file. I need to find the repeated words and sum up the values associated with each of them (the value is the second field after the repeated word). I'm trying but getting nowhere close to it. Kindly give me a hint on how to go about it.
Input
fruits,apple,20,fruits,mango,20,veg,carrot,12,veg,raddish,30... (11 Replies)
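A sketch assuming the comma-separated fields come in triples of word,item,value, so each value sits two fields after its word (input.txt is a placeholder):

  # walk the fields three at a time and total the value under each leading word
  awk -F, '{ for (i = 1; i <= NF; i += 3) sum[$i] += $(i+2) } END { for (w in sum) print w, sum[w] }' input.txt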
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.