12-14-2011
Sorry, I didn't quite get that. Anything in detail would be highly appreciated. Thanks.
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
How to edit large file using vi where you can't increase /usr/var/tmp anymore? (3 Replies)
Discussion started by: nazri
2. Shell Programming and Scripting
I am trying to edit a file that has 33k+ records. In this file I need to edit each record that has a 'Y' in the 107th position and change the 10 fields before the 'Y' to blanks. Not all records have a 'Y' in the 107th field.
ex:
... (8 Replies)
Discussion started by: jxh461
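A possible sketch for question 2, assuming the records are '|'-delimited with the 'Y' flag as field 107 and "the 10 fields before" meaning fields 97-106 (the example record in the post is truncated, so these positions and the file names are assumptions):

```shell
# Toy record: fields f1..f106 then the 'Y' flag as field 107
rec="$(printf 'f%d|' $(seq 1 106))Y"
printf '%s\n' "$rec" > sample.txt

# Blank fields 97-106 on records whose 107th field is 'Y';
# records without the flag pass through unchanged
awk -F'|' 'BEGIN { OFS = "|" }
  $107 == "Y" { for (i = 97; i <= 106; i++) $i = "" }
  { print }' sample.txt > out.txt
```

Since awk streams the input, the 33k+ record count is not a concern.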
3. UNIX for Dummies Questions & Answers
Hello Everyone
I am new to this forum.
I need to edit a file (it contains some SQL code).
The file is under my colleague's login and is read-only.
How can I edit it? (1 Reply)
Discussion started by: pradkumar
4. Shell Programming and Scripting
hi All,
Please let me know how to edit a file with 2,000,000 records.
Each record contains 40 fields separated by |.
I want to modify record 455487, but I am unable to edit this large file using the vi editor in UNIX.
Please let me know how to modify this file.
Thanks in advance.
-Bali Reddy (3 Replies)
Discussion started by: balireddy_77
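For question 4, sed can change a single record by line number without loading the file into an editor. A minimal sketch (file names here are placeholders, and the substitution pattern is illustrative):

```shell
# Three toy records stand in for the 2,000,000-record file
printf 'a|1\nb|2\nc|3\n' > big.txt

# Edit only line 2 (for the real file: line 455487); sed streams the
# file instead of holding it in memory, so file size is not a problem
sed '2s/^b/B/' big.txt > big.new && mv big.new big.txt
```

Only the addressed line is touched; every other record is copied through verbatim.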
5. Shell Programming and Scripting
Folks,
I have a file with 50 million records having 2 columns. I have to do the below:
1. Generate some random numbers of a fixed length.
2. Replace the second column of randomly chosen rows with the random numbers.
I tried using a little bit of perl to generate random numbers... (6 Replies)
Discussion started by: mvijayv
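Question 5 can be done in awk alone, without perl. A sketch assuming "fixed length" means six zero-padded digits and that "randomly chosen rows" means each row is replaced with some probability (both assumptions, since the post is truncated):

```shell
# Toy 2-column file standing in for the 50-million-row one
printf 'id1 100\nid2 200\nid3 300\nid4 400\n' > data.txt

# Replace column 2 of roughly half the rows with a fixed-length
# (6-digit, zero-padded) random number
awk 'BEGIN { srand() }
     rand() < 0.5 { $2 = sprintf("%06d", int(rand() * 1000000)) }
     { print }' data.txt > data.new
```

`sprintf("%06d", ...)` is what enforces the fixed length; tune the `0.5` to control what fraction of rows is rewritten.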
6. Shell Programming and Scripting
I have an oratab file with an entry like this:
SCADAG:/esitst1/oracle/product/9.2.0.8:Y
I am trying to discover a way to change the 9.2.0.8 part of this to something like 10.2.0.4 as part of an upgrade script.
I have tried
cat /etc/oratab >>/tmp/oratab... (1 Reply)
Discussion started by: sewood
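For question 6, a single sed substitution is enough; no `cat` to a temp copy is needed. A sketch using throwaway file names (a real upgrade script would operate on /etc/oratab, ideally keeping a backup first):

```shell
# The entry quoted in the post
printf 'SCADAG:/esitst1/oracle/product/9.2.0.8:Y\n' > oratab.tmp

# Swap the version string; the dots are escaped so they match literally,
# and the surrounding / and : anchor the match to the version component
sed 's|/9\.2\.0\.8:|/10.2.0.4:|' oratab.tmp > oratab.out
```

Using `|` as the sed delimiter avoids having to escape the slashes in the path.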
7. Shell Programming and Scripting
Hi,
I need to write a script to edit a file. It is a large file in the format below:
Version: 2008120101
;$INCLUDE ./abc/xyz/Delhi
;$INCLUDE ./abc/xyz/London
$INCLUDE ./abc/xyz/New York
The first line of the file is a version number in year, month, date and serial number format. Each... (5 Replies)
Discussion started by: makkar4u
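The post for question 7 is cut off, so the exact requirement is a guess; one common task with a YYYYMMDDNN version line is to bump the 2-digit serial when the date matches today and reset it to 01 otherwise. A hedged sketch of that reading (file names and the fixed `today` value are placeholders):

```shell
# Sample file with the Version: header from the post
printf 'Version: 2008120101\n;$INCLUDE ./abc/xyz/Delhi\n' > cfg

# Increment the trailing 2-digit serial for the same date, or start a
# new date at 01 (assumed task; the original post is truncated)
today=20081201          # in practice: today=$(date +%Y%m%d)
awk -v d="$today" 'NR == 1 && $1 == "Version:" {
    ver = $2
    serial = (substr(ver, 1, 8) == d) ? substr(ver, 9) + 1 : 1
    $2 = sprintf("%s%02d", d, serial)
}
{ print }' cfg > cfg.new
```

All lines after the header, including the `;$INCLUDE` entries, pass through untouched.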
8. Shell Programming and Scripting
I have a requirement, which is as follows
*. A folder contains a list of XMLs. The script has to create new XML files by copying the existing ones and renaming them by appending "_pre.xml" at the end.
*. Each file has multiple <Name>fileName</Name> entries. The script has to find the first occurrence of... (1 Reply)
Discussion started by: sudesh.ach
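The copy-and-rename step of question 8 can be sketched in plain shell. This reads "appending _pre.xml at the end" as replacing the .xml extension (adjust if the literal suffix is wanted); the second requirement about the first <Name> occurrence is truncated in the post, so only the copy step is shown:

```shell
# Toy folder with one XML file
mkdir -p xmls
printf '<Name>a</Name>\n' > xmls/a.xml

# Copy each .xml to a _pre.xml twin; skip files already carrying the
# suffix so that re-running the script does not snowball
for f in xmls/*.xml; do
    case "$f" in *_pre.xml) continue ;; esac
    cp "$f" "${f%.xml}_pre.xml"
done
```

`${f%.xml}` is POSIX parameter expansion that strips the shortest matching suffix, leaving the base name to append to.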
9. Shell Programming and Scripting
I have a file containing dates like below
2010 1 02
2010 2 01
2010 3 05
i want the dates to be like below
20100102
20100201
20100305
I tried using
awk '{printf "%s%02s%02s",$1,$2,$3}'
But it does not work; it puts all the dates on one line. I want them on separate lines like the... (6 Replies)
Discussion started by: tomjones
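The awk in question 9 fails because its format string has no newline, so every record is printed on one line. A corrected sketch (file names are placeholders):

```shell
# The sample dates from the post
printf '2010 1 02\n2010 2 01\n2010 3 05\n' > dates.txt

# Adding \n restores one date per line; %02d (rather than %02s)
# zero-pads the month and day numerically
awk '{ printf "%s%02d%02d\n", $1, $2, $3 }' dates.txt > fixed.txt
```

With `%02d`, the bare `1` in the second field becomes `01`, giving `20100102`, `20100201`, `20100305`.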
10. UNIX for Advanced & Expert Users
Hi All,
I have a file with 200K records, each line 400 characters long. I need to edit some parts of the file.
For example, I need to edit characters 115 to 125, 135 to 145 and 344 to 361.
Can anyone please help me to do this?
Regards, (1 Reply)
Discussion started by: balasubramani04
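Fixed character positions as in question 10 are a natural fit for awk's `substr`. A minimal sketch on a short toy line (the replacement text `XXXXX` and the file names are assumptions):

```shell
# A 20-character toy line; the real records are 400 characters long
printf 'AAAAABBBBBCCCCCDDDDD\n' > recs.txt

# Overwrite a fixed character range by splicing substrings around it;
# shown for positions 6-10, the same idea covers 115-125, 135-145
# and 344-361 with extra substr() pieces
awk '{ print substr($0, 1, 5) "XXXXX" substr($0, 11) }' recs.txt > recs.new
```

Everything outside the spliced range is reproduced byte-for-byte, which matters for fixed-width records.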
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.