Hi.
I need to delete the line before and the line after a pattern in a file. I came up with the following commands:
Delete line before
Delete line after
Is it possible to merge them into a single command?
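If a single awk invocation counts as one command, a minimal sketch along these lines should work (PATTERN is a placeholder for your real expression); it buffers one line so that both the line before and the line after a match can be dropped:
awk '/PATTERN/ { havePrev = 0; print; skipNext = 1; next }            # forget the buffered previous line, print the match, flag the next line
     skipNext  { skipNext = 0; next }                                 # this is the line after the match: drop it
               { if (havePrev) print prev; prev = $0; havePrev = 1 }  # emit the buffered previous line, buffer the current one
     END       { if (havePrev) print prev }                           # emit the final buffered line
' file
The one-line buffer is what makes the "line before" case possible, since awk cannot take back a line once it has been printed.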
Hi,
I need help with using an awk or sed filter on the line below:
ALTER TABLE "ACCOUNT" ADD CONSTRAINT "ACCOUNT_PK" PRIMARY KEY ("ACCT_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "WMC_DATA" LOGGING ENABLE
Look for... (2 Replies)
Hi,
I have file 1.txt with the following entries:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
**
**
**
In file 2.txt I have the following entries as shown:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433... (4 Replies)
I am trying to use sed to find a matching pattern in a file and then delete only the next line. The pattern is <ad-content>.
I tried this, but the results are not what I want:
sed '/<ad-content>/{N;d;}' akv.xml > akv5.xml
For example:
<Celebrant2First>Mickey</Celebrant2First>
<ad-content>
Minnie... (2 Replies)
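The attempt above removes too much because N appends the next line to the pattern space, so d then deletes both the <ad-content> line and the one after it. If the intent is to keep <ad-content> and drop only the line that follows, a likely fix is the lowercase n command, which prints the current line before reading the next one, so d then removes just that next line:
sed '/<ad-content>/{n;d;}' akv.xml > akv5.xml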
I have the following file:
line1
line2
MATCH
line3
line4
I need to find the pattern "MATCH" and delete the line before and the line after it. So the result should be
line1
MATCH
line4
I have to use sed because it is the only utility that is common across my environments. I... (1 Reply)
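A sketch of one sed-only way to do this, assuming MATCH never occurs on two consecutive lines: keep a two-line window in the pattern space, throw away the second line of the window when the first line is MATCH, and skip printing the first line when the second line is MATCH:
sed '/MATCH/{$!N;s/\n.*//;};$!N;/\n.*MATCH/!P;D' file
Here $!N appends the next line to the pattern space, s/\n.*// discards the line that follows a match, /\n.*MATCH/!P prints the first line of the window only when the second line is not a match (so the line before MATCH is never printed), and D slides the window forward. Older non-GNU seds can be fussy about the ";}" inside the braces, so test it on each target system.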
Hello sed gurus. I am using ksh on Sun and have a file created by concatenating several other files. All files contain header rows. I just need to keep the first occurrence and remove all other header rows.
header for file
1111
2222
3333
header for file
1111
2222
3333
header for file... (8 Replies)
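Assuming every header row starts with the literal text "header for file" (adjust the regex to whatever the real header looks like; combined_file and cleaned_file below are placeholder names), a minimal sed sketch deletes such lines everywhere except on line 1, so the first header survives:
sed '1!{/^header for file/d;}' combined_file > cleaned_file
If the stock Solaris sed complains about the braces, an equivalent awk/nawk one-liner is awk '!/^header for file/ || !seen++' combined_file, which prints every non-header line plus the first header it sees.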
Use and complete the template provided. The entire template must be completed. If you don't, your post may be deleted!
1. The problem statement, all variables and given/known data:
I have a file with the following content:
sam 123 LD 41
sam 234 kp
sam LD 41
kam pu
sam LD 61
Now... (1 Reply)
Here is what I want to achieve. I have a file with the following contents:
cat fileName
blah blah blah
.
.DROP this
REJECT that
.
--sport 7800 -j REJECT --reject-with icmp-port-unreachable
--dport 7800 -j REJECT --reject-with icmp-port-unreachable
.
.
.
more blah blah blah
--dport 3306... (14 Replies)
We are using Red Hat Linux.
I have a flat file with, among other things, the following lines, which appear occasionally throughout the file:
Using sed, I delete this line:
L;L;L;L;R;R;R;L;R;L;R;R;R;L;L;L
With:
/^;;;;;*/d
Works fine every time.
However, I cannot delete... (6 Replies)
Greetings, I need some assistance here as I cannot get a sed line to work properly for me. I have the following list:
Camp.S01E04.720p.HDTV.X264-DIMENSION
Royal.Pains.S05E07.720p.HDTV.x264-IMMERSE
Whose.Line.is.it.Anyway.US.S09E05.720p.HDTV.x264-BAJSKORV
What I would like to accomplish is to... (3 Replies)