hey all, I have a file with records in the following format
8-29-2006 13:01:45|ABC|45
8-29-2006 14:23:12|DEF|21
8-30-2006 00:04:57|ABC|34
I want to remove all of yesterday's records. Can anyone show me how? Thanks! (10 Replies)
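One possible approach (a sketch only; assumes GNU date is available and records.txt is a placeholder file name):
yday=$(date -d yesterday +%-m-%-d-%Y)    # e.g. 8-29-2006, matching the unpadded dates above
grep -v "^$yday " records.txt > records.tmp && mv records.tmp records.txt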
Hi
I have a big Verilog file with multiple modules. Each module begins with the keyword 'module <module-name>(ports,...)'
and ends with the
'endmodule' keyword.
Could you please suggest the best way to split each of these modules out into its own file?
Thank you for the help.
Example of... (7 Replies)
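A rough awk sketch (design.v is a placeholder input name; assumes 'module' and 'endmodule' begin at the start of a line and that each module name can serve as its output file name):
awk '/^module[ \t]/ { fname = $2; sub(/\(.*/, "", fname); fname = fname ".v" }
     fname          { print > fname }
     /^endmodule/   { close(fname); fname = "" }' design.v
Each module, including its module/endmodule lines, ends up in <module-name>.v.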
I am a bit new to shell scripting.
I have a file containing
xxxx xx xx
but i want to output the content as
xxxxxxxx.
thus removing the spaces.
Any idea how I can do this? (4 Replies)
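A minimal sketch (infile and outfile are placeholder names):
tr -d ' ' < infile > outfile
Use tr -d '[:blank:]' instead if tabs should be stripped as well.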
Hi, I have a file called fl_list consisting of files I have to archive. I want to create an exception parm called except_parm, so that if it finds the directory it will not archive those files and will remove them from fl_list.
$ cat fl_list
/apps/dev/ihub/ready/IA003B/IA003B_Deal_Header_yyyymmdd_hhmmss.txt... (1 Reply)
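One way this might be sketched, assuming except_parm lists one directory or path fragment per line:
grep -v -F -f except_parm fl_list > fl_list.tmp && mv fl_list.tmp fl_list
-F treats each entry as a literal string rather than a regular expression, so dots in the paths are not special.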
How can I remove all data that contain a domain, e.g. zzgh@something.com, sdd@something.com.my and gg@something.my, in one file, so that I only have data without a domain left in the file?
Here is the file structure of "test.out":
more test.out
1 zzztop@b.com
1 zzzulll
1 zzzullll@s.com.my
... (4 Replies)
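A simple sketch that keeps only the lines with no "@" in them (test.clean is a placeholder output name):
grep -v '@' test.out > test.clean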
Hi All,
I want to remove the content based on the header information.
Please find the example below.
File1.txt
Name|Last|First|Location|DepId|Depname|DepLoc
naga|rr|tion|hyd|1|wer|opr
Nava|ra|tin|gen|2|wera|opra
I have to search for the DepId and remove the data from the... (5 Replies)
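One reading of the question is to find the DepId column by its header name and drop that column from every row; a hedged awk sketch under that assumption:
awk -F'|' -v OFS='|' -v col=DepId '
NR==1 { for (i = 1; i <= NF; i++) if ($i == col) drop = i }
{ out = ""; for (i = 1; i <= NF; i++) if (i != drop) out = out (out == "" ? "" : OFS) $i; print out }' File1.txt
If the intent is instead to delete whole rows with a given DepId value, a filter such as awk -F'|' '$5 != "1"' File1.txt would be the starting point.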
Here are the contents of test.txt:
Dependencies Resolved
Changes in packages about to be updated:
ChangeLog for: 1:perl-Archive-Extract-0.38-131.el6_4.x86_64,
- Resolves: #915692 - CVE-2013-1667 (DoS in rehashing code)
Dependencies Resolved
Changes in packages about to be updated:
... (5 Replies)
Hi/Hello all gurus here,
I am trying to create a script to remove identical content from another file. I have already tested a few ideas and found that in Unix it seems limited to sort and uniq. There are many scripts for removing duplicate content, but none for deleting all identical lines entirely. Need your help and guidance.... (7 Replies)
hi all,
I had the below script:
x=`cat input.txt | wc -l`
awk 'NR>1 && NR<'$x' ' input.txt > output.txt
By using the above script I am able to remove the head and tail parts from the input file and append the output to output.txt, but if I run it a second time the output is... (2 Replies)
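For what it's worth, the same trim can be done in one pass without counting lines first, e.g. with sed (keeping the input.txt/output.txt names from the script above):
sed '1d;$d' input.txt > output.txt
Note that > overwrites output.txt on every run; use >> if the results really should be appended.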
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                         General Commands Manual                         bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
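The DESCRIPTION above suggests running bup margin occasionally to watch the margin; a rough shell sketch of such a check (the 100-bit threshold and the parsing of the "matching prefix bits" line are assumptions based on the EXAMPLE output, not part of bup itself):
bits=$(bup margin | awk '/matching prefix bits/ { print $1; exit }')
if [ "${bits:-0}" -gt 100 ]; then
    echo "bup margin: $bits matching prefix bits - getting close to 160" >&2
fi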