Hi,
I need to login to a remote server.
Go to a particular path.
Get the list of files at that path. There may be any number of files.
I need to delete only those files from the above list which are more than 7 days old.
I have achieved the above using the FTP protocol, but now the constraint has... (0 Replies)
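The thread is cut off, but the "files older than 7 days" part is exactly what `find -mtime +7` does. A minimal local sketch (the directory here is a temporary stand-in for the real path; on a remote box the same command could run via `ssh user@host 'find /remote/path ...'`):

```shell
# Demo directory standing in for the real remote path
dir=$(mktemp -d)
touch -d "10 days ago" "$dir/old.log"   # simulate a file older than a week
touch "$dir/new.log"                    # fresh file, must survive
# -mtime +7 = last modified more than 7 whole days ago
find "$dir" -maxdepth 1 -type f -mtime +7 -exec rm -f {} \;
ls "$dir"
```

Drop `-maxdepth 1` to recurse into sub-directories as well. (`touch -d` is GNU touch, used here only to fabricate the old file for the demo.)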
Hi,
I need help with an awk or sed filter on the line below:
ALTER TABLE "ACCOUNT" ADD CONSTRAINT "ACCOUNT_PK" PRIMARY KEY ("ACCT_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "WMC_DATA" LOGGING ENABLE
Look for... (2 Replies)
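The request is truncated, so the exact filter wanted is unknown; one common need with DDL like this is pulling out the quoted identifiers (table, constraint, column, tablespace names). A sketch under that assumption:

```shell
# Shortened copy of the ALTER TABLE line for the demo
line='ALTER TABLE "ACCOUNT" ADD CONSTRAINT "ACCOUNT_PK" PRIMARY KEY ("ACCT_ID") TABLESPACE "WMC_DATA" LOGGING ENABLE'
# grep -o prints each double-quoted token on its own line; tr strips the quotes
names=$(printf '%s\n' "$line" | grep -o '"[^"]*"' | tr -d '"')
echo "$names"
```

For the sample line this yields ACCOUNT, ACCOUNT_PK, ACCT_ID and WMC_DATA, one per line.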
I have some files whose names contain numbers, e.g.
1add1.txt
23sub41.txt etc.
I want to remove the numbers from the filenames (wherever they occur).
I used echo `ls *.txt | sed -e "s/[0-9]//"`
But it's only removing the first digit, so 1add1.txt becomes add1.txt.
My intention is to make 1add1.txt... (3 Replies)
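Without the `g` flag, `s/[0-9]//` substitutes only the first digit on the line; `s/[0-9]//g` removes them all. A rename loop along those lines (run in a throwaway directory here for the demo):

```shell
cd "$(mktemp -d)"               # demo area; run this in the real directory instead
touch 1add1.txt 23sub41.txt
for f in *[0-9]*.txt; do
    new=$(printf '%s\n' "$f" | sed 's/[0-9]//g')   # g = strip EVERY digit
    mv -n -- "$f" "$new"        # -n: never clobber an existing file
done
ls
```

Note that names which differ only in their digits (1add1.txt vs 2add2.txt) collapse to the same target; `mv -n` leaves the second one unrenamed rather than overwriting.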
I have a file with about 10000 records and I need to delete about 50 of them. I know the line numbers and am using
sed '134,1357,......d' filename > new file.
It does not seem to be working.
Please advise. (5 Replies)
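The comma in a sed address builds a single range: `134,1357d` deletes every line from 134 through 1357, and chaining more commas is a syntax error. Individual line numbers need separate commands, joined with `;` (or repeated `-e` options). A small demo:

```shell
cd "$(mktemp -d)"
seq 1 10 > file.txt                  # demo stand-in for the 10000-line file
# Wrong:  sed '2,4,7d'   -- commas only form one M,N range
# Right:  one d command per line number, separated by ';'
sed '2d;4d;7d' file.txt > newfile.txt
cat newfile.txt
```

With 50 line numbers it is easier to generate the script, e.g. `sed -f` with a file containing one `Nd` command per line.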
Hi,
I have a very large directory structure with many sub-directories. There are around 50 ".gz" files under this directory structure.
I want to copy only the .gz files to a separate location.
Please help me. (2 Replies)
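`find` can walk the whole tree and hand each match to `cp`. A sketch with temporary stand-ins for the source tree and destination:

```shell
src=$(mktemp -d); dest=$(mktemp -d)       # stand-ins for the real locations
mkdir -p "$src/a/b"
touch "$src/one.gz" "$src/a/b/two.gz" "$src/a/note.txt"
# Recursively collect every .gz and copy it (flattened) into $dest;
# -p preserves timestamps/modes.
find "$src" -type f -name '*.gz' -exec cp -p {} "$dest" \;
ls "$dest"
```

Because the copies all land in one directory, same-named .gz files from different sub-directories would overwrite each other; `cp --backup` (GNU) or recreating the sub-tree would avoid that.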
Hi,
I have a file which has a number on each line (I think they are stored as strings).
I will have $first and $last variables, which are strings containing only numbers, and a file $f. I want to filter out the lines in $f whose numbers lie between $first and $last. Do I need to consider the... (2 Replies)
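Whether the values arrive as strings does not matter if the comparison is forced to be numeric, which awk does when you add `+0`. A sketch (the file contents here are invented for the demo):

```shell
f=$(mktemp)
printf '%s\n' 5 17 abc 42 99 > "$f"       # demo contents of $f
first=10 last=50
# $1+0 == $1 keeps only purely numeric lines; the other +0s force
# numeric (not string) comparison against the bounds.
result=$(awk -v lo="$first" -v hi="$last" \
    '($1+0 == $1) && $1+0 >= lo+0 && $1+0 <= hi+0' "$f")
echo "$result"
```

For the demo input only 17 and 42 survive; the non-numeric line `abc` is dropped by the first test.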
Hello,
I have a dump of IPs (around 2 million) and I need to filter out (delete) 37 IPs from this list.
Here is a short list of IPs that i would need deleted
111.111.xxx.xxx
123.123.xxx.xxx
127.x.x.x
98.20.xx.xxx
10.135.xxx.xxx
11.105.xxx.xx
100.100.xxx.xxx
101.xxx.xx.xxx
... (11 Replies)
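For exact, fully specified IPs, `grep -v` with a pattern file does this in one pass, which stays fast even over 2 million lines. A sketch with a tiny invented dump (the masked `xxx` entries above would instead need prefix patterns such as `grep -v '^10\.135\.'`):

```shell
cd "$(mktemp -d)"
printf '%s\n' 10.0.0.1 192.168.1.5 10.0.0.2 > ips.txt      # demo dump
printf '%s\n' 192.168.1.5 > exclude.txt                    # IPs to delete
# -F fixed strings, -x whole-line match, -f read patterns from a file,
# -v invert: keep lines NOT listed in exclude.txt.
# -x matters: without it "10.0.0.1" would also wipe "10.0.0.10" etc.
grep -vxFf exclude.txt ips.txt > clean.txt
cat clean.txt
```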
Hi All,
I have one sensor's output over time for a set value of 20.
Time(in Sec), Data
1, 16
2, 20
3, 24
4, 22
5, 21
6, 20
7, 19.5
8, 20
9, 20.5
10, 20
11, 20
12, 19.5
Here we can see that after 5 sec the data value reaches the 20±0.5 range.
So I... (7 Replies)
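The question is cut off, but finding the settling time (the time of the last sample outside the tolerance band) is a short awk job: read all samples, remember the time of the last out-of-band value. A sketch using the data above:

```shell
settle=$(awk -F', *' -v set=20 -v tol=0.5 '
NR == 1 { next }                          # skip the "Time, Data" header
{ n++; t[n] = $1; v[n] = $2 }
END {
    last = 0                              # time of the last out-of-band sample
    for (i = 1; i <= n; i++)
        if (v[i] < set - tol || v[i] > set + tol) last = t[i]
    print last
}' <<'EOF'
Time(in Sec), Data
1, 16
2, 20
3, 24
4, 22
5, 21
6, 20
7, 19.5
8, 20
9, 20.5
10, 20
11, 20
12, 19.5
EOF
)
echo "stays within 20 +/- 0.5 after t=$settle s"
```

For this data the last out-of-band sample is 21 at t=5, matching the observation that the signal stays within 20±0.5 from t=6 onward.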
Dear friends,
I have 2 files organized column-wise:
FILE_A
------------------------------
x,1,@
y,3,$
x,5,%
FILE_B
--------------------
x,1,@
I would like to delete all lines in FILE_A whose first column appears in FILE_B.
output (in FILE_A)
y,3,$
x,5,% (10 Replies)
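The classic two-file awk idiom fits here: load FILE_B's keys on the first pass, then print only unmatched FILE_A lines. Note one wrinkle: keying strictly on the first column also removes x,5,%, so the sample output shown above actually corresponds to whole-line matching (`seen[$0]` / `!($0 in seen)`). The first-column version as asked:

```shell
cd "$(mktemp -d)"
printf '%s\n' 'x,1,@' 'y,3,$' 'x,5,%' > FILE_A
printf '%s\n' 'x,1,@'                 > FILE_B
# Pass 1 (NR==FNR is true only for the first file) records FILE_B keys;
# pass 2 prints FILE_A lines whose first comma field was never seen.
out=$(awk -F, 'NR == FNR { seen[$1]; next } !($1 in seen)' FILE_B FILE_A)
echo "$out"
```

With the sample data this prints only y,3,$; switch both `$1`s to `$0` to get the whole-line behaviour the expected output suggests.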
Hi
My directory structure is as below.
dir1, dir2, dir3
I have the list of files to be deleted at the path below.
/staging/retain_for_2years/Cleanup/log $ ls -lrt
total 0
drwxr-xr-x 2 nobody nobody 256 Mar 01 16:15 01-MAR-2015_SPDBS2
drwxr-xr-x 2 root ... (2 Replies)
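The listing is truncated, but assuming each cleanup file contains one path per line, a safe read loop handles names with spaces. A self-contained sketch (directories and list are invented for the demo):

```shell
cd "$(mktemp -d)"
mkdir dir1 dir2
touch dir1/a.log dir2/b.log dir2/keep.log
printf '%s\n' dir1/a.log dir2/b.log > to_delete.list   # hypothetical list file
# IFS= keeps leading/trailing spaces; -r keeps backslashes literal;
# -- stops rm from treating odd names as options.
while IFS= read -r f; do
    rm -f -- "$f"
done < to_delete.list
ls dir1 dir2
```

Run once with `echo rm -f -- "$f"` first to dry-run the list before deleting anything for real.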
Discussion started by: prasadn
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-bup-margin(1)