04-15-2015
Tough one. The only limit close to this that I can think of is path length, i.e. the number of bytes/characters in the path. Can you use something like locate (if this is Linux) to cache filenames so that you can find them quickly?
Of course, the guy in this link (Bastard Operator From Hell Official Archive) would write a script to test the users' depth and just delete the files.
:-)
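If you did want a gentler scripted depth check, here is a minimal sketch using GNU find, where MAXDEPTH and the starting directory are placeholders:

  #!/bin/sh
  # List everything nested more than MAXDEPTH levels below the starting directory.
  # Requires GNU find for -mindepth; MAXDEPTH and /home/users are placeholders.
  MAXDEPTH=8
  find /home/users -mindepth "$((MAXDEPTH + 1))" -print

  # The real hard limit is path length, not depth; check it with:
  getconf PATH_MAX /home/users    # typically 4096 bytes on Linux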
9 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi All,
Is there any command to list Samba shared folders in Red Hat Linux 7.2?
Thanks in advance
Bache Gowda (0 Replies)
Discussion started by: bache_gowda
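A minimal sketch for that question, assuming the smbclient and testparm utilities that ship with Samba are installed:

  # List the shares the local Samba server is offering (anonymous; add -U user if needed)
  smbclient -L //localhost -N

  # Or dump the share definitions straight from the parsed smb.conf
  testparm -s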
2. Shell Programming and Scripting
Hi,
How do I find the queue depth of an MQ queue using UNIX?
Please, it's very urgent. (0 Replies)
Discussion started by: Satyak
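A minimal sketch, assuming the MQ commands are on the PATH and QM1 stands in for the real queue manager name:

  # runmqsc reads MQSC commands from stdin; this prints the current depth of every local queue
  echo "DISPLAY QLOCAL(*) CURDEPTH" | runmqsc QM1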
3. Programming
Hello,
I am looking for specific files in my tree directory using ftw(3). How do I know how deep I am in the file structure? In other words, say I am looking for config.txt files, and my structure looks like this:
/some/directory/user1/config.txt
/some/directory/user2/config.txt
....... (2 Replies)
Discussion started by: germallon
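In C, nftw(3) passes the callback a struct FTW whose level field is exactly this depth. A shell-side sketch of the same idea, assuming GNU find and the paths above:

  # Print each match's depth relative to the starting point, then the path
  find /some/directory -name config.txt -printf '%d %p\n'

  # Or only look exactly two levels down (user1/config.txt, user2/config.txt, ...)
  find /some/directory -mindepth 2 -maxdepth 2 -name config.txt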
4. Red Hat
Hello folks,
I am trying to accomplish the following:
1. Create home folders for each user
2. Create a public folder that all users can access
3. Use Samba as a domain controller.
I have successfully completed issue 1. But I can't get the second issue to work. Below is my config file.... (0 Replies)
Discussion started by: behradb
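For point 2, a world-accessible share usually just needs its own section in smb.conf; a minimal sketch, assuming /srv/public exists and these masks suit your policy (check with testparm -s and reload Samba afterwards):

  [public]
     comment = Shared folder for all users
     path = /srv/public
     browseable = yes
     read only = no
     create mask = 0666
     directory mask = 0777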
5. Solaris
Hi All, I've been trying to configure samba on Solaris 10 to allow me to have one share that is open and writable to all users and have the rest of my shares password protected by a generic account.
If I set my security to user, my secured shares work just fine and prompt accordingly, but when... (0 Replies)
Discussion started by: ideal2545
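With security = user, the usual way to keep one share open is to map failed logins to the guest account and mark only that share guest-accessible; a minimal smb.conf sketch, where the share name and path are placeholders:

  [global]
     security = user
     map to guest = Bad User

  [open]
     path = /export/open
     guest ok = yes
     read only = no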
6. Solaris
I want to restart the Samba service in Solaris 10 installed on a virtual machine,
but under the /etc/init.d directory there is no samba entry in Solaris 10.
How do I proceed? (4 Replies)
Discussion started by: rehantayyab82
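That is expected: Solaris 10 manages the bundled Samba through SMF rather than an init script. A minimal sketch, assuming the Sun-supplied Samba packages:

  svcs -a | grep -i samba               # check whether the service is registered
  svcadm restart svc:/network/samba     # restart it ('svcadm enable' if it is disabled)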
7. Shell Programming and Scripting
Hi
I am trying to write a script that logs the message queue depth to a file every 5 minutes.
Commands that I use are
runmqsc QM_Name
display ql(*) curdepth
Since I can use only MQSC commands, I need help on how to get the output into a file after executing the display command. (3 Replies)
Discussion started by: jhilmil
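A minimal sketch along those lines, assuming QM_Name is the real queue manager and /var/tmp/qdepth.log is an acceptable log location; cron handles the 5-minute schedule:

  #!/bin/sh
  # Append a timestamped snapshot of all local queue depths to the log file.
  LOG=/var/tmp/qdepth.log
  {
      date
      echo "DISPLAY QL(*) CURDEPTH" | runmqsc QM_Name
  } >> "$LOG"

  # crontab entry to run it every 5 minutes:
  # */5 * * * * /path/to/qdepth.sh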
8. Shell Programming and Scripting
Hi All,
We have SunOS and Linux servers.
May I know how we can find the queue depth of IBM MQ from the server? (2 Replies)
Discussion started by: Girish19
9. UNIX for Beginners Questions & Answers
I tried to find a file that lives within the current directory only, and typed
$ find . -depth 1 -ls -name *.ini
But it gave me,
find: paths must precede expression: 1
Usage: find
How do I do it correctly? Thanks in advance. (2 Replies)
Discussion started by: abdulbadii
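The error comes from -depth, which takes no argument (it only changes the traversal order); -maxdepth is the option that limits how deep find descends. A minimal sketch, assuming GNU find:

  # Quote the pattern so the shell does not expand *.ini before find sees it
  find . -maxdepth 1 -name '*.ini' -ls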
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.