03-20-2009
Nice... that's a lot faster; the directory has hundreds of files.
Thanks a bunch...
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hello all,
I'd like to run a search on files, and the result should be the files that are duplicated. (8 Replies)
Discussion started by: umen
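A minimal sketch for this, assuming GNU coreutils (md5sum, plus uniq's -w and --all-repeated options): hash every file by content, then keep only the groups whose 32-character checksum repeats.

#!/usr/bin/env bash
# Hash every regular file under DIR (first argument, default: current dir),
# then print groups of files sharing a checksum, separated by blank lines.
DIR=${1:-.}
find "$DIR" -type f -exec md5sum {} + |
    sort |
    uniq -w32 --all-repeated=separate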
2. Shell Programming and Scripting
I need a Perl script which will create an output file after comparing two different files in a directory path:
/export/home/abc/file1
/export/home/abc/file2
File Format: <IP><TAB><DeviceName><TAB><DESCRIPTIONS>
file1:
10.1.2.1.3<tab>abc123def<tab>xyz.mm1.ppp.... (2 Replies)
Discussion started by: ricky007
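The poster asks for Perl, but the comparison itself is a keyed lookup; here is a hedged awk sketch (diff_report.txt is a hypothetical output name) that prints every file1 line whose IP in field 1 does not appear in file2:

#!/usr/bin/env bash
# First pass (NR==FNR) records every IP seen in file2; the second pass
# prints the file1 lines whose IP was never recorded.
awk -F'\t' 'NR==FNR { seen[$1]; next } !($1 in seen)' \
    /export/home/abc/file2 /export/home/abc/file1 > diff_report.txt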
3. Shell Programming and Scripting
What utility do you recommend for simply finding all duplicate files among all files? (4 Replies)
Discussion started by: kiasas
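One common answer, if installing a package is acceptable, is fdupes, which narrows candidates by size and checksum before comparing contents byte for byte (~/Downloads is a placeholder path):

fdupes -r ~/Downloads      # recursively list groups of identical files
fdupes -rd ~/Downloads     # additionally prompt to delete extras in each group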
4. Shell Programming and Scripting
Hi!
I want to find duplicate files (criteria: file size) in my download folder.
I tried it like this:
find /Users/frodo/Downloads \! -type d -exec du {} \; | sort > /Users/frodo/Desktop/duplicates_1.txt;
cut -f 1 /Users/frodo/Desktop/duplicates_1.txt | uniq -d | grep -hif -... (9 Replies)
Discussion started by: Dirk Einecke
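A sketch along the same lines, assuming GNU find for -printf: group files by exact byte size (du reports allocated blocks, so equal-sized files can show different usage) and print only the sizes that occur more than once.

#!/usr/bin/env bash
# List files under ~/Downloads that share an exact size in bytes.
# Size alone only flags *candidate* duplicates; confirm with a checksum.
find ~/Downloads -type f -printf '%s\t%p\n' | sort -n |
    awk -F'\t' '{ n[$1]++; grp[$1] = grp[$1] $0 "\n" }
                END { for (s in n) if (n[s] > 1) printf "%s", grp[s] }'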
5. Shell Programming and Scripting
I have more than 100 files like this:
SVEAVLTGPYGYT 2
SVEGNFEETQY 10
SVELGQGYEQY 28
SVERTGTGYT 6
SVGLADYNEQF 21
SVGQGYEQY 32
SVKTVLGYEQF 2
SVNNEQF 12
SVRDGLTNSPLH 3
SVRRDREGLEQF 11
SVRTSGSYEQY 17
SVSVSGSPLQETQY 78
SVVHSTSPEAF 59
SVVPGNGYT 75 (4 Replies)
Discussion started by: xshang
6. Shell Programming and Scripting
Hi !
I wonder if anyone can help with this: I have a directory, /xyz, that contains the following files:
chsLog.107.20130603.gz
chsLog.115.20130603
chsLog.111.20130603.gz
chsLog.107.20130603
chsLog.115.20130603.gz
As you can see, there are two files that are the same but only with a minor... (10 Replies)
Discussion started by: fretagi
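Assuming the goal is to spot (and perhaps remove) files that exist both compressed and uncompressed, a sketch:

#!/usr/bin/env bash
# In /xyz, report every file that also exists with a .gz twin,
# e.g. chsLog.107.20130603 alongside chsLog.107.20130603.gz.
cd /xyz || exit 1
for gz in *.gz; do
    plain=${gz%.gz}              # name without the .gz suffix
    if [ -e "$plain" ]; then
        echo "pair: $plain <-> $gz"
        # rm -- "$plain"         # uncomment to keep only the compressed copy
    fi
done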
7. Shell Programming and Scripting
Hi champs,
I have a requirement where I need to compare two files line by line and ignore duplicates. Note, I have the files in sorted order.
I have tried using the comm command, but it's not working for my scenario.
Input file1
srv1..development..employee..empname,empid,empdesg... (1 Reply)
Discussion started by: Selva_2507
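For two sorted files, the standard comm usage (file1.sorted and file2.sorted are placeholder names) suppresses the shared lines with -3; when comm misbehaves, the usual culprit is input that is not sorted in the collation order comm expects:

comm -3 file1.sorted file2.sorted    # lines unique to either file
comm -23 file1.sorted file2.sorted   # lines in file1 missing from file2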
8. Shell Programming and Scripting
I am so frustrated!!!
I want a nice command that clears away duplicate files:
find . -type f -regex '.*{1,3}\..*' | xargs -I## rm -v '##'
should work, in my opinion. But it finds nothing, even though I have files with names like:
Scooby-Doo-1.txt
Himalaya-2.jpg
Camping... (8 Replies)
Discussion started by: Mr.Glaurung
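The likely explanation: GNU find's default -regex dialect treats { and } literally, so '.*{1,3}\..*' only matches names containing the literal text {1,3}. A hedged sketch, guessing the intent is names ending in one to three digits before the extension (GNU find assumed for -regextype):

find . -type f -regextype posix-extended -regex '.*-[0-9]{1,3}\..*' \
    | xargs -I## rm -v '##'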
9. Shell Programming and Scripting
I have tried the following code, and with that I couldn't achieve what I want.
#!/usr/bin/bash
find ./ -type f \( -iname "*.xml" \) | sort -n > fileList
sed -i '/\.\/fileList/d' fileList
NAMEOFTHISFILE=$(echo $0|sed -e 's/\/()$*.^|/\\&/g')
sed -i "/$NAMEOFTHISFILE/d"... (2 Replies)
Discussion started by: gold2k8
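Assuming the intent of the sed escaping is just to keep the list file and the script itself out of the results, find can exclude both names directly (-name takes a glob, so no escaping is needed):

#!/usr/bin/env bash
# Collect .xml files, excluding this script and the output list itself.
find . -type f -iname '*.xml' ! -name "$(basename "$0")" ! -name fileList \
    | sort -n > fileList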
10. UNIX for Advanced & Expert Users
I would like to find and delete old backup files in AIX. How would I go about doing this? For example:
server1_1-20-2020
server1_1-21-2020
server1_1-22-2020
server1_1-23-2020
server2_1-20-2020
server2_1-21-2020
server2_1-22-2020
server2_1-23-2020
How would I go about finding and... (3 Replies)
Discussion started by: cokedude
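A sketch for age-based cleanup (/backups is a placeholder path; -mtime and -exec are standard POSIX find options, so this works on AIX): remove backups not modified in the last 30 days, after a dry run.

find /backups -type f -name 'server*_*-*-*' -mtime +30 -exec ls -l {} \;   # dry run
find /backups -type f -name 'server*_*-*-*' -mtime +30 -exec rm {} \;      # delete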
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.