Thanks for the advice above; it got me further along, but I've run into another wall or two. Here's where I'm at so far.
The first problem is I can't get the IF statement to work no matter where I put quotes, parens, brackets, or curly brackets. Switching -eq for =, ==, or / makes no difference either. I get either a 'too many parameters on line 16' error, or it drops right through and declares every file unique when 10 out of 20 are duplicates.
Second problem is I have a file named 'space test dupe10.jpg' that I created to see how the script would handle spaces in file names. I have a simplified version of the script (no IF statement) that just echoes the variables as it loops through them, and it treats 'space', 'test', and 'dupe10.jpg' as three different files.
This isn't a life-or-death situation, so I greatly appreciate any and all advice. I'm just updating an electronic picture frame that hangs on the living room wall and runs for 4 hours per night, which makes adding pictures a pain since it has to be done while it's on. With this script I can put the pics on my NAS and they'll get copied over when the frame boots. I can do that now, but duplicate file names are a concern. I built this thing before you could buy them, from instructions in a physical Popular Mechanics magazine. Fifteen or so years and several thousand pics later, you can imagine how many times my wife (a photography hobbyist, no less) has tried to load "flowers.jpg" onto it.
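Both symptoms point at quoting. In POSIX shell, `[ "$a" = "$b" ]` needs spaces around `[`, `]`, and the operator, `-eq` is only for integers, and an *unquoted* variable word-splits on spaces, which is exactly why 'space test dupe10.jpg' shows up as three files (it is also the classic cause of a "too many parameters"-style error if the frame is actually running a DOS/Windows batch `IF`). Here is a minimal sketch of the copy-and-skip-duplicates logic in POSIX sh; the directory names are placeholders I made up, not paths from the original setup:

```shell
#!/bin/sh
# Hypothetical sketch: copy *.jpg from a source directory (e.g. the NAS mount)
# to the frame, skipping any name that already exists there.  The double
# quotes are the important part: without them, "space test dupe10.jpg"
# word-splits into three separate arguments.

copy_new_pics() {
    src=$1
    dest=$2
    for f in "$src"/*.jpg; do
        [ -e "$f" ] || continue         # glob matched nothing; skip
        name=$(basename "$f")           # quotes keep spaces intact
        if [ -e "$dest/$name" ]; then   # POSIX test: spaces around [ ] required
            echo "duplicate, skipping: $name"
        else
            cp -- "$f" "$dest/"
        fi
    done
}

# e.g. copy_new_pics /mnt/nas/pics /mnt/frame/pics
```

This only compares file *names*, matching the concern in the post; comparing contents is a different job (see the duff(1) page below for one tool that does that).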
Greetings -
I am a newbie at shell scripts. I have been through the whole forum but found no similar query.
The objective of my system is to have a unified filebase. I am using rsync to synchronise files between each location and the central server, with both of them having the... (4 Replies)
I would like to know how to compare a listing of directories that begin with the same four numbers, i.e.
/1234cat
/1234tree
/1234fish
and move all these directories into one directory
Thanks in advance (2 Replies)
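One way to sketch this in POSIX sh, assuming the directories share a literal digit prefix and should be gathered under a new sibling directory (the parent path and the `all` suffix are my own placeholders):

```shell
#!/bin/sh
# Hypothetical sketch: move every directory whose name starts with a given
# prefix (e.g. 1234cat, 1234tree, 1234fish) into one collection directory.

group_dirs() {
    parent=$1    # directory containing 1234cat, 1234tree, 1234fish, ...
    prefix=$2    # the shared leading digits, e.g. 1234
    target="$parent/${prefix}all"
    mkdir -p "$target"
    for d in "$parent/$prefix"*/; do
        [ -d "$d" ] || continue             # glob matched nothing
        [ "$d" = "$target/" ] && continue   # don't move the target into itself
        mv -- "$d" "$target/"
    done
}

# e.g. group_dirs /some/parent 1234
```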
OK, I asked around to a few people and they said to use sed or awk to do what I want, but I can't figure out how to use them like that.
Anyway, I have a text file that is 10k lines long. I need to move the text at the end of each line, after the ?, to the front of the line, then add a | after it.... (3 Replies)
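If the lines really look like `front?tail` and the goal is `tail|front`, a single sed substitution over the whole file handles this; 10k lines is trivial for sed. Whether the `?` itself should survive is a guess from the truncated post, so it is kept here (drop the trailing `?` from the replacement if not):

```shell
# One sample line shown; on the real file:  sed '...' file > newfile
printf 'abc?def\n' | sed 's/^\(.*\)?\(.*\)$/\2|\1?/'
# prints: def|abc?
```

Note the first `\(.*\)` is greedy, so if a line contains several `?` characters the split happens at the *last* one.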
I'm rather new to scripting, and despite my attempts at finding/writing a script to do what I need, I have not yet been successful.
I have a file named "list.txt" of arbitrary length with contents in the following format:
/home/user/Music/file1.mp3
/home/user/Music/file2.mp3... (21 Replies)
My input file is multiline file and I am writing a script to search for a pattern and move the line with the pattern and the next line to the end of the file. Since I am trying to learn awk, I thought I would try it.
My input looks like the following:
D #testpoint 1
510.0
D #testpoint2 ... (5 Replies)
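A common awk idiom for "move these lines to the end" is to accumulate them in a variable, print everything else as it streams past, and flush the held lines in an `END` block. The pattern below is an assumption based on the sample input:

```shell
# Hypothetical sketch: hold back any line matching /testpoint 1/ plus the
# line after it; print the rest unchanged; emit the held lines at the end.
printf 'D #testpoint 1\n510.0\nD #testpoint2\n12.3\n' |
awk '
    /testpoint 1/ { held = held $0 ORS; grab = 1; next }  # the matching line
    grab          { held = held $0 ORS; grab = 0; next }  # the line after it
                  { print }
    END           { printf "%s", held }
'
```

On a real file, replace the `printf` feed with `awk '...' inputfile > outputfile`.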
Hello all.
I am new to this forum (and somewhat new to UNIX/Linux - I started using Ubuntu a year ago).
I have the following problem that I have not been able to figure out, and I was wondering if anyone could help me out.
I have all of my music stored in... (7 Replies)
Hi there,
I am having trouble with a script I have written, which is designed to search through a directory for a header and payload file, retrieve a string from both filenames, compare this string and if it matches make a backup of the two files then move them to a different directory for... (1 Reply)
Hi,
Anybody help me to write a Shell Script
Get the latest file from the file list based on creation time and then move it to the target directory.
I tried the following script but got an error:
A=$(ls -1dt $(find "cveit/local_ftp/reflash-parts" -type f -daystart -mtime -$dateoffset) | head... (2 Replies)
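A sketch of the "newest file wins" part, keeping the post's `find` + `ls -t` approach but quoting it safely. Directory names are placeholders; this breaks on file names containing newlines and on directories with more files than one `-exec` batch:

```shell
#!/bin/sh
# Hypothetical sketch: find the most recently modified regular file under a
# directory and move it to a target directory.

move_latest() {
    srcdir=$1
    target=$2
    # ls -t sorts the batch newest-first; head takes the winner
    latest=$(find "$srcdir" -type f -exec ls -1dt {} + | head -n 1)
    [ -n "$latest" ] && mv -- "$latest" "$target/"
}

# e.g. move_latest cveit/local_ftp/reflash-parts /some/target
```

Note this uses modification time, as plain `find`/`ls` on Linux do not expose a true creation time.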
Hi
In directory /mnt/upload I have about 100 000 files (*.png) that were created during the last six months. Now I need to move them to the right folders, e.g.:
file created on 2014-10-10 move to directory /mnt/upload/20141010
file created on 2014-11-11 move to directory /mnt/upload/20141111... (6 Replies)
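The date-to-directory mapping above can be sketched with GNU `date -r`, which prints a file's modification time (plain Linux filesystems don't reliably store creation time, so mtime is the usual stand-in). The function below is a sketch; test it on a copy before pointing it at /mnt/upload:

```shell
#!/bin/sh
# Hypothetical sketch: move each *.png into a YYYYMMDD subdirectory named
# after its modification date, e.g. a file from 2014-10-10 -> 20141010/.

sort_by_date() {
    base=$1
    for f in "$base"/*.png; do
        [ -e "$f" ] || continue
        day=$(date -r "$f" +%Y%m%d)   # GNU date: -r FILE = use FILE's mtime
        mkdir -p "$base/$day"
        mv -- "$f" "$base/$day/"
    done
}

# e.g. sort_by_date /mnt/upload
```

For 100 000 files the per-file `date` fork is slow but workable as a one-off; a `find ... -printf '%TY%Tm%Td ...'` pipeline would be faster if needed.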
Shell script logic
Hi
I have 2 input files. File 1 content (file1):
"BRGTEST-242" a.txt "BRGTEST-240" a.txt "BRGTEST-219" e.txt
File 2 contents (file2):
"BRGTEST-244" a.txt "BRGTEST-244" b.txt "BRGTEST-231" c.txt "BRGTEST-231" d.txt "BRGTEST-221" e.txt
I want to get... (22 Replies)
Discussion started by: pottic
DUFF(1) BSD General Commands Manual DUFF(1)
NAME
duff -- duplicate file finder
SYNOPSIS
duff [-0HLPaeqprtz] [-d function] [-f format] [-l limit] [file ...]
duff [-h]
duff [-v]
DESCRIPTION
The duff utility reports clusters of duplicates in the specified files and/or directories. In the default mode, duff prints a customizable
header, followed by the names of all the files in the cluster. In excess mode, duff does not print a header, but instead for each cluster
prints the names of all but the first of the files it includes.
If no files are specified as arguments, duff reads file names from stdin.
Note that as of version 0.4, duff ignores symbolic links to files, as that behavior was conceptually broken. Therefore, the -H, -L and -P
options now apply only to directories.
The following options are available:
-0 If reading file names from stdin, assume they are null-terminated, instead of separated by newlines. Also, when printing file names
and cluster headers, terminate them with null characters instead of newlines.
This is useful for file names containing whitespace or other non-standard characters.
-H Follow symbolic links listed on the command line. This overrides any previous -L or -P option. Note that this only applies to
directories, as symbolic links to files are never followed.
-L Follow all symbolic links. This overrides any previous -H or -P option. Note that this only applies to directories, as symbolic
links to files are never followed.
-P Don't follow any symbolic links. This overrides any previous -H or -L option. This is the default. Note that this only applies to
directories, as symbolic links to files are never followed.
-a Include hidden files and directories when searching recursively.
-d function
The message digest function to use. The supported functions are sha1, sha256, sha384 and sha512. The default is sha1.
-e Excess mode. List all but one file from each cluster of duplicates. Also suppresses output of the cluster header. This is useful
when you want to automate removal of duplicate files and don't care which duplicates are removed.
-f format
Set the format of the cluster header. If the header is set to the empty string, no header line is printed.
The following escape sequences are available:
%n The number of files in the cluster.
%c A legacy synonym for %d, for compatibility reasons.
%d The message digest of files in the cluster. This may not be combined with -t as no digest is calculated.
%i The one-based index of the file cluster.
%s The size, in bytes, of a file in the cluster.
%% A '%' character.
The default format string when using -t is:
%n files in cluster %i (%s bytes)
The default format string for other modes is:
%n files in cluster %i (%s bytes, digest %d)
-h Display help information and exit.
-l limit
The minimum size of files to be sampled. If the size of files in a cluster is equal to or greater than the specified limit, duff will
sample and compare a few bytes from the start of each file before calculating a full digest. This is strictly an optimization and
does not affect which files are considered by duff. The default limit is zero bytes, i.e. sampling is used on all files.
-q Quiet mode. Suppress warnings and error messages.
-p Physical mode. Make duff consider physical files instead of hard links. If specified, multiple hard links to the same physical file
will not be reported as duplicates.
-r Recursively search into all specified directories.
-t Thorough mode. Distrust digests as a guarantee for equality. In thorough mode, duff compares files byte by byte when their sizes
match.
-v Display version information and exit.
-z Do not consider empty files to be equal. This option prevents empty files from being reported as duplicates.
EXAMPLES
The command:
duff -r foo/
lists all duplicate files in the directory foo and its subdirectories.
The command:
duff -e0 * | xargs -0 rm
removes all duplicate files in the current directory. Note that you have no control over which files in each cluster are selected by -e
(excess mode). Use with care.
The command:
find . -name '*.h' -type f | duff
lists all duplicate header files in the current directory and its subdirectories.
The command:
find . -name '*.h' -type f -print0 | duff -0 | xargs -0 -n1 echo
lists all duplicate header files in the current directory and its subdirectories, correctly handling file names containing whitespace. Note
the use of xargs and echo to remove the null separators again before listing.
DIAGNOSTICS
The duff utility exits 0 on success, and >0 if an error occurs.
SEE ALSO
find(1), xargs(1)
AUTHORS
Camilla Berglund <elmindreda@elmindreda.org>
BUGS
duff doesn't check whether the same file has been specified twice on the command line. This will lead it to report files listed multiple
times as duplicates when not using -p (physical mode). Note that this problem only affects files, not directories.
duff no longer (as of version 0.4) reports symbolic links to files as duplicates, as they're by definition always duplicates. This may break
scripts relying on the previous behavior.
If the underlying files are modified while duff is running, all bets are off. This is not really a bug, but it can still bite you.
BSD January 18, 2012 BSD