10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi, all
I have a csv file from which I would like to remove duplicate lines based on the 1st field, and sort them by the 1st field. If more than one line has the same 1st field, I want to keep the first of them and remove the rest. I think I have to use uniq or something, but I still... (8 Replies)
Discussion started by: refrain
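One standard way to do what this post asks, sketched with a tiny sample file (the name data.csv and the comma separator are assumptions): awk keeps only the first line seen for each value of field 1, and sort then orders the survivors by that field.

```shell
# Sample input; "a" appears twice in field 1, so only its first line survives.
printf '%s\n' 'b,2' 'a,1' 'a,9' > data.csv

# !seen[$1]++ is true only the first time a field-1 value appears;
# sort -t, -k1,1 then orders the surviving lines by the 1st field.
awk -F, '!seen[$1]++' data.csv | sort -t, -k1,1
```

Note that sorting first and using `sort -u -t, -k1,1` would also dedupe on field 1, but it gives no control over which duplicate is kept; the awk pass guarantees the first occurrence in the original file wins.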
2. Shell Programming and Scripting
Dear community,
I have to remove duplicate lines from a file containing a very large number of rows (millions?), based on the 1st and 3rd columns.
The data are like this:
Region 23/11/2014 09:11:36 41752
Medio 23/11/2014 03:11:38 4132
Info 23/11/2014 05:11:09 4323... (2 Replies)
Discussion started by: Lord Spectre
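A minimal sketch for deduping on the 1st and 3rd columns, assuming whitespace-separated fields and sample data modeled on the rows above:

```shell
# Sample rows; the last row repeats columns 1 and 3 of the first
# and is therefore dropped.
printf '%s\n' \
  'Region 23/11/2014 09:11:36 41752' \
  'Medio 23/11/2014 03:11:38 4132' \
  'Region 24/11/2014 09:11:36 99999' > report.txt

# The ($1,$3) pair is the dedup key; awk keeps every distinct key
# in memory, so for millions of rows the key set must fit in RAM.
awk '!seen[$1,$3]++' report.txt
```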
3. Shell Programming and Scripting
hi,
Please help me write a command to delete duplicate lines from a file. The size of the file is 50 MB. How can I remove duplicate lines from such a big file? (6 Replies)
Discussion started by: vsachan
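The classic order-preserving one-liner covers this; a sketch on a tiny sample file (the names big.txt and big.dedup are examples):

```shell
# Sample input with repeats.
printf '%s\n' x y x z y > big.txt

# Prints each line only on its first appearance; the file is read
# once and original order is preserved, fine for a 50 MB input.
awk '!seen[$0]++' big.txt > big.dedup
cat big.dedup
```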
4. Shell Programming and Scripting
Hey guys, need some help to fix this script. I am trying to remove all the duplicate lines in this file.
I wrote the following script, but it does not work. What is the problem?
The output file should only contain five lines:
Later! (5 Replies)
Discussion started by: Ernst
5. Shell Programming and Scripting
Hi,
I have two files with below data::
file1:-
123|aaa|ppp
445|fff|yyy
999|ttt|jjj
555|hhh|hhh
file2:-
445|fff|yyy
555|hhh|hhh
The records present in file1 but not present in file2 should be written to the output file.
output:-
123|aaa|ppp
999|ttt|jjj
Is there any one line... (3 Replies)
Discussion started by: gani_85
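There is indeed a one-liner for file1-minus-file2, shown here with the sample data from the post:

```shell
# Recreate the two sample files from the post.
printf '%s\n' '123|aaa|ppp' '445|fff|yyy' '999|ttt|jjj' '555|hhh|hhh' > file1
printf '%s\n' '445|fff|yyy' '555|hhh|hhh' > file2

# -F fixed strings, -x whole-line match, -v invert the match,
# -f read the patterns (file2's lines) from a file.
grep -Fxv -f file2 file1
```

`comm -23 <(sort file1) <(sort file2)` does the same job when sorted output is acceptable, and scales better when file2 is very large.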
6. Shell Programming and Scripting
greetings,
I'm hoping there is a way to cat a file, remove duplicate lines, and send that output to a new file. The file will always vary but will be something similar to this:
Please keep in mind that the above could be eight occurrences of each hostname, or it might simply have another four of an... (2 Replies)
Discussion started by: crimso
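A minimal sketch for this, assuming sorted output is acceptable for a hostname list (the file names are examples):

```shell
# Sample host list with repeats.
printf '%s\n' host2 host1 host2 host1 > hosts.txt

# sort -u writes exactly one copy of each distinct line to the
# new file, in sorted order.
sort -u hosts.txt > hosts.uniq
cat hosts.uniq
```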
7. UNIX for Dummies Questions & Answers
Hi, please help me with how to remove duplicate lines in a file.
I have a file with a huge number of lines.
I want to remove selected lines from it.
Also, if duplicate lines exist, I want to delete the rest and keep just one of them.
Please help me with any unix commands or even fortran... (7 Replies)
Discussion started by: reva
8. Shell Programming and Scripting
Hello,
Can anyone suggest a command/script to remove duplicate lines from a file? (2 Replies)
Discussion started by: Rahulpict
9. UNIX for Dummies Questions & Answers
I have a log file "logreport" that contains several lines as seen below:
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but... (18 Replies)
Discussion started by: Nysif Steve
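These log lines differ only in the leading timestamp, so a plain dedup would keep them all. One way to collapse them, sketched on two of the sample lines, is to key on everything from the 2nd field onward:

```shell
# Two log lines that differ only in the leading timestamp.
printf '%s\n' \
  '04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping' \
  '06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping' > logreport

# Use the text starting at field 2 as the dedup key, so lines that
# differ only in the HH:MM:SS stamp collapse to their first occurrence.
awk '{key = substr($0, index($0, $2))} !seen[key]++' logreport
```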
10. Shell Programming and Scripting
I am writing a KSH script to remove duplicate lines in a file. Let's say the file has the format below.
FileA
1253-6856
3101-4011
1827-1356
1822-1157
1822-1157
1000-1410
1000-1410
1822-1231
1822-1231
3101-4011
1822-1157
1822-1231
and I want to simplify it, with no duplicate lines, as file... (5 Replies)
Discussion started by: Teh Tiack Ein
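The traditional sort-then-uniq pipeline covers this; a sketch on an abbreviated FileA:

```shell
# Abbreviated sample of the FileA from the post.
printf '%s\n' 1253-6856 3101-4011 1822-1157 1822-1157 3101-4011 > FileA

# uniq only collapses *adjacent* duplicates, so sort first;
# this runs identically under ksh, bash, or plain sh.
sort FileA | uniq
```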
DUFF(1) BSD General Commands Manual DUFF(1)
NAME
duff -- duplicate file finder
SYNOPSIS
duff [-0HLPaeqprtz] [-d function] [-f format] [-l limit] [file ...]
duff [-h]
duff [-v]
DESCRIPTION
The duff utility reports clusters of duplicates in the specified files and/or directories. In the default mode, duff prints a customizable
header, followed by the names of all the files in the cluster. In excess mode, duff does not print a header, but instead for each cluster
prints the names of all but the first of the files it includes.
If no files are specified as arguments, duff reads file names from stdin.
Note that as of version 0.4, duff ignores symbolic links to files, as that behavior was conceptually broken. Therefore, the -H, -L and -P
options now apply only to directories.
The following options are available:
-0 If reading file names from stdin, assume they are null-terminated, instead of separated by newlines. Also, when printing file names
and cluster headers, terminate them with null characters instead of newlines.
This is useful for file names containing whitespace or other non-standard characters.
-H Follow symbolic links listed on the command line. This overrides any previous -L or -P option. Note that this only applies to
directories, as symbolic links to files are never followed.
-L Follow all symbolic links. This overrides any previous -H or -P option. Note that this only applies to directories, as symbolic
links to files are never followed.
-P Don't follow any symbolic links. This overrides any previous -H or -L option. This is the default. Note that this only applies to
directories, as symbolic links to files are never followed.
-a Include hidden files and directories when searching recursively.
-d function
The message digest function to use. The supported functions are sha1, sha256, sha384 and sha512. The default is sha1.
-e Excess mode. List all but one file from each cluster of duplicates. Also suppresses output of the cluster header. This is useful
when you want to automate removal of duplicate files and don't care which duplicates are removed.
-f format
Set the format of the cluster header. If the header is set to the empty string, no header line is printed.
The following escape sequences are available:
%n The number of files in the cluster.
%c A legacy synonym for %d, for compatibility reasons.
%d The message digest of files in the cluster. This may not be combined with -t as no digest is calculated.
%i The one-based index of the file cluster.
%s The size, in bytes, of a file in the cluster.
%% A '%' character.
The default format string when using -t is:
%n files in cluster %i (%s bytes)
The default format string for other modes is:
%n files in cluster %i (%s bytes, digest %d)
-h Display help information and exit.
-l limit
The minimum size of files to be sampled. If the size of the files in a cluster is equal to or greater than the specified limit, duff will
sample and compare a few bytes from the start of each file before calculating a full digest. This is strictly an optimization and
does not affect which files are considered by duff. The default limit is zero bytes, i.e. sampling is used on all files.
-q Quiet mode. Suppress warnings and error messages.
-p Physical mode. Make duff consider physical files instead of hard links. If specified, multiple hard links to the same physical file
will not be reported as duplicates.
-r Recursively search into all specified directories.
-t Thorough mode. Distrust digests as a guarantee for equality. In thorough mode, duff compares files byte by byte when their sizes
match.
-v Display version information and exit.
-z Do not consider empty files to be equal. This option prevents empty files from being reported as duplicates.
EXAMPLES
The command:
duff -r foo/
lists all duplicate files in the directory foo and its subdirectories.
The command:
duff -e0 * | xargs -0 rm
removes all duplicate files in the current directory. Note that you have no control over which files in each cluster are selected by -e
(excess mode). Use with care.
The command:
find . -name '*.h' -type f | duff
lists all duplicate header files in the current directory and its subdirectories.
The command:
find . -name '*.h' -type f -print0 | duff -0 | xargs -0 -n1 echo
lists all duplicate header files in the current directory and its subdirectories, correctly handling file names containing whitespace. Note
the use of xargs and echo to remove the null separators again before listing.
DIAGNOSTICS
The duff utility exits 0 on success, and >0 if an error occurs.
SEE ALSO
find(1), xargs(1)
AUTHORS
Camilla Berglund <elmindreda@elmindreda.org>
BUGS
duff doesn't check whether the same file has been specified twice on the command line. This will lead it to report files listed multiple
times as duplicates when not using -p (physical mode). Note that this problem only affects files, not directories.
duff no longer (as of version 0.4) reports symbolic links to files as duplicates, as they're by definition always duplicates. This may break
scripts relying on the previous behavior.
If the underlying files are modified while duff is running, all bets are off. This is not really a bug, but it can still bite you.
BSD                                                        January 18, 2012                                                        BSD