Shell Programming and Scripting: Remove duplicated records and update last line record counts
Post 303032051 by RudiC on Sunday 10th of March 2019, 06:45 AM
On top of what Don Cragun said, the last approach would not account for "duplicate duplicates".


Illogical nonsense... please disregard.

Last edited by RudiC; 03-10-2019 at 08:33 AM.
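For readers who land here from a search: the task in the thread title, removing duplicate records and updating a trailing record-count line, is common enough to sketch. Below is a minimal, hypothetical take; the trailer format TRAILER|<count> and the use of the whole line as the dedup key are assumptions for illustration, not the thread's actual file layout.

    # Minimal sketch: drop repeated data records, then rewrite the trailer.
    # Assumes the last line is a trailer like "TRAILER|<count>" (hypothetical).
    awk '
        { line[NR] = $0 }
        END {
            for (i = 1; i < NR; i++)              # every line except the trailer
                if (!seen[line[i]]++) { print line[i]; kept++ }
            printf "TRAILER|%d\n", kept           # trailer now reflects kept records
        }
    ' infile > outfile

Because seen[] counts occurrences rather than toggling a flag, records that appear three or more times (the "duplicate duplicates" mentioned above) still collapse to a single copy.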
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

remove duplicated xml record in a file under unix

Hi, if I have a file in xml format, I would like to remove duplicated records and save the result to a new file. Is it possible... to write a script to do it? (8 Replies)
Discussion started by: happyv
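For what it's worth, the stock answer to this kind of question, assuming each XML record sits on one line (multi-line records would need a proper record separator instead), is a one-liner:

    # print each line only the first time it is seen
    awk '!seen[$0]++' input.xml > deduped.xml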

2. Shell Programming and Scripting

remove duplicated columns

Hi all, I have a file containing multiple columns; this file is sorted by col2 and col3. I want to remove duplicated lines where col2 and col3 are the same as in another line. Example fileA: AA BB CC DD CC XX CC DD BB CC ZZ FF DD FF HH HH; the output is AA BB CC DD BB CC ZZ FF... (6 Replies)
Discussion started by: kamel.seg
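A typical answer, assuming whitespace-separated columns and that the first line carrying each (col2, col3) pair should be kept:

    # key on columns 2 and 3; print a line only on the first occurrence of its key
    awk '!seen[$2 FS $3]++' fileA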

3. Shell Programming and Scripting

Help to Add and Remove Records only from first line/last line

Hi, I need help with a maybe totally simple issue, but somehow I am not getting it. I am not able to establish a sed or awk command which adds to the first line in a text and removes the "," only from the last line. The file looks like the following: TABLE1, TABLE2, . . . TABLE99,... (4 Replies)
Discussion started by: enjoy
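A sed sketch of the two edits, assuming the text to add to the first line is the placeholder PREFIX and the file name tables.txt is hypothetical:

    # 1s/.../ edits only the first line; $s/.../ edits only the last line
    sed -e '1s/^/PREFIX /' -e '$s/,$//' tables.txt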

4. Shell Programming and Scripting

Sending e-mail of record counts in 3 or more files

I am trying to load data into 3 tables simultaneously (which is working fine). Then when loaded, it should count the total number of records in all the 3 input files and send an e-mail to the user. The script is working fine, as far as loading all the 3 input files into the database tables, but... (3 Replies)
Discussion started by: msrahman
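The counting-and-mail step might look like the sketch below; the file names, subject, and address are placeholders, and a mailx-style mailer is assumed to be installed:

    # total records across the three input files, then notify the user
    total=$(cat file1.dat file2.dat file3.dat | wc -l)
    printf 'Loaded %s records in total.\n' "$total" |
        mailx -s 'Load complete' user@example.com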

5. Shell Programming and Scripting

Split a single record to multiple records & add folder name to each line

Hi Gurus, I need to cut a single record in the file (asdf) into multiple records based on the number of bytes (44 characters), so every record will have 44 characters. All the records should be in the same file; to each of these lines I need to add the folder (<date>) name. I have a dir. in which... (20 Replies)
Discussion started by: ram2581
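One plausible sketch, assuming fixed 44-byte records in a single-byte encoding; the value in $dir is a placeholder for the <date> folder name:

    dir=20190310                            # hypothetical <date> folder name
    fold -b -w 44 asdf |                    # cut the record into 44-byte lines
        awk -v d="$dir" '{ print $0, d }'   # append the folder name to each record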

6. UNIX for Dummies Questions & Answers

Hardcoding & Record counts in a file

Hi, I have a huge comma-delimited file, and I have to prepend the following four lines at the start of the file through a shell script: FILE NAME = TEST_LOAD DATETIME = CURRENT DATE TIME LOAD DATE = CURRENT DATE RECORD COUNT = TOTAL RECORDS IN FILE Source data 1,2,3,4,5,6,7... (7 Replies)
Discussion started by: shruthidwh
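A minimal sketch, assuming one record per line and that the literal labels from the post are wanted; the input file name is a placeholder:

    f=source.dat                               # hypothetical input file
    cnt=$(wc -l < "$f")                        # record count = line count
    {
        printf 'FILE NAME = TEST_LOAD\n'
        printf 'DATETIME = %s\n' "$(date '+%Y-%m-%d %H:%M:%S')"
        printf 'LOAD DATE = %s\n' "$(date +%Y-%m-%d)"
        printf 'RECORD COUNT = %d\n' "$cnt"
        cat "$f"                               # then the original data
    } > "${f}.new"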

7. Shell Programming and Scripting

New file should store all the 7 existing filenames and their record counts and ftp th

Hi, I need help regarding the concern below. There is a script, and it has 7 existing files (in a path, say, usr/appl/temp/file1.txt), and I need to create one new blank file, say "file_count.txt", in the same script itself. Then the new file <file_count.txt> should store all the 7 filenames and... (1 Reply)
Discussion started by: pr293
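The counting half of this might be sketched as below; the glob is assumed to match exactly the seven files, and the ftp step is omitted:

    # one "filename count" line per file
    for f in usr/appl/temp/file*.txt; do
        printf '%s %d\n' "$f" "$(wc -l < "$f")"
    done > file_count.txt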

8. Shell Programming and Scripting

How to Remove the newline character in between a record

I have a file in which a single record spans multiple lines: File 1 ==== 14|\n leave request \n accepted|Yes| 15|\n leave request not \n accepted|No| I wanted to remove the '\n' characters. I used the below code (found somewhere in this forum): perl -e 'while (<>) { if... (1 Reply)
Discussion started by: machomaddy
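If the records really are broken across physical lines and each complete record ends with a trailing '|' (an assumption based on the sample), an awk sketch:

    # glue physical lines together until one ends with "|", then emit the record
    awk '
        { rec = rec $0 }
        /\|$/ { print rec; rec = "" }
    ' file1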

9. Shell Programming and Scripting

How to remove duplicated lines?

Hi, if I have a file like this: Query=1 a a b c c c d Query=2 b b b c c e . . . (7 Replies)
Discussion started by: the_simpsons
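Assuming duplicates should be removed independently within each Query=... block, a portable awk sketch:

    # empty the seen[] array at each block header so dedup restarts per block
    awk '
        /^Query=/ { split("", seen); print; next }
        !seen[$0]++
    ' file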

10. Shell Programming and Scripting

Join files, omit duplicated records from one file

Hello, I have 2 files, e.g. more file1 file2 :::::::::::::: file1 :::::::::::::: 1 fromfile1 2 fromfile1 3 fromfile1 4 fromfile1 5 fromfile1 6 fromfile1 7 fromfile1 :::::::::::::: file2 :::::::::::::: 3 fromfile2 5 fromfile2 (4 Replies)
Discussion started by: CHoggarth
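A hedged sketch, assuming column 1 is the join key and file1's version wins whenever a key appears in both files:

    # pass 1 (NR==FNR): print all of file1 and remember its keys
    # pass 2: print only file2 lines whose key did not occur in file1
    awk 'NR == FNR { seen[$1]; print; next } !($1 in seen)' file1 file2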
DUFF(1) 						    BSD General Commands Manual 						   DUFF(1)

NAME
     duff -- duplicate file finder

SYNOPSIS
     duff [-0HLPaeqprtz] [-d function] [-f format] [-l limit] [file ...]
     duff [-h]
     duff [-v]

DESCRIPTION
     The duff utility reports clusters of duplicates in the specified files
     and/or directories. In the default mode, duff prints a customizable
     header, followed by the names of all the files in the cluster. In excess
     mode, duff does not print a header, but instead for each cluster prints
     the names of all but the first of the files it includes.

     If no files are specified as arguments, duff reads file names from
     stdin.

     Note that as of version 0.4, duff ignores symbolic links to files, as
     that behavior was conceptually broken. Therefore, the -H, -L and -P
     options now apply only to directories.

     The following options are available:

     -0      If reading file names from stdin, assume they are
             null-terminated, instead of separated by newlines. Also, when
             printing file names and cluster headers, terminate them with
             null characters instead of newlines. This is useful for file
             names containing whitespace or other non-standard characters.

     -H      Follow symbolic links listed on the command line. This overrides
             any previous -L or -P option. Note that this only applies to
             directories, as symbolic links to files are never followed.

     -L      Follow all symbolic links. This overrides any previous -H or -P
             option. Note that this only applies to directories, as symbolic
             links to files are never followed.

     -P      Don't follow any symbolic links. This overrides any previous -H
             or -L option. This is the default. Note that this only applies
             to directories, as symbolic links to files are never followed.

     -a      Include hidden files and directories when searching recursively.

     -d function
             The message digest function to use. The supported functions are
             sha1, sha256, sha384 and sha512. The default is sha1.

     -e      Excess mode. List all but one file from each cluster of
             duplicates. Also suppresses output of the cluster header. This
             is useful when you want to automate removal of duplicate files
             and don't care which duplicates are removed.

     -f format
             Set the format of the cluster header. If the header is set to
             the empty string, no header line is printed.

             The following escape sequences are available:

             %n      The number of files in the cluster.

             %c      A legacy synonym for %d, for compatibility reasons.

             %d      The message digest of files in the cluster. This may not
                     be combined with -t as no digest is calculated.

             %i      The one-based index of the file cluster.

             %s      The size, in bytes, of a file in the cluster.

             %%      A '%' character.

             The default format string when using -t is:

                   %n files in cluster %i (%s bytes)

             The default format string for other modes is:

                   %n files in cluster %i (%s bytes, digest %d)

     -h      Display help information and exit.

     -l limit
             The minimum size of files to be sampled. If the size of files in
             a cluster is equal or greater than the specified limit, duff
             will sample and compare a few bytes from the start of each file
             before calculating a full digest. This is strictly an
             optimization and does not affect which files are considered by
             duff. The default limit is zero bytes, i.e. to use sampling on
             all files.

     -q      Quiet mode. Suppress warnings and error messages.

     -p      Physical mode. Make duff consider physical files instead of hard
             links. If specified, multiple hard links to the same physical
             file will not be reported as duplicates.

     -r      Recursively search into all specified directories.

     -t      Thorough mode. Distrust digests as a guarantee for equality. In
             thorough mode, duff compares files byte by byte when their sizes
             match.

     -v      Display version information and exit.

     -z      Do not consider empty files to be equal. This option prevents
             empty files from being reported as duplicates.
EXAMPLES
     The command:

           duff -r foo/

     lists all duplicate files in the directory foo and its subdirectories.

     The command:

           duff -e0 * | xargs -0 rm

     removes all duplicate files in the current directory. Note that you have
     no control over which files in each cluster are selected by -e (excess
     mode). Use with care.

     The command:

           find . -name '*.h' -type f | duff

     lists all duplicate header files in the current directory and its
     subdirectories.

     The command:

           find . -name '*.h' -type f -print0 | duff -0 | xargs -0 -n1 echo

     lists all duplicate header files in the current directory and its
     subdirectories, correctly handling file names containing whitespace.
     Note the use of xargs and echo to remove the null separators again
     before listing.

DIAGNOSTICS
     The duff utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
     find(1), xargs(1)

AUTHORS
     Camilla Berglund <elmindreda@elmindreda.org>

BUGS
     duff doesn't check whether the same file has been specified twice on the
     command line. This will lead it to report files listed multiple times as
     duplicates when not using -p (physical mode). Note that this problem
     only affects files, not directories.

     duff no longer (as of version 0.4) reports symbolic links to files as
     duplicates, as they're by definition always duplicates. This may break
     scripts relying on the previous behavior.

     If the underlying files are modified while duff is running, all bets are
     off. This is not really a bug, but it can still bite you.

BSD                             January 18, 2012                             BSD