Full Discussion: Checking file for duplicates
Shell Programming and Scripting
Post 302422993 by matrixmadhan on Thursday 20th of May 2010 01:24:09 AM
Quote:
Originally Posted by methyl
We must assume you have a mainstream database engine which can handle CSV files and a computer which has capacity for this task. You really don't mention much about the data or the computer.

At a design level, each record must contain a unique key and whatever information is a parameter to the "purge". Unless you know the source "purge" rules, your database of "data already processed" will just grow.



It would make more sense to fix the data feed design at source. A convention is to mark the record in the source database with a unique extract run reference to prevent repeat extracts - whilst also allowing a rerun.
Unique key - this is not required; the record as a whole can act as the key, so there is no need for a dedicated unique column. Worse, after applying normalization or some form of transformation, two records could end up being unique, or could end up as duplicates of each other.

Fixing the problem at the source is definitely out of the question here. I agree it makes sense to do that, but most of the time it is completely out of scope - not everything flows in line with the ideal design. We need to work on what has been received, or what could potentially be received, in a real-world scenario.
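To make the whole-record-as-key point concrete, here is a minimal sketch (not from the original thread; the file names incoming.csv, seen.dat and processed.csv are made up). It assumes a duplicate means a byte-for-byte identical line and that the data fits comfortably in memory:

Code:
# keep the first occurrence of each complete record; later identical lines are dropped
awk '!seen[$0]++' incoming.csv > incoming_dedup.csv

# rough sketch of the "data already processed" idea from the quote above:
# records already listed in seen.dat are skipped on the next run
awk 'NR==FNR { seen[$0] = 1; next } !($0 in seen)' seen.dat incoming.csv >> processed.csv

The second form is exactly where methyl's point bites: without a purge rule, seen.dat only ever grows.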
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Avoid Duplicates in a file

Hi Gurus, I had a question regarding avoiding duplicates. I have a file abc.txt abc.txt ------- READER_1_1_1> HIER_28056 XML Reader: Error occurred while parsing:; line number ; column number READER_1_3_1> Sun Mar 23 23:52:48 2008 READER_1_3_1> HIER_28056 XML Reader: Error occurred while... (7 Replies)
Discussion started by: pssandeep
7 Replies

2. Shell Programming and Scripting

Remove duplicates from a file

Hi, I need to remove duplicates from a file. The file will be like this 0003 10101 20100120 abcdefghi 0003 10101 20100121 abcdefghi 0003 10101 20100122 abcdefghi 0003 10102 20100120 abcdefghi 0003 10103 20100120 abcdefghi 0003 10103 20100121 abcdefghi Here if the first colum and... (6 Replies)
Discussion started by: gpaulose
6 Replies
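If that truncated question means "keep only one line per first+second column pair", a hedged one-liner (assuming whitespace-separated fields and that the first occurrence should be kept) could be:

Code:
awk '!seen[$1,$2]++' file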

3. UNIX for Dummies Questions & Answers

CSV file:Find duplicates, save original and duplicate records in a new file

Hi Unix gurus, Maybe it is too much to ask for but please take a moment and help me out. A very humble request to you gurus. I'm new to Unix and I have started learning Unix. I have this project which is way to advanced for me. File format: CSV file File has four columns with no header... (8 Replies)
Discussion started by: arvindosu
8 Replies
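Without seeing the full thread, one common reading of that request is: write every record whose key occurs more than once, originals and duplicates alike, to a separate file. A two-pass awk sketch, assuming the first CSV column is the key (the excerpt does not say which column it is):

Code:
# pass 1 counts each key, pass 2 prints the records whose key appears more than once
awk -F',' 'NR==FNR { cnt[$1]++; next } cnt[$1] > 1' file.csv file.csv > duplicates.csv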

4. Shell Programming and Scripting

Removing Duplicates from file

Hi Experts, Please check the following new requirement. I got data like the following in a file. FILE_HEADER 01cbbfde7898410| 3477945| home| 1 01cbc275d2c122| 3478234| WORK| 1 01cbbe4362743da| 3496386| Rich Spare| 1 01cbc275d2c122| 3478234| WORK| 1 This is pipe separated file with... (3 Replies)
Discussion started by: tinufarid
3 Replies

5. Shell Programming and Scripting

Duplicates in an XML file

Hi All, I have an xml file that contains information like this <ID>574922<COMMENT>TEXT TEXT TEXT</COMMENT></ID> <ID>574922<COMMENT>TEXT TEXT TEXT</COMMENT></ID> <ID>412659<COMMENT>TEXT TEXT TEXT TEXT TEXT</COMMENT></ID> <ID>873520<COMMENT>TEXT</COMMENT></ID>... (5 Replies)
Discussion started by: TasosARISFC
5 Replies
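Assuming one <ID>...</ID> record per line as in that sample, and that two entries with the same ID count as duplicates, a sketch could be:

Code:
# keep only the first line seen for each ID value
awk 'match($0, /<ID>[0-9]+/) { id = substr($0, RSTART + 4, RLENGTH - 4); if (!seen[id]++) print }' file.xml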

6. Shell Programming and Scripting

Remove the partial duplicates by checking the length of a field

Hi Folks - I'm quite new to awk and didn't come across such issues before. The problem statement is that, I've a file with duplicate records in 3rd and 4th fields. The sample is as below: aaaaaa|a12|45|56 abbbbaaa|a12|45|56 bbaabb|b1|51|45 bbbbbabbb|b2|51|45 aaabbbaaaa|a11|45|56 ... (3 Replies)
Discussion started by: asyed
3 Replies
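If the rule in that thread is "among lines sharing fields 3 and 4, keep the one with the longest first field" (an assumption; the excerpt is cut off), a two-pass sketch for pipe-separated data:

Code:
# pass 1 remembers the longest field 1 for each $3,$4 pair; pass 2 prints only those lines (ties all print)
awk -F'|' 'NR==FNR { if (length($1) > len[$3,$4]) len[$3,$4] = length($1); next }
           length($1) == len[$3,$4]' file file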

7. Programming

[Solved] Removing duplicates from the file and saving as new file

Dear All I have 200 data files and each file has many duplicates. I am looking for an automated awk script that checks and removes the duplicates from each file and saves the result as a new file, for all 200 files in the respective folder. For example my data looks like this.. ... (12 Replies)
Discussion started by: bala06
12 Replies
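One way to run a whole-line dedup over many files is a plain shell loop. The layout below is made up (*.txt files in the current directory, output with a _dedup suffix):

Code:
for f in *.txt
do
    # a fresh awk process per file, so the seen[] array resets each time
    awk '!seen[$0]++' "$f" > "${f%.txt}_dedup.txt"
done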

8. UNIX for Dummies Questions & Answers

Remove duplicates from a file

Can u tell me how to remove duplicate records from a file? (11 Replies)
Discussion started by: saga20
11 Replies

9. UNIX for Dummies Questions & Answers

Removing duplicates from a file

Hi All, I am merging files coming from 2 different systems, and while doing that I am getting duplicate entries in the merged file I,01,000131,764,2,4.00 I,01,000131,765,2,4.00 I,01,000131,772,2,4.00 I,01,000131,773,2,4.00 I,01,000168,762,2,2.00 I,01,000168,763,2,2.00... (5 Replies)
Discussion started by: Sri3001
5 Replies
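For a merge like that, if the order of the merged output does not matter, sort -u over both inputs is usually enough; the file names here are placeholders:

Code:
sort -u system1_file system2_file > merged_file

# or, keeping the order of first appearance instead of sorting:
awk '!seen[$0]++' system1_file system2_file > merged_file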

10. Shell Programming and Scripting

Removing duplicates from new file

I have two files like this. I want to remove/delete all the duplicate lines in file2, which are viz unix, unix2, unix3. I have tried the previous post also, but in that the complete line must be similar. In this case I have to verify the first column only, regardless of what the content is in the succeeding columns. (3 Replies)
Discussion started by: sagar_1986
3 Replies
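Reading that as "drop from file2 every line whose first column already appears in file1", and assuming whitespace-separated columns (use -F',' if they are comma-separated), a sketch:

Code:
# pass 1 loads the first column of file1; pass 2 prints only the file2 lines whose first column was not seen
awk 'NR==FNR { key[$1] = 1; next } !($1 in key)' file1 file2 > file2_clean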
TCFMGR(1)							   Tokyo Cabinet							 TCFMGR(1)

NAME
       tcfmgr - the command line utility of the fixed-length database API

DESCRIPTION
       The command `tcfmgr' is a utility for test and debugging of the fixed-length database API and its applications. `path' specifies the path of a database file. `width' specifies the width of the value of each record. `limsiz' specifies the limit size of the database file. `key' specifies the key of a record. `value' specifies the value of a record. `file' specifies the input file.

       tcfmgr create path [width [limsiz]]
              Create a database file.
       tcfmgr inform [-nl|-nb] path
              Print miscellaneous information to the standard output.
       tcfmgr put [-nl|-nb] [-sx] [-dk|-dc|-dai|-dad] path key value
              Store a record.
       tcfmgr out [-nl|-nb] [-sx] path key
              Remove a record.
       tcfmgr get [-nl|-nb] [-sx] [-px] [-pz] path key
              Print the value of a record.
       tcfmgr list [-nl|-nb] [-m num] [-pv] [-px] [-rb lkey ukey] [-ri str] path
              Print keys of all records, separated by line feeds.
       tcfmgr optimize [-nl|-nb] path [width [limsiz]]
              Optimize a database file.
       tcfmgr importtsv [-nl|-nb] [-sc] path [file]
              Store records of TSV in each line of a file.
       tcfmgr version
              Print the version information of Tokyo Cabinet.

       Options feature the following.
       -nl : enable the option `FDBNOLCK'.
       -nb : enable the option `FDBLCKNB'.
       -sx : the input data is evaluated as a hexadecimal data string.
       -dk : use the function `tcfdbputkeep' instead of `tcfdbput'.
       -dc : use the function `tcfdbputcat' instead of `tcfdbput'.
       -dai : use the function `tcfdbaddint' instead of `tcfdbput'.
       -dad : use the function `tcfdbadddouble' instead of `tcfdbput'.
       -px : the output data is converted into a hexadecimal data string.
       -pz : do not append line feed at the end of the output.
       -m num : specify the maximum number of the output.
       -pv : print values of records also.
       -rb lkey ukey : specify the range of keys.
       -ri str : specify the interval notation of keys.
       -sc : normalize keys as lower cases.

       This command returns 0 on success, another on failure.

SEE ALSO
       tcftest(1), tcfmttest(1), tcfdb(3), tokyocabinet(3)

Man Page                                                         2011-02-12                                                           TCFMGR(1)
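As a footnote to the man page above, a quick usage sketch using only the subcommands it lists (the database file name and values are made up):

Code:
tcfmgr create casket.tcf 8        # fixed-length DB, value width 8 bytes
tcfmgr put casket.tcf 1 "one"     # store a record under key 1
tcfmgr get casket.tcf 1           # print the value back
tcfmgr list -pv casket.tcf        # list all keys with their values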