Deleting duplicate records from file 1 if records from file 2 match
Posted by vestport, 05-05-2012 at 01:43 AM

I have 2 files

"File 1" is delimited by ";" and "File 2" is delimited by "|".


File 1 below (3 records shown):
Code:
Doc1;03/01/2012;New York;6 Main Street;Mr. Smith 1;Mr. Jones
Doc2;03/01/2012;Syracuse;876 Broadway;John Davis;Barbara Lull
Doc3;03/01/2012;Buffalo;779 Old Windy Road;Charles O'Brien;Bill Rudd


File 2 below (4 records shown):
Code:
6 Main Street|New York
345 Tipp Road|Brewser
885 Peartree|Buffalo
779 Old Windy Road|Buffalo


"File 1" is faily small, "File 2" is huge.


My problem: line by line, I need to compare the 3rd field (city) and 4th field (address) of each record in "File 1" against the 2nd field (city) and 1st field (address) of the records in "File 2", to find out whether that record has a match in "File 2".

All records that do not match should be copied out (> redirected) to a new file (the edited file). If there is a match, that record should not be copied to the edited file.


In other words, given the example data above from "File 1" and "File 2", the new edited file should look like this:
Code:
Doc2;03/01/2012;Syracuse;876 Broadway;John Davis;Barbara Lull

The other 2 records below would be discarded because they match "File 2":
Code:
Doc1;03/01/2012;New York;6 Main Street;Mr. Smith 1;Mr. Jones
Doc3;03/01/2012;Buffalo;779 Old Windy Road;Charles O'Brien;Bill Rudd


I hope that is not too confusing. I know this can probably be done with awk, but I am as rusty as the Titanic when it comes to coding, and I'm lucky I got as far as I did with this project. Many thanks to "agama" for helping out on the last issue!
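
Roughly what I had in mind is the untested sketch below. It assumes the city and address fields match exactly (same case, no stray spaces), and file1, file2 and file1.edited are just placeholder names:
Code:
awk -F';' '
    NR == FNR {                        # first file given: File 1 (small, ";"-delimited)
        rec[FNR] = $0                  # keep the whole record for output later
        key[FNR] = $3 SUBSEP $4        # city (field 3) + address (field 4)
        want[key[FNR]] = 1             # keys to look for in File 2
        n = FNR
        next
    }
    {                                  # second file given: File 2 (huge, "|"-delimited)
        split($0, f, "|")
        k = f[2] SUBSEP f[1]           # city (field 2) + address (field 1)
        if (k in want)                 # only remember keys File 1 actually uses
            hit[k] = 1
    }
    END {
        for (i = 1; i <= n; i++)
            if (!(key[i] in hit))      # no match found in File 2: keep the record
                print rec[i]
    }
' file1 file2 > file1.edited

The idea is to hold the small "File 1" in memory and stream the huge "File 2" past it once, so memory use only grows with "File 1" and with the keys that actually match. No idea if I have the syntax right, though.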

Thanks in advance for any replies!


Art
