---------- Post updated at 03:17 AM ---------- Previous update was at 02:08 AM ----------
Dear pamu
I would like to always keep the last value found; it looks like the code always keeps the first one... Please advise.
---------- Post updated at 03:54 AM ---------- Previous update was at 03:17 AM ----------
Please help me to get the two output files as I display below; the objective is to delete the duplicate records, always keeping the last one...
The columns where the duplicates occur are $2 and $3 (columns 2-25), and they have an index identity (column 26). For example,
in the input file this value appears:
I'm not sure I understand your "duplicate" criterion. In one post, it's 7275.01 (6 digits + "."), in the other it's just 4 digits before the period. On top, your input files vary from post to post. This does not help us to help you.
Try this; you may want to sort both files afterwards:
The only change that I made in the output file was to add two more columns, concatenating 4 digits from column 1 and 4 digits from column 2, saved to column 9, to take as the reference for finding duplicated records... For that I have used column 9...
Please can you let me know where the error is in the code I am using, and why the file of removed records is empty?
I'd prefer pamu to explain his code to you. Did you give my proposal a try? The removed file is not empty with that approach. Right now it uses the full 7270.01 for testing uniqueness; it could be adapted to 4 digits with minor modifications.
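A minimal sketch of the keep-the-last-duplicate idea being discussed, assuming an invented input layout where the duplicate key is fields 2 and 3 (the real files key on fixed character columns, so adjust accordingly). It reads the file twice: the first pass remembers the last line number per key, the second pass splits kept from removed records:

```shell
cat > data.txt <<'EOF'
a 7275.01 X first
b 7275.01 X last
c 1234.99 Y only
EOF

# pass 1: remember the last line number seen for each key
# pass 2: that line goes to kept.txt, all earlier duplicates to removed.txt
awk 'NR == FNR { last[$2 FS $3] = FNR; next }
     FNR == last[$2 FS $3] { print > "kept.txt"; next }
     { print > "removed.txt" }' data.txt data.txt
```

With this input, kept.txt holds the "last" record for key 7275.01 X plus the unique record, and removed.txt holds the earlier duplicate.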
Hi,
In an ideal scenario, I will have a listing of db transaction logs that get copied to a DR site, and if I have them all, they will be numbered consecutively like below.
1_79811_01234567.arc
1_79812_01234567.arc
1_79813_01234567.arc
1_79814_01234567.arc
1_79815_01234567.arc... (3 Replies)
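A sketch of one way to spot gaps in such a listing, using made-up filenames in the 1_&lt;seq&gt;_01234567.arc pattern (one name is deliberately missing):

```shell
# invented listing with 1_79813_... missing
printf '%s\n' 1_79811_01234567.arc 1_79812_01234567.arc \
              1_79814_01234567.arc > logs.txt

# split on "_" so $2 is the sequence number; report any jump
awk -F_ 'NR > 1 && $2 != prev + 1 {
           for (i = prev + 1; i < $2; i++) print "missing seq " i
         }
         { prev = $2 }' logs.txt
```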
Hello,
I have some text data that is in the form of multi-line records. Each record ends with the string $$$$ and the next record starts on the next line.
RDKit 2D
15 14 0 0 0 0 0 0 0 0999 V2000
5.4596 2.1267 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 ... (5 Replies)
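One portable way to handle $$$$-terminated multi-line records is to accumulate lines until the terminator and then act on the whole record. A sketch with an invented two-record file, printing only records that mention RDKit:

```shell
cat > records.txt <<'EOF'
mol1
  RDKit 2D
$$$$
mol2
  other
$$$$
EOF

# buffer lines into rec; on the $$$$ line, test and reset the buffer
awk '/^\$\$\$\$$/ { if (rec ~ /RDKit/) printf "%s", rec; rec = ""; next }
     { rec = rec $0 "\n" }' records.txt
```

GNU awk can do the same more directly with a multi-character record separator (RS = "\\$\\$\\$\\$\n"), but the buffering version works with any POSIX awk.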
Hi,
I need help with a complicated file that I am working on. I want to extract important info from a very large space-delimited file; I have hundreds of thousands of records in it. An example of the input file's content is below:
##
ID Ser402 Old; 23... (2 Replies)
Hi,
In a file, I have to mark duplicate records as 'D' and the latest record alone as 'C'.
In the below file, I have to identify whether duplicate records exist based on Man_ID, Man_DT, Ship_ID, and I have to mark the record with the latest Ship_DT as "C" and the others as "D" (I have to create... (7 Replies)
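A two-pass awk sketch of the C/D flagging; the column positions and sample data below are assumptions, since the real layout wasn't shown. Fields 1-3 stand in for Man_ID, Man_DT, Ship_ID and field 4 for Ship_DT:

```shell
cat > ship.txt <<'EOF'
M1 20240101 S1 20240105
M1 20240101 S1 20240110
M2 20240202 S2 20240203
EOF

# pass 1: remember the max Ship_DT ($4) per key; pass 2: flag each row
awk 'NR == FNR { k = $1 FS $2 FS $3; if ($4 > max[k]) max[k] = $4; next }
     { k = $1 FS $2 FS $3; print $0, ($4 == max[k] ? "C" : "D") }' ship.txt ship.txt
```

Note that if two records tie on the latest Ship_DT, both get "C"; a tiebreaker would need an extra rule.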
Hi all,
I have a file containing two fields with 154 rows/records/lines (forgive me, my UNIX terminology is not quite up to par yet). I am trying to read from this list, find a value (let's say 0), then print the record/line/row number that the value falls on (in this case it would be record/line/row #27)... (5 Replies)
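A sketch of that lookup, assuming the value to find lives in field 2 of an invented two-field file; NR is awk's built-in record number:

```shell
cat > vals.txt <<'EOF'
a 3
b 0
c 7
EOF

# print the record number wherever field 2 equals 0
awk '$2 == 0 { print "found at record " NR }' vals.txt
```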
Hi Gurus,
Do any kind souls happen to have the same script as mentioned here:
find and compare filenames in different mount points and remove the duplicates.
Thanks a million!!!
wanna13e (7 Replies)
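A sketch of the compare-and-remove idea using two scratch directories (the real mount points would go in their place); it only reports the duplicate names rather than deleting anything:

```shell
mkdir -p mnt_a mnt_b                 # stand-ins for the two mount points
touch mnt_a/f1 mnt_a/f2 mnt_b/f2 mnt_b/f3

ls mnt_a | sort > a.list
ls mnt_b | sort > b.list
comm -12 a.list b.list > dups.list   # names present in BOTH directories
cat dups.list
# once reviewed, deletion from one side could be:
#   while read -r f; do rm "mnt_b/$f"; done < dups.list
```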
I have got a requirement like below.
I have input file which contains following fixed width records.
00000000000088500232007112007111
I need the full record concatenated with ~ and characters 1 to 5, concatenated with ~ and characters 10 to 15.
The output will be like... (1 Reply)
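A sketch of that substr-based cut applied to the sample record above; substr(s, start, length) takes a length, so characters 10-15 are substr($0, 10, 6):

```shell
# full record ~ chars 1-5 ~ chars 10-15
echo 00000000000088500232007112007111 |
awk '{ print $0 "~" substr($0, 1, 5) "~" substr($0, 10, 6) }'
```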
Hi all,
I have to remove duplicate lines in a file without changing the order. For example, if I have the records
pqr
def
abc
lmn
pqr
abc
mkh
hgf
the output should be
pqr
def
abc
lmn
mkh
hgf (7 Replies)
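The classic awk one-liner for this requirement keeps each line's first occurrence and drops the repeats, preserving order:

```shell
# seen[$0]++ is 0 (false) the first time a line appears, so !seen[$0]++
# is true exactly once per distinct line
printf '%s\n' pqr def abc lmn pqr abc mkh hgf |
awk '!seen[$0]++'
```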
All,
I have a task to search through several hundred files and extract duplicate detail records and keep them grouped with their header record. If no duplicate detail record exists, don't pull the header. For example, an input file could look like this:
input.txt
HA
D1
D2
D2
D3
D4
D4... (17 Replies)
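A sketch that prints, per header group, only the detail lines occurring more than once, and skips groups with no duplicates; the record layout (headers start with "HA") is assumed from the example:

```shell
cat > groups.txt <<'EOF'
HA1
D1
D2
D2
HA2
D3
D4
EOF

# buffer each group's details and their counts; on the next header
# (or EOF) emit the header plus only the duplicated detail lines
awk '/^HA/ { emit(); hdr = $0; n = 0; split("", cnt); next }
     { det[++n] = $0; cnt[$0]++ }
     END { emit() }
     function emit(  i, out) {
       for (i = 1; i <= n; i++)
         if (cnt[det[i]] > 1) out = out det[i] "\n"
       if (out != "") printf "%s\n%s", hdr, out
     }' groups.txt
```

Here the HA1 group is printed with both D2 occurrences, while the HA2 group is suppressed entirely because it has no duplicate detail.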