Hi, I'm sorry, I'm no coder, so I came here counting on your free time and goodwill to ask for some good code. I'll try to be quick and concise!
Got file with 50k lines like this:
The problem is that somewhere (anywhere) in the file a similar line may appear (usually not exactly the same), which needs to be recognized as a duplicate and deleted!
My example of what could be found and should be recognized (and deleted) as a duplicate:
So I guess the algorithm should basically do this:
1. from each line, read only letters [a-z], [A-Z] and numbers [0-9], and disregard any spacing, special characters or punctuation
2. compare it with every other line (normalized the same way, a-Z and 0-9) and, if the same arrangement of letters and numbers is found (ignoring spacing, case, special chars...), delete one of the two lines (doesn't matter which one)
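The two steps above can be sketched as a single awk pass: normalize each line to its lowercase letters and digits, and keep a line only the first time its normalized key is seen. The file name questions.txt and the sample lines are placeholders, not the real trivia data.

```shell
# tiny stand-in sample for the real 50k-line file
printf '%s\n' \
  'What is 2+2?*4' \
  'what is 2 + 2 ?? * 4' \
  'Capital of France*Paris' > questions.txt

awk '{
    key = tolower($0)             # ignore case
    gsub(/[^a-z0-9]/, "", key)    # strip spacing, punctuation, special chars
    if (!seen[key]++) print       # keep only the first line per key
}' questions.txt
# prints:
# What is 2+2?*4
# Capital of France*Paris
```

Because awk hashes the keys, this is one pass over the file rather than comparing every line with every other line.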
One slight observation, though: I had some 200 lines in the file that differed only by numbers, and this code would (incorrectly) count them as duplicates.
Not sure what you mean...can you post a sample of how that input file looks like...
Sure, it is a 5 MB compilation of trivia questions, one question per row with * as the separator before the answer (the file will be used by an IRC trivia bot). The aim is to automatically weed out as many duplicate questions as possible. There is a sample in my first post, but here is a bigger chunk of the file: www.pastebin.com/u1a1ZGHr, which also shows the entries that get selected as duplicates and deleted with your code; these are the ones starting with "Algebra : "
thx, tyler_durden, will try this perl code in a moment
edit:
tyler_durden's perl code shrunk the questions file from 55983 lines to 20915
shamrock's awk code shrunk it from 55983 lines to 40724
I have yet to compare them in detail (manually? :<), but I think the perl code ate too many 'duplicates'. I can't believe it's more than half, but I don't know yet; I may be wrong, I have to confirm.
Is * the only non-alphanumeric character in the input file? That would make it easy... but is that really the case? Your original post had others... so if you define it clearly, a better awk solution can be given...
Hi
I need to delete duplicate-like pattern lines from a text file, where each set contains only 2 duplicates (one being a subset of the other), preferably using sed or awk.
Input:
FM:Chicago:Development
FM:Chicago:Development:Score
SR:Cary:Testing:Testcases
PM:Newyork:Scripting
PM:Newyork:Scripting:Audit... (6 Replies)
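One hedged awk sketch for the input above, under two assumptions not stated in the post: the two duplicates are always adjacent, and the longer (superset) line is the one to keep. It drops a line whenever it is a colon-delimited prefix of the line that follows it.

```shell
printf '%s\n' \
  'FM:Chicago:Development' \
  'FM:Chicago:Development:Score' \
  'SR:Cary:Testing:Testcases' \
  'PM:Newyork:Scripting' \
  'PM:Newyork:Scripting:Audit' |
awk '
NR > 1 {
    # if the previous line plus ":" is a prefix of this line, the
    # previous line is the subset duplicate: skip printing it
    if (index($0, prev ":") != 1) print prev
}
{ prev = $0 }
END { print prev }
'
# prints:
# FM:Chicago:Development:Score
# SR:Cary:Testing:Testcases
# PM:Newyork:Scripting:Audit
```

If the shorter line is the one to keep instead, the test just flips: print the previous line and suppress the current one when the prefix matches.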
Dear folks
I have a map file of around 54K lines, and some of the values in the second column have the same value; I want to find them and delete all of the same values. I looked over duplicate-removal commands, but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
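A two-pass awk sketch for this, assuming whitespace-separated columns (map.txt and the sample rows are placeholders): the first pass counts each column-2 value, and the second pass prints only lines whose value occurred exactly once, so every repeated value is removed entirely.

```shell
printf '%s\n' 'a 1' 'b 2' 'c 1' 'd 3' > map.txt   # tiny stand-in sample

awk 'NR == FNR { count[$2]++; next }   # pass 1: count column-2 values
     count[$2] == 1                    # pass 2: keep only unique ones
' map.txt map.txt
# prints:
# b 2
# d 3
```

Note the file is named twice on purpose; NR == FNR is only true while awk is reading the first copy.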
The question is not as simple as the title... I have a file that looks like this:
<string name="string1">RZ-LED</string>
<string name="string2">2.0</string>
<string name="string2">Version 2.0</string>
<string name="string3">BP</string>
I would like to check for duplicate entries of... (11 Replies)
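Since the post is truncated, here is only a sketch for spotting lines that reuse the same name attribute: splitting on double quotes puts the attribute value in field 2, assuming one <string> element per line as in the sample (the file contents below are copied from the post).

```shell
printf '%s\n' \
  '<string name="string1">RZ-LED</string>' \
  '<string name="string2">2.0</string>' \
  '<string name="string2">Version 2.0</string>' \
  '<string name="string3">BP</string>' |
awk -F'"' '/<string name=/ {
    if (seen[$2]++) print "duplicate name: " $2   # report repeated attributes
}'
# prints:
# duplicate name: string2
```

Changing the action from printing a message to `next`-vs-print would delete the repeated entries instead of just reporting them.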
Hi All,
I have a very huge file (4 GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. sort, uniq, and awk '!x[$0]++' are not working, as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
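When the set of unique lines no longer fits in awk's memory, one fallback is sort(1), which does an external merge sort spilling to disk; the trade-off is that the original line order is lost. A sketch (huge.txt is a placeholder, and the -T temp directory is an assumption you would point at a disk with a few GB free):

```shell
printf 'b\na\nb\nc\na\n' > huge.txt   # tiny stand-in for the 4 GB file

# -u keeps one copy of each line; -T chooses where the on-disk
# temporary merge files go. Output comes back sorted, not in
# original order.
sort -u -T "${TMPDIR:-/tmp}" huge.txt
# prints:
# a
# b
# c
```

If the original order matters, a common workaround is to prefix each line with its line number, sort -u on the line content, then re-sort on the number and strip it.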
hi :)
I need to delete partial duplicate lines
I have this in a file
sihp8027,/opt/cf20,1980182
sihp8027,/opt/oracle/10gRelIIcd,155200016
sihp8027,/opt/oracle/10gRelIIcd,155200176
sihp8027,/var/opt/ERP,10376312
and need to leave it like this:
sihp8027,/opt/cf20,1980182... (2 Replies)
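A sketch with awk, keyed on the first two comma-separated fields so that lines differing only in the third (size) field count as duplicates; the first occurrence of each key wins. The sample data is the one from the post.

```shell
printf '%s\n' \
  'sihp8027,/opt/cf20,1980182' \
  'sihp8027,/opt/oracle/10gRelIIcd,155200016' \
  'sihp8027,/opt/oracle/10gRelIIcd,155200176' \
  'sihp8027,/var/opt/ERP,10376312' |
awk -F, '!seen[$1 FS $2]++'   # key = fields 1 and 2; print first line per key
# prints:
# sihp8027,/opt/cf20,1980182
# sihp8027,/opt/oracle/10gRelIIcd,155200016
# sihp8027,/var/opt/ERP,10376312
```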
Hey all, a relative bash/script newbie trying solve a problem.
I've got a text file with lots of lines that I've been able to clean up and format with awk/sed/cut, but now I'd like to remove the lines with duplicate usernames, based on timestamp. Here's what the data looks like:
2007-11-03... (3 Replies)
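The sample is truncated, so this is only a sketch under two assumptions: the file is sorted oldest-to-newest, and the username is in column 3 (adjust $3 to the real column). It keeps the most recent line per username while preserving first-seen order; the sample rows are invented placeholders.

```shell
printf '%s\n' \
  '2007-11-03 10:00 alice login' \
  '2007-11-03 11:00 bob login' \
  '2007-11-03 12:00 alice logout' |
awk '{
    last[$3] = $0                          # latest line seen for this user
    if (!($3 in order)) order[$3] = ++n    # remember first-seen position
}
END {
    for (u in order) out[order[u]] = last[u]
    for (i = 1; i <= n; i++) print out[i]  # emit in first-seen order
}'
# prints:
# 2007-11-03 12:00 alice logout
# 2007-11-03 11:00 bob login
```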
Hi, please help me with how to remove duplicate lines in a file.
I have a file with a huge number of lines.
I want to remove selected lines from it.
And also, if duplicate lines exist, I want to delete the rest and just keep one of them.
Please help me with any unix commands, or even fortran... (7 Replies)
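For the "keep just one copy of each duplicate line" part, the standard awk one-liner keeps the first occurrence and preserves line order (input.txt and the fruit lines are placeholders):

```shell
printf 'apple\npear\napple\nplum\npear\n' > input.txt   # stand-in sample

awk '!seen[$0]++' input.txt   # print a line only the first time it appears
# prints:
# apple
# pear
# plum
```

The "remove selected lines" part depends on how the lines are selected, which the post doesn't say, so no example is attempted for that.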
OK, I have read several things on how to do this, but can't make it work. I am writing this to a file in vi, then calling it as an awk script.
So I need to search a file for duplicate lines, delete the duplicate lines, then write the result to another file, say /home/accountant/files/docs/nodup
... (2 Replies)
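Since the plan is to write the script with vi and call it as an awk script, here is a sketch of that workflow. The heredoc just stands in for editing nodup.awk in vi so the example is self-contained, and ./nodup stands in for /home/accountant/files/docs/nodup:

```shell
# contents you would put in nodup.awk with vi
cat > nodup.awk <<'EOF'
# print each line only the first time it is seen
!seen[$0]++
EOF

printf 'x\ny\nx\nz\n' > input.txt        # stand-in sample data
awk -f nodup.awk input.txt > ./nodup     # -f runs the script file
cat ./nodup
# prints:
# x
# y
# z
```

The usual mistake here is passing the script file as a plain argument instead of with -f, which makes awk treat the file name as the program text.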
Ok, here's what I'm trying to do. I need to get a listing of all the mount points on a system into a file, which is easy enough using something like "mount | awk '{print $1}'"
However, on a couple of systems, they have some mount points looking like this:
/stage
/stand
/usr
/MFPIS... (2 Replies)