Shell Programming and Scripting: Delete duplicate lines... with a twist! (Post 302575740 by shamrock, 22 November 2011, 04:55 PM)
Code:
awk '{s=tolower($0);gsub("[^a-z]","",s);x[s]=$0} END {for(i in x) print x[i]}' file
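The one-liner keys every line on a normalized copy of itself: lower-cased, with everything that is not a letter stripped out. Lines that differ only in case, spacing or punctuation therefore collapse to the same key; the last variant read wins, and output order is not preserved. A long-form, commented equivalent (readability only, same behaviour):

Code:
awk '
{
    s = tolower($0)           # normalize: lower-case the whole line
    gsub(/[^a-z]/, "", s)     # strip everything that is not a letter
    x[s] = $0                 # remember the latest original line for this key
}
END {
    for (i in x) print x[i]   # one survivor per key, in arbitrary order
}' file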

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

delete semi-duplicate lines from file?

Ok, here's what I'm trying to do. I need to get a listing of all the mount points on a system into a file, which is easy enough, just using something like "mount | awk '{print $1}'". However, a couple of systems have some mount points looking like this: /stage /stand /usr /MFPIS... (2 Replies)
Discussion started by: paqman
2 Replies
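The snippet cuts off before the actual twist on those systems, but for plain duplicates in the mount listing the output can be de-duplicated on the fly; a minimal sketch reusing the command from the question (mountpoints is just an example output file):

Code:
mount | awk '!seen[$1]++ { print $1 }' > mountpoints   # first field only, each value printed once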

2. UNIX for Dummies Questions & Answers

Delete duplicate lines and print to file

OK, I have read several things on how to do this, but can't make it work. I am writing this to a vi file then calling it as an awk script. So I need to search a file for duplicate lines, delete duplicate lines, then write the result to another file, say /home/accountant/files/docs/nodup ... (2 Replies)
Discussion started by: bfurlong
2 Replies
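For a task like that, the usual awk idiom keeps the first occurrence of each line, and the result can be redirected to the target file; a sketch (inputfile is a placeholder, the output path is the one named in the question):

Code:
awk '!seen[$0]++' inputfile > /home/accountant/files/docs/nodup   # first copy of each line survives, order preserved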

3. UNIX for Dummies Questions & Answers

How to delete or remove duplicate lines in a file

Hi, please help me remove duplicate lines in a file. I have a file with a huge number of lines. I want to remove selected lines in it. Also, if duplicate lines exist, I want to delete the rest and just keep one of them. Please help me with any unix commands or even fortran... (7 Replies)
Discussion started by: reva
7 Replies
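To keep exactly one copy of every line there are two common routes, sketched below with a hypothetical file name: the awk idiom preserves the original order, while sort -u does not.

Code:
awk '!seen[$0]++' file > file.nodup   # keep the first occurrence, original order kept
sort -u file > file.nodup             # keep one copy of each line, output is sorted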

4. UNIX for Dummies Questions & Answers

Delete lines with duplicate strings based on date

Hey all, a relative bash/script newbie trying to solve a problem. I've got a text file with lots of lines that I've been able to clean up and format with awk/sed/cut, but now I'd like to remove the lines with duplicate usernames based on time stamp. Here's what the data looks like: 2007-11-03... (3 Replies)
Discussion started by: mattv
3 Replies
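The sample data is truncated, so the following is only a sketch under stated assumptions: the timestamp starts each line (so a reverse sort puts the newest entries first) and the username is field 3. Keeping the first line per username then keeps the most recent record for each user.

Code:
sort -r file | awk '!seen[$3]++'   # assumption: field 3 is the username; adjust to the real layout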

5. UNIX for Dummies Questions & Answers

How to delete partial duplicate lines unix

hi :) I need to delete partial duplicate lines. I have this in a file:
sihp8027,/opt/cf20,1980182
sihp8027,/opt/oracle/10gRelIIcd,155200016
sihp8027,/opt/oracle/10gRelIIcd,155200176
sihp8027,/var/opt/ERP,10376312
and need to leave it like this: sihp8027,/opt/cf20,1980182... (2 Replies)
Discussion started by: C|KiLLeR|S
2 Replies
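In the sample the first two comma-separated fields repeat while the size in field 3 differs, so one sketch is to use fields 1 and 2 as the key and keep the first line seen per key (the truncated output does not show whether the first or the last size should survive):

Code:
awk -F, '!seen[$1 FS $2]++' file   # key = "host,filesystem"; first line per key is kept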

6. UNIX for Advanced & Expert Users

In a huge file, Delete duplicate lines leaving unique lines

Hi All, I have a huge file (4GB) which has duplicate lines. I want to delete duplicate lines, leaving unique lines. Sort, uniq, and awk '!x++' are not working, as they run out of buffer space. I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
Discussion started by: krishnix
16 Replies
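When a file is too big for in-memory tools, sort(1) falls back to an external merge sort on disk, so pointing its temporary files at a filesystem with room usually gets past the buffer-space errors; a sketch (the temp directory is only an example):

Code:
sort -T /path/with/space -u hugefile > hugefile.uniq   # -u keeps one copy per line; output is sorted, not in original order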

7. Shell Programming and Scripting

Delete lines in file containing duplicate strings, keeping longer strings

The question is not as simple as the title... I have a file, it looks like this:
<string name="string1">RZ-LED</string>
<string name="string2">2.0</string>
<string name="string2">Version 2.0</string>
<string name="string3">BP</string>
I would like to check for duplicate entries of... (11 Replies)
Discussion started by: raidzero
11 Replies
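A sketch for the thread above, assuming every relevant line has the form <string name="...">...</string> and that "longer" means the longer whole line: remember the longest line seen per name attribute and print the survivors at the end (their order is not preserved, and other lines pass through unchanged).

Code:
awk -F'"' '
/<string name=/ {
    name = $2                                    # the text between the first pair of quotes
    if (length($0) > length(best[name])) best[name] = $0
    next
}
{ print }                                        # non-<string> lines pass through
END { for (n in best) print best[n] }' file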

8. Shell Programming and Scripting

Delete duplicate rows

Hi, This is a followup to my earlier post:
him mno klm 20 76 . + . klm_mango unix_00000001;
alp fdc klm 123 456 . + . klm_mango unix_0000103;
her tkr klm 415 439 . + . klm_mango unix_00001043;
abc tvr klm 20 76 . + . klm_mango unix_00000001;
abc def klm 83 84 . + . klm_mango... (5 Replies)
Discussion started by: jacobs.smith
5 Replies
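The snippet does not say which columns define a duplicate row; purely as a sketch, assuming the last field (the unix_... identifier) is the key and the first row per key should survive:

Code:
awk '!seen[$NF]++' file   # assumption: the last field identifies a duplicate row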

9. Shell Programming and Scripting

Find duplicate values in specific column and delete all the duplicate values

Dear folks, I have a map file of around 54K lines, and some of the values in the second column have the same value; I want to find them and delete all of the same values. I looked over duplicate-removal commands, but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
Discussion started by: sajmar
4 Replies
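Because the goal is to drop every line whose second-column value occurs more than once (rather than keep one copy), a two-pass awk fits: count the values on the first pass, then print only the lines whose value was seen exactly once. A sketch with a hypothetical file name:

Code:
awk 'NR == FNR { count[$2]++; next } count[$2] == 1' mapfile mapfile   # same file read twice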

10. UNIX for Beginners Questions & Answers

Delete duplicate like pattern lines

Hi, I need to delete duplicate like-pattern lines from a text file containing 2 duplicates only (one being a subset of the other), using sed or awk preferably. Input:
FM:Chicago:Development
FM:Chicago:Development:Score
SR:Cary:Testing:Testcases
PM:Newyork:Scripting
PM:Newyork:Scripting:Audit... (6 Replies)
Discussion started by: tech_frk
6 Replies
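In the sample each unwanted line is a strict prefix of another line that continues with a ':'; assuming the longer variant is the one to keep, one sketch is to read the file twice and drop any line that some longer line extends (the quadratic inner loop is fine for small files):

Code:
awk '
NR == FNR { line[FNR] = $0; n = FNR; next }    # first pass: store every line
{
    pref = $0 ":"                              # drop this line if a longer line starts with it
    keep = 1
    for (i = 1; i <= n; i++)
        if (index(line[i], pref) == 1) { keep = 0; break }
    if (keep) print
}' file file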