OK, here's what I'm trying to do. I need to get a listing of all the mount points on a system into a file, which is easy enough using something like "mount | awk '{print $1}'".
However, on a couple of systems, they have some mount points looking like this:
/stage
/stand
/usr
/MFPIS... (2 Replies)
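A minimal sketch of the first step, assuming Linux-style `mount` output of the form `DEV on DIR type FS (opts)`; on some systems (e.g. HP-UX) the mount point is field 1 instead, which is presumably why the poster used `$1`:

```shell
# Print only the mount-point column; with "DEV on DIR type FS (opts)"
# output the mount point is field 3, not field 1 (field 1 is the device).
mount | awk '{print $3}' > /tmp/mountpoints.txt
```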
OK, I have read several things on how to do this, but can't make it work. I am writing this in vi to a file, then calling it as an awk script.
So I need to search a file for duplicate lines, delete the duplicates, then write the result to another file, say /home/accountant/files/docs/nodup
... (2 Replies)
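A common one-liner for this task, as a sketch; the output path is from the post, and `infile` is a placeholder for the file being searched:

```shell
# Keep only the first occurrence of each line, preserving order:
# seen[$0]++ is 0 (false) the first time a line appears, so it prints.
awk '!seen[$0]++' infile > /home/accountant/files/docs/nodup
```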
Hi, please help me with how to remove duplicate lines in a file.
I have a file with a huge number of lines.
I want to remove selected lines from it.
Also, if duplicate lines exist, I want to delete the rest and keep just one of them.
Please help me with any Unix commands or even Fortran... (7 Replies)
Hey all, a relative bash/script newbie trying to solve a problem.
I've got a text file with lots of lines that I've been able to clean up and format with awk/sed/cut, but now I'd like to remove the lines with duplicate usernames based on time stamp. Here's what the data looks like:
2007-11-03... (3 Replies)
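Since the sample is truncated, here is a sketch under a hypothetical layout: field 1 is the timestamp and field 2 is the username, whitespace-separated. Sorting by user and then by timestamp descending lets a first-occurrence filter keep the newest line per user:

```shell
# Hypothetical layout: $1 = timestamp, $2 = username. Sort by user,
# then timestamp descending, then keep the first line seen per user,
# i.e. the newest entry for that user.
sort -k2,2 -k1,1r access.txt | awk '!seen[$2]++'
```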
hi :)
I need to delete partial duplicate lines
I have this in a file
sihp8027,/opt/cf20,1980182
sihp8027,/opt/oracle/10gRelIIcd,155200016
sihp8027,/opt/oracle/10gRelIIcd,155200176
sihp8027,/var/opt/ERP,10376312
and need to leave it like this:
sihp8027,/opt/cf20,1980182... (2 Replies)
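In the sample the duplicate lines agree on the first two comma-separated fields and differ only in the last column, so one reading is "keep the first line per (host, path) pair". A sketch, with `sizes.csv` as a placeholder filename:

```shell
# Deduplicate on fields 1 and 2 (host and path), keeping the first
# line seen for each pair; FS is "," because of -F, so the key is
# the literal "host,path" prefix.
awk -F, '!seen[$1 FS $2]++' sizes.csv
```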
Hi All,
I have a very large file (4 GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. sort, uniq, and awk '!x++' are not working, as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
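Because `sort` does an external merge sort, running out of space is usually a temp-directory problem rather than a RAM limit. A sketch assuming GNU sort (`-S` is a GNU extension; `/var/tmp` is a placeholder for any filesystem with roughly 2x the input free):

```shell
# External merge sort handles files larger than RAM. -T points the
# temp files at a filesystem with enough room; -S caps the in-memory
# buffer (GNU extension). -u drops duplicate lines; note the original
# line order is not preserved.
sort -u -T /var/tmp -S 512M hugefile > unique.txt
```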
The question is not as simple as the title... I have a file; it looks like this:
<string name="string1">RZ-LED</string>
<string name="string2">2.0</string>
<string name="string2">Version 2.0</string>
<string name="string3">BP</string>
I would like to check for duplicate entries of... (11 Replies)
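The post is truncated, but one reading is "keep only the first `<string>` element for each `name="..."` value". A sketch, with `strings.xml` as a placeholder filename:

```shell
# Keep the first <string> element for each name="..." attribute and
# drop later duplicates; match() extracts the attribute text as the
# dedupe key, and lines without the attribute pass through unchanged.
awk 'match($0, /name="[^"]*"/) { if (seen[substr($0, RSTART, RLENGTH)]++) next }
     { print }' strings.xml
```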
Dear folks
I have a map file of around 54K lines. Some entries in the second column share the same value, and I want to find those and delete every line carrying a repeated value. I looked over the duplicate-handling commands, but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
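Removing every line whose column-2 value occurs more than once (not keeping one copy) can be done with two passes over the same file. A sketch assuming a whitespace-separated map file named `map.txt`:

```shell
# Pass 1 (NR==FNR): count each value in column 2.
# Pass 2: print only the lines whose column-2 value occurred once.
awk 'NR==FNR { count[$2]++; next } count[$2] == 1' map.txt map.txt
```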
Hi
I need to delete partially duplicated lines from a text file, where each duplicate appears only twice (one line being a subset of the other), preferably using sed or awk.
Input:
FM:Chicago:Development
FM:Chicago:Development:Score
SR:Cary:Testing:Testcases
PM:Newyork:Scripting
PM:Newyork:Scripting:Audit... (6 Replies)
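The expected output is truncated, so this sketch assumes the shorter line (the colon-delimited prefix) is the one to drop. Reverse sorting puts each longer line immediately before any line that is a prefix of it:

```shell
# After sort -r, a line that is a colon-delimited prefix of another
# comes right after the longer one; keep a line only when it is not a
# prefix of the last line kept. Output comes out reverse-sorted.
sort -r records.txt | awk 'index(prev, $0 ":") != 1 { print; prev = $0 }'
```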
Discussion started by: tech_frk
6 Replies
LEARN ABOUT DEBIAN
Perl::Critic::Statistics(3pm)     User Contributed Perl Documentation     Perl::Critic::Statistics(3pm)
NAME
Perl::Critic::Statistics - Compile stats on Perl::Critic violations.
DESCRIPTION
This class accumulates statistics on Perl::Critic violations across one or more files. NOTE: This class is experimental and subject to
change.
INTERFACE SUPPORT
This is considered to be a non-public class. Its interface is subject to change without notice.
METHODS
"new()"
Create a new instance of Perl::Critic::Statistics. No arguments are supported at this time.
" accumulate( $doc, @violations ) "
Accumulates statistics about the $doc and the @violations that were found.
"modules()"
The number of chunks of code (usually files) that have been analyzed.
"subs()"
The total number of subroutines analyzed by this Critic.
"statements()"
The total number of statements analyzed by this Critic.
"lines()"
The total number of lines of code analyzed by this Critic.
"lines_of_blank()"
The total number of blank lines analyzed by this Critic. This includes only blank lines in code, not POD or data.
"lines_of_comment()"
The total number of comment lines analyzed by this Critic. This includes only lines whose first non-whitespace character is "#".
"lines_of_data()"
The total number of lines of data section analyzed by this Critic, not counting the "__END__" or "__DATA__" line. POD in a data section
is counted as POD, not data.
"lines_of_perl()"
The total number of lines of Perl code analyzed by this Critic. Perl appearing in the data section is not counted.
"lines_of_pod()"
The total number of lines of POD analyzed by this Critic. POD occurring in a data section is counted as POD, not as data.
"violations_by_severity()"
The number of violations of each severity found by this Critic as a reference to a hash keyed by severity.
"violations_by_policy()"
The number of violations of each policy found by this Critic as a reference to a hash keyed by full policy name.
"total_violations()"
The total number of violations found by this Critic.
"statements_other_than_subs()"
The total number of statements minus the number of subroutines. Useful because a subroutine is considered a statement by PPI.
"average_sub_mccabe()"
The average McCabe score of all scanned subroutines.
"violations_per_file()"
The total violations divided by the number of modules.
"violations_per_statement()"
The total violations divided by the number of statements minus subroutines.
"violations_per_line_of_code()"
The total violations divided by the lines of code.
AUTHOR
Elliot Shank "<perl@galumph.com>"
COPYRIGHT
Copyright (c) 2007-2011 Elliot Shank.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. The full text of this license
can be found in the LICENSE file included with this module.
perl v5.14.2 2012-06-07 Perl::Critic::Statistics(3pm)