Hi,
I have a challenging task: I have to find duplicate files by name and size, then take any one of those files. Then I need to open that file, search it for more than one pattern, and report the count of each pattern.
Note: these are samples of two files, but I can have more... (2 Replies)
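One way to sketch the duplicate-file task above, assuming the files sit under the current directory, that no path contains whitespace, and that PAT1/PAT2 are hypothetical stand-ins for the real patterns (GNU find for -printf):
# List files as "name size path", keep one path per name+size pair,
# then count lines matching each pattern in the chosen file.
find . -type f -printf '%f %s %p\n' | sort |
awk '!seen[$1,$2]++ {print $3}' |
while read -r f; do
    printf '%s PAT1=%s PAT2=%s\n' "$f" \
        "$(grep -c 'PAT1' "$f")" "$(grep -c 'PAT2' "$f")"
done
Note that grep -c counts matching lines; for total occurrences use grep -o 'PAT1' "$f" | wc -l instead.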
I've written a script to count the total size of SAN storage LUNs and also display the LUN sizes.
From server to server, the LUN sizes differ.
What I want to do is count the occurrences as they occur and change.
These are the LUN sizes:
49.95
49.95
49.95
49.95
49.95
49.95
49.95
49.95... (2 Replies)
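Because equal sizes arrive in adjacent runs, as in the sample, uniq -c alone does the counting; a minimal sketch, with lun_sizes.txt as a hypothetical input file:
# Prefix each run of identical adjacent lines with its count.
uniq -c lun_sizes.txt
If equal sizes could appear non-adjacently, sort first: sort lun_sizes.txt | uniq -c.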
Hi,
I have file 1.txt with the following entries, as shown:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
**
**
**
In file 2.txt I have the following entries as shown:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433... (4 Replies)
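The question is cut off here, but a common goal with two files like these is listing the lines they share or the lines unique to one side; a sketch using bash process substitution, assuming one of those goals:
# Lines present in both files (comm needs sorted input).
comm -12 <(sort 1.txt) <(sort 2.txt)
# Lines present only in 1.txt.
comm -23 <(sort 1.txt) <(sort 2.txt)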
Hi all,
I'm looking for some help. I have a very long file that is organized as shown below:
>Cluster 0
0 283nt, >01_FRYJ6ZM12HMXZS... at +/99%
1 279nt, >01_FRYJ6ZM12HN12A... at +/99%
2 281nt, >01_FRYJ6ZM12HM4TS... at +/99%
3 283nt, >01_FRYJ6ZM12HM946... at +/99%
4 279nt,... (4 Replies)
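The post is truncated before the actual question, but a frequent task with this CD-HIT-style cluster format is counting the members of each cluster; a hedged sketch assuming that goal, with clusters.txt as a hypothetical file name:
# Print each cluster header followed by how many member lines it has.
awk '/^>Cluster/ {if (name) print name, n; name = $0; n = 0; next}
     {n++}
     END {if (name) print name, n}' clusters.txt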
Hi,
I have a file with multiple rows of data. I want to match a pattern in each of two columns, and if both conditions are satisfied, increment a counter by 1 and finally print the count. How do I proceed...
I tried it this way...
awk -F, 'BEGIN {cnt = 0} {if $6 == "VLY278" &&... (6 Replies)
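The posted snippet fails because awk requires parentheses around an if condition, and the pattern-action form makes the if unnecessary anyway. A completed sketch; the second test, $7 == "VALUE2", is purely a hypothetical stand-in for the elided condition, and datafile is a placeholder name:
# Count rows where both column tests pass; cnt+0 prints 0 when nothing matches.
awk -F, '$6 == "VLY278" && $7 == "VALUE2" {cnt++} END {print cnt+0}' datafile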
I am trying to search a file for the pattern ERR- and return a count for each error reported.
The input file is free-flowing, without any fixed format.
example of output
ERR-00001=5
....
ERR-01010=10
.....
ERR-99999=10 (4 Replies)
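One hedged approach for the free-form input, assuming the codes always look like ERR- followed by digits and input.file is a hypothetical name:
# Pull every ERR-NNNNN token out of the file, tally each, and
# reshape the uniq -c output into CODE=count form.
grep -oE 'ERR-[0-9]+' input.file | sort | uniq -c |
awk '{print $2 "=" $1}'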
Hi,
I'm using the following code to extract the lines after the pattern match (and redirect them to a txt file), but the output includes the line containing the pattern match itself.
Which option should be used to exclude the line containing the pattern?
sed -n '/Conn.*User/,$p' > consumers.txt (11 Replies)
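Two hedged ways to drop the matching line itself, with logfile as a hypothetical input name:
# awk: set a flag on the match, start printing from the next line.
awk '/Conn.*User/ {found = 1; next} found' logfile > consumers.txt
# sed: delete from line 1 through the first match, print the rest.
sed '1,/Conn.*User/d' logfile > consumers.txt
Note the sed form misbehaves when the pattern matches on line 1, since the address 1,/re/ then runs on to the next match.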
The sample file:
dept1: user1,user2,user3
dept2: user4,user5,user6
dept3: user7,user8,user9
I want to match with '/^dept2.*/' but don't want the substring 'dept2:' in the output. How do I compose such a regex? (8 Replies)
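No special regex is needed if sed does both the matching and the stripping; a minimal sketch, with file as a placeholder name:
# Print only lines starting with "dept2", removing the "dept2:" prefix.
sed -n 's/^dept2: *//p' file
On the sample data this prints user4,user5,user6.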
Guys -
Need your ideas on a section of code to finish something up. To make a long story short, I'm parsing a print output file that goes to pre-printed forms. I'm intercepting it, parsing it, formatting it, cutting it up into individual pages, grabbing the text I want in zones, building an... (3 Replies)
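The post is cut off, but for the cutting-into-pages step one common trick is splitting the stream on form-feed characters; a sketch assuming GNU awk and \f page separators in a hypothetical print.out:
# Write each form-feed-delimited record to its own page_NNN file.
awk -v RS='\f' '{ f = sprintf("page_%03d", NR)
                  printf "%s", $0 > f
                  close(f) }' print.out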
Hi all!
Thanks for taking the time to view this!
I want to grep out all lines of a file that start with pattern 1 but do not match a second pattern.
Example:
Drink a soda
Eat a banana
Eat multiple bananas
Drink an apple juice
Eat an apple
Eat multiple apples
I... (8 Replies)
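Two equivalent hedged sketches for the example above, with food.txt as a placeholder file name:
# Lines that start with "Eat" but do not contain "apple".
grep '^Eat' food.txt | grep -v 'apple'
# The same test in a single awk invocation.
awk '/^Eat/ && !/apple/' food.txt
On the sample, both print only the two banana lines.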
LEARN ABOUT BSD
uniq
UNIQ(1)                   General Commands Manual                   UNIQ(1)

NAME
uniq - report repeated lines in a file
SYNOPSIS
uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]
DESCRIPTION
Uniq reads the input file comparing adjacent lines. In the normal case, the second and succeeding copies of repeated lines are removed;
the remainder is written on the output file. Note that repeated lines must be adjacent in order to be found; see sort(1). If the -u flag
is used, just the lines that are not repeated in the original file are output. The -d option specifies that one copy of just the repeated
lines is to be written. The normal mode output is the union of the -u and -d mode outputs.
The -c option supersedes -u and -d and generates an output report in default style but with each line preceded by a count of the number of
times it occurred.
The n arguments specify skipping an initial portion of each line in the comparison:
-n The first n fields together with any blanks before each are ignored. A field is defined as a string of non-space, non-tab characters separated by tabs and spaces from its neighbors.
+n The first n characters are ignored. Fields are skipped before characters.
SEE ALSO
sort(1), comm(1)

7th Edition                     April 29, 1985                      UNIQ(1)
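A quick illustration of the three modes described above, with data.txt as a hypothetical input (sorted first so repeats are adjacent):
sort data.txt | uniq -c    # every distinct line, prefixed with its count
sort data.txt | uniq -d    # one copy of each line that repeats
sort data.txt | uniq -u    # only the lines that never repeat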