Hi,
I have a file like this.
Please note that ./usr/orders1/order_new_2627 appears more than once and thus needs to be merged.
I would like to merge the lines where the first column matches,
so the output should be like this:
Please help (2 Replies)
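Since the thread's actual file is not shown, here is a minimal sketch of one way to do the merge in awk, using made-up sample data built around the one path the poster mentions: lines sharing the same first column are concatenated, in first-seen order.

```shell
# Illustrative sample input (the real file from the thread is truncated).
cat > orders.txt <<'EOF'
./usr/orders1/order_new_2627 a b
./usr/orders1/order_new_2628 c
./usr/orders1/order_new_2627 d
EOF

# Merge lines whose first column matches, preserving first-seen order.
awk '{
    key = $1
    rest = ""
    for (i = 2; i <= NF; i++) rest = rest " " $i
    if (!(key in merged)) order[++n] = key
    merged[key] = merged[key] rest
}
END {
    for (i = 1; i <= n; i++) print order[i] merged[order[i]]
}' orders.txt > merged_orders.txt
cat merged_orders.txt
```

With the sample above this prints the 2627 line once, with its values `a b d` merged onto it.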
I have a log file "logreport" that contains several lines as seen below:
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead... (4 Replies)
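Assuming the goal is to count how often each message occurs regardless of its timestamp (the thread is truncated, so this is a guess at the intent), one sketch is to strip the leading time field and feed the rest through `sort | uniq -c`:

```shell
# Sample log; the third line of the real thread is truncated, so all
# three are assumed to carry the same message here.
cat > logreport <<'EOF'
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
EOF

# Drop the timestamp (first space-separated field), then count
# identical messages. sort is needed because uniq only collapses
# adjacent duplicates.
cut -d' ' -f2- logreport | sort | uniq -c > counts.txt
cat counts.txt
```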
Greetings, I have been trying to merge the following lines:
Sat. May 9 8:00 PM
Sat. May 9 8:00 PM CW
Sat. May 9 8:00 PM CW Cursed
Sat. May 9 9:00 PM
Sat. May 9 9:00 PM CW
Sat. May 9 9:00 PM CW Sanctuary
Sat. May 16 8:00 PM
Sat. May 16 8:00 PM CW
Sat. May 16 8:00 PM CW Sanctuary
Sat. May... (2 Replies)
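In this listing each line is a longer version of the one before it, so "merging" amounts to keeping only the lines that are not a prefix of the line that follows. A sketch, assuming that reading of the truncated thread:

```shell
cat > schedule.txt <<'EOF'
Sat. May 9 8:00 PM
Sat. May 9 8:00 PM CW
Sat. May 9 8:00 PM CW Cursed
Sat. May 9 9:00 PM
Sat. May 9 9:00 PM CW
Sat. May 9 9:00 PM CW Sanctuary
EOF

# Print a line only when it is NOT a prefix of the next line,
# which leaves just the longest (fully merged) form of each entry.
awk 'NR > 1 && substr($0, 1, length(prev)) != prev { print prev }
     { prev = $0 }
     END { print prev }' schedule.txt > merged_schedule.txt
cat merged_schedule.txt
```

For the sample this keeps only `...CW Cursed` and `...CW Sanctuary`.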
Hello,
I have a large number of files under a root directory, with several sub-directories, and many of these sub-directories have similar files with similar names. I need to clean this up.
The filenames are of the format:
/path/to/dir/subdir/file name.dat
/path/to/dir/subdir/file name... (3 Replies)
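One way to start the cleanup is to list basenames that occur in more than one directory. A sketch on a small demo tree (the real layout from the thread is truncated); note this handles spaces in filenames because it splits on `/` only:

```shell
# Build a demo tree mirroring the thread's "file name.dat" pattern.
mkdir -p root/sub1 root/sub2
touch "root/sub1/file name.dat" "root/sub2/file name.dat" "root/sub1/other.dat"

# Count occurrences of each basename; print those appearing more than once.
find root -type f | awk -F/ '{ count[$NF]++ }
    END { for (f in count) if (count[f] > 1) print count[f], f }' > dupes.txt
cat dupes.txt
```

From here one would typically compare the duplicates with `cmp` or checksums before deleting anything.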
Hi,
I have a little problem with counting lines. I have seen similar topics on this forum, but they don't solve my problem. I have a file with lines like this:
2009-05-25 16:55:32,143 some text some regular expressions etc.
2009-05-25 16:55:32,144 some text.
2009-05-28 18:15:12,148 some... (4 Replies)
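The thread is cut off before the actual question, but a common task with such logs is counting lines per date. A sketch, assuming that is the intent, keyed on the first field:

```shell
cat > app.log <<'EOF'
2009-05-25 16:55:32,143 some text some regular expressions etc.
2009-05-25 16:55:32,144 some text.
2009-05-28 18:15:12,148 some more text.
EOF

# Count lines per date (first space-separated field).
awk '{ count[$1]++ } END { for (d in count) print d, count[d] }' app.log |
    sort > perday.txt
cat perday.txt
```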
Hello folks
I have a question for you gurus of sed or grep (maybe awk, but I would prefer the first two)
I have a file (f1) that says:
(Actually, these are not numbers but md5sums, but for simplicity, let's assume numbers.)
1
2
3
4
5
And I have a file (f2) that says:
1|a
1|b
1|c
2|d... (3 Replies)
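Reading this as "print the f2 lines whose first `|`-field appears in f1", an awk sketch loads f1's keys first, then filters f2. (Anchoring on the first field this way is safer than `grep -F -f f1 f2`, which would match the key anywhere on the line; that matters when the keys are md5sums, as the poster notes.)

```shell
cat > f1 <<'EOF'
1
2
EOF
cat > f2 <<'EOF'
1|a
1|b
1|c
2|d
3|e
EOF

# First pass (NR == FNR) records f1's keys; second pass prints f2
# lines whose first |-separated field is a recorded key.
awk -F'|' 'NR == FNR { keys[$1]; next } $1 in keys' f1 f2 > matched.txt
cat matched.txt
```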
Hi,
Pretty new to scripting (sed, awk, etc.). I'm trying to speed up calculations of disk space allocation. I've extracted the data I want and cleaned it up, but I can't figure out the final step. I need to find the maximum value of one field where the value of another field is the same, using awk,
so... (4 Replies)
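A grouped-maximum in awk is a one-liner over an associative array. A sketch with made-up filesystem/size data, since the poster's extracted data isn't shown:

```shell
cat > usage.txt <<'EOF'
fs1 100
fs1 250
fs2 80
fs2 40
EOF

# Track the maximum of field 2 for each distinct value of field 1.
awk '{ if (!($1 in max) || $2 > max[$1]) max[$1] = $2 }
     END { for (k in max) print k, max[k] }' usage.txt | sort > maxes.txt
cat maxes.txt
```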
Consider I have two files:
cat onlyviews1.sql
CREATE VIEW V11
AS
SELECT id,
name,
FROM
etc etc
WHERE etc etc;
CREATE VIEW V22
AS
SELECT id,
name,
FROM
etc etc
WHERE etc etc;
CREATE VIEW V33
AS (10 Replies)
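Assuming the goal is to split such a file into one file per view (the thread is truncated before the question), awk can switch output files every time it sees a `CREATE VIEW` line, naming each file after the view:

```shell
# Simplified stand-in for onlyviews1.sql; the thread's "etc etc"
# placeholders are replaced with runnable-looking SQL.
cat > onlyviews1.sql <<'EOF'
CREATE VIEW V11
AS
SELECT id
FROM t1
WHERE x = 1;
CREATE VIEW V22
AS
SELECT id
FROM t2
WHERE y = 2;
EOF

# Start a new output file at each CREATE VIEW line (field 3 is the
# view name), and send every line to the current file.
awk '/^CREATE VIEW/ { file = $3 ".sql" } { print > file }' onlyviews1.sql
ls V11.sql V22.sql
```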
I have 2 files, and I wish to count the number of lines with this characteristic:
if any token at line x in file1 is similar to a token at line x in file2.
Here's an example:
file1:
ab, abc
ef
fg
file2:
ab
cd ef
gh
In this case I wish to get 3.
Note that token of file1 are... (3 Replies)
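A sketch that compares line x of file1 against line x of file2 and counts lines whose token sets intersect. Note the hedge: with exact token matching this sample yields 2, not the 3 the poster expects, so their truncated definition of "similar" presumably also admits partial matches; the comparison in the inner `if` is the place to loosen.

```shell
cat > file1 <<'EOF'
ab, abc
ef
fg
EOF
cat > file2 <<'EOF'
ab
cd ef
gh
EOF

# First pass stores file1's lines; second pass splits both lines on
# spaces/commas and counts lines sharing at least one exact token.
awk 'NR == FNR { a[FNR] = $0; next }
{
    n = split(a[FNR], t1, /[ ,]+/); m = split($0, t2, /[ ,]+/)
    for (i = 1; i <= n; i++)
        for (j = 1; j <= m; j++)
            if (t1[i] != "" && t1[i] == t2[j]) { hits++; next }
}
END { print hits + 0 }' file1 file2 > hits.txt
cat hits.txt
```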
Hi,
I need to compare the /etc/passwd files from 2 servers and extract the users that are common to these two files. I sorted the 2 files based on the user IDs (UID) (3rd column). I first sorted the files using the username (1st column); however, when I use comm to compare the files there is no... (1 Reply)
Discussion started by: anaigini45
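The likely snag is that `comm` requires plain lexical sort order, and whole passwd lines differ between servers even for the same user (different shells, UIDs, etc.). A sketch that compares on the username field only, with small made-up passwd extracts since the real files aren't shown:

```shell
cat > passwd1 <<'EOF'
root:x:0:0:root:/root:/bin/sh
alice:x:1001:100:Alice:/home/alice:/bin/sh
bob:x:1002:100:Bob:/home/bob:/bin/sh
EOF
cat > passwd2 <<'EOF'
root:x:0:0:root:/root:/bin/bash
alice:x:1005:100:Alice:/home/alice:/bin/bash
carol:x:1003:100:Carol:/home/carol:/bin/sh
EOF

# Extract just the username (field 1), sort lexically as comm
# requires, then print only the lines common to both (-12 suppresses
# lines unique to either file).
cut -d: -f1 passwd1 | sort > users1
cut -d: -f1 passwd2 | sort > users2
comm -12 users1 users2 > common.txt
cat common.txt
```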
LEARN ABOUT BSD
uniq
UNIQ(1)                   General Commands Manual                  UNIQ(1)
NAME
uniq - report repeated lines in a file
SYNOPSIS
uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]
DESCRIPTION
Uniq reads the input file comparing adjacent lines. In the normal case, the second and succeeding copies of repeated lines are removed;
the remainder is written on the output file. Note that repeated lines must be adjacent in order to be found; see sort(1). If the -u flag
is used, just the lines that are not repeated in the original file are output. The -d option specifies that one copy of just the repeated
lines is to be written. The normal mode output is the union of the -u and -d mode outputs.
The -c option supersedes -u and -d and generates an output report in default style but with each line preceded by a count of the number of
times it occurred.
The n arguments specify skipping an initial portion of each line in the comparison:
-n The first n fields together with any blanks before each are ignored. A field is defined as a string of non-space, non-tab characters
separated by tabs and spaces from its neighbors.
+n The first n characters are ignored. Fields are skipped before characters.
SEE ALSO
sort(1), comm(1)
7th Edition                    April 29, 1985                      UNIQ(1)
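The adjacency requirement described above is the usual gotcha, so a minimal example pairs uniq with sort. Note this is an old 7th Edition manual page: modern implementations spell the field and character skips as `-f n` and `-s n` rather than the `-n` / `+n` forms shown here, though `-u`, `-d`, and `-c` behave as documented.

```shell
cat > words.txt <<'EOF'
apple
apple
banana
apple
EOF

# uniq only collapses ADJACENT duplicates, so sort first;
# -c prefixes each surviving line with its occurrence count.
sort words.txt | uniq -c > report.txt
cat report.txt
```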