Don, Guiliano means there are no more than two lines with the same ID in column 1.
And otherwise, rdrtx1's solution would handle it well, too.
--
In case the duplicate IDs are in adjacent lines, the following saves some memory
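The original code is not shown here, but a sketch of that idea (assuming duplicate IDs only ever appear on adjacent lines) only has to hold one line in memory instead of the whole file:

```shell
# Hold the current line; while the ID repeats, append fields 2..NF;
# print the merged line when the ID changes (and once more at EOF).
awk '
    $1 != prev { if (NR > 1) print line; line = $0; prev = $1; next }
    { for (i = 2; i <= NF; i++) line = line " " $i }
    END { if (NR > 0) print line }
' file
```
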
Hi,
I need to join two files based on the first column of both files. If the first column of the first file matches the first column of the second file, the lines should be merged together, then move on to the next line and check again. It is something like:
File one:
110001 abc efd
110002 fgh dfg
110003 ... (10 Replies)
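The usual awk idiom for this kind of two-file join is to load the second file into an array keyed on column 1, then append the stored line to each matching line of the first file (file names here are assumptions; this also assumes keys are unique in file2):

```shell
# NR==FNR is true only while reading the first argument (file2 here).
awk 'NR == FNR { two[$1] = $0; next }
     $1 in two { print $0, two[$1] }' file2 file1
```
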
Hi everyone,
I'm just wondering how I could use awk to merge two files by comparing one of their columns.
I mean, I have one file like this:
file#1:
21/07/2009 11:45:00 100.0000000 27.2727280
21/07/2009 11:50:00 75.9856644 25.2492676
21/07/2009 11:55:00 51.9713287 23.2258072... (4 Replies)
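Since the sample data keys on date plus time, fields 1 and 2 together can serve as the join key. A sketch, assuming both files have four fields and the file names file1/file2 (the second file is not shown in the post):

```shell
# Store fields 3-4 of file2 keyed on "date time", then append them to
# the matching line of file1.
awk 'NR == FNR { rest[$1 FS $2] = $3 FS $4; next }
     ($1 FS $2) in rest { print $0, rest[$1 FS $2] }' file2 file1
```
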
Input_file
data1 USA 100 ASE
data3 UK 20 GWQR
data4 Brazil 40 QWE
data2 Scotland 60 THWE
data5 USA 40 QWERR
Reference_file
USA 12312 34532
1324 Brazil 23321
231 3421 Scotland
342 34235 UK
231 141 England... (1 Reply)
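The requested output is not shown in the snippet. Since the country name sits in a different column on each Reference_file line, one plausible reading is "print each Input_file line whose country appears anywhere in Reference_file"; a sketch under that assumption:

```shell
# Mark every field of Reference_file as seen, then filter Input_file
# on its second column (the country name).
awk 'NR == FNR { for (i = 1; i <= NF; i++) seen[$i]; next }
     $2 in seen' Reference_file Input_file
```
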
Hi,
Can anyone suggest a quick way to get the desired output?
Sample input file content:
A 12 9
A -0.3 2.3
B 1.0 -4
C 34 1000
C -111 900
C 99 0.09
Output required:
A 12 9 -0.3 2.3
B 1.0 -4
C 34 1000 -111 900 99 0.09
Thanks (3 Replies)
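One way to produce exactly that output is to group on column 1, appending fields 2..NF of repeated keys while remembering first-seen order:

```shell
# Keep the first line of each key whole; append the data fields of any
# repeats; print the groups in their original order at the end.
awk '{
    if (!($1 in row)) { order[++n] = $1; row[$1] = $0 }
    else for (i = 2; i <= NF; i++) row[$1] = row[$1] " " $i
} END { for (j = 1; j <= n; j++) print row[order[j]] }' file
```
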
I am trying to merge two lines into one based on a matching condition.
The file is as follows:
Matches filter:
'request ', timestamp, <HTTPFlow
request=<GET:
Matches filter:
'request ', timestamp, <HTTPFlow
request=<GET:
Matches filter:
'request ', timestamp, <HTTPFlow
... (8 Replies)
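A sketch for collapsing each record onto one line, assuming (from the sample) that every record starts with a line matching "Matches filter:" and that the following lines belong to it:

```shell
# Start a new output line at each trigger pattern and glue the
# continuation lines onto it.
awk '/^Matches filter:/ { if (rec != "") print rec; rec = $0; next }
     { rec = rec " " $0 }
     END { if (rec != "") print rec }' file
```
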
Hi,
Has anyone merged data at the same column but on different rows before, using awk, sed, Perl, etc.?
Input File:
SSO12256
SSO0001
thiD-1
rbsK-1
SSO0006
SSO0007
SSO0008
SSO0009
SSO0010
SSO0011
Desired Output File: (5 Replies)
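The desired output is cut off in the snippet. If the goal is simply to merge all rows of the column onto one line, paste can do it without any scripting (input.txt is a placeholder name):

```shell
# -s serializes all input lines into one; -d sets the join delimiter.
paste -s -d ' ' input.txt
```
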
Hi ALL,
We have a requirement: in a file, I have multiple rows.
Example below:
Input file rows
01,1,102319,0,0,70,26,U,1,331,000000113200000011920000001212
01,1,102319,0,1,80,20,U,1,241,00000059420000006021
I need my output file to be as mentioned below. The last field should be split for... (4 Replies)
Discussion started by: kotra
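The requirement is truncated above, so this is only an assumption: the sample rows suggest the last field should be split into 10-character chunks, each becoming its own comma-separated field. A sketch under that assumption:

```shell
# Peel 10-character chunks off the last field and append them as
# separate comma-separated fields.
awk -F, -v OFS=, '{
    last = $NF; out = ""
    while (length(last) > 0) {
        out = out OFS substr(last, 1, 10)
        last = substr(last, 11)
    }
    $NF = ""; sub(/,$/, "")   # drop the original last field
    print $0 out
}' file
```
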
LEARN ABOUT BSD
uniq
UNIQ(1)                     General Commands Manual                    UNIQ(1)

NAME
     uniq - report repeated lines in a file

SYNOPSIS
     uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]

DESCRIPTION
     Uniq reads the input file comparing adjacent lines. In the normal case,
     the second and succeeding copies of repeated lines are removed; the
     remainder is written on the output file. Note that repeated lines must be
     adjacent in order to be found; see sort(1). If the -u flag is used, just
     the lines that are not repeated in the original file are output. The -d
     option specifies that one copy of just the repeated lines is to be
     written. The normal mode output is the union of the -u and -d mode
     outputs.

     The -c option supersedes -u and -d and generates an output report in
     default style but with each line preceded by a count of the number of
     times it occurred.

     The n arguments specify skipping an initial portion of each line in the
     comparison:

     -n   The first n fields together with any blanks before each are
          ignored. A field is defined as a string of non-space, non-tab
          characters separated by tabs and spaces from its neighbors.

     +n   The first n characters are ignored. Fields are skipped before
          characters.

SEE ALSO
     sort(1), comm(1)

7th Edition                    April 29, 1985                          UNIQ(1)
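A quick demonstration of the modes described above, on pre-sorted input: -c counts each run, -d prints one copy of only the repeated lines, and -u prints only the unrepeated ones.

```shell
printf 'a\na\nb\nc\nc\nc\n' | uniq -c   # counts: 2 a, 1 b, 3 c
printf 'a\na\nb\nc\nc\nc\n' | uniq -d   # prints: a, c
printf 'a\na\nb\nc\nc\nc\n' | uniq -u   # prints: b
```
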