Don, Guiliano means there are no more than two lines with the same ID in column 1.
And otherwise, rdrtx1's solution would even handle it well.
--
In case the duplicate IDs are in adjacent lines, the following saves some memory
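The snippet itself was not preserved in the thread, but the idea can be sketched (my own reconstruction, with made-up sample data): when duplicate IDs are guaranteed adjacent, awk only needs to remember the previous key, instead of holding the whole file in an array.

```shell
# Merge adjacent lines that share column 1, keeping only one line in memory.
printf '1 a\n1 b\n2 c\n' | awk '
    $1 == prev { sub(/^[^ \t]+[ \t]+/, ""); printf " %s", $0; next }  # same ID: append rest of line
    NR > 1     { print "" }                                          # terminate previous record
    { prev = $1; printf "%s", $0 }                                   # start a new record
    END { print "" }'
# -> 1 a b
#    2 c
```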
Hi,
I need to join two files based on the first column of both files. If the first column of the first file matches the first column of the second file, the lines should be merged, and then we move on to the next line. It is something like:
File one:
110001 abc efd
110002 fgh dfg
110003 ... (10 Replies)
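The classic awk two-file join would cover this; a minimal sketch (the second file's contents below are invented for illustration, since the thread truncates the sample):

```shell
cat > file1 <<'EOF'
110001 abc efd
110002 fgh dfg
EOF
cat > file2 <<'EOF'
110001 xyz
110002 uvw
EOF
# First pass (NR == FNR) caches file2, keyed by column 1;
# second pass appends the cached remainder to matching file1 lines.
awk 'NR == FNR { key = $1; sub(/^[^ \t]+[ \t]+/, ""); rest[key] = $0; next }
     $1 in rest { print $0, rest[$1] }' file2 file1
# -> 110001 abc efd xyz
#    110002 fgh dfg uvw
```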
Hi everyone,
I'm just wondering how I could use awk to merge two files by comparing one of their fields.
I mean, I have one file like this:
file#1:
21/07/2009 11:45:00 100.0000000 27.2727280
21/07/2009 11:50:00 75.9856644 25.2492676
21/07/2009 11:55:00 51.9713287 23.2258072... (4 Replies)
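For timestamped data like this, the join key is the date and time fields together. A sketch, assuming a hypothetical second file keyed by the same timestamps (the thread does not show file#2):

```shell
cat > readings1 <<'EOF'
21/07/2009 11:45:00 100.0000000 27.2727280
21/07/2009 11:50:00 75.9856644 25.2492676
EOF
cat > readings2 <<'EOF'
21/07/2009 11:45:00 7.5
21/07/2009 11:50:00 8.1
EOF
# Key on "$1 $2" (date + time) so both fields must match.
awk 'NR == FNR { val[$1 " " $2] = $3; next }
     ($1 " " $2) in val { print $0, val[$1 " " $2] }' readings2 readings1
# -> 21/07/2009 11:45:00 100.0000000 27.2727280 7.5
#    21/07/2009 11:50:00 75.9856644 25.2492676 8.1
```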
Input_file
data1 USA 100 ASE
data3 UK 20 GWQR
data4 Brazil 40 QWE
data2 Scotland 60 THWE
data5 USA 40 QWERR
Reference_file
USA 12312 34532
1324 Brazil 23321
231 3421 Scotland
342 34235 UK
231 141 England... (1 Reply)
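The question text itself was not preserved here; a common ask with data shaped like this is to keep Input_file lines whose country (field 2) appears anywhere on a Reference_file line. A sketch under that assumption:

```shell
cat > Reference_file <<'EOF'
USA 12312 34532
1324 Brazil 23321
231 3421 Scotland
342 34235 UK
231 141 England
EOF
cat > Input_file <<'EOF'
data1 USA 100 ASE
data3 UK 20 GWQR
data4 Brazil 40 QWE
data2 Scotland 60 THWE
data5 USA 40 QWERR
EOF
# Collect every whitespace-separated token of Reference_file, then keep
# Input_file lines whose second field is among those tokens.
awk 'NR == FNR { for (i = 1; i <= NF; i++) seen[$i]; next }
     $2 in seen' Reference_file Input_file
```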
Hi,
Can anyone suggest a quick way to get the desired output?
Sample input file content:
A 12 9
A -0.3 2.3
B 1.0 -4
C 34 1000
C -111 900
C 99 0.09
Output required:
A 12 9 -0.3 2.3
B 1.0 -4
C 34 1000 -111 900 99 0.09
Thanks (3 Replies)
I am trying to merge two lines into one based on some matching condition.
The file is as follows:
Matches filter:
'request ', timestamp, <HTTPFlow
request=<GET:
Matches filter:
'request ', timestamp, <HTTPFlow
request=<GET:
Matches filter:
'request ', timestamp, <HTTPFlow
... (8 Replies)
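Since each record starts at a "Matches filter:" line, one approach is to use that line as a record delimiter and join everything up to the next one; a sketch on the sample above:

```shell
printf "Matches filter:\n'request ', timestamp, <HTTPFlow\nrequest=<GET:\nMatches filter:\n'request ', timestamp, <HTTPFlow\nrequest=<GET:\n" |
awk '/^Matches filter:/ { if (buf != "") print buf; buf = $0; next }  # flush previous record
     { buf = buf " " $0 }                                             # continuation line: append
     END { if (buf != "") print buf }'
# -> Matches filter: 'request ', timestamp, <HTTPFlow request=<GET:
#    (one merged line per "Matches filter:" block)
```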
Hi,
Has anyone previously merged data in the same column but different rows using awk, sed, perl, etc.?
Input File:
SSO12256
SSO0001
thiD-1
rbsK-1
SSO0006
SSO0007
SSO0008
SSO0009
SSO0010
SSO0011
Desired Output File: (5 Replies)
Hi ALL,
We have a requirement: in a file, I have multiple rows.
Example below:
Input file rows
01,1,102319,0,0,70,26,U,1,331,000000113200000011920000001212
01,1,102319,0,1,80,20,U,1,241,00000059420000006021
I need my output file to be as mentioned below. The last field should be split for... (4 Replies)
Discussion started by: kotra
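The desired output was truncated in the thread, but the last fields here are exact multiples of 10 characters, so a plausible reading is splitting the last field into fixed 10-character chunks, one output row per chunk (that chunk width is my assumption, not stated in the thread):

```shell
printf '01,1,102319,0,1,80,20,U,1,241,00000059420000006021\n' |
awk -F, -v OFS=, '{
    head = $0; sub(/,[^,]*$/, "", head)       # everything before the last field
    last = $NF
    for (i = 1; i <= length(last); i += 10)   # assumed chunk width: 10 chars
        print head, substr(last, i, 10)
}'
# -> 01,1,102319,0,1,80,20,U,1,241,0000005942
#    01,1,102319,0,1,80,20,U,1,241,0000006021
```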
UNIQ(1)                              FSF                              UNIQ(1)

NAME
uniq - remove duplicate lines from a sorted file
SYNOPSIS
uniq [OPTION]... [INPUT [OUTPUT]]
DESCRIPTION
Discard all but one of successive identical lines from INPUT (or standard input), writing to OUTPUT (or standard output).
Mandatory arguments to long options are mandatory for short options too.
-c, --count
prefix lines by the number of occurrences
-d, --repeated
only print duplicate lines
-D, --all-repeated[=delimit-method]
print all duplicate lines; delimit-method={none(default),prepend,separate}. Delimiting is done with blank lines.
-f, --skip-fields=N
avoid comparing the first N fields
-i, --ignore-case
ignore differences in case when comparing
-s, --skip-chars=N
avoid comparing the first N characters
-u, --unique
only print unique lines
-w, --check-chars=N
compare no more than N characters in lines
--help display this help and exit
--version
output version information and exit
A field is a run of whitespace, then non-whitespace characters. Fields are skipped before chars.
AUTHOR
Written by Richard Stallman and David MacKenzie.
REPORTING BUGS
Report bugs to <bug-coreutils@gnu.org>.
COPYRIGHT
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
SEE ALSO
The full documentation for uniq is maintained as a Texinfo manual. If the info and uniq programs are properly installed at your site, the
command
info uniq
should give you access to the complete manual.
uniq (coreutils) 4.5.3 February 2003 UNIQ(1)
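A quick demonstration of the main options described above, runnable against GNU coreutils uniq:

```shell
# Note: uniq only collapses *successive* identical lines, so input
# should be sorted (or already grouped).
printf 'a\na\nb\nc\nc\nc\n' | uniq        # -> a, b, c (one of each)
printf 'a\na\nb\nc\nc\nc\n' | uniq -c     # prefix each line with its count
printf 'a\na\nb\nc\nc\nc\n' | uniq -d     # only the repeated lines: a, c
printf 'a\na\nb\nc\nc\nc\n' | uniq -u     # only the never-repeated line: b
```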