01-10-2008
awk '{print $3}' file.name > new.file.name
Then do the same for the second file, but append (>>) to the same output file. Sort the combined file and run "uniq -d", which prints only the duplicated lines (uniq only detects adjacent repeats, hence the sort).
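Put together, that pipeline looks roughly like this (a sketch with placeholder file names and sample data, since the original files aren't shown):

```shell
# Placeholder inputs standing in for file.name and the second file
printf 'a x 101\nb y 202\n' > file1.txt
printf 'c z 202\nd w 303\n' > file2.txt

# Extract the third column of each file; note >> appends the second run
awk '{print $3}' file1.txt  > combined.txt
awk '{print $3}' file2.txt >> combined.txt

# uniq -d reports only adjacent repeats, so sort the combined list first
sort combined.txt | uniq -d   # prints: 202
```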
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
I have two files that I want to compare, outputting a new file that contains the duplicates. I have tried comm -12 and it doesn't work. Any help would be appreciated.
Thanks,
Barbara (2 Replies)
Discussion started by: blt123
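For what it's worth, the usual reason comm -12 appears not to work is unsorted input: comm requires both files to be sorted. A minimal sketch with made-up data:

```shell
printf '3\n1\n2\n' > a.txt
printf '2\n3\n4\n' > b.txt

# comm expects sorted input, so sort both files first
sort a.txt > a.sorted
sort b.txt > b.sorted

# -12 suppresses lines unique to each file, leaving only the common ones
comm -12 a.sorted b.sorted   # prints: 2 and 3
```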
2. Shell Programming and Scripting
Hi ,
I'm just trying to find a way to compare these 2 files and output the unique lines.
For eg:
1.txt contains
1
2
3
4
5
6
--------------------------------------
2.txt contains
1
2
6
8 (1 Reply)
Discussion started by: rauphelhunter
3. Shell Programming and Scripting
I have made several attempts to read two files of ip addresses and eliminate records from file1 that are in file2.
My latest attempt follows. Everything works except my file3 is exactly the same as file1 and it should not be.
#!/usr/bin/bash
#
# NoInterfaces
# Utility will create a file... (8 Replies)
Discussion started by: altamaha
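One common approach to this kind of filtering is grep's -f option; a minimal sketch with placeholder addresses (not the poster's actual script):

```shell
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > file1
printf '10.0.0.2\n' > file2

# -v invert the match, -x match whole lines, -F treat patterns as fixed
# strings, -f read the patterns from file2
grep -vxFf file2 file1 > file3   # file3: 10.0.0.1 and 10.0.0.3
```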
4. Shell Programming and Scripting
Hi,
I have two different files; one has two columns and the other has only one column. I would like to compare the first column of the first file with the data in the second file and write a third file with the data that is not common to them.
First file:... (26 Replies)
Discussion started by: swame_sp
5. Shell Programming and Scripting
I have two files:
1st one
ALIC-000352-B
ALIC-000916-O
DDS-STNGD
FDH-PPO1-001
PFG-30601-001
2nd one
'ALIC-000352-B'
'ALIC-000916-O'
'DDS-STNGD'
'FDH-PPO1-001' (4 Replies)
Discussion started by: Pratik4891
6. Shell Programming and Scripting
Hi
My file has 7 columns and is pipe-delimited:
Col1|Col2|col3|Col4|col5|Col6|Col7
I want to find the unique record count on col3, col4 and col2 (in that order). How can I achieve it?
ex
1|3|A|V|C|1|1
1|3|A|V|C|1|1
1|4|A|V|C|1|1
Output should be
FREQ|A|V|3|2
FREQ|A|V|4|1
Here... (5 Replies)
Discussion started by: sanranad
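Using the sample rows above, one awk sketch that groups on col3, col4 and col2 (an assumption about the intended output format):

```shell
printf '1|3|A|V|C|1|1\n1|3|A|V|C|1|1\n1|4|A|V|C|1|1\n' > data.txt

# Count each (col3, col4, col2) combination and print FREQ|key|count
awk -F'|' '{ c[$3 "|" $4 "|" $2]++ }
     END  { for (k in c) print "FREQ|" k "|" c[k] }' data.txt | sort
# prints:
#   FREQ|A|V|3|2
#   FREQ|A|V|4|1
```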
7. Shell Programming and Scripting
Hi Friends,
This is the only solution to my task. So, any help is highly appreciated.
I have a file
cat input1.bed
chr1 100 200 abc
chr1 120 300 def
chr1 145 226 ghi
chr2 567 600 unix
Now, I have another file by name
input2.bed (This file is a binary file not readable by the... (7 Replies)
Discussion started by: jacobs.smith
8. UNIX for Dummies Questions & Answers
Hi,
Please help me compare two files and report any mismatches in the 2nd and 3rd columns' values corresponding to the 1st column.
file1
15294024|Not Allowed|null
15291398|Not Allowed|null
15303292|Dropship (standard)|N
15303291|Dropship (standard)|N
15275561|Store Only|Y
15275560|Store Only|Y... (2 Replies)
Discussion started by: Ankita Talukdar
9. UNIX for Dummies Questions & Answers
Hi
Please help me to compare two files and output into a new file
file1.txt
15114933 |4001
15291649 |933502
15764675 |4316
15764678 |4316
15761974 |282501
15673104 |933505
15673577 |933505
15673098 |933505
15673096 |933505
15673092 |933505
15760705 ... (13 Replies)
Discussion started by: Ankita Talukdar
10. Shell Programming and Scripting
Hello Friends,
I would like to compare two files, write the differences between the two into an output file, and then search that output for a given pattern.
-bash-3.2$ cat BS_Orig_20141112.csv|head -20
BW0159574451211141638275086@196.35.130.5
BW02043750712111491637691@196.35.130.5... (2 Replies)
Discussion started by: kekanap
UNIQ(1) User Commands UNIQ(1)
NAME
uniq - report or omit repeated lines
SYNOPSIS
uniq [OPTION]... [INPUT [OUTPUT]]
DESCRIPTION
Filter adjacent matching lines from INPUT (or standard input), writing to OUTPUT (or standard output).
With no options, matching lines are merged to the first occurrence.
Mandatory arguments to long options are mandatory for short options too.
-c, --count
prefix lines by the number of occurrences
-d, --repeated
only print duplicate lines, one for each group
-D print all duplicate lines
--all-repeated[=METHOD]
like -D, but allow separating groups with an empty line; METHOD={none(default),prepend,separate}
-f, --skip-fields=N
avoid comparing the first N fields
--group[=METHOD]
show all items, separating groups with an empty line; METHOD={separate(default),prepend,append,both}
-i, --ignore-case
ignore differences in case when comparing
-s, --skip-chars=N
avoid comparing the first N characters
-u, --unique
only print unique lines
-z, --zero-terminated
line delimiter is NUL, not newline
-w, --check-chars=N
compare no more than N characters in lines
--help display this help and exit
--version
output version information and exit
A field is a run of blanks (usually spaces and/or TABs), then non-blank characters. Fields are skipped before chars.
Note: 'uniq' does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use 'sort -u' without
'uniq'. Also, comparisons honor the rules specified by 'LC_COLLATE'.
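A quick illustration of that adjacency rule:

```shell
printf 'b\na\nb\n' > lines.txt

uniq -d lines.txt          # prints nothing: the repeated 'b' lines are not adjacent
sort lines.txt | uniq -d   # prints: b
```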
AUTHOR
Written by Richard M. Stallman and David MacKenzie.
REPORTING BUGS
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Report uniq translation bugs to <http://translationproject.org/team/>
COPYRIGHT
Copyright (C) 2017 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.
SEE ALSO
comm(1), join(1), sort(1)
Full documentation at: <http://www.gnu.org/software/coreutils/uniq>
or available locally via: info '(coreutils) uniq invocation'
GNU coreutils 8.28 January 2018 UNIQ(1)