I have a file with duplicate lines. I could eliminate the duplicates by running the lines through a sort/uniq pipeline,
and it works fine, BUT it changes the order of the entries, as if we had run "sort".
I need to remove the duplicates and also keep the original order of the entries. I think I could do this by looping over the contents, checking each line against the unique file, and building a new file, but that doesn't look optimal.
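A common way to do this in one pass is awk's "seen" array, which prints a line only the first time it appears and never reorders anything. A minimal sketch (the file name dupes.txt is just an example):

```shell
# Build a small sample file with duplicate lines (hypothetical name).
printf 'pear\napple\npear\nbanana\napple\n' > dupes.txt

# Print each line only the first time it is seen; input order is preserved.
awk '!seen[$0]++' dupes.txt
# -> pear, apple, banana
```

The expression `!seen[$0]++` is true only while the line's counter is still zero, so the first occurrence is printed and every later repeat is suppressed.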
Hello Experts,
I have two files named old and new; below are my example files. I need to compare them and print the records that exist only in my new file. I tried the awk script below. It works perfectly well when the records match exactly; the issue I have is that my old file has got extra... (4 Replies)
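For the exact-match case, the usual two-file awk pattern is below; handling the truncated "extra fields" partial-match case would need comparing specific fields instead of whole lines. The file names old.txt and new.txt are placeholders:

```shell
# Hypothetical sample files.
printf 'a\nb\nc\n' > old.txt
printf 'b\nc\nd\ne\n' > new.txt

# First pass (NR==FNR) stores old records as array keys;
# second pass prints new records that were never stored.
awk 'NR==FNR{old[$0]; next} !($0 in old)' old.txt new.txt
# -> d, e
```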
Hi,
I'm trying to strip all lines between two headers in a file:
### BEGIN ###
Text to remove, contains all kinds of characters
...
Antispyware-Downloadserver.com (Germany)=http://www.antispyware-downloadserver.com/updates/
Antispyware-Downloadserver.com #2... (3 Replies)
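Assuming the block is closed by a matching `### END ###` marker (the post only shows the opening header, so the end marker is an assumption), a sed address range deletes everything between the two, headers included. A sketch:

```shell
# Sample file with a delimited block between two markers (END marker assumed).
cat > sample.txt <<'EOF'
keep me
### BEGIN ###
remove this line
and this one
### END ###
keep me too
EOF

# Delete the whole range, including the marker lines themselves.
sed '/^### BEGIN ###$/,/^### END ###$/d' sample.txt
# -> keep me, keep me too
```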
I created a snapshot and a subsequent clone of a ZFS volume, but now I'm not able to remove the snapshot; it gives me the following error:
zfs destroy newpool/ldom2/zdisk4@bootimg
cannot destroy 'newpool/ldom2/zdisk4@bootimg': snapshot has dependent clones
use '-R' to destroy the following... (7 Replies)
Hi,
I have a file which is fixed-length and comma-separated, and I want to replace the values of one column.
I am reading the file line by line into the variable $LINE and then replacing the string.
The problem is that after changing the value and writing the new file temp5.txt, the formatting of the original file is getting... (8 Replies)
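Rather than rebuilding each line in a shell loop, awk can reassign one field and leave the comma layout intact, since the untouched fields are reproduced verbatim. A sketch, assuming the second column is the one being replaced (file names and values are placeholders):

```shell
# Hypothetical comma-separated input.
printf 'AAA,BBB,CCC\nDDD,EEE,FFF\n' > data.txt

# Set FS and OFS to "," so only field 2 changes and the layout survives.
awk 'BEGIN{FS=OFS=","} {$2="XXX"} 1' data.txt > temp5.txt
cat temp5.txt
# -> AAA,XXX,CCC / DDD,XXX,FFF
```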
Hi all,
I have a file with 3 columns separated by spaces. Each column has a heading. I want to sort according to the values in the 2nd column, in ascending order.
Ex.
Name rank direction
goory 0.05 --+
laby 0.0006 ---
namy 0.31 -+-
....etc.
Output should be
Name rank direction
laby... (3 Replies)
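One way to sort the body while leaving the heading in place is to emit the header separately and sort only the remaining lines. A sketch using the sample rows above:

```shell
# Sample file: one heading line, then data rows.
cat > ranks.txt <<'EOF'
Name rank direction
goory 0.05 --+
laby 0.0006 ---
namy 0.31 -+-
EOF

# Print the header as-is, then sort the body numerically on column 2.
head -1 ranks.txt
tail -n +2 ranks.txt | sort -k2,2g
```

`-g` (general numeric) also copes with exponent notation; plain decimals like these would sort the same with `-n`.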
Currently I have the following to separate the numeric values; however, the decimal point gets separated as well.
ls -lrt *smp*.cmd | awk '{print $NF}' | sed 's/\([0-9]*\)/ & /g'
As an example on the files
n02-z30-dsr65-terr0.50-dc0.05-4x3smp.cmd... (8 Replies)
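The splitting can be done with a single sed whose pattern treats a digit run with an optional fractional part as one token, so `0.50` is padded as a unit instead of breaking at the dot. A sketch on the sample name above:

```shell
fname='n02-z30-dsr65-terr0.50-dc0.05-4x3smp.cmd'

# Match digits optionally followed by ".digits" and pad the whole number.
printf '%s\n' "$fname" | sed 's/[0-9][0-9]*\(\.[0-9][0-9]*\)*/ & /g'
# -> n 02 -z 30 -dsr 65 -terr 0.50 -dc 0.05 - 4 x 3 smp.cmd
```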
Hi All,
I am trying to write a simple command using awk and sed to do this, but without any success.
Here is what I am using:
head -1 test1.txt>test2.txt|sed '1d;$d' test1.txt|awk '{print substr($0,0,(length($0)-2))}' >>test2.txt|tail -1 test1.txt>>test2.txt
Input:
Header
1234567
abcdefgh... (2 Replies)
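The pipeline above doesn't behave as hoped because each stage reads test1.txt directly and manages its own redirection, so the pipes between them carry nothing, and the stages race over test2.txt. Grouping the three commands lets them share one output redirection. A sketch, assuming the input ends with a trailer line (the post's sample is truncated):

```shell
# Hypothetical input: header, body records, trailer.
cat > test1.txt <<'EOF'
Header
1234567
abcdefgh
Trailer
EOF

{
  head -1 test1.txt                             # copy the header
  sed '1d;$d' test1.txt |                       # body only
    awk '{print substr($0, 1, length($0)-2)}'   # drop last 2 characters
  tail -1 test1.txt                             # copy the trailer
} > test2.txt
cat test2.txt
# -> Header / 12345 / abcdef / Trailer
```

Note `substr($0, 1, …)`: awk strings are 1-indexed, so 1 is the portable start position (0 happens to work in many awks but is not standard).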
I have a file with the following format:
fields separated by "|"
title1|something class|long...content1|keys
title2|somhing class|log...content1|kes
title1|sothing class|lon...content1|kes
title3|shing cls|log...content1|ks
I want to remove all duplicates with the same "title field"(the... (3 Replies)
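Since only the first field decides whether a record is a duplicate, the seen-array idiom keyed on `$1`, with `|` as the field separator, does this in one pass, keeping the first record for each title:

```shell
# The sample records from the post.
cat > records.txt <<'EOF'
title1|something class|long...content1|keys
title2|somhing class|log...content1|kes
title1|sothing class|lon...content1|kes
title3|shing cls|log...content1|ks
EOF

# Print a record only the first time its title (field 1) is seen.
awk -F'|' '!seen[$1]++' records.txt
```

This drops the second `title1` record; to keep the last record per title instead, reverse the file, apply the same filter, and reverse back.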
Hi I have a below file structure.
200,1245,E1,1,E1,,7611068,KWH,30, ,,,,,,,,
200,1245,E1,1,E1,,7611070,KWH,30, ,,,,,,,,
300,20140223,0.001,0.001,0.001,0.001,0.001
300,20140224,0.001,0.001,0.001,0.001,0.001
300,20140225,0.001,0.001,0.001,0.001,0.001
300,20140226,0.001,0.001,0.001,0.001,0.001... (1 Reply)
Hello all,
I need to filter a dataframe composed of several columns of data to remove the duplicates according to one of the columns. I did it with pandas. At the same time, I need the last column, which contains all different (non-redundant) data, to be conserved in the output, like this:
A ... (5 Replies)
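In pandas this is what `drop_duplicates(subset=<column>, keep='last')` does; a shell analogue, assuming "conserve the last column's data" means keeping the last row seen for each key, is to reverse the rows, keep the first occurrence per key, and reverse again (sample data hypothetical):

```shell
# Hypothetical rows: key in column 1, payload in the last column.
cat > frame.txt <<'EOF'
A 1 x
B 2 y
A 3 z
EOF

# tac reverses the rows (GNU coreutils; use tail -r on BSD), awk keeps the
# first occurrence of each key in column 1, and the final tac restores order.
tac frame.txt | awk '!seen[$1]++' | tac
# -> B 2 y / A 3 z
```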
Discussion started by: pedro88
LEARN ABOUT BSD
uniq
UNIQ(1)                    General Commands Manual                    UNIQ(1)

NAME
uniq - report repeated lines in a file
SYNOPSIS
uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]
DESCRIPTION
Uniq reads the input file comparing adjacent lines. In the normal case, the second and succeeding copies of repeated lines are removed;
the remainder is written on the output file. Note that repeated lines must be adjacent in order to be found; see sort(1). If the -u flag
is used, just the lines that are not repeated in the original file are output. The -d option specifies that one copy of just the repeated
lines is to be written. The normal mode output is the union of the -u and -d mode outputs.
The -c option supersedes -u and -d and generates an output report in default style but with each line preceded by a count of the number of
times it occurred.
The n arguments specify skipping an initial portion of each line in the comparison:
-n The first n fields together with any blanks before each are ignored. A field is defined as a string of non-space, non-tab characters separated by tabs and spaces from its neighbors.
+n The first n characters are ignored. Fields are skipped before characters.
SEE ALSO
sort(1), comm(1)

7th Edition                     April 29, 1985                        UNIQ(1)
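The adjacency requirement the DESCRIPTION stresses is easy to see in a quick demo; sorting first is what makes `-c` count all occurrences rather than only consecutive ones:

```shell
# Unsorted input: uniq only collapses adjacent repeats, so the
# trailing 'a' survives.
printf 'a\na\nb\na\n' | uniq
# -> a, b, a

# Sort first so equal lines become adjacent, then count them with -c.
printf 'a\na\nb\na\n' | sort | uniq -c
```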