11-22-2001
Remove unnecessary lines
I have a file with 5000 lines containing numbers. Some of the numbers are repeated and some are not. For each repeated number, I would like to keep only one copy. How do I remove the remaining duplicates?
Your help is much appreciated. Thank you.
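For reference, two common one-liners that do this; `numbers.txt` and the output filenames below are placeholders, not names from the original post:

```shell
# Sorted output, one copy of each distinct line
# (use "sort -nu" instead for numeric order):
sort -u numbers.txt > deduped.txt

# Keep the first occurrence of each line, preserving the original order:
awk '!seen[$0]++' numbers.txt > deduped_in_order.txt
```

The awk version holds every distinct line in memory, which is no concern for a 5000-line file.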
UNIQ(1) General Commands Manual UNIQ(1)
NAME
uniq - report repeated lines in a file
SYNOPSIS
uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]
DESCRIPTION
Uniq reads the input file comparing adjacent lines. In the normal case, the second and succeeding copies of repeated lines are removed;
the remainder is written on the output file. Note that repeated lines must be adjacent in order to be found; see sort(1). If the -u flag
is used, just the lines that are not repeated in the original file are output. The -d option specifies that one copy of just the repeated
lines is to be written. The normal mode output is the union of the -u and -d mode outputs.
The -c option supersedes -u and -d and generates an output report in default style but with each line preceded by a count of the number of
times it occurred.
The n arguments specify skipping an initial portion of each line in the comparison:
-n The first n fields together with any blanks before each are ignored. A field is defined as a string of non-space, non-tab characters separated by tabs and spaces from its neighbors.
+n The first n characters are ignored. Fields are skipped before characters.
SEE ALSO
sort(1), comm(1)
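Since uniq only compares adjacent lines, it is almost always paired with sort(1). A short sketch of the modes described above (`numbers.txt` is a placeholder filename):

```shell
sort numbers.txt | uniq       # one copy of every line (-u and -d combined)
sort numbers.txt | uniq -d    # only the lines that were repeated
sort numbers.txt | uniq -u    # only the lines that were never repeated
sort numbers.txt | uniq -c    # each line prefixed by its occurrence count
```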