08-13-2010
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
I have a file with content like the one below.
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""... (5 Replies)
Discussion started by: vamshikrishnab
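A common answer to this kind of request is a one-line awk filter that drops repeated lines while keeping the original order (a sketch; `file.txt` is a placeholder name):

```shell
# Remove duplicate lines, keeping the first occurrence and the original order.
# 'seen' is an awk array: a line prints only when its count was previously 0.
awk '!seen[$0]++' file.txt

# Equivalent when the output order does not matter:
sort -u file.txt
```

Unlike `sort -u`, the awk version does not require the duplicates to be adjacent and does not reorder the file.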
2. Shell Programming and Scripting
I want to duplicate a row when two or more values are found in a particular column of the corresponding row, which is delimited by commas.
Input
abc,line one,value1
abc,line two, value1, value2
abc,line three,value1
needs to be converted to
abc,line one,value1
abc,line two, value1
abc,line... (8 Replies)
Discussion started by: Incrediblian
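One way to sketch this, assuming the first two comma-separated fields are fixed and every field from the third onward is a value that should get its own row (`input.txt` is a placeholder):

```shell
# Emit one copy of each row per value in columns 3..NF.
# A row with a single value in column 3 passes through unchanged.
awk -F',' '{ for (i = 3; i <= NF; i++) print $1 "," $2 "," $i }' input.txt
```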
3. Shell Programming and Scripting
I have a long file with more than one ns, www, and mx record per line.
I need the first ns record, the first www record, and the first mx record from each line.
The records are separated by ';'. I am trying with awk scripting but not getting the solution.
... (4 Replies)
Discussion started by: kiranmosarla
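A hedged sketch, assuming the ';'-separated records each begin with their type (ns, www, or mx) as the first word; `zones.txt` and the record layout are assumptions, not from the post:

```shell
# Keep only the first ns, first www, and first mx record on each line.
# split("", seen) clears the per-line "already seen this type" array
# portably (POSIX awk has no whole-array delete).
awk -F';' '{
    split("", seen)
    out = ""
    for (i = 1; i <= NF; i++) {
        split($i, a, " ")                # record type = first word of the record
        t = a[1]
        if ((t == "ns" || t == "www" || t == "mx") && !seen[t]++)
            out = (out == "" ? $i : out ";" $i)
    }
    print out
}' zones.txt
```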
4. Shell Programming and Scripting
Hi, I have a huge amount of data stored in a file. In this file I need to remove duplicate rows in such a way that where the last column has different data, I must find the greatest among the last-column values and print the row with the largest value along with the other entries, but just one of the other duplicate entries is... (16 Replies)
Discussion started by: reva
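One way to read this requirement: among rows that are identical except for the last column, keep only the row with the greatest last-column value. A sketch under that assumption, with space-separated fields and a placeholder file name:

```shell
# Group rows by everything before the last column; remember the row
# whose last column is numerically greatest. Output order is unspecified
# because awk's for-in iterates arrays in arbitrary order.
awk '{
    key = $0
    sub(/[[:space:]]+[^[:space:]]+$/, "", key)   # strip the last column
    if (!(key in max) || $NF + 0 > max[key] + 0) {
        max[key] = $NF
        row[key] = $0
    }
}
END { for (k in row) print row[k] }' data.txt
```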
5. Shell Programming and Scripting
For example, this is the data file test.txt with more than 1000 rows:
1. ccc 200
2.ddd 300
3.eee 400
4 fff 5000
........
1000 ddd 500
....
I would like to keep the rows with ccc and ddd; all other rows should be deleted. I still need the same output file, test.txt. How can... (5 Replies)
Discussion started by: jdsignature88
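A straightforward sketch: filter the wanted rows and write the result back over the original file. `grep` cannot edit in place, so a temporary file is used:

```shell
# Keep only rows containing ccc or ddd, preserving the file name test.txt.
grep -E 'ccc|ddd' test.txt > test.txt.tmp && mv test.txt.tmp test.txt
```

The `&&` ensures the original file is only replaced if `grep` succeeds.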
6. Shell Programming and Scripting
I am new to this forum and this is my first post.
I am looking at an old post with exactly the same name. I cannot paste the URL because I do not have 5 posts yet.
My requirement is exactly the opposite:
I want to get rid of the duplicate rows and append the values of the columns in those rows.
... (10 Replies)
Discussion started by: vbhonde11
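A hypothetical sketch of that merge, assuming comma-separated rows whose first field is the key, with the remaining fields of duplicate rows appended onto one merged row per key (the file name and layout are assumptions):

```shell
# Merge rows sharing the same first field, appending their trailing
# fields; first-seen key order is preserved via the order[] array.
awk -F',' '{
    key = $1
    rest = substr($0, length(key) + 2)          # everything after "key,"
    if (!(key in merged)) { order[++n] = key; merged[key] = key }
    merged[key] = merged[key] "," rest
}
END { for (i = 1; i <= n; i++) print merged[order[i]] }' input.txt
```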
7. Shell Programming and Scripting
Hi,
This is a follow-up to my earlier post.
him mno klm 20 76 . + . klm_mango unix_00000001;
alp fdc klm 123 456 . + . klm_mango unix_0000103;
her tkr klm 415 439 . + . klm_mango unix_00001043;
abc tvr klm 20 76 . + . klm_mango unix_00000001;
abc def klm 83 84 . + . klm_mango... (5 Replies)
Discussion started by: jacobs.smith
8. Shell Programming and Scripting
Hello,
I am trying to remove the duplicate entries in a log file, and used the shell script below to do the same:
awk '!x++' <filename>
Can I do this without using the awk command and the regex? I do not want to start the search from the beginning of the line in the log file as it contains... (9 Replies)
Discussion started by: sandeepcm
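If the goal is to ignore a leading field (such as a timestamp) when comparing lines, `uniq -f` can skip it without awk; a sketch assuming the first blank-separated field is the part to ignore (`app.log` is a placeholder):

```shell
# uniq -f N skips the first N blank-separated fields when comparing.
# uniq only collapses *adjacent* duplicates, so sort on the kept part
# (fields 2 onward) first.
sort -k2 app.log | uniq -f 1
```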
9. Shell Programming and Scripting
Hi champs,
I have a requirement where I need to compare two files line by line and ignore duplicates. Note, I have the files in sorted order.
I have tried using the comm command, but it is not working for my scenario.
Input file1
srv1..development..employee..empname,empid,empdesg... (1 Reply)
Discussion started by: Selva_2507
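For reference, the usual `comm` invocations on two sorted files look like this (file names are placeholders; whether this fits the poster's scenario depends on details the excerpt cuts off):

```shell
# comm prints three columns: lines only in file1, only in file2, and common.
# -1 and -2 suppress the first two columns, leaving the common lines.
comm -12 file1.txt file2.txt   # lines present in both files
comm -3  file1.txt file2.txt   # lines unique to one file or the other
```

Both files must already be sorted in the same collation order, or `comm` reports them as out of order.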
10. Shell Programming and Scripting
I want to duplicate each row in my file
Egfile.txt
Name State Age
Jack NJ 34
John MA 23
Jessica FL 45
I want the code to produce this output
Name State Age
Jack NJ 34
Jack NJ 34
John MA 23
John MA 23
Jessica FL 45
Jessica FL 45 (6 Replies)
Discussion started by: sidnow
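Given the sample output keeps the header line single, a sketch is a two-rule awk program (the file name comes from the post; the header guard is an assumption based on the shown output):

```shell
# Print the header once, then every data row twice.
awk 'NR == 1 { print; next } { print; print }' Egfile.txt
```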
UNIQ(1) BSD General Commands Manual UNIQ(1)
NAME
uniq -- report or filter out repeated lines in a file
SYNOPSIS
uniq [-c | -d | -u] [-i] [-f num] [-s chars] [input_file [output_file]]
DESCRIPTION
The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file. If
input_file is a single dash ('-') or absent, the standard input is read. If output_file is absent, standard output is used for output. The
second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are
not adjacent, so it may be necessary to sort the files first.
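For example, the sort-then-uniq pipeline described above is the classic way to count repeated lines (the log file name is a placeholder):

```shell
# Count occurrences of each line; sorting first makes duplicates adjacent.
sort access.log | uniq -c | sort -rn   # most frequent lines first
```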
The following options are available:
-c Precede each output line with the count of the number of times the line occurred in the input, followed by a single space.
-d Only output lines that are repeated in the input.
-f num Ignore the first num fields in each input line when doing comparisons. A field is a string of non-blank characters separated from
adjacent fields by blanks. Field numbers are one based, i.e., the first field is field one.
-s chars
Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f option, the
first chars characters after the first num fields will be ignored. Character numbers are one based, i.e., the first character is
character one.
-u Only output lines that are not repeated in the input.
-i Case insensitive comparison of lines.
ENVIRONMENT
The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of uniq as described in environ(7).
EXIT STATUS
The uniq utility exits 0 on success, and >0 if an error occurs.
COMPATIBILITY
The historic +number and -number options have been deprecated but are still supported in this implementation.
SEE ALSO
sort(1)
STANDARDS
The uniq utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as amended by Cor. 1-2002.
HISTORY
A uniq command appeared in Version 3 AT&T UNIX.
BSD                             December 17, 2009                             BSD