Shell Programming and Scripting: Delete repeated rows from a file
Post 302137245 by pbsrinivas on Monday, 24 September 2007, 02:47:45 AM
sort -u Input_file

This sorts the file and keeps only one copy of each distinct line, so the repeated rows are dropped (piping from cat first is unnecessary, and limiting the comparison to the first field would also throw away rows that only differ later in the line).
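If the original order of the rows has to be preserved, an awk one-liner along these lines should also work (Input_file and Deduped_file are just placeholder names):

awk '!seen[$0]++' Input_file > Deduped_file

It prints a line only the first time it appears, because the counter kept for that exact line is still zero on the first encounter.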
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

How to delete particular rows from a file

Hi, I have a file with 1000 rows and I would like to remove 10 rows from it. Please give me the script. E.g., the input file looks like: 4 1 4500.0 1 5 1 1.0 30 6 1 1.0 4500 7 1 4.0 730 7 2 500000.0 730 8 1 785460.0 45 8 7 94255.0 30 9 1 31800.0 30 9 4 36000.0 30 10 1 15000.0 30... (5 Replies)
Discussion started by: suresh3566
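The quoted request does not say which 10 rows have to go, but if they can be identified by line number, a sed sketch like this (the line numbers and file names are only placeholders) is usually enough:

sed '11,20d' input_file > output_file      # drops lines 11 through 20
sed '3d;17d;42d' input_file > output_file  # drops three scattered lines

Both forms leave input_file untouched and write the surviving rows to output_file.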

2. Shell Programming and Scripting

how to delete duplicate rows in a file

I have a file content like below. "0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","","" "0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","","" "0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""... (5 Replies)
Discussion started by: vamshikrishnab

3. UNIX for Dummies Questions & Answers

Delete repeated nos in a file

Hi, I need to delete repeated numbers in a file and finally list the count. Can someone assist me? File: 12345 12345 56345 12345 23896 Output needed: 12345 56345 23896 Total count:3 Thanks (2 Replies)
Discussion started by: gini
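A possible sketch for that request, assuming the numbers sit one per line in a file called numbers.txt (a placeholder name): awk can print each value the first time it occurs and report how many distinct values it saw at the end.

awk '!seen[$0]++ { print; n++ } END { print "Total count:" n }' numbers.txt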

4. Shell Programming and Scripting

Delete repeated word in text file

Hi experts, I am using the C shell and I am trying to delete repeated words. Example file.txt: BLUE YELLOW RED VIOLET RED RED BLUE WHITE YELLOW BLACK, and I want to store the output into a new file: BLUE (6 Replies)
Discussion started by: vincyoxy

5. Shell Programming and Scripting

[HELP] - Delete rows on a CSV file

Hello to all members, I am very new to Unix (shell scripting), but I want to learn a lot. I am an ex-Windows user, but now I am absolutely a Linux super user... :D So I am trying to make a function to do this: I have two CSV files containing only numbers; in the first one I have: 1 2 3 4 5... (6 Replies)
Discussion started by: Sadarrab
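That post is cut off before the actual goal, but if the idea is to drop from the first CSV every row that also appears in the second one, a common starting point is grep with a pattern file (file1.csv and file2.csv are placeholder names):

grep -v -x -F -f file2.csv file1.csv > remaining.csv

Here -F treats the second file as fixed strings rather than regular expressions, -x insists on whole-line matches, and -v keeps only the rows that did not match.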

6. Shell Programming and Scripting

delete rows in a file based on the rows of another file

I need to delete rows based on the number of lines in a different file. I have a piece of code that works on its own, but when I merge it with my C application, it doesn't work. sed '1,'\"`wc -l < /tmp/fileyyyy`\"'d' /tmp/fileA > /tmp/filexxxx Can anyone give me an alternate solution for the above? (2 Replies)
Discussion started by: Muthuraj K
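Nested backquotes and escaped quotes are fragile once a command like that is embedded in another program; capturing the line count first keeps the quoting simple. A sketch of the same idea, using the paths from the post:

n=$(wc -l < /tmp/fileyyyy)                 # assumes /tmp/fileyyyy has at least one line
sed "1,${n}d" /tmp/fileA > /tmp/filexxxx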

7. UNIX for Advanced & Expert Users

Delete rows from a file...!!

Say I have a file with X rows and Y columns. I see that in some of the rows, some columns are blank (no value set). I wish to delete such rows. How can it be done? E.g. 181766 100 2009-06-04 184443 2009-06-04 10962 151 2009-06-04 161 2009-06-04... (7 Replies)
Discussion started by: ak835
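If a complete row in that file always has the same number of fields (three in the excerpt shown), awk can keep only the complete rows; the field count and file names here are assumptions taken from the sample:

awk 'NF == 3' datafile > complete_rows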

8. Shell Programming and Scripting

Delete rows in text file

Hi, I have a text file with thousands of lines, where each row and column follows a specific pattern. 1102 1 1 1 1 1234 1 1 1 1 1009 1 1 1 1 1056 1 (3 Replies)
Discussion started by: Lucky Ali

9. Shell Programming and Scripting

Converting repeated rows into split columns

Dear Friends, I have an input file that contains a lot of data, laid out as a repeated-rows report. The output file needs to be a column-wise report rather than row-wise. Input File random line 1 random line 2 random line 3 ------------------------------------- Start line 1.1 (9.9) ... (1 Reply)
Discussion started by: vasanth.vadalur

10. Shell Programming and Scripting

Transposing Repeated Rows to Columns.

I have 1000s of these rows that I would like to transpose to columns. However, I would like to transpose every 3 consecutive rows to columns as below, sorted by column 3, and provide a total for each occurrence. Finally, I would like a grand total of column 3. 21|FE|41|0B 50\65\78 15... (2 Replies)
Discussion started by: ravzter
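For the every-three-rows part of that request, paste can join each group of three consecutive input lines into one output line (the delimiter and file names are placeholders); the per-group totals and the grand total would still need a separate awk or sort step:

paste -d'|' - - - < rows.txt > joined.txt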
UNIQ(1)                     General Commands Manual                    UNIQ(1)

NAME
     uniq - report repeated lines in a file

SYNOPSIS
     uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]

DESCRIPTION
     Uniq reads the input file comparing adjacent lines.  In the normal case,
     the second and succeeding copies of repeated lines are removed; the
     remainder is written on the output file.  Note that repeated lines must
     be adjacent in order to be found; see sort(1).

     If the -u flag is used, just the lines that are not repeated in the
     original file are output.

     The -d option specifies that one copy of just the repeated lines is to
     be written.  The normal mode output is the union of the -u and -d mode
     outputs.

     The -c option supersedes -u and -d and generates an output report in
     default style but with each line preceded by a count of the number of
     times it occurred.

     The n arguments specify skipping an initial portion of each line in the
     comparison:

     -n   The first n fields together with any blanks before each are
          ignored.  A field is defined as a string of non-space, non-tab
          characters separated by tabs and spaces from its neighbors.

     +n   The first n characters are ignored.  Fields are skipped before
          characters.

SEE ALSO
     sort(1), comm(1)

7th Edition                      April 29, 1985                        UNIQ(1)
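As the manual points out, uniq only notices duplicates that are adjacent, so it is normally run on sorted input. A typical way to list every distinct line together with how often it occurs, most frequent first (Input_file again stands in for the real file name):

sort Input_file | uniq -c | sort -rn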