Full Discussion: sort and uniq in perl
Post #302073898 by reggiej, Thursday 18th of May 2006, 12:35:17 PM
Cbkihong, that's exactly what I was looking for. Thanks.
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

sort/uniq

I have a file containing, one name per line: Fred Fred Fred Jim Fred Jim Jim. If sort is executed on the listed file, shouldn't the output be: Fred Fred Fred Fred Jim Jim Jim? (3 Replies)
Discussion started by: jimmyflip
3 Replies
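
For reference, sort does put identical lines next to each other, which is exactly what makes uniq usable afterwards. A minimal sketch, assuming the names sit one per line in a file called names.txt (name assumed here for illustration):

    # sort groups identical lines together; with the sample data all
    # the Freds come out first, then the Jims
    sort names.txt

    # count how many times each name occurs
    sort names.txt | uniq -c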

2. UNIX for Dummies Questions & Answers

Help with Last,uniq, sort and cut

Using the last, uniq, sort and cut commands, determine how many times the different users have logged in. I know how to use the last command and the cut command... I came up with last | cut -f1 -d" " | uniq. I don't know if this is right; can someone please help me? Thanks. (1 Reply)
Discussion started by: jay1228
1 Replies
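
The pipeline in the question is close, but uniq only collapses adjacent duplicates, so the usernames have to be sorted first. A sketch of the usual approach (not necessarily the answer given in that thread):

    # keep the username column, drop blank lines and the trailing
    # "wtmp begins ..." summary line, sort so duplicates are adjacent,
    # then count logins per user
    last | awk 'NF && $1 != "wtmp" {print $1}' | sort | uniq -c | sort -rn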

3. Shell Programming and Scripting

Sort, Uniq, Duplicates

Input file is: 25060008,0040,03, 25136437,0030,03, 25069457,0040,02, 80303438,0014,03,1st 80321837,0009,03,1st 80321977,0009,03,1st 80341345,0007,03,1st 84176527,0047,03,1st 84176527,0047,03, 20000735,0018,03,1st 25060008,0040,03, I am using the following in the script... (5 Replies)
Discussion started by: Amruta Pitkar
5 Replies
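
The script in that post is truncated above, so the following is only a sketch of the two usual readings of the question, assuming the records live in a file called input.txt (name assumed here): dropping lines that are exact duplicates, or keeping one line per distinct first comma-separated field.

    # drop exact duplicate lines
    sort input.txt | uniq

    # keep only the first line seen for each value of field 1
    awk -F, '!seen[$1]++' input.txt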

4. Shell Programming and Scripting

Help with Uniq and sort

The key is the first field; I want only one unique record per first field in the file. I want the output as ... or output as ... Appreciate help on this. (4 Replies)
Discussion started by: pinnacle
4 Replies
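
The desired output was lost from the excerpt, but if the goal is one record per distinct first field, a commonly used idiom (a sketch, not the thread's accepted answer) is:

    # print a line only the first time its first field is seen;
    # input order is preserved and no sort is needed
    awk '!seen[$1]++' file

    # alternative: sort on field 1 and keep one line per key
    sort -k1,1 -u file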

5. Shell Programming and Scripting

sort | uniq question

Hello, I have a large data file: 1234 8888 bbb 2745 8888 bbb 9489 8888 bbb 1234 8888 aaa 4838 8888 aaa 3977 8888 aaa. I need to remove duplicate lines (where the first column is duplicated). I have been using: sort file.txt | uniq -w4 > newfile.txt. However, it seems to keep the... (11 Replies)
Discussion started by: palex
11 Replies
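
The end of that question is cut off, but the behaviour it describes is expected: with GNU uniq, -w4 compares only the first four characters and by default keeps the first line of each group. Two common variations, sketched under the assumption that the first column is exactly four characters wide:

    # keep one line per distinct first column (the first one seen)
    sort file.txt | uniq -w4 > newfile.txt

    # drop every line whose first column appears more than once
    sort file.txt | uniq -w4 -u > newfile.txt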

6. Shell Programming and Scripting

Sort and uniq after comparision

Hi All, I have a text file with the format shown below. Some of the records are duplicated, with the only exception being the date (field 15). I want to compare all duplicate records using the subscriber number (field 7) and keep only the record with the greater date. ... (1 Reply)
Discussion started by: nua7
1 Replies
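
The file layout is truncated above, so this is only a sketch, assuming a pipe-delimited file named input.txt (name assumed) with the subscriber number in field 7 and a lexically sortable date (e.g. YYYYMMDD) in field 15:

    # newest date first within each subscriber, then keep the first
    # line seen for each subscriber number
    sort -t'|' -k7,7 -k15,15r input.txt | awk -F'|' '!seen[$7]++'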

7. Shell Programming and Scripting

Sort field and uniq

I have a flatfile A.txt: 2012/12/04 14:06:07 |trees|Boards 2, 3|denver|mekong|mekong12 2012/12/04 17:07:22 |trees|Boards 2, 3|denver|mekong|mekong12 2012/12/04 17:13:27 |trees|Boards 2, 3|denver|mekong|mekong12 2012/12/04 14:07:39 |rain|Boards 1|tampa|merced|merced11. How do I sort and get... (3 Replies)
Discussion started by: sabercats
3 Replies
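
The question itself is cut off, so the following is only a guess at a common variant: sort A.txt chronologically and keep one line per value of the second (name) field.

    # sort by the leading timestamp, then keep the first (earliest)
    # line for each name in field 2
    sort A.txt | awk -F'|' '!seen[$2]++'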

8. Shell Programming and Scripting

Sort uniq or awk

Hi again, I have files with the following contents: datetime,ip1,port1,ip2,port2,number. How would I find out how many times each ip1 value shows up in a particular file? Then how would I find out how many times each ip1 and port2 combination shows up? Please mind that a file may contain 100k lines. (8 Replies)
Discussion started by: LDHB2012
8 Replies
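
With the comma-separated layout datetime,ip1,port1,ip2,port2,number, counting comes down to cutting out the fields of interest and feeding them to sort | uniq -c; 100k lines is no problem for that. A sketch, assuming the data is in log.csv (name assumed here):

    # how many times each ip1 (field 2) appears
    cut -d, -f2 log.csv | sort | uniq -c | sort -rn

    # how many times each ip1/port2 pair (fields 2 and 5) appears
    cut -d, -f2,5 log.csv | sort | uniq -c | sort -rn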

9. UNIX for Dummies Questions & Answers

Uniq and sort -u

Hello all, need to pick your brains. I have a 10Gb file where each row is a name, and I am expecting about 50 distinct names in total, so there are a lot of repetitions in clusters. I want to do a sort -u file. Will it be considerably faster or slower to use a uniq before piping it to sort... (3 Replies)
Discussion started by: senhia83
3 Replies
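
Because the repeats come in clusters (i.e. adjacent lines), a plain uniq pass collapses most of the 10Gb before sort ever sees it, which usually helps; uniq alone is not enough, since the same name can still appear in more than one cluster. A sketch:

    # collapse adjacent repeats first, then sort the much smaller stream
    uniq file | sort -u

    # with only ~50 distinct names, awk can also dedupe without sorting at all
    awk '!seen[$0]++' file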

10. Shell Programming and Scripting

Sort & Uniq -u

Hi All, below is the actual file which I would like to sort and uniq -u: /opt/oracle/work/Antony/Shell_Script> cat emp.1st 2233|a.k. shukula |g.m. |sales |12/12/52 |6000 1006|chanchal singhvi |director |sales |03/09/38 |6700... (8 Replies)
Discussion started by: Antony Ankrose
8 Replies
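
For the record, sort | uniq -u prints only the lines that occur exactly once; duplicated lines are dropped entirely rather than collapsed to one copy (that would be plain uniq, or sort -u). A sketch against the file shown:

    # lines that appear exactly once in emp.1st
    sort emp.1st | uniq -u

    # one copy of every line, duplicates collapsed
    sort -u emp.1st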
UNIQ(1)                       General Commands Manual                      UNIQ(1)

NAME
     uniq - report repeated lines in a file

SYNOPSIS
     uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]

DESCRIPTION
     Uniq reads the input file comparing adjacent lines.  In the normal case,
     the second and succeeding copies of repeated lines are removed; the
     remainder is written on the output file.  Note that repeated lines must be
     adjacent in order to be found; see sort(1).

     If the -u flag is used, just the lines that are not repeated in the
     original file are output.  The -d option specifies that one copy of just
     the repeated lines is to be written.  The normal mode output is the union
     of the -u and -d mode outputs.  The -c option supersedes -u and -d and
     generates an output report in default style but with each line preceded
     by a count of the number of times it occurred.

     The n arguments specify skipping an initial portion of each line in the
     comparison:

     -n   The first n fields together with any blanks before each are ignored.
          A field is defined as a string of non-space, non-tab characters
          separated by tabs and spaces from its neighbors.

     +n   The first n characters are ignored.  Fields are skipped before
          characters.

SEE ALSO
     sort(1), comm(1)

7th Edition                       April 29, 1985                           UNIQ(1)
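
A short illustration of the -c, -d and -u modes described above, assuming a hypothetical sorted file fruits.txt containing apple, apple, banana, cherry, cherry, cherry, one per line:

    uniq fruits.txt        # one copy of every line: apple, banana, cherry
    uniq -u fruits.txt     # lines that are not repeated: banana
    uniq -d fruits.txt     # one copy of each repeated line: apple, cherry
    uniq -c fruits.txt     # each line with its count: 2 apple, 1 banana, 3 cherry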