Full Discussion: Help with uniq or awk??
Top Forums > Shell Programming and Scripting — Post 302345724 by Franklin52, Thursday 20 August 2009, 05:06:42 AM
It's going to be difficult to find a balanced solution; your data is not consistent...

Regards
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

using uniq and awk??

I have a file that is populated: hits/books.hits:143.217.64.204 Thu Sep 21 22:24:57 GMT 2006 hits/books.hits:62.145.39.14 Fri Sep 22 00:38:32 GMT 2006 hits/books.hits:81.140.86.170 Fri Sep 22 08:45:26 GMT 2006 hits/books.hits:81.140.86.170 Fri Sep 22 09:13:57 GMT... (13 Replies)
Discussion started by: amatuer_lee_3

2. Shell Programming and Scripting

How to replicate data using Uniq or awk

Hi, I have this scenario; where there are two classes:- apple and orange. 1,2,3,4,5,6,apple 1,1,0,4,2,3,apple 1,3,3,3,3,4,apple 1,1,1,1,1,1,orange 1,2,3,1,1,1,orange Basically for apple, i have 3 entries in the file, and for orange, I have 2 entries. Im trying to edit the file and find... (5 Replies)
Discussion started by: ahjiefreak

3. Shell Programming and Scripting

Text Proccessing with sort,uniq,awk

Hello, I have a log file with the following input: X , ID , Date, Time, Y 01,01368,2010-12-02,09:07:00,Pass 01,01368,2010-12-02,10:54:00,Pass 01,01368,2010-12-02,13:07:04,Pass 01,01368,2010-12-02,18:54:01,Pass 01,01368,2010-12-03,09:02:00,Pass 01,01368,2010-12-03,13:53:00,Pass... (12 Replies)
Discussion started by: rollyah

4. Shell Programming and Scripting

[uniq + awk?] How to remove duplicate blocks of lines in files?

Hello again, I am wanting to remove all duplicate blocks of XML code in a file. This is an example: input: <string-array name="threeItems"> <item>item1</item> <item>item2</item> <item>item3</item> </string-array> <string-array name="twoItems"> <item>item1</item> <item>item2</item>... (19 Replies)
Discussion started by: raidzero
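One common approach — assuming each duplicated block is separated from its neighbours by a blank line, which the truncated sample above does not guarantee — is to let awk treat blank-line-delimited paragraphs as records and keep only the first copy of each (`input.xml` is a placeholder name):

```shell
# RS="" puts awk in paragraph mode: each blank-line-separated block is one
# record. The classic !seen[...]++ test prints only the first occurrence.
awk -v RS= -v ORS='\n\n' '!seen[$0]++' input.xml
```

If the blocks are not blank-line separated, you would instead have to split records on the closing tag (e.g. with `RS="</string-array>"`) before deduplicating.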

5. Shell Programming and Scripting

awk - getting uniq count on multiple col

Hi My file have 7 column, FIle is pipe delimed Col1|Col2|col3|Col4|col5|Col6|Col7 I want to find out uniq record count on col3, col4 and col2 ( same order) how can I achieve it. ex 1|3|A|V|C|1|1 1|3|A|V|C|1|1 1|4|A|V|C|1|1 Output should be FREQ|A|V|3|2 FREQ|A|V|4|1 Here... (5 Replies)
Discussion started by: sanranad
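A sketch of what the poster seems to want — count rows keyed on col3, col4, col2 in that order (`cols.txt` is a made-up name). Note that awk's `for (k in c)` iteration order is unspecified, so pipe the result through sort if order matters:

```shell
# -F'|' splits on the pipe delimiter; FS ("|") is reused to rebuild the key
# so the output stays pipe-delimited: FREQ|col3|col4|col2|count
awk -F'|' '{c[$3 FS $4 FS $2]++} END {for (k in c) print "FREQ" FS k FS c[k]}' cols.txt
```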

6. Shell Programming and Scripting

awk uniq and longest string of a column as index

I met a challenge to filter ~70 millions of sequence rows and I want using awk with conditions: 1) longest string of each pattern in column 2, ignore any sub-string, as the index; 2) all the unique patterns after 1); 3) print the whole row; input: 1 ABCDEFGHI longest_sequence1 2 ABCDEFGH... (12 Replies)
Discussion started by: yifangt
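One way to attack this — a sketch only, with `input` as a placeholder file name — is to order the rows longest-column-2-first, then keep a row only if its column-2 string is not a substring of something already kept. Be warned that the final filter is O(rows × kept strings), which may be painful at 70 million rows:

```shell
# 1. Prefix each row with the length of column 2, sort longest first,
#    then strip the length prefix again.
# 2. Keep a row only if column 2 is not contained in any kept string;
#    exact duplicates are caught too, since index(k, $2) matches k == $2.
awk '{print length($2), $0}' input |
sort -k1,1nr |
cut -d' ' -f2- |
awk '{for (k in kept) if (index(k, $2)) next; kept[$2] = 1; print}'
```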

7. Shell Programming and Scripting

Rewriting GNU uniq in awk

Within a shell script I use uniq -w 16 -D in order to process all lines in which the first 16 characters are duplicated. Now I want to also run that script on a BSD based system where the included version of uniq does not support the -w (--check-chars) option. To get around this I have... (7 Replies)
Discussion started by: mij
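A sketch of such an awk replacement (`file` is a placeholder). It buffers the whole input and, unlike `uniq -w 16 -D`, also reports duplicates that are not adjacent — so sort the input first if strict uniq semantics matter:

```shell
# First pass over the array: count each 16-character prefix.
# END pass: reprint, in input order, every line whose prefix occurred
# more than once.
awk '{k = substr($0, 1, 16); n[k]++; line[NR] = $0}
     END {for (i = 1; i <= NR; i++) {
              k = substr(line[i], 1, 16)
              if (n[k] > 1) print line[i]
          }}' file
```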

8. Shell Programming and Scripting

Sort uniq or awk

Hi again, I have files with the following contents datetime,ip1,port1,ip2,port2,number How would I find out how many times ip1 field shows up a particular file? Then how would I find out how many time ip1 and port 2 shows up? Please mind the file may contain 100k lines. (8 Replies)
Discussion started by: LDHB2012
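With the stated layout datetime,ip1,port1,ip2,port2,number, ip1 is field 2 and port2 is field 5. A sketch (`traffic.csv` is a made-up name); 100k lines is no trouble for either pipeline:

```shell
# Occurrences of each ip1, most frequent first:
cut -d, -f2 traffic.csv | sort | uniq -c | sort -rn

# Occurrences of each (ip1, port2) pair:
awk -F, '{c[$2 FS $5]++} END {for (k in c) print c[k], k}' traffic.csv
```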

9. Shell Programming and Scripting

awk compare and keep uniq

Hi all I was wondering if you may help me in resolving an issue. In particular I have a file like this: the ... represent different string and what I wrote Cur or Ent are the constant. Well, what I would like to obtain is a file in which are reported only the ID in which the second column... (6 Replies)
Discussion started by: giuliangiuseppe

10. UNIX for Dummies Questions & Answers

awk or uniq

Hi Help, I have a file which looks like 1 20 30 40 50 60 6 2 20 30 40 50 60 8 7 20 30 40 50 60 7 4 30 40 50 60 70 8 5 30 40 50 60 70 9 2 30 40 50 60 70 8 I want the o/p as 1 20 30 40 50 60 6 4 30 40 50 60 70 8 Is there a way I can use uniq command or awk to do this? ... (11 Replies)
Discussion started by: Indra2011
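The desired output keeps the first line seen for each distinct combination of columns 2-6, which is exactly the classic awk `seen` idiom (`numbers.txt` is a placeholder):

```shell
# seen[...] is incremented per key of fields 2-6; the pattern is true
# (and the line printed) only on the key's first occurrence.
awk '!seen[$2, $3, $4, $5, $6]++' numbers.txt
```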
uniq(1) 						      General Commands Manual							   uniq(1)

NAME
       uniq - Removes or lists repeated lines in a file

SYNOPSIS
       Current Syntax
              uniq [-cdu] [-f fields] [-s chars] [input-file [output-file]]

       Obsolescent Syntax
              uniq [-cdu] [-fields] [+chars] [input-file [output-file]]

       The uniq command reads from the specified input-file, compares adjacent lines, removes the
       second and succeeding occurrences of a line, and writes to standard output.

STANDARDS
       Interfaces documented on this reference page conform to industry standards as follows:
       uniq: XCU5.0

       Refer to the standards(5) reference page for more information about industry standards and
       associated tags.

OPTIONS
       -c          Precedes each output line with a count of the number of times each line appears
                   in the file. This option supersedes the -d and -u options.

       -d          Displays repeated lines only.

       -f fields   Ignores the first fields fields on each input line when doing comparisons,
                   where fields is a positive decimal integer. A field is the maximal string
                   matched by the basic regular expression [[:blank:]]*[^[:blank:]]*. If the
                   fields argument specifies more fields than appear on an input line, a null
                   string is used for comparisons.

       -s chars    Ignores the specified number of characters when doing comparisons. The chars
                   argument is a positive decimal integer. If specified with the -f option, the
                   first chars characters after the first fields fields are ignored. If the chars
                   argument specifies more characters than remain on an input line, uniq uses a
                   null string for comparison.

       -u          Displays unique lines only.

       -fields     Equivalent to -f fields. (Obsolescent)

       +chars      Equivalent to -s chars. (Obsolescent)

OPERANDS
       input-file   A pathname for the input file. If this operand is omitted or specified as -,
                    then standard input is read.

       output-file  A pathname for the output file. If this operand is omitted, then standard
                    output is written.

DESCRIPTION
       The input-file and output-file arguments must be different files. If the input-file operand
       is not specified, or if it is -, uniq uses standard input. Repeated lines must be on
       consecutive lines to be found. You can arrange them with the sort command before
       processing.

EXAMPLES
       To delete repeated lines in the following file called fruit and save the result to a file
       named newfruit, enter:

              uniq fruit newfruit

       The file fruit contains the following lines:

              apples
              apples
              bananas
              cherries
              cherries
              peaches
              pears

       The file newfruit contains the following lines:

              apples
              bananas
              cherries
              peaches
              pears

EXIT STATUS
       The following exit values are returned:

       0    Successful completion.

       >0   An error occurred.

ENVIRONMENT VARIABLES
       The following environment variables affect the execution of uniq:

       LANG         Provides a default value for the internationalization variables that are
                    unset or null. If LANG is unset or null, the corresponding value from the
                    default locale is used. If any of the internationalization variables contain
                    an invalid setting, the utility behaves as if none of the variables had been
                    defined.

       LC_ALL       If set to a non-empty string value, overrides the values of all the other
                    internationalization variables.

       LC_CTYPE     Determines the locale for the interpretation of sequences of bytes of text
                    data as characters (for example, single-byte as opposed to multibyte
                    characters in arguments).

       LC_MESSAGES  Determines the locale for the format and contents of diagnostic messages
                    written to standard error.

       NLSPATH      Determines the location of message catalogues for the processing of
                    LC_MESSAGES.

SEE ALSO
       Commands: comm(1), sort(1)

       Standards: standards(5)
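Pulling the man page's points together, here is a small sketch of the behaviour. The fruit names echo the EXAMPLES section, with counts chosen so every option shows something; the exact output lines in the comments assume a POSIX-conforming uniq:

```shell
# uniq collapses only *adjacent* duplicates, so unsorted input is largely
# left alone (only the trailing apples pair merges here):
printf 'pears\napples\npears\napples\napples\n' | uniq
# -> pears / apples / pears / apples

# Sorting first groups the duplicates:
printf 'pears\napples\npears\napples\napples\n' | sort | uniq      # -> apples / pears
printf 'pears\napples\npears\napples\napples\n' | sort | uniq -d   # -> apples / pears (both repeat)
printf 'pears\napples\npears\napples\napples\n' | sort | uniq -u   # -> (nothing; every line repeats)

# -f skips leading fields before comparing, so "a 1" and "b 1" compare equal:
printf 'a 1\nb 1\nb 2\n' | uniq -f 1    # -> a 1 / b 2
```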
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.