Awk to print distinct col values | Unix Linux Forums | Shell Programming and Scripting

#1 - 08-13-2008 - anduzzi (Registered User)

Hi Guys...
I am a newbie to awk and would like a solution to a fairly simple practical question.

I have a test file that goes as:

1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2

I would like to know how awk can get me the distinct values, say on col2, and figure out that the distinct values are only 2, 3 and 4.
I need to check the distinct service dates in my actual real-time medical file, which is relatively big at around 200,000 records.

And one more question: how can I use awk to print out records which don't meet a specific criterion?
For example, I want to see only those records where the distinct col2 values are less than 10, and see the actual distinct values to figure out why they are < 10.

I know I can always go for some fancy ETL tools to achieve complex requirements (of course, this requirement is not complex anyway) and play around with the data, but I want to use the power of awk/sed to accomplish the task.

Help is highly appreciated.
Thank you very much
-Anduzzi
#2 - 08-14-2008 - anduzzi (Registered User)
Could someone help me on this, please?

Thanks !!!
#3 - 08-14-2008 - era (Forum Advisor)
Using just shell script,


Code:
cut -d , -f2 file | sort | uniq

The sort is a prerequisite for uniq, which is unfortunate if there is a lot of data.
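As a small aside, most sort implementations can fold the uniq step into sort -u, saving one process (though the data still has to be sorted). A minimal sketch using the sample data from the first post:

```shell
# Recreate the sample data from post #1
cat > sample.csv <<'EOF'
1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2
EOF
# Distinct values of column 2, with uniq folded into sort -u
cut -d, -f2 sample.csv | sort -u
# prints 2, 3 and 4, one per line
```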

Using awk,


Code:
awk -F , '{ a[$2]++ } END { for (b in a) { print b } }' file

The array a counts the number of occurrences of each distinct value in the second field. We don't use the actual count of occurrences, just the keys (distinct values in the second field) in the final print, but that's obviously easy to change if you want to see the counts, too.
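For instance, printing the count next to each key only takes one extra expression in the END block. A sketch on the thread's sample data:

```shell
# Recreate the sample data from post #1
cat > sample.csv <<'EOF'
1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2
EOF
# Print each distinct column-2 value together with its occurrence count.
# Note that the order of a for-in loop over an awk array is unspecified.
awk -F, '{ a[$2]++ } END { for (b in a) print b, a[b] }' sample.csv
# value 2 occurs twice, 3 twice, 4 once
```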

The array could collect something other than counts; for example, a[$2]=$0 would remember the latest line seen for each distinct value in field $2. Collecting more complex data, such as all lines with a particular value, is doable but slightly more involved; you could append to the existing data. At that point, though, collecting just the keys you want and then doing another round to extract only those records would likely be more efficient if there is a lot of data.
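That two-round idea can be sketched by passing the file to awk twice: the first pass (NR==FNR) collects counts, the second prints only the matching records. The criterion here, column-2 values that occur more than once, is just an illustrative assumption:

```shell
# Recreate the sample data from post #1
cat > sample.csv <<'EOF'
1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2
EOF
# Pass 1 (NR==FNR): count each column-2 value, skip to next line.
# Pass 2: print only lines whose column-2 value appeared more than once.
awk -F, 'NR==FNR { a[$2]++; next } a[$2] > 1' sample.csv sample.csv
# prints the first four lines (col2 values 2 and 3 each occur twice)
```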
#4 - 08-14-2008 - summer_cherry (Forum Advisor)
awk:


Code:
# column number is passed as the script's first argument
col=$1
nawk -v col="$col" 'BEGIN{FS=","}
{
	arr[$col]++
}
END{
print "Column "col" has distinct values:"
for (i in arr)
print i
}' file

perl:


Code:
$col = shift;
open(FH, "<", "file") or die "Cannot open file: $!";
while (<FH>) {
	chomp;	# strip the newline so the last column's value is clean
	@arr = split(",", $_);
	$hash{$arr[$col - 1]}++;
}
close(FH);
print "Column $col has distinct values:\n";
for $key (keys %hash) {
	print $key, "\n";
}

#5 - 08-14-2008 - era (Forum Advisor)
The Perl solution can be expressed even more succinctly as a one-liner.


Code:
perl -laF, -ne '{ $a{$F[1]}++ } END { print for keys %a }' file

The OP was specifically asking for an awk solution, though.
#6 - 08-14-2008 - anduzzi (Registered User)
Thanks !!

Hi Guys...
Thank you (era and cherry) very much for the solutions. I could use the array solution for my requirement.

Well, how about incorporating a specific condition in awk to retrieve the records which DON'T meet the criteria mentioned in my original post? Any thoughts?

-Anduzzi
#7 - 08-14-2008 - Ygor (Advisor)
Quote:
Originally Posted by anduzzi View Post
Well, how about incorporating a specific condition in awk to retrieve the records which DON'T meet the criteria mentioned in my original post? Any thoughts?
Print lines where column 2 < 10
Code:
awk -F, '$2<10' file

Conversely...
Code:
awk -F, '!($2<10)' file

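The two answers combine naturally: apply the numeric test while collecting into the array, and you get only the distinct column-2 values that satisfy it. The threshold of 3 below is just an example value chosen so the sample data shows a difference:

```shell
# Recreate the sample data from post #1
cat > sample.csv <<'EOF'
1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2
EOF
# Distinct column-2 values restricted to lines where column 2 < 3
awk -F, '$2 < 3 { a[$2]++ } END { for (b in a) print b }' sample.csv
# only the value 2 qualifies in the sample data
```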