Top Forums Shell Programming and Scripting Unique values from a Terabyte File Post 302251208 by jim mcnamara on Saturday 25th of October 2008 06:57:22 PM
A hashmap or an associative array (two names for the same structure) is probably best.

You might even try awk, if your version handles large files. Assume your map key is characters 1-10 of the record.
Code:
awk '!arr[substr($0,1,10)]++' myTBfile
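If the set of distinct keys is itself too large to fit in memory — plausible for a terabyte file — an external merge sort is a safe fallback. This is a sketch, not part of the original answer; `myTBfile` is the sample name from the post, and the buffer size and temp directory are illustrative values:

```shell
# Extract the 10-character key, then deduplicate with sort(1).
# An external merge sort spills to temporary files on disk, so the
# working set is not limited by RAM. -S caps the in-memory buffer,
# -T picks the spill directory (both values here are illustrative).
cut -c1-10 myTBfile | LC_ALL=C sort -u -S 1G -T /tmp > unique_keys.txt
```

Unlike the awk hash, this does not preserve the original order of first occurrence, but it degrades gracefully when the key set outgrows memory.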

 

Array::Unique(3pm)              User Contributed Perl Documentation              Array::Unique(3pm)

NAME
       Array::Unique - Tie-able array that allows only unique values

SYNOPSIS
       use Array::Unique;
       tie @a, 'Array::Unique';

       Now use @a as a regular array.

DESCRIPTION
       This package lets you create an array which will allow only one occurrence of any value.
       In other words, no matter how many times you put in 42, it will keep only the first
       occurrence and the rest will be dropped.

       You use the module via tie, and once you have tied your array to this module it will
       behave correctly. Uniqueness is checked with the 'eq' operator, so among other things it
       is case sensitive. As a side effect, the module does not allow undef as a value in the
       array.

EXAMPLES
       use Array::Unique;

       tie @a, 'Array::Unique';

       @a = qw(a b c a d e f);
       push @a, qw(x b z);
       print "@a\n";     # a b c d e f x z

DISCUSSION
       When you are collecting a list of items and you want to make sure there is only one
       occurrence of each item, you have several options:

       1) Using an array and extracting the unique elements later

       You might use a regular array to hold this unique set of values and either remove
       duplicates on each update, thereby keeping the array always unique, or remove duplicates
       just before you want to use the uniqueness feature of the array. In either case you might
       run a function you call like this:

           @a = unique_value(@a);

       The problem with this approach is that you have to implement the unique_value function
       (see later) AND you have to make sure you don't forget to call it. I would say don't rely
       on remembering this.

       There is a good discussion about it in the 1st edition of the Perl Cookbook from
       O'Reilly. I have copied the solutions here; you can see further discussion in the book.

       Extracting Unique Elements from a List (Section 4.6 in the Perl Cookbook, 1st ed.)

           # Straightforward
           %seen = ();
           @uniq = ();
           foreach $item (@list) {
               unless ($seen{$item}) {
                   # if we get here we have not seen it before
                   $seen{$item} = 1;
                   push (@uniq, $item);
               }
           }

           # Faster
           %seen = ();
           foreach $item (@list) {
               push(@uniq, $item) unless $seen{$item}++;
           }

           # Faster but different
           %seen;
           foreach $item (@list) {
               $seen{$item}++;
           }
           @uniq = keys %seen;

           # Faster and even more different
           %seen;
           @uniq = grep {! $seen{$_}++} @list;

       2) Using a hash

       Some people use the keys of a hash to keep the items and put an arbitrary value as the
       values of the hash.

       To build such a list:
           %unique = map { $_ => 1 } qw( one two one two three four! );

       To print it:
           print join ", ", sort keys %unique;

       To add values to it:
           $unique{$_} = 1 foreach qw( one after the nine oh nine );

       To remove values:
           delete @unique{ qw(oh nine) };

       To check if a value is there:
           $unique{ $value };   # which is why I like to use "1" as my value

       (thanks to Gaal Yahas for the above examples)

       There are three drawbacks I see: 1) You type more. 2) Your reader might not understand at
       first why you used a hash and what the values will be. 3) You lose the order. Usually
       none of them is critical, but when I saw this for the 10th time in code I had to
       understand with no documentation, I got frustrated.

       3) Using Array::Unique

       So I decided to write this module, because I got frustrated by my lack of understanding
       of what's going on in that code I mentioned. In addition I thought it might be
       interesting to write this and then benchmark it. Additionally, it is nice to have your
       name displayed in bright lights all over CPAN ... or at least in a module.

       Array::Unique lets you tie an array to hmmm, itself (?) and makes sure the values of the
       array are always unique.

       Since writing this I am not sure if I really recommend its usage. I would say stick with
       the hash version and document that the variable is aggregating a unique list of values.

       4) Using a real SET

       There are modules on CPAN that let you create and maintain SETs. I have not checked any
       of those, but I guess they are just as much overkill for this functionality as
       Array::Unique.

BUGS
       use Array::Unique;
       tie @a, 'Array::Unique';

       @c = @a = qw(a b c a d e f b);

       @c will contain the same as @a AND two undefs at the end, because in @c you get the same
       length as the right-most list.

TODO
       Test:
           Change size of the array
           Elements with false values ('', '0', 0)
           splice:
               splice @a;
               splice @a, 3;
               splice @a, -3;
               splice @a, 3, 5;
               splice @a, 3, -5;
               splice @a, -3, 5;
               splice @a, -3, -5;
               splice @a, ?, ?, @b;

       Benchmark speed.

       Add faster functions that don't check uniqueness, so if I know part of the data comes
       from a unique source I can speed up the process. In short, shoot myself in the leg.

       Enable optional comparison with other functions.

       Write even better implementations.

AUTHOR
       Gabor Szabo <gabor@pti.co.il>

LICENSE
       Copyright (C) 2002-2008 Gabor Szabo <gabor@pti.co.il>
       All rights reserved.
       http://www.pti.co.il/

       You may distribute under the terms of either the GNU General Public License or the
       Artistic License, as specified in the Perl README file.

       No WARRANTY whatsoever.

CREDITS
       Thanks for suggestions and bug reports to
           Szabo Balazs (dLux)
           Shlomo Yona
           Gaal Yahas
           Jeff 'japhy' Pinyan
           Werner Weichselberger

VERSION
       Version: 0.08
       Date:    2008 June 04

perl v5.10.0                                  2009-03-06                       Array::Unique(3pm)
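A side note tying the man page back to the forum answer above: the Cookbook's order-preserving `@uniq = grep {! $seen{$_}++} @list;` idiom is the same pattern the awk one-liner uses. A minimal shell demonstration (the input list mirrors the one in the module's EXAMPLES section):

```shell
# awk analogue of Perl's  @uniq = grep {! $seen{$_}++} @list;
# the first occurrence of each line passes through, order preserved
printf 'a\nb\nc\na\nd\ne\nf\nb\n' | awk '!seen[$0]++'
# prints: a b c d e f (one per line)
```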
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.