Shell Programming and Scripting: Filter uniq field values (non-substring), Post 302901379 by vgersh99, Tuesday 13 May 2014, 02:19:35 PM
Quote:
Originally Posted by yifangt
I did not forget this discussion; the remaining challenge is speed. The script by vgersh99 took ~45 hours to finish on a file of 76,235 rows. Probably awk is not the best choice for processing a big file. Thank you all anyway!
I don't think it's a matter of the tool of choice, but rather of the algorithm used to handle the cached/stored matches so as to minimize the 'full table scan' done for each newly read row/line...
I couldn't think of an easy, non-convoluted way of minimizing that full table scan...
Maybe others have better ideas...
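For reference, a minimal awk sketch of the brute-force approach under discussion, assuming (my assumption, not stated in the post) that the value being de-duplicated is in field 1 and that a line counts as a duplicate when its value is contained in a value already kept:

# substring-uniq.awk -- brute-force sketch, not the actual script from the thread.
# Every new line is compared against every kept value -- the O(n^2) "full table
# scan" that makes the run so slow on 76,235 rows.
{
    for (i = 1; i <= n; i++)
        if (index(kept[i], $1))    # $1 occurs inside an earlier, longer value
            next
    kept[++n] = $1
    print
}

Run as awk -f substring-uniq.awk infile. The inner loop is the full table scan in question; any faster variant would have to replace it with some index over the stored values (by length, prefix, or similar) so that each new value is compared against a small candidate set only.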
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Uniq using only the first field

Hi all, I have a file that contains a list of codes (shown below). I want to 'uniq' the file using only the first field. Anyone know an easy way of doing it? Cheers, Dave ##### Input File ##### 1xr1 1xws 1yxt 1yxu 1yxv 1yxx 2o3p 2o63 2o64 2o65 1xr1 1xws 1yxt 1yxv 1yxx 2o3p 2o63 2o64... (8 Replies)
Discussion started by: Digby
8 Replies
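The replies to that thread are not quoted here, but the usual awk idiom for "uniq on the first field only" keeps the first line seen for each distinct value of field 1 (assuming whitespace-separated input), preserving the original order:

awk '!seen[$1]++' infile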

2. UNIX for Dummies Questions & Answers

How to uniq third field in a file

Hi; I have a question regarding the uniq command in Unix. How do I uniq the 3rd field in a file? Original file: zoom coord 39 18652 39 18652 zoom coord 39 18653 39 18653 zoom coord 39 18818 39 18818 zoom coord 39 18840 39 18840 zoom coord 41 15096 41 15096 zoom... (1 Reply)
Discussion started by: babycakes
1 Reply
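One common answer (the thread's own replies are not shown) is sort's unique-by-key mode; unlike the awk idiom above it reorders the output and keeps an arbitrary line from each group:

# one line per distinct value of the 3rd whitespace-separated field
sort -k3,3 -u file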

3. Shell Programming and Scripting

How to use uniq on a certain field?

How can I use uniq on a certain field, or what else could I use? I want to use uniq on the second field so that the output removes one of the lines with a 5. bob 5 hand jane 3 leg jon 4 head chris 5 lungs (1 Reply)
Discussion started by: Bandit390
1 Reply
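This is the same pattern as discussion 1, just on a different field; a sketch with the field number passed in as a variable:

# keep the first line seen for each distinct value of field f (here field 2)
awk -v f=2 '!seen[$f]++' file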

4. Shell Programming and Scripting

filter the uniq record problem

Can anyone help filter the unique records in the example below? Thank you very much. Input file: 20090503011111|test|abc 20090503011112|tet1|abc|def 20090503011112|test1|bcd|def 20090503011131|abc|abc 20090503011131|bbc|bcd 20090503011152|bcd|abc 20090503011151|abc|abc... (8 Replies)
Discussion started by: bleach8578
8 Replies
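Reading the goal as "keep only the records whose first pipe-delimited field occurs exactly once" (an assumption, since the question is truncated), a two-pass awk sketch:

# pass 1 counts each key, pass 2 prints only the lines whose key count is 1
awk -F'|' 'NR == FNR {cnt[$1]++; next} cnt[$1] == 1' infile infile

If the intent is instead to keep the first record of each group, the !seen[$1]++ idiom from the earlier discussions applies, with -F'|'.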

5. Shell Programming and Scripting

Uniq based on first field

Hi, new to Unix. I want to display only the unrepeated lines from a file, using the first field. Ex: 1234 uname1 status1 1235 uname2 status2 1234 uname3 status3 1236 uname5 status5 I used sort filename | uniq -u. Output: 1234 uname1 status1 1235 uname2 status2 1234 uname3 status3 1236... (10 Replies)
Discussion started by: venummca
10 Replies
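sort | uniq -u compares whole lines, which is why the two 1234 records survive. The two-pass awk from the previous discussion works here as well (with the default whitespace separator); with GNU uniq one can also restrict the comparison to the key's width, assuming the key really is a fixed 4 characters as in the sample:

# GNU uniq: drop every line whose first 4 characters are repeated
sort -k1,1 file | uniq -u -w 4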

6. Shell Programming and Scripting

Sort field and uniq

I have a flat file A.txt 2012/12/04 14:06:07 |trees|Boards 2, 3|denver|mekong|mekong12 2012/12/04 17:07:22 |trees|Boards 2, 3|denver|mekong|mekong12 2012/12/04 17:13:27 |trees|Boards 2, 3|denver|mekong|mekong12 2012/12/04 14:07:39 |rain|Boards 1|tampa|merced|merced11 How do I sort and get... (3 Replies)
Discussion started by: sabercats
3 Replies
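The question is cut off above, so this is only a guess at the intent; if the goal is one line per distinct value of one of the pipe-delimited fields, sorted on that field, the key-based sort already shown generalises, e.g. for the second field ("rain", "trees"):

# sort on the 2nd pipe-delimited field, keeping one line per distinct value
sort -t'|' -k2,2 -u A.txt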

7. Shell Programming and Scripting

Printing uniq first field with the the highest second field

Hi all, I am searching for a script that will produce an output file with each unique first field paired with the highest second-field value among all its duplicates. The output file should contain only the keys that are duplicated 3 times. Input file X 9 B 5 A 1 Z 9 T 4 C 9 A 4... (13 Replies)
Discussion started by: ailnilanjan
13 Replies
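A sketch of one way to do it, assuming whitespace-separated input with the key in field 1 and a numeric value in field 2, and reading "duplicated 3 times" as "the key appears exactly three times" (output order is arbitrary):

awk '{
    cnt[$1]++
    if (!($1 in max) || $2 + 0 > max[$1] + 0)   # track the highest value per key
        max[$1] = $2
}
END {
    for (k in cnt)
        if (cnt[k] == 3)                        # only keys seen exactly 3 times
            print k, max[k]
}' infile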

8. Shell Programming and Scripting

Grok filter to extract substring from path and add to host field in logstash

Hi, I am reading data from files by defining the path as *.log etc. File names are like app1a_test2_heep.log, cdc2a_test3_heep.log, etc. How do I configure logstash so that the part of the string before the underscore (app1a, cdc2a, ...) is extracted and added to the host field and... (7 Replies)
Discussion started by: Ravi Kishore
7 Replies
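The logstash/grok configuration itself is outside what this snippet answers, but the string manipulation being asked for (everything before the first underscore of the file name) is plain shell parameter expansion:

f=app1a_test2_heep.log
echo "${f%%_*}"    # strips from the first underscore on; prints: app1a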

9. Shell Programming and Scripting

HELP - uniq values per column

Hi all, I am trying to output unique values per column; see the file below. Can you please assist? Thank you in advance. cat names joe allen ibm joe smith ibm joe allen google joe smith google rachel allen google Desired output is: joe allen google rachel smith ibm (5 Replies)
Discussion started by: Apollo
5 Replies
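A sketch of one reading of the request: collect the distinct values of each column and print them back out as columns. The pairing across columns within each output row is then arbitrary, which matches the sample output only up to that reordering. It assumes every input line has the same number of fields:

awk '{
    for (c = 1; c <= NF; c++)
        if (!seen[c, $c]++)            # first time this value appears in column c
            val[c, ++cnt[c]] = $c
}
END {
    rows = 0
    for (c = 1; c <= NF; c++)          # NF still holds the last record's field count
        if (cnt[c] > rows) rows = cnt[c]
    for (r = 1; r <= rows; r++) {
        line = ""
        for (c = 1; c <= NF; c++)
            line = line (c > 1 ? " " : "") (r <= cnt[c] ? val[c, r] : "")
        print line
    }
}' names

On the sample file this prints "joe allen ibm" and "rachel smith google".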

10. Shell Programming and Scripting

awk to update field using matching value in file1 and substring in field in file2

In the awk below I am trying to set/update the value of $14 in file2 (in bold), matching the NM_ in $12 or $9 of file2 against the NM_ in $2 of file1. The lengths of $9 and $12 can vary, but what is consistent is that the start pattern will always be NM_ and the end pattern is always ;... (2 Replies)
Discussion started by: cmccabe
2 Replies
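The input files are not shown in this snippet, so only the general shape of a two-file awk lookup can be sketched: build a map from file1 keyed on the NM_ accession, pull the NM_ id out of $12 (or $9) of file2 up to the ";", and overwrite $14 when the key matches. The column positions assumed for file1 ($2 for the key, $3 for the replacement value) are illustrative only:

awk 'BEGIN { FS = OFS = "\t" }                  # assuming tab-separated files
NR == FNR { new[$2] = $3; next }                # file1: hypothetical key ($2 = NM_ id) -> new value ($3)
{
    nm = ""
    if (match($12, /NM_[^;]*/))                 # NM_... up to the ";" in $12,
        nm = substr($12, RSTART, RLENGTH)
    else if (match($9, /NM_[^;]*/))             # otherwise in $9
        nm = substr($9, RSTART, RLENGTH)
    if (nm in new) $14 = new[nm]
    print
}' file1 file2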