forming duplicate rows based on value of a key


 
# 8, 03-25-2010
Hey,

You understood 200% correctly. That's exactly what I expected.

Thanks,
Ruby
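
The working solution itself is not quoted in this reply, so for later readers, here is a minimal sketch of what the thread title describes, assuming the goal is to print each input row N times where N is taken from a key column (the last comma-separated field in this made-up example; adjust -F and the field number to your data):

    awk -F',' '{ n = $NF + 0; for (i = 0; i < n; i++) print }' infile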
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Extract and exclude rows based on duplicate values

Hello, I have a file like this:
> cat examplefile
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss
>
I can sort the file by... (5 Replies)
Discussion started by: CHoggarth
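A hedged sketch for this kind of split, assuming '|' as the delimiter, the second field as the key, and that rows whose key appears more than once should go to one output file and the rest to another (the file names dup.out and uniq.out are made up):

    awk -F'|' 'NR == FNR { cnt[$2]++; next }
               { print > (cnt[$2] > 1 ? "dup.out" : "uniq.out") }' examplefile examplefile

The file is read twice: the first pass counts each key, the second pass routes every line by its key's count.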

2. Shell Programming and Scripting

Convert rows to columns based on key and count

Team, I have a requirement to convert rows to columns. Input is:
key, count, id1, pulse1, id2, pulse2, id3, pulse3
12, 2, 14, 56, 15, 65
13, 3, 12, 32, 14, 23, 18, 54
22, 1, 32, 42
Expected output:
key, id, pulse
12, 14, 56
12, 15, 65
13, 12, 32
13, 14, 23
13, 18, 54
22, 32,... (3 Replies)
Discussion started by: syam1406
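One possible awk approach, assuming comma-separated input where field 1 is the key, field 2 is the count of (id, pulse) pairs, and the pairs follow in order (the header line is skipped):

    awk -F' *, *' 'NR > 1 { for (i = 0; i < $2; i++) print $1 ", " $(3 + 2*i) ", " $(4 + 2*i) }' infile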

3. Shell Programming and Scripting

Remove duplicate rows based on one column

Dear members, I need to filter a file based on the 8th column (that is the id); the other columns do not matter. I want just one line per id and to remove the duplicate lines based on this id (8th column); it does not matter which duplicate is removed. Example of my file... (3 Replies)
Discussion started by: clarissab
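Assuming whitespace-separated columns and that keeping the first line seen for each id is acceptable, the usual awk idiom for this is:

    awk '!seen[$8]++' infile > infile.dedup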

4. Shell Programming and Scripting

Find duplicate based on 'n' fields and mark the duplicate as 'D'

Hi, in a file I have to mark duplicate records as 'D' and the latest record alone as 'C'. In the below file, I have to identify whether there are duplicate records based on Man_ID, Man_DT, Ship_ID, and I have to mark the record with the latest Ship_DT as "C" and the others as "D" (I have to create... (7 Replies)
Discussion started by: machomaddy
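The file layout is cut off above, so this is only a rough sketch: assuming comma-separated records, Man_ID, Man_DT and Ship_ID in fields 1-3, and a Ship_DT in field 4 that sorts correctly as text (e.g. YYYYMMDD), a two-pass awk can tag the latest record per key as C and the rest as D:

    awk -F',' 'NR == FNR { k = $1 FS $2 FS $3; if ($4 > max[k]) max[k] = $4; next }
               { k = $1 FS $2 FS $3; print $0 FS (($4 == max[k]) ? "C" : "D") }' data data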

5. UNIX for Dummies Questions & Answers

Remove duplicate rows when >10 based on single column value

Hello, I'm trying to delete duplicates when there are more than 10 duplicates, based on the value of the first column. e.g.
a 1
a 2
a 3
b 1
c 1
gives
b 1
c 1
but it requires 11 duplicates before it deletes. Thanks for the help. (11 Replies)
Discussion started by: informaticist
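A two-pass sketch, assuming whitespace-separated input and that every row whose first-column value occurs more than 10 times should be dropped:

    awk 'NR == FNR { cnt[$1]++; next } cnt[$1] <= 10' infile infile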

6. Shell Programming and Scripting

Duplicate rows in CSV files based on values

I am new to this forum and this is my first post. I am looking at an old post with exactly the same name (cannot paste the URL because I do not have 5 posts). My requirement is exactly the opposite: I want to get rid of duplicate rows and try to append the values of columns in those rows ... (10 Replies)
Discussion started by: vbhonde11
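The full requirement is cut off above; as a rough sketch, assuming comma-separated rows where the first two fields identify a record and the value fields of duplicate rows should be collected onto a single merged line (output order is not preserved):

    awk -F',' '{ k = $1 FS $2; v = $3
                 for (i = 4; i <= NF; i++) v = v FS $i
                 merged[k] = (k in merged) ? merged[k] FS v : v }
               END { for (k in merged) print k FS merged[k] }' input.csv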

7. Shell Programming and Scripting

how to delete duplicate rows based on last column

Hi, I have a huge amount of data stored in a file. Here I need to remove duplicate rows in such a way that the last column has different data; I must check for the greatest among the last-column data and print the largest along with the other entries, but just one of the other duplicate entries is... (16 Replies)
Discussion started by: reva
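A hedged sketch, assuming rows count as duplicates when everything except the last (whitespace-separated) column matches, and that only the row with the numerically largest last column should be kept (output order is not preserved):

    awk '{ key = $0; sub(/[ \t]+[^ \t]+$/, "", key)
           if (!(key in max) || $NF + 0 > max[key] + 0) { max[key] = $NF; best[key] = $0 } }
         END { for (k in best) print best[k] }' infile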

8. Shell Programming and Scripting

Duplicate rows in CSV files based on values

I want to duplicate a row if two or more values are found in a particular column of the corresponding row, which is delimited by commas. Input:
abc,line one,value1
abc,line two, value1, value2
abc,line three,value1
needs to be converted to:
abc,line one,value1
abc,line two, value1
abc,line... (8 Replies)
Discussion started by: Incrediblian
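Assuming the first two comma-separated fields stay fixed and every remaining field is a value that should get its own row, a short awk sketch:

    awk -F' *, *' -v OFS=',' '{ for (i = 3; i <= NF; i++) print $1, $2, $i }' input.csv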

9. Shell Programming and Scripting

How to delete duplicate records based on key

For example, suppose I have a file which contains data as:
$ cat data
800,2
100,9
700,3
100,9
200,8
100,3
Now I want the output as:
200,8
700,3
800,2
The key is the first three characters; I don't want any records which have duplicate keys. Like sort +0.0 -0.3 data, can we use... (9 Replies)
Discussion started by: sumitc
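Assuming the key is the first comma-separated field (the first three characters in the sample) and that every record whose key occurs more than once should be dropped entirely, a two-pass awk plus sort reproduces the expected output:

    awk -F',' 'NR == FNR { cnt[$1]++; next } cnt[$1] == 1' data data | sort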

10. UNIX for Dummies Questions & Answers

Remove duplicate rows of a file based on a value of a column

Hi, I am processing a file and would like to delete duplicate records as indicated by one of its columns, e.g.
COL1 COL2 COL3
A    1234 1234
B    3k32 2322
C    Xk32 TTT
A    NEW  XX22
B    3k32 ... (7 Replies)
Discussion started by: risk_sly
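If keeping one arbitrary line per COL1 value is acceptable, sort can do this on its own (the same awk '!seen[$1]++' idiom as in thread 3 above also works and preserves input order):

    sort -k1,1 -u file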
ftok(3) 						     Library Functions Manual							   ftok(3)

Name
       ftok - standard interprocess communication package

Syntax
       #include <sys/types.h>
       #include <sys/ipc.h>

       key_t ftok(path, id)
       char *path;
       char id;

Description
       All interprocess communication facilities require the user to supply a key to be used by the msgget(2), semget(2), and shmget(2) system
       calls to obtain interprocess communication identifiers.  One suggested method for forming a key is to use the file-to-key subroutine,
       ftok(), described below.  Another way to compose keys is to include the project ID in the most significant byte and to use the remaining
       portion as a sequence number.  There are many other ways to form keys, but it is necessary for each system to define standards for forming
       them.  If some standard is not adhered to, it will be possible for unrelated processes to unintentionally interfere with each other's
       operation.  Therefore, it is strongly suggested that the most significant byte of a key in some sense refer to a project so that keys do
       not conflict across a given system.

       The ftok() subroutine returns a key based on path and id that is usable in subsequent msgget(2), semget(2), and shmget(2) system calls.
       The path must be the path name of an existing file that is accessible to the process.  The id is a character which uniquely identifies a
       project.  Note that ftok() will return the same key for linked files when called with the same id, and that it will return different keys
       when called with the same file name but different ids.

Return Values
       The ftok() subroutine returns (key_t)-1 if path does not exist or if it is not accessible to the process.

Warning
       If the file whose path is passed to ftok() is removed while keys still refer to the file, future calls to ftok() with the same path and id
       will return an error.  If the same file is recreated, ftok() is likely to return a different key than it did the original time it was
       called.
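
Examples
       A minimal usage sketch; the path /tmp and the project id 'A' are arbitrary choices here, and the resulting key is passed to msgget(2) to
       obtain a message queue identifier.

              #include <sys/types.h>
              #include <sys/ipc.h>
              #include <sys/msg.h>
              #include <stdio.h>

              int
              main(void)
              {
                      key_t key;
                      int msqid;

                      /* path must name an existing, accessible file; 'A' identifies the project */
                      key = ftok("/tmp", 'A');
                      if (key == (key_t)-1) {
                              perror("ftok");
                              return 1;
                      }

                      /* use the key to create (or look up) a message queue */
                      msqid = msgget(key, IPC_CREAT | 0600);
                      if (msqid == -1) {
                              perror("msgget");
                              return 1;
                      }
                      printf("key = %ld, msqid = %d\n", (long)key, msqid);
                      return 0;
              }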

See Also
       intro(2), msgget(2), semget(2), shmget(2)
