UNIX for Dummies Questions & Answers
Remove duplicate rows of a file based on a value of a column
Post 302240605 by risk_sly on Friday, 26 September 2008, 06:10 AM
Thanks again Jim, but I still get the "arr[: event not found" error. I also noticed that when I recall the command (by pressing the up arrow key), the part "!arr[" is removed from the script, i.e. the script becomes

awk -F, '$1]++' oldfile > newfile

I'm running on Solaris, and have also tried gawk and nawk, but the same error is returned.

Thank you.
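For anyone hitting the same problem: the "event not found" message comes from the shell's history expansion, which treats ! specially; in csh/tcsh (a common Solaris login shell) this happens even inside single quotes, which is why the !arr[ portion disappears from the recalled command. Below is a sketch of the intended one-liner plus a portable variant; oldfile and newfile are the names used earlier in the thread.

# De-duplicate on the first comma-separated column: print a line only the
# first time its $1 value is seen. The \! escape is the csh/tcsh idiom for
# suppressing history expansion (do not use the backslash in bash):
awk -F, '\!arr[$1]++' oldfile > newfile

# Alternative that avoids ! entirely, so it is safe in any interactive shell:
awk -F, 'arr[$1]++ == 0' oldfile > newfile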
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

To remove dates and duplicate rows from a log file using Unix commands

Hi, I have a log file of size 48 MB. For such a large log file, I want to extract the messages in a particular format that includes only unique error and exception messages. The following things need to be done: 1) remove all the dates and times from the log file 2) remove all the...
Discussion started by: Pank10
1 Reply
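The exact log format is not given, but the usual two-step approach is to strip the leading timestamp and then de-duplicate what remains. A sketch, assuming timestamps like "2008-09-26 06:10:00" at the start of each line (adjust the pattern to the real format):

# Remove a leading "YYYY-MM-DD hh:mm:ss " stamp, then keep each distinct
# message once, preserving the order of first appearance:
sed 's/^[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\} [0-9:]\{8\} //' logfile | awk '!seen[$0]++' > unique_messages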

2. Shell Programming and Scripting

How to delete duplicate rows based on the last column

Hi, I have a huge amount of data stored in a file. I need to remove duplicate rows that differ only in the last column: among the duplicates I must find the greatest last-column value and print that row along with the other entries, keeping just one of the duplicate entries...
Discussion started by: reva
16 Replies
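A possible reading of this: rows are duplicates when everything except the last column matches, and the row with the largest last-column value should win. A sketch under those assumptions (whitespace-separated fields, numeric last column; note the END loop does not preserve input order):

# Track the maximum last-field value per key, where the key is the whole
# line minus its last field, and remember the winning row:
awk '{
    key = $0
    sub(/[[:space:]]+[^[:space:]]+$/, "", key)   # key = all but the last field
    if (!(key in max) || $NF + 0 > max[key] + 0) { max[key] = $NF; row[key] = $0 }
}
END { for (k in row) print row[k] }' infile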

3. Shell Programming and Scripting

Remove duplicate line detail based on column one data

My input file:
AVI.out <detail>named as the RRM .</detail>
AVI.out <detail>Contains 1 RRM .</detail>
AR0.out <detail>named as the tellurite-resistance.</detail>
AWG.out <detail>Contains 2 HTH .</detail>
ADV.out <detail>named as the DENR family.</detail>
ADV.out ...
Discussion started by: patrick87
10 Replies

4. UNIX for Dummies Questions & Answers

How to remove duplicates from a file based on many conditions

Hi friends, I have a huge set of data stored in a file, as shown below.
a.dat:
RAO 1869 12 19 0 0 0.00 17.9000 82.3000 10.0 0 0.00 0 3.70 0.00 0.00 0 0.00 3.70 4 NULL
LEE 1870 4 11 1 0 0.00 30.0000 99.0000 0.0 0 0.00 0 0.00 0.00 0.00 0 ...
Discussion started by: reva
3 Replies

5. UNIX for Dummies Questions & Answers

Remove duplicate rows when >10 based on single column value

Hello, I'm trying to delete duplicates when there are more than 10 duplicates, based on the value of the first column. E.g.
a 1
a 2
a 3
b 1
c 1
gives
b 1
c 1
but it should require 11 duplicates before it deletes. Thanks for the help...
Discussion started by: informaticist
11 Replies
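One way to read this: drop every row whose column-1 value occurs more than 10 times. A two-pass sketch under that assumption (the file is read twice, so it must be a regular file, not a pipe):

# Pass 1 counts occurrences of each key; pass 2 prints only rows whose
# key appeared 10 times or fewer:
awk 'NR == FNR { count[$1]++; next } count[$1] <= 10' infile infile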

6. UNIX for Dummies Questions & Answers

Merging rows into a new file based on rows and the first column

I have 2 files: file01 has 7 columns and an unknown (but small) number of rows; file02 has 7 columns and an unknown (but large) number of rows. Now I want to create an output keyed on the first field that is shared by both, subtract the results from the rest of the fields, and print them, e.g. file01: James|0|50|25|10|50|30...
Discussion started by: A-V
1 Reply
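The subtraction direction is not fully specified, but a common pattern is to load the smaller file into memory and subtract field by field for each key shared with the larger file. A sketch, assuming '|'-separated fields and file01's values minus file02's:

# Load file01 keyed on column 1, then for each matching row of file02
# print the key followed by the per-field differences:
awk -F'|' -v OFS='|' '
NR == FNR { for (i = 2; i <= NF; i++) ref[$1, i] = $i; keys[$1]; next }
$1 in keys {
    out = $1
    for (i = 2; i <= NF; i++) out = out OFS ref[$1, i] - $i
    print out
}' file01 file02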

7. Shell Programming and Scripting

Remove duplicate rows based on one column

Dear members, I need to filter a file based on the 8th column (the id); the other columns don't matter. I want just one line per id, removing the duplicate lines based on this id (8th column), and it doesn't matter which duplicate is removed. Example of my file...
Discussion started by: clarissab
3 Replies

8. Shell Programming and Scripting

Remove duplicate lines from file based on fields

Dear community, I have to remove duplicate lines from a file containing a very big amount of rows (millions?) based on the 1st and 3rd columns. The data are like this:
Region 23/11/2014 09:11:36 41752
Medio 23/11/2014 03:11:38 4132
Info 23/11/2014 05:11:09 4323...
Discussion started by: Lord Spectre
2 Replies

9. Shell Programming and Scripting

Filter file to remove duplicate values in first column

Hello, I have a script that is generating a tab-delimited output file:
num   Name         PCA_A1    PCA_A2    PCA_A3
0     compound_00  -3.5054   -1.1207   -2.4372
1     compound_01  -2.2641    0.4287   -1.6120
3     compound_03  -1.3053    1.8495   ...
Discussion started by: LMHmedchem
3 Replies
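Discussions 7, 8, and 9 above are all variations of the one-liner from this thread; only the key (and, for tab-delimited data, the field separator) changes:

# One line per value of column 8 (discussion 7):
awk '!seen[$8]++' infile > outfile
# Key on columns 1 and 3 together (discussion 8):
awk '!seen[$1, $3]++' infile > outfile
# Tab-delimited input, key on column 1 (discussion 9):
awk -F'\t' '!seen[$1]++' infile > outfile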

10. Shell Programming and Scripting

Remove duplicate values in a column (not in the file)

Hi gurus, I have a file (weblog) as below:
abc|xyz|123|agentcode=sample code abcdeeess,agentcode=sample code abcdeeess,agentcode=sample code abcdeeess|agentadd=abcd stereet 23343,agentadd=abcd stereet 23343
sss|wwq|999|agentcode=sample1 code wqwdeeess,gentcode=sample1 code...
Discussion started by: ratheeshjulk
4 Replies
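Here the duplicates are repeated comma-separated values inside a '|'-delimited field rather than whole lines. A sketch, assuming the repeats sit within field 4 (the split("", seen) idiom clears the array portably):

# Split field 4 on commas, keep each token only the first time it
# appears, and rebuild the field:
awk -F'|' -v OFS='|' '{
    n = split($4, tok, ",")
    out = ""
    split("", seen)
    for (i = 1; i <= n; i++)
        if (!seen[tok[i]]++) out = (out == "" ? tok[i] : out "," tok[i])
    $4 = out
    print
}' weblog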