Full Discussion: Removing all the duplicates
Post 302547567 by pandeesh on Tuesday 16th of August 2011 04:20:03 AM
Yes, with nawk it is working. But I want to make the 10th field the key field, so what do I need to change in that script?
Shall I replace $1 with $10?

Thanks
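
If the earlier script is the usual awk associative-array idiom (an assumption, since the script itself is not quoted in this post), then yes, keying on $10 instead of $1 is all that changes. A minimal sketch, assuming whitespace-separated fields and a hypothetical input file named infile:

  # print a line only the first time its 10th field is seen
  nawk '!seen[$10]++' infile > outfile

If the fields are separated by something other than whitespace, add -F',' (or whatever the delimiter is) so that $10 really is the column you mean.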
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Removing duplicates

Hi, I've been trying to remove duplicate lines with similar columns in a fixed width file and it's not working. I've searched the forum but nothing comes close. I have a sample file: 27147140631203RA CCD * 27147140631203RA PPN * 37147140631207RD AAA 47147140631203RD JNA... (12 Replies)
Discussion started by: giannicello
12 Replies
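
A minimal sketch for the fixed-width case, assuming (a guess from the sample, not stated in the thread) that the first 16 characters are the columns that define a duplicate, and a hypothetical file name sample.txt:

  # keep the first line seen for each distinct 16-character prefix
  awk '!seen[substr($0, 1, 16)]++' sample.txt > dedup.txt

Adjust the substr() start and length to whichever character positions actually matter.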

2. UNIX for Dummies Questions & Answers

removing duplicates and sort -k

Hello experts, I am trying to remove all lines in a csv file where the 2nd column is a duplicate. I am trying to use sort with the key parameter: sort -u -k 2,2 File.csv > Output.csv File.csv File Name|Document Name|Document Title|Organization Word Doc 1.doc|Word Document|Sample... (3 Replies)
Discussion started by: orahi001
3 Replies
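
The sort command in the quote mainly needs the field separator spelled out; an awk variant also preserves the original line order. A sketch, assuming the file is pipe-delimited as in the sample:

  # -t sets the delimiter; -u -k2,2 keeps one line per distinct 2nd column
  sort -t'|' -u -k2,2 File.csv > Output.csv
  # order-preserving alternative: the first occurrence of each 2nd column wins
  awk -F'|' '!seen[$2]++' File.csv > Output.csv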

3. Shell Programming and Scripting

removing duplicates

Hi, I have a file that is a list of people and their credentials that I receive frequently. The issue is that when I cat this list, duplicate entries exist and are NOT CONSECUTIVE (i.e. uniq -1 may not work here). I'm trying to write a script that will remove duplicate entries; the script can... (5 Replies)
Discussion started by: stevie_velvet
5 Replies
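
Since uniq only collapses consecutive duplicate lines, the usual answer here is the awk associative-array idiom, which works regardless of ordering. A sketch with hypothetical file names:

  # print each distinct line only the first time it appears
  awk '!seen[$0]++' people.list > people.dedup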

4. Shell Programming and Scripting

Removing duplicates

Hi, I have a file in the below format: test test (10) to to (25) see see (45) and I need the output in the format of test 10 to 25 see 45. Can someone help me? (6 Replies)
Discussion started by: imdadulla
6 Replies
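
A sketch for that transformation, assuming every input line has exactly the shape "word word (number)" as in the sample:

  # print the first word and the third field with its parentheses stripped
  awk '{ gsub(/[()]/, "", $3); print $1, $3 }' file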

5. UNIX for Advanced & Expert Users

removing duplicates.

Hi All, in Unix we have a file where we have to remove the duplicates using one specific column. Can anybody tell me the command? ex: file1 id,name 1,ww 2,qwq 2,asas 3,asa 4,asas 4,asas o/p: 1,ww 2,qwq 3,asa (7 Replies)
Discussion started by: raju4u
7 Replies
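
A sketch that keys on the first comma-separated column and keeps the first occurrence of each id (file1 is the name used in the quote):

  # -F, splits on commas; a line is printed only the first time its id appears
  awk -F, '!seen[$1]++' file1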

6. Shell Programming and Scripting

Removing duplicates

I have a test file with the following 2 columns: Col 1 | Col 2 T1 | 1 <= remove T5 | 1 T4 | 2 T1 | 3 T3 | 3 T4 | 1 <= remove T1 | 2 <= remove T3 ... (7 Replies)
Discussion started by: gctex
7 Replies

7. Shell Programming and Scripting

Help in removing duplicates

I have an input file abc.txt with info like: abcd rateuse inklite robet rateuse abcd. I need to remove duplicates (e.g. abcd, rateuse) from the file and place the contents in the same file abc.txt; if needed they can be placed in another file. Can anyone help me with this? :( (4 Replies)
Discussion started by: rkrish
4 Replies
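
A sketch, assuming each entry sits on its own line in abc.txt (the quote does not show the exact layout) and writing back through a temporary file:

  # drop repeated lines, then replace the original file
  awk '!seen[$0]++' abc.txt > abc.txt.tmp && mv abc.txt.tmp abc.txt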

8. UNIX for Dummies Questions & Answers

Removing duplicates from a file

Hi All, I am merging files coming from 2 different systems, and while doing that I am getting duplicate entries in the merged file: I,01,000131,764,2,4.00 I,01,000131,765,2,4.00 I,01,000131,772,2,4.00 I,01,000131,773,2,4.00 I,01,000168,762,2,2.00 I,01,000168,763,2,2.00... (5 Replies)
Discussion started by: Sri3001
5 Replies
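
A sketch that merges and deduplicates in one step; the input names system1.csv and system2.csv are hypothetical:

  # concatenate both sources; awk passes through only the first copy of each line
  cat system1.csv system2.csv | awk '!seen[$0]++' > merged.csv

sort -u system1.csv system2.csv > merged.csv does the same job if the output order does not matter.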

9. Shell Programming and Scripting

Removing duplicates except the last occurrence

Hi All, I have a file like the one below: @DB_FCTS\src\Data\Scripts\Delete_CU_OM_BIL_PRT_STMT_TYP.sql @DB_FCTS\src\Data\Scripts\Delete_CDP_BILL_LBL_MSG.sql @DB_FCTS\src\Data\Scripts\Delete_OM_BIDDR.sql @DB_FCTS\src\Data\Scripts\Insert_CU_OM_LBL_MSG.sql... (11 Replies)
Discussion started by: mechvijays
11 Replies
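
A sketch for keeping only the last occurrence of each line: reverse the file, keep first occurrences, reverse back. The file names are hypothetical, and tac is the GNU reversal tool (tail -r plays the same role on BSD systems):

  # the last occurrence wins because the file is processed back to front
  tac scripts.lst | awk '!seen[$0]++' | tac > scripts.dedup.lst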

10. Shell Programming and Scripting

Removing duplicates from new file

I have two files, and I want to remove/delete all the duplicate lines in file2, viz. unix, unix2, unix3. (2 Replies)
Discussion started by: sagar_1986
2 Replies
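
If the goal is to delete from file2 every line that also appears in file1 (that is how the truncated quote reads), a sketch:

  # load file1 into an array, then print only the file2 lines not present in it
  awk 'NR==FNR { a[$0]; next } !($0 in a)' file1 file2 > file2.clean

grep -vxFf file1 file2 is a shorter equivalent for exact whole-line matches.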
AUSEARCH_ADD_ITEM(3)                    Linux Audit API                    AUSEARCH_ADD_ITEM(3)

NAME
       ausearch_add_item - build up search rule

SYNOPSIS
       #include <auparse.h>

       int ausearch_add_item(auparse_state_t *au, const char *field, const char *op, const char *value, ausearch_rule_t how);

DESCRIPTION
       ausearch_add_item adds one search condition to the current audit search expression. The search conditions can then be used to scan logs, files, or buffers for something of interest.

       The field value is the field name that the value will be checked for. The op variable describes what kind of check is to be done. Legal op values are:

       exists just check that a field name exists

       =      locate the field name and check that the value associated with it is equal to the value given in this rule.

       !=     locate the field name and check that the value associated with it is NOT equal to the value given in this rule.

       The value parameter is compared to the uninterpreted field value.

       The how value determines how this search condition will affect the existing search expression if one is already defined. The possible values are:

       AUSEARCH_RULE_CLEAR
              Clear the current search expression, if any, and use only this search condition.

       AUSEARCH_RULE_OR
              If a search expression E is already configured, replace it by (E || this_search_condition).

       AUSEARCH_RULE_AND
              If a search expression E is already configured, replace it by (E && this_search_condition).

RETURN VALUE
       Returns -1 if an error occurs; otherwise, 0 for success.

SEE ALSO
       ausearch_add_expression(3), ausearch_add_interpreted_item(3), ausearch_add_timestamp_item(3), ausearch_add_regex(3), ausearch_set_stop(3), ausearch_clear(3), ausearch_next_event(3), ausearch-expression(5).

AUTHOR
       Steve Grubb

Red Hat                                    Nov 2007                    AUSEARCH_ADD_ITEM(3)