awk statement to eliminate the duplicates


 
# 8  
Old 12-29-2011
@klashxx.. the code didn't work
Code:
cat tablextract2.sql
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;
[root@dunkin-ds-dev-103 vivek]#
[root@dunkin-ds-dev-103 vivek]#  perl -in -e '$print=1 if /'${proc_name1}'/;print unless !$print;/Delimiter/i && exit ' tablextract2.sql
[root@dunkin-ds-dev-103 vivek]# cat tablextract2.sql
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;

Anyway, thanks for helping me out :-) It gave me some idea of how to use the code. Thanks a lot. I will use awk; it's giving the output correctly.
# 9  
Old 12-29-2011
I forgot to change the flag order:
Code:
# cat tablextract2.sql
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;

Code:
#proc_name1=after72DeleteTgr

Code:
# perl -ni -e '$print=1 if /'${proc_name1}'/;print unless !$print;/Delimiter/i && exit ' tablextract2.sql

Code:
# cat tablextract2.sql
CREATE PROCEDURE after72DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;

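A note on the flag order, going by perl's documented option handling:
Code:
# why "-in" did nothing: -i takes the rest of the switch cluster as its backup
# suffix, so "-in" means in-place editing with backup suffix "n" and the -n read
# loop is never enabled; without that loop the file is never read, hence no change.
# "-ni" turns on the read loop first (-n) and then plain in-place editing (-i):
perl -ni -e '$print=1 if /'${proc_name1}'/;print unless !$print;/Delimiter/i && exit ' tablextract2.sql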
# 10  
Old 12-30-2011
Thanks, now it's working :-)
# 11  
Old 01-04-2012
I have something like this:
Code:
cat diff
CREATE PROCEDURE after100DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;
CREATE PROCEDURE after100DeleteTgr(id int)
BEGIN
        insert into DeleteEvent(entityName, entityId) values ('PersonalContactNote', id) ;
END
$$
Delimiter ;

When I perform the below:
Code:
proc_name1="after100DeleteTgr"
echo "`awk 'BEGIN { FS = "[( ]" } {if($3~v){a=1}}a;tolower($0) ~ /delimiter/{exit}' v="$proc_name1" diff`" > diff
 
cat diff
CREATE PROCEDURE after100DeleteTgr(id int)
BEGIN
END
$$
Delimiter ;

So what I really want is:
Code:
 
cat diff
CREATE PROCEDURE after100DeleteTgr(id int)
BEGIN
        insert into DeleteEvent(entityName, entityId) values ('PersonalContactNote', id) ;
END
$$
Delimiter ;

That is, I want the second duplicate block in the file diff to be the one written back into diff.
Can anyone help with what to modify in the awk command so that it writes the second duplicate (if the file has one) back to the same file?
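For reference, the awk above spelled out with comments (same command, same behaviour) shows why it can only ever keep the first copy; keeping the second needs a counter on the matches, as in the reply below.
Code:
# commented form of the command used above; the behaviour is identical
awk 'BEGIN { FS = "[( ]" }               # split on "(" or space, so $3 is the bare procedure name
     { if ($3 ~ v) { a = 1 } }           # start printing at the first CREATE line that matches
     a;                                  # print while the flag is set
     tolower($0) ~ /delimiter/ { exit }  # ...and quit at the very first Delimiter, so the second copy is never reached
' v="$proc_name1" diff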

# 12  
Old 01-04-2012
Try this; replace the awk command with:
Code:
awk '/^Delimiter/ && f{f=0;next} $3 ~ v && ++c==n{f=1}!f' n=1 v="$proc_name1" diff

To remove the first instance, set the variable n=1.
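For readability, here is the same one-liner spelled out with comments (behaviour unchanged):
Code:
# n selects which matching block to drop; c counts the lines whose 3rd field matches v
awk '
  /^Delimiter/ && f { f = 0; next }   # end of the block being skipped: drop this line too and reset the flag
  $3 ~ v && ++c == n { f = 1 }        # nth matching CREATE line: start skipping, this line included
  !f                                  # print every line that is not inside a skipped block
' n=1 v="$proc_name1" diff

With n=1 the first after100DeleteTgr block is dropped, so the second copy (the one containing the insert statement) is what remains.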
# 13  
Old 01-04-2012
Is there any way to redirect the output to the same file? Also, the end point here is Delimiter; what if it's DELIMITER or delimiter? I tried the one below:
Code:
 echo "`awk '/^[dD]elimiter\|DELIMITER/ && f{f=0;next} $3 ~ v && ++c==n{f=1}!f' n=1 v="$proc_name1" diff`" > diff
-bash: !f': event not found

but it's giving the error above.
# 14  
Old 01-04-2012
Quote:
Originally Posted by vivek d r
Is there any way to redirect the output to the same file? Also, the end point here is Delimiter; what if it's DELIMITER or delimiter? I tried the one below:
Code:
 echo "`awk '/^[dD]elimiter\|DELIMITER/ && f{f=0;next} $3 ~ v && ++c==n{f=1}!f' n=1 v="$proc_name1" diff`" > diff
-bash: !f': event not found

but it's giving the error above.
Maybe something like this?
Code:
awk 'toupper($1)=="DELIMITER" && f{f=0;next} $3 ~ v && ++c==n{f=1}!f' n=1 v="$proc_name1" diff > diff_temp

mv diff_temp diff

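As a side note, the temp file is what makes this safe: with awk ... diff > diff the shell truncates diff before awk ever reads it. The earlier "event not found" message came from bash history expansion hitting the ! of !f inside the double-quoted backticks; the temp-file form avoids that as well. If GNU awk 4.1 or later is available, its inplace extension is an alternative (a sketch, not tested on this data):
Code:
# requires GNU awk (gawk) 4.1+: the inplace extension rewrites diff in place, no explicit temp file
gawk -i inplace 'toupper($1)=="DELIMITER" && f{f=0;next} $3 ~ v && ++c==n{f=1}!f' n=1 v="$proc_name1" diff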