Full Discussion: awk command optimization
Post 302917047 by SkySmart on Saturday 13th of September 2014, 03:01:37 PM
Quote:
Originally Posted by disedorgue
Hi,
You can try:
Code:
gawk -v sw="error|fail|panic|accepted" '
BEGIN {
    c = split(sw, a, "[|]")
}
NR > 1 && NR <= 128500 && match($0, "/" sw "/") {
    d[substr($0, RSTART, RLENGTH)]++
}
END {
    for (i in a)
        o = o (a[i] "=" (d[a[i]] ? d[a[i]] : 0) ",")
    sub(",*$", "", o)
    print o
}' /var/log/treg.test

Regards.
Thank you so much!

This looks promising. When I run it, though, it only gives a count for one of the strings, even though there are lines in the data file that contain the other strings:

Code:
accepted=0,error=0,fail=3859,panic=0
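
A likely explanation, looking at the code above: the match() pattern is built as "/" sw "/", so the two slashes become part of the dynamic regex string, i.e. /error|fail|panic|accepted/. Because the alternation binds loosely, that regex actually matches "/error", "fail", "panic", or "accepted/" — which is why only fail ever gets counted. A minimal sketch of a fix (untested against the original data; note match() still counts at most one keyword per line, the first one found):

Code:
gawk -v sw="error|fail|panic|accepted" '
BEGIN { split(sw, a, "[|]") }
# drop the literal slashes: the alternation itself is the regex
NR > 1 && NR <= 128500 && match($0, sw) {
    # RSTART/RLENGTH point at whichever keyword matched first
    d[substr($0, RSTART, RLENGTH)]++
}
END {
    for (i in a)
        o = o a[i] "=" (d[a[i]] ? d[a[i]] : 0) ","
    sub(",$", "", o)
    print o
}' /var/log/treg.test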

---------- Post updated at 02:01 PM ---------- Previous update was at 01:58 PM ----------

Quote:
Originally Posted by shamrock
Well you could give this [g]awk a try...
Code:
gawk '{
    for (i=1; i<=NF; i++)
        if ($i ~ "^(error|fail|panic|accepted)$")
            a[$i]++
} END {
    for (i in a) {
        n++
        printf("%s=%s%s", i, a[i], (n < 4 ? ", " : "\n"))
    }
}' file

This looks quite promising as well. Thank you so much!

It looks like the code only counts whitespace-separated fields that exactly match one of the specified patterns, so an occurrence embedded inside a longer token isn't counted. But I believe I can play with it some more.
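
(If embedded occurrences matter — say, an "error" buried inside a longer token — a hedged sketch along those lines, walking each line with match() so every hit is counted instead of whole fields only:)

Code:
gawk -v sw="error|fail|panic|accepted" '{
    s = $0
    # count every occurrence of any keyword on the line,
    # consuming the line past each match as we go
    while (match(s, sw)) {
        a[substr(s, RSTART, RLENGTH)]++
        s = substr(s, RSTART + RLENGTH)
    }
}
END {
    for (i in a)
        printf("%s=%d\n", i, a[i])
}' file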

One question: is the "n < 4" setting a limit on the number of patterns that can be specified?
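
(Answering from a read of the code above: no — "n < 4" doesn't limit how many patterns you can specify. It only selects the output separator: a comma-space after each of the first three distinct keys and a newline after the fourth, so it does assume exactly four distinct patterns show up in the input. A sketch that works for any number of distinct patterns, with the separator handled independently of the count:)

Code:
gawk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ "^(error|fail|panic|accepted)$")
            a[$i]++
}
END {
    # emit "key=value" pairs comma-separated, however many
    # distinct keys were found
    sep = ""
    for (i in a) {
        printf("%s%s=%s", sep, i, a[i])
        sep = ", "
    }
    print ""
}' file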

BTW, this completed in under 0.3 seconds on a 5 MB file, so very good news!
 
