I would like to add another user input parameter (to the script Agama posted on 08-10-11), i.e. a history period (ranging from no history to everything up to the sampling interval), in order to calculate the last column, i.e. new IPs.
If I understand correctly, the awk would need just a couple of changes to allow a start time (in epoch seconds) and an end time to be passed into the script as parameters 2 and 3. The changed lines are the two extra -v assignments (startt and endt) and the two range tests just before the main block:
Code:
#!/usr/bin/env ksh
awk -v startt=${2:-0} -v endt=${3:-9000000000} -v bin_size=${1:-5} '
    function dump( )
    {
        if( NR == 1 )
            return;

        new_count = 0;
        for( u in unique )          # compute total in this bin that were not in last bin
            if( last_bin[u] == 0 )
                new_count++;

        printf( "%3d %3d %3d\n", bin+1, total, new_count );
        bin++;
    }

    $1 < startt { next; }           # skip records before the start time
    $1 > endt   { exit( 0 ); }      # stop reading once past the end time

    {
        if( $1+0 >= next_bin )
        {
            dump( );
            next_bin = $1 + bin_size;

            delete last_bin;
            for( u in unique )      # copy hits from this bin
                last_bin[u] = 1;
            delete unique;
            total = 0;
        }

        unique[$2]++;
        total++;
    }

    END {
        if( total )
            dump( );
    }
'
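To sanity check the two range tests in isolation, here is a small self-contained run (the timestamps and addresses are invented for illustration): records before startt are skipped, and reading stops at the first record past endt.

```shell
# Feed a few "epoch-seconds address" records through the same range
# tests used above: skip rows before startt, stop after endt.
result=$(printf '5 1.1.1.1\n10 2.2.2.2\n15 3.3.3.3\n20 4.4.4.4\n25 5.5.5.5\n' |
    awk -v startt=10 -v endt=20 '
        $1 < startt { next }        # before the window: ignore
        $1 > endt   { exit( 0 ) }   # past the window: stop reading
        { count++ }
        END { print count + 0 }     # rows that fell inside [startt,endt]
    ')
echo "$result"    # 3 records (10, 15 and 20) survive the filter
```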
Thanks for the reply! I realize I wasn't clear enough; my apologies for the confusion. I will try to explain the problem again.
If you have a look at posts #2 and #9 of the thread, the scripts take one user input, i.e. the interval (in seconds), and return four things: the interval number (bin+1), the total number of packets in that interval (total), the number of unique IPs in that interval (length(unique)), and the number of new IPs compared to the previous interval (new_count). Now I am looking to have an additional user parameter, i.e. history_period (in seconds), which should be used to evaluate the last output (the number of new IPs) by comparing against the history_period window and NOT against the immediately previous interval, as it currently does.
For example, if the user gives 1 (interval) and 10 (history_period) as inputs, the script should return values every 1 second, BUT for calculating the number of new IPs it should use the previous 10 seconds as the history (comparison) period. So essentially the 4th column would be empty for the first 10 seconds (or, in general, until the history has been formed), and from there onwards the history would be a moving window (the last 10 seconds in this example).
I hope I was clearer this time. Looking forward to a solution.
Yep, I completely misunderstood!! The script below has the same function as before, with these changes:
1) A second parameter on the command line is interpreted as the long interval length. This length is in multiples of the short interval, not seconds, because the output cycle and the binning are both tied directly to the short interval. So if the short interval is 2 seconds and the long interval is given as 5, the number of seconds covered by the long interval is 10.
2) Once the long interval has passed, a 4th column is printed. This column contains the number of unique addresses observed during the previous n short intervals, where n is the long interval value.
3) To better identify the data, I added a header line.
Output looks like this when run with an interval of 2 seconds and a long interval of 4.
And, to be sure I'm on track with your thinking, here is the dummy input I used, with comments showing how the long interval groups break up and which groups contribute to the count in the 4th column.
Code:
899726401 112.254.1.0 long interval group 1
899726402 154.162.38.0
899726402 160.114.12.0
899726402 165.161.7.0
899726403 101.226.38.0 long interval group 2
899726403 101.226.38.0
899726403 101.226.38.0
899726403 73.214.29.0
899726403 144.12.40.0
899726404 144.12.40.0
899726404 1.14.4.0
899726405 112.254.1.0 long interval group 3
899726405 154.162.38.0
899726405 160.114.12.0
899726406 165.161.7.0
899726406 101.226.38.0
899726406 101.226.38.1
899726407 101.226.38.2 long interval group 4
899726407 73.214.29.0
899726407 144.12.40.0
899726408 144.12.40.0
---------------------------- write 4th output line -- 10 unique addresses in groups 1-4
899726409 1.14.4.0 long interval group 5
---------------------------- write 5th output line -- 10 unique addresses in groups 2-5
899726411 112.254.1.4 long interval group 6
---------------------------- write 6th output line -- 12 unique addresses in groups 3-6
899726412 154.162.38.0 long interval group 7
899726412 160.114.12.r
899726412 165.161.7.0
---------------------------- write 7th output line -- 13 unique addresses in groups 4-7
899726413 101.226.38.0 long interval group 8
899726413 101.226.38.0
899726413 101.226.38.0
899726413 73.214.29.0
899726413 144.12.40.0
---------------------------- write 8th output line -- 18 unique addresses in groups 5-8
899726414 144.12.40.0
899726414 1.14.4.5
899726415 112.254.1.5
899726415 154.162.38.5
899726415 160.114.12.5
899726416 165.161.7.5
899726416 101.226.38.5
899726416 101.226.38.5
899726417 101.226.38.8
899726417 73.214.29.0
899726417 144.12.40.0
899726418 144.12.40.0
899726419 1.14.4.0
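The core of the long-interval idea, a small ring of per-bin address lists whose union is counted each time a bin closes, can be sketched in isolation like this (one address per "bin" here, purely for illustration; the data is made up):

```shell
# Keep the last n per-bin entries in a circular array; at each step,
# count the unique addresses across the whole ring.
result=$(printf '%s\n' a b a c b d |
    awk -v n=3 '
        {
            ring[NR % n] = $1           # overwrite the oldest slot
            delete seen
            for (i in ring) seen[ring[i]] = 1
            cnt = 0
            for (k in seen) cnt++
            print $1, cnt               # current address, uniques in last n bins
        }
    ')
echo "$result"
```

With the six single-address "bins" above, the last column settles at 3 once the window is full, and old addresses stop counting as they fall out of the ring.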
And finally the augmented script:
Code:
#!/usr/bin/env ksh
# 4 columns of output:
#   1 - short interval number
#   2 - total observations during the short interval
#   3 - new observations during the short interval
#   4 - new observations during the long interval (after first complete long interval)
awk -v lbin_size=${2:-10} -v bin_size=${1:-1} '
    function dump( )
    {
        if( NR == 1 )
            return;

        new_count = 0;
        for( u in unique )              # compute total in this bin that were not in last bin
            if( last_bin[u] == 0 )
                new_count++;

        if( ++lidx >= lbin_size )       # spot for next list; roll if needed
        {
            lidx = 0;
            lwrap = 1;
        }

        if( !lwrap )
            printf( "%5d %5d %5d %5s\n", bin+1, total, new_count, " -" );   # no value until we have wrapped round
        else
        {                               # compute the unique addresses in the long interval
            delete lunique;             # start fresh, or addresses from expired windows linger
            for( l in llist )           # go through each long interval list to weed out duplicates
            {
                split( llist[l], a, " " );
                for( i = 1; i <= length( a ); i++ )
                    lunique[a[i]] = 1;  # get unique set across the long interval
            }

            ltotal = 0;
            for( u in lunique )
                ltotal++;               # finally total the unique addresses seen in long interval
            printf( "%5d %5d %5d %5d\n", bin+1, total, new_count, ltotal );
        }

        llist[lidx] = "";               # clear the next list
        bin++;
    }

    BEGIN {
        # comment next line out if no header is needed
        printf( "%5s %5s %5s %5s\n", "INT", "TOT", "NEW-S", "NEW-L" );
        lwrap = 0;
        lidx = 0;
    }

    {
        if( $1+0 >= next_bin )          # short interval expires
        {
            dump( );                    # write a line of data
            next_bin = $1 + bin_size;   # set new expiry time

            delete last_bin;
            for( u in unique )          # copy hits from this bin
                last_bin[u] = 1;
            delete unique;
            total = 0;
        }

        llist[lidx] = llist[lidx] $2 " ";   # add this to list of addresses for the long interval
        unique[$2] = 1;                     # track unique addrs in the short interval
        total++;
    }

    END {
        if( total )
            dump( );
    }
'
exit
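One portability note on the script above: calling length() on an array, as the dump() loop does, is an extension (gawk and some others support it) and may not work in every awk. split() already returns the element count, which is the portable way to get the same loop bound. A quick self-contained check:

```shell
# split() returns how many elements it produced, so the loop bound
# can come from its return value instead of length(a).
result=$(awk 'BEGIN {
    n = split( "10.1.1.1 10.1.2.4 12.1.5.6", a, " " )
    print n
}')
echo "$result"    # 3 elements
```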
Last edited by agama; 02-22-2012 at 09:02 PM..
Reason: comments