So, we have a script that is supposed to provide a few functions, such as showing the number of failed connections, received bytes per IP address, and so on. We are supposed to be able to limit the results to either the last 0-24 hours or X days back from the last data in the log file.
Everything is working, except we don't know how to limit searches to a given time span.
Our code looks like this:
Code:
#!/bin/sh
#-n: Limit the number of results to N
#-h: Limit the query to the last number of hours (< 24)
#-d: Limit the query to the last number of days (counting from
#midnight)
#-c: Which IP address makes the most number of connection attempts?
#-2: Which address makes the most number of successful attempts?
#-r: What are the most common results codes and where do they come
#from?
#-F: What are the most common result codes that indicate failure (no
#auth, not found etc) and where do they come from?
#-t: Which IP number gets the most bytes sent to it?
#<filename> refers to the logfile. If '-' is given as a filename, or
#no filename is given, then standard input should be read. This
#enables the script to be used in a pipeline.
#Default to standard input; a filename argument overrides this below,
#and 'cat' treats '-' as stdin, so the script works in a pipeline.
FILENAME=-
MAXSHOW=99999
LIMITHOURS=0
LIMITDAYS=0
c=0
b=0
r=0
F=0
t=0
while getopts :n:h:d:c2rFt option
do
case $option in
n)
MAXSHOW=$OPTARG
;;
h)
LIMITHOURS=$OPTARG
;;
d)
LIMITDAYS=$OPTARG
;;
c)
c=1
;;
2)
b=1
;;
r)
r=1
;;
F)
F=1
;;
t)
t=1
;;
esac
done
shift $((OPTIND - 1))
[ $# -ge 1 ] && FILENAME=$1
#-h and -d store their argument in LIMITHOURS/LIMITDAYS, so test those
#(a bare 'fi' after only a comment is a syntax error in sh; ':' is a no-op).
if [ "$LIMITHOURS" -gt 0 ]; then
: #TODO: restrict input to the last $LIMITHOURS hours
fi
if [ "$LIMITDAYS" -gt 0 ]; then
: #TODO: restrict input to the last $LIMITDAYS days, counting from midnight
fi
if [ "$c" -eq "1" ]; then
cat $FILENAME|awk '{print $1}' |sort|uniq -c|sort -k 1 -n -r|head -$MAXSHOW
fi
if [ "$b" -eq "1" ]; then
#Test the status field itself rather than grepping for "200" anywhere in the line
cat $FILENAME | awk '$9 == 200 {print $1}' |sort|uniq -c|sort -nr|head -$MAXSHOW
fi
if [ "$r" -eq "1" ]; then
cat $FILENAME|awk '{print $1" "$9}'|sort|uniq -c|sort -nr|head -$MAXSHOW
fi
if [ "$F" -eq "1" ]; then
#Result codes of 400 and above indicate failure (401 no auth, 404 not found, 5xx)
cat $FILENAME | awk '$9 >= 400 {print $1" "$9}' |sort|uniq -c|sort -nr|head -$MAXSHOW
fi
if [ "$t" -eq "1" ]; then
cat $FILENAME |awk '{print $1" "$10}'|awk '{ x[$1]+=$2 } END{for(data in x) print data, x[data]}' | sort -k2,2 -nr|head -$MAXSHOW
fi