03-20-2011
How to find IP addresses in log files?
Hi guys,
I need to check a few log files (listed below) to find out whether certain IP addresses are present in them.
type8code0: ls -alt
-rw-r--r-- 1 root other 796219588 Mar 20 02:25 logfile
drwxr-xr-x 2 root root 1536 Mar 20 02:00 .
-rw-r--r-- 1 root other 1854093343 Mar 20 02:00 logfile.hour02
-rw-r--r-- 1 root other 366729263 Mar 20 01:00 logfile.hour01
-rw-r--r-- 1 root other 9001399293 Mar 20 00:47 logfile.20.Z
-rw-r--r-- 1 root other 8267721901 Mar 19 00:45 logfile.19.Z
-rw-r--r-- 1 root other 7498682761 Mar 18 00:39 logfile.18.Z
-rw-r--r-- 1 root other 6196926607 Mar 17 00:31 logfile.17.Z
-rw-r--r-- 1 root other 4794493570 Mar 16 00:23 logfile.16.Z
I've saved the list of IP addresses in the "iplist" file.
cat iplist
10.10.10.10
10.10.10.11
10.10.10.12
10.10.10.13
10.10.10.14
What is the best command to do this?
This is what I do now, but it takes some time. Hopefully there is an easier way to do this.
grep 10.10.10.10 logfile > output_logfile_10.10.10.10
grep 10.10.10.11 logfile > output_logfile_10.10.10.11
and so on
zcat logfile.16.Z | grep 10.10.10.10 > output_logfile.16.Z_10.10.10.10
zcat logfile.16.Z | grep 10.10.10.11 > output_logfile.16.Z_10.10.10.11
and so on
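One common way to avoid one grep (and one decompression) per address is to hand grep the whole list at once with -f. A sketch, shown on throwaway sample data under /tmp so it is self-contained; the file names mirror the "iplist" and "logfile" above:

```shell
# Sample data for illustration only
mkdir -p /tmp/ipgrep_demo && cd /tmp/ipgrep_demo
printf '10.10.10.10\n10.10.10.11\n' > iplist
printf 'src=10.10.10.10 accept\nsrc=10.20.30.40 accept\nsrc=10.10.10.11 deny\n' > logfile

# One pass over the whole log, all addresses at once:
# -f reads the patterns from iplist; -F makes them fixed strings, so the
# dots match literal dots instead of "any character".
grep -F -f iplist logfile > output_logfile

# If you still want one output file per address, loop over the list:
while read -r ip; do
    grep -F "$ip" logfile > "output_logfile_$ip"
done < iplist
```

For the compressed logs, the same pattern file means each .Z file is decompressed once instead of once per address, e.g. `zcat logfile.16.Z | grep -F -f iplist > output_logfile.16`. One caveat: -F matches substrings, so a pattern like 10.10.10.1 would also match lines containing 10.10.10.10; where your grep supports it, adding -w anchors matches on word boundaries.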
Thanks
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
My server only has access logs turned on. How do I turn on the other standard logs (i.e. I'd like to see the referring URLs)?
Thanks in advance. (3 Replies)
Discussion started by: pingdom
2. UNIX for Dummies Questions & Answers
I support an app that outputs alert and audit messages to one log file (the vendor says they can't be separated). The script I have written takes a copy (using the mv command) of the file to do the separation and reformatting. I have a problem that I lose records (messages are being written constantly, up to 3+... (5 Replies)
Discussion started by: nhatch
3. IP Networking
Alright, here's what I'm trying to do. I want to dig up the currently active IP addresses on my subnet, and my present strategy is to ping every address until I find active ones, then ping them more often to verify their status. Next, I want to find the names of the computers associated with those... (1 Reply)
Discussion started by: sladuuch
4. Shell Programming and Scripting
Hi All
There are some cron jobs which run 24 hours a day. Log files are generated when a job fails.
I need those log files to be emailed to my personal email ID, so that I can check them from home if there is an error.
How can I implement this in Unix shell scripting?
Thanks... (4 Replies)
Discussion started by: deep_kol
5. Shell Programming and Scripting
Hi,
I have a lot of logfiles named fooYYYYMM.log (foo200301.log, foo200810.log) with lines like
YYYY-MM-DD TIMESTAMP,text1,text2,text3...
but for postprocessing I need the form fooYYYYMMDD.log (so foo200402.log becomes foo20040201.log, foo20040202.log, ...),
with the content of the lines unmodified.
... (1 Reply)
Discussion started by: clzupp
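For the fooYYYYMM.log daily-split question above, one hypothetical awk sketch: since each line starts with "YYYY-MM-DD ", the output file name can be built from the first field with the dashes removed. The file names and sample lines here are made up for illustration:

```shell
# Two sample lines in the monthly format described above
printf '2004-02-01 12:00:00,a,b\n2004-02-02 09:30:00,c,d\n' > /tmp/foo200402.log

# $1 is the date field; strip the dashes and append each whole line,
# unmodified, to the matching daily file (foo20040201.log, ...).
awk '{ d = $1; gsub(/-/, "", d)
       print > ("/tmp/foo" d ".log") }' /tmp/foo200402.log
```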
6. Shell Programming and Scripting
Hi All,
I have a peculiar problem. I call one script from another.
Script abc.ksh is called by ABC.ksh as
ABC.ksh abc.ksh
In abc.ksh I create and redirect all the statements to a log file.
ABC.ksh also has a log file. I want all the logs generated in file abc in ABC... (5 Replies)
Discussion started by: javeed7
7. Red Hat
Hi,
I need to logrotate the logs in the directories under /var/log/httpd/. These 4 directories are:
/var/log/httpd/access/
/var/log/httpd/debug/
/var/log/httpd/error/
/var/log/httpd/required/
Each of the access, required, error and debug directories has around... (1 Reply)
Discussion started by: renuka
8. Shell Programming and Scripting
Hi,
I have a file having following content.
<sip:9376507346@97.208.31.7:51088
<sip:9907472291@97.208.31.7:51208
<sip:8103742422@97.208.31.7:51024
<sip:9579892841@97.208.31.7:51080
<sip:9370904222@97.208.31.7:51104
<sip:9327665215@97.208.31.7:51104
<sip:9098364262@97.208.31.7:51024... (2 Replies)
Discussion started by: SunilB2011
9. UNIX for Advanced & Expert Users
Hi,
I have a web server running on Debian 6.0.4 on a computer outside my university, but the web URL is blocked by the university; their security group said this was because the server was scanning computers inside the university.
I could not find any applications on my web server that are doing... (3 Replies)
Discussion started by: hce
10. UNIX for Beginners Questions & Answers
Hi,
I have a file with a list of a bunch of IP addresses from different VLANs. I am trying to count the occurrences of each VLAN in the file.
Here is what my file looks like:
1.1.1.1
1.1.1.2
1.1.1.3
1.1.2.1
1.1.2.2
1.1.3.1
1.1.3.2
1.1.3.3
1.1.3.4
So what I am trying... (2 Replies)
Discussion started by: new2prog
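For the VLAN-counting question above, a common sketch, assuming (as the sample data suggests) that the first three octets identify the VLAN: strip the host octet with cut, then count with sort and uniq -c. The file path here is made up for illustration:

```shell
# Sample addresses in the format shown above
printf '1.1.1.1\n1.1.1.2\n1.1.1.3\n1.1.2.1\n1.1.2.2\n' > /tmp/vlan_ips

# Keep octets 1-3, group identical prefixes, and count each group
cut -d. -f1-3 /tmp/vlan_ips | sort | uniq -c
```

This prints one line per VLAN prefix with its occurrence count (here, 3 for 1.1.1 and 2 for 1.1.2).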