Top Forums Shell Programming and Scripting How to find ip addresses in logfiles? Post 302506317 by LivinFree on Sunday 20th of March 2011 01:22:30 AM
OK, here's try #2:
Code:
#!/bin/bash

for file in /logs/logfile.*; do
	while read -r ip; do
		if [[ "$file" == *.Z ]]; then
			# This is a compressed file - it ends with .Z - use zgrep.
			# -F matches the IP as a fixed string, so the dots don't
			# act as regex wildcards.
			zgrep -F "$ip" "$file" >> ~/results/output_log_"$ip"
		else
			# Not a .Z file - regular ol' grep
			grep -F "$ip" "$file" >> ~/results/output_log_"$ip"
		fi
	done < iplist
done

In your example, you're reassigning the "logfile" variable - only the last assignment takes effect. You could use an array or a simple list of files to loop through, though.
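If you'd rather list the files explicitly instead of globbing, a bash array works - a quick sketch (these filenames are made up):

```shell
#!/bin/bash
# Hypothetical filenames. Every array element is kept, unlike repeated
# assignments to a single "logfile" variable, where only the last
# assignment survives.
logfiles=(/logs/access.log.1 /logs/access.log.2 /logs/access.log.3.Z)

for file in "${logfiles[@]}"; do
    printf 'would search: %s\n' "$file"
done
```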

See mine above - it gathers the list of logfiles at run time and loops over each one. For each file it checks for a .Z suffix (I assume you use that to mean compressed - it's typical, but not necessarily true) to decide whether to run grep or zgrep, then appends the output to the output_log_$ip file (append will create the file if necessary).

I haven't really tested it - does it work on your system with your data?
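One more thought, equally untested: if a single combined output file is acceptable (rather than one file per IP), you can hand grep the whole list at once with -f, so each logfile is scanned only a single time instead of once per IP. The output filename here is just a placeholder:

```shell
#!/bin/bash
# Scan each logfile once, matching against every IP in "iplist".
# -f iplist reads all the patterns from the file; -F treats them as
# fixed strings so the dots in the IPs aren't regex wildcards.
# zgrep passes these options through to grep for .Z files.
for file in /logs/logfile.*; do
    if [[ "$file" == *.Z ]]; then
        zgrep -F -f iplist "$file"
    else
        grep -F -f iplist "$file"
    fi
done >> ~/results/output_log_combined
```

The trade-off is that matches for all IPs land in one file, so you lose the per-IP separation of the original script.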
 

inviso_lfm(3erl)					     Erlang Module Definition						  inviso_lfm(3erl)

NAME
       inviso_lfm - An Inviso Off-Line Logfile Merger

DESCRIPTION
       Implements an off-line logfile merger, merging binary trace-log files from several nodes
       together in chronological order. The logfile merger can also do pid-to-alias translations.

       The logfile merger is supposed to be called from the Erlang shell or a higher-layer trace
       tool. For it to work, all logfiles and trace information files (containing the pid-alias
       associations) must be located in the file system accessible from this node and organized
       according to the API description.

       The logfile merger starts a process, the output process, which in turn starts one reader
       process for every node it shall merge logfiles from. Note that the reason for one process
       per node is not remote communication - the logfile merger is an off-line utility - but
       sorting the logfile entries into chronological order. The logfile merger can be customized
       both in the implementation of the reader processes and in the output the output process
       shall generate for every logfile entry.

EXPORTS
       merge(Files, OutFile) ->
       merge(Files, WorkHFun, InitHandlerData) ->
       merge(Files, BeginHFun, WorkHFun, EndHFun, InitHandlerData) ->
                      {ok, Count} | {error, Reason}

              Types:

                 Files = [FileDescription]
                 FileDescription = FileSet | {reader,RMod,RFunc,FileSet}
                 FileSet = {Node,LogFiles} | {Node,[LogFiles]}
                 Node = atom()
                 LogFiles = [{trace_log,[FileName]}] |
                            [{trace_log,[FileName]},{ti_log,TiFileSpec}]
                 TiFileSpec = [string()] - a list of one string
                 FileName = string()
                 RMod = RFunc = atom()
                 OutFile = string()
                 BeginHFun = fun(InitHandlerData) -> {ok, NewHandlerData} | {error, Reason}
                 WorkHFun = fun(Node, LogEntry, PidMappings, HandlerData) -> {ok, NewHandlerData}
                 LogEntry = tuple()
                 PidMappings = term()
                 EndHFun = fun(HandlerData) -> ok | {error, Reason}
                 Count = int()
                 Reason = term()

              Merges the logfiles in Files together into one file in chronological order. The
              logfile merger consists of an output process and one or several reader processes.
              Returns {ok, Count}, where Count is the total number of log entries processed, if
              successful.

              When specifying LogFiles, the standard reader process currently supports only:

              * one single file

              * a list of wraplog files, following the naming convention <Prefix><Nr><Suffix>.

              Note that (when using the standard reader process) it is possible to give a list
              of LogFiles. The list must be sorted starting with the oldest. This causes several
              trace-logs (from the same node) to be merged together into the same OutFile. The
              reader process simply starts reading the next file (or wrapset) when the previous
              one is done.

              FileDescription == {reader,RMod,RFunc,FileSet} indicates that
              spawn(RMod, RFunc, [OutputPid, LogFiles]) shall create a reader process.

              The output process is customized with BeginHFun, WorkHFun and EndHFun. If using
              merge/2, a default output process configuration is used, basically creating a text
              file and writing the output line by line. BeginHFun is called once before
              requesting log entries from the reader processes. WorkHFun is called for every log
              entry (trace message) LogEntry; here the log entry typically gets written to the
              output. PidMappings is the translations produced by the reader process. EndHFun is
              called when all reader processes have terminated.

              Currently the standard reader can only handle one ti-file (per LogFiles). The
              current inviso meta tracer is further not capable of wrapping ti-files. (This is
              also because a wrapped ti-log would most likely be worthless, since alias
              associations done in the beginning are erased but still used in the trace-log.)

              The standard reader process is implemented in the module inviso_lfm_tpreader
              (trace port reader). It understands trace-logs generated by the Erlang linked-in
              trace-port driver and trace information files generated by inviso_rt_meta.

WRITING YOUR OWN READER PROCESS
       Writing a reader process is not that difficult. It must:

       * Export an init-like function accepting two arguments: the pid of the output process
         and the LogFiles component. LogFiles is actually only used by the reader processes,
         making it possible to redefine LogFiles if implementing your own reader process.

       * Respond to {get_next_entry, OutputPid} messages with
         {next_entry, self(), PidMappings, NowTimeStamp, Term} or
         {next_entry, self(), {error, Reason}}.

       * Terminate normally when no more log entries are available.

       * Terminate on an incoming EXIT-signal from OutputPid.

       The reader process must of course understand the format of a logfile written by the
       runtime component.

Ericsson AB                              inviso 0.6.2                          inviso_lfm(3erl)