How can I do parallel processing rather than serial processing?
Hello everybody,
I have a little problem with one of my programs. I wrote a plugin for collectd (a stats collector for my servers), but I'm having trouble making it run in parallel.
My program gathers stats from logs, so it needs to run in the background, waiting for new lines to be added to the log files.
My problem is that I call my program from a bash script that tails my log files, like this:
The problem is that it does not work very well, so I would like to do the "tail -f" parallel processing directly in my Perl program.
Here is the Perl program (collection.pl) I'd like to parallelize:
I have seen that parallel processing can be done with fork or something like that, but I don't really understand how to do it. Does anyone have an idea of how I could do that?
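Before reaching for fork in Perl, the parallelism can be sketched in the shell itself: start one background "tail -f" per log file, each piped into its own reader, and let the script wait on all of them. This is only a sketch under assumptions — the real reader would be "perl collection.pl", and the file names here are throwaway demo files so the example can terminate on its own:

```shell
#!/bin/sh
# Sketch: one background "tail -f" per log file, each feeding its own
# reader process. In the real setup the reader would be "perl collection.pl";
# this self-terminating demo just redirects to a file instead.
dir=$(mktemp -d)
: > "$dir/app1.log"
: > "$dir/app2.log"

pids=""
for log in "$dir/app1.log" "$dir/app2.log"; do
    # -n 0: ignore existing content, follow only newly appended lines
    tail -n 0 -f "$log" > "$log.out" &
    pids="$pids $!"
done

sleep 1                                    # let the tails attach
echo "new line in app1" >> "$dir/app1.log"
echo "new line in app2" >> "$dir/app2.log"
sleep 1                                    # let the tails catch up

kill $pids 2>/dev/null
result=$(cat "$dir/app1.log.out" "$dir/app2.log.out")
echo "$result"
```

Each pipeline runs as its own process, so the tails never block one another; in a long-running plugin you would replace the kill with a trap so the tails die when collectd stops the script.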
Hi All,
I am working on a Solaris 8 SPARC machine with 2 CPUs.
I am trying to run my application, which generates files. I run multiple instances of the application, but the results don't look as if they were running in parallel.
When I run the application once it takes 12 secs to generate a... (1 Reply)
Hi
I want to run two shell scripts in parallel. These two scripts interact with the database. Can anybody please help with this?
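The simplest shape for this is to start both scripts in the background with & and join them with wait. A minimal sketch, where the two shell functions stand in for the real database scripts (hypothetical names — substitute your own):

```shell
#!/bin/sh
# job_one/job_two are stand-ins for the two real database scripts,
# e.g. ./load_tables.sh and ./update_stats.sh (made-up names).
job_one() { sleep 1; echo "job one done"; }
job_two() { sleep 1; echo "job two done"; }

out1=$(mktemp)
out2=$(mktemp)

job_one > "$out1" &     # start in the background
job_two > "$out2" &     # start in the background
wait                    # block until both background jobs have finished

cat "$out1" "$out2"
echo "both jobs finished"
```

Because both jobs overlap, the whole thing takes about as long as the slower script, not the sum of the two.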
Regards
Audippa naidu.M (3 Replies)
Hi
I am looking for some feature in Unix that will help me write a script that can invoke multiple processes in parallel, and make sure that those parallel processes complete successfully before I proceed to the next step.
Someone suggested something called timespid or... (6 Replies)
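One common approach, sketched under the assumption that "complete successfully" means exit status 0: record each background PID, then call "wait <pid>" on each one, since wait with a PID argument returns that child's exit status. worker() here is a made-up stand-in that deliberately fails for i=3 so the failure path is visible:

```shell
#!/bin/sh
# Launch workers in parallel, then verify every one succeeded before
# moving on. worker() is a hypothetical stand-in; worker 3 fails on purpose.
worker() {
    sleep 1
    [ "$1" -ne 3 ]        # non-zero exit status for worker 3
}

pids=""
for i in 1 2 3 4; do
    worker "$i" &
    pids="$pids $!"
done

failed=0
for pid in $pids; do
    # "wait <pid>" returns the exit status of that particular child
    wait "$pid" || failed=1
done

if [ "$failed" -eq 0 ]; then
    echo "all workers succeeded, proceeding to the next step"
else
    echo "a worker failed, stopping here"
fi
```

The per-PID loop is what gives you the "all succeeded" guarantee; a bare wait with no arguments joins everything but always returns 0, so it cannot tell you whether any child failed.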
hi, I am preparing a set of batches for a set of files sequentially.
There is a folder /xyz where all the files reside.
Now all the files starting with
01 - will be appended one below the other to form a batch, batch01;
then all the files starting with
02 - will be appended one below the other to... (7 Replies)
Hey, I just wanted to know how many algorithms there are that cannot be accelerated by parallel processing. I know one such algorithm is Euclid's algorithm (for the GCF). Does anyone know other algorithms that cannot be accelerated by parallel processing? If so, please list their names and a general sentence about what... (2 Replies)
HI All,
I have a scenario where I need to call sub-modules through a for loop
for (i = 0; i < 30; i++)
{
..
..
..
subroutine 1;
subroutine 2;
}
I want this to run in parallel:
process1
{
...
...
subroutine 1;
subroutine 2; (0 Replies)
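In shell, the loop above can be sketched like this: each iteration runs its two subroutines inside a background subshell, and a single wait joins all 30. The subroutines here are empty stand-ins for the real sub-modules:

```shell
#!/bin/sh
# Each loop iteration becomes one background process running both
# subroutines; "wait" blocks until all 30 iterations are done.
subroutine1() { :; }    # stand-in for the real sub-module
subroutine2() { :; }    # stand-in for the real sub-module

done_log=$(mktemp)
i=0
while [ "$i" -lt 30 ]; do
    (
        subroutine1 "$i"
        subroutine2 "$i"
        echo "$i" >> "$done_log"    # record that this iteration finished
    ) &
    i=$((i + 1))
done
wait
echo "finished $(wc -l < "$done_log") iterations"
```

Note that within one iteration subroutine1 still runs before subroutine2, as in the original loop; only the iterations themselves overlap.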
Need urgent help on threads: the goal here is that separtemask will take each image and separate the different contours, and for each contour in the image it will call the handleobject thread. So every pass of the for loop will call the handleobject thread. However, the object index variable needs to be passed in each... (0 Replies)
Unix OS: Linux 2.6.x
Shell type: Korn
Hi all,
This is a requirement to incorporate parallel processing into a piece of Unix code.
I have two pieces of Unix code, one of which will act as the parent process.
This script will invoke multiple (say four) instances of the second script at one go... (13 Replies)
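A sketch of that parent/child shape, with a shell function standing in for the second script (the real parent would invoke something like ./child.sh "$i" instead — a hypothetical name):

```shell
#!/bin/sh
# Parent launches four instances of the "second script" at one go and
# resumes only when all of them have exited. child() is a stand-in.
child() { echo "instance $1 starting"; sleep 1; echo "instance $1 done"; }

n=4
log=$(mktemp)
i=1
while [ "$i" -le "$n" ]; do
    child "$i" >> "$log" &
    i=$((i + 1))
done
wait            # the parent continues only after all four instances exit
echo "parent: all $n instances have finished"
```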
Hi,
I am taking up the cue from where I left off in my earlier post (link given below):
https://www.unix.com/shell-programming-scripting/231107-implement-parallel-processing.html
I actually wanted to know the significance of using the Unix "wait" command, which returns
control from the background to... (3 Replies)
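A small sketch of what wait actually gives you: with no argument it blocks until every background child has exited, and with a PID argument it blocks for that particular child and hands back the child's own exit status:

```shell
#!/bin/sh
# "wait <pid>" blocks until that background child exits and returns the
# child's own exit status; that is how a script distinguishes success
# from failure after running things in the background.
( sleep 1; exit 0 ) &
ok_pid=$!
( sleep 1; exit 7 ) &
bad_pid=$!

wait "$ok_pid";  ok_status=$?
wait "$bad_pid"; bad_status=$?

echo "first child exited with $ok_status"    # 0
echo "second child exited with $bad_status"  # 7
```

So the significance of wait is twofold: it is the synchronization point ("don't go on until the background work is done"), and with a PID it is the only way to recover a background child's exit status.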
I have 10,000+ files, each of which I need to compress with bzip2.
Is it possible to use bash to create 8 parallel streams, sending a new file from the list to be processed when one of the others has finished? (1 Reply)
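One way, assuming GNU xargs is available: its -P option caps the number of simultaneous processes at 8 and starts the next file as soon as a slot frees up, which is exactly the "new file when one finishes" behavior asked for. The demo compresses a few throwaway files; for the real 10,000 you would point find at the actual directory:

```shell
#!/bin/sh
# GNU xargs -P keeps up to 8 bzip2 processes running at once; -n 1
# passes one file per invocation; -print0/-0 keep odd filenames safe.
dir=$(mktemp -d)
for i in 1 2 3 4; do
    echo "some data $i" > "$dir/file$i"
done

find "$dir" -type f -name 'file*' -print0 | xargs -0 -P 8 -n 1 bzip2

ls "$dir"
```

Note that -P is a GNU extension, not POSIX; on systems without it, the PID-list-and-wait pattern from the earlier posts is the portable fallback.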
Discussion started by: garethsays
LEARN ABOUT CENTOS
collectd
COLLECTD(1) collectd COLLECTD(1)
NAME
collectd - System statistics collection daemon
SYNOPSIS
collectd [options]
DESCRIPTION
collectd is a daemon that receives system statistics and makes them available in a number of ways. The main daemon itself doesn't have any
real functionality apart from loading, querying and submitting to plugins. For a description of available plugins please see "PLUGINS"
below.
OPTIONS
Most of collectd's configuration is done using a config file. See collectd.conf(5) for an in-depth description of all options.
-C <config-file>
Specify an alternative config file. This is the place to go when you wish to change collectd's behavior. The path may be relative to
the current working directory.
-t Test the configuration only. The program immediately exits after parsing the config file. A return code not equal to zero indicates an
error.
-T Test the plugin read callbacks only. The program immediately exits after invoking the read callbacks once. A return code not equal to
zero indicates an error.
-P <pid-file>
Specify an alternative pid file. This overrides any setting in the config file. It is intended for init scripts that require the
PID file in a certain directory to work correctly. For everyday usage, use the PIDFile config option.
-f Don't fork to the background. collectd will also not close standard file descriptors, detach from the session, or write a pid file.
This is mainly intended for 'supervising' init replacements such as runit.
-h Output usage information and exit.
PLUGINS
As noted above, the real power of collectd lies within its plugins. A (hopefully complete) list of plugins and short descriptions can be
found in the README file that is distributed with the source code. If you're using a package, it's a good bet to look somewhere near
/usr/share/doc/collectd.
There are two big groups of plugins, input and output plugins:
o Input plugins are queried periodically. They somehow acquire the current value of whatever they were designed to work with and submit
these values back to the daemon, i.e. they "dispatch" the values. As an example, the "cpu plugin" reads the current CPU counters of
time spent in the various modes (user, system, nice, ...) and dispatches these counters to the daemon.
o Output plugins get the dispatched values from the daemon and do something with them. Common applications are writing them to RRD files
or CSV files, or sending the data over a network link to a remote box.
Of course, not all plugins fit neatly into one of the two categories above. The "network plugin", for example, is able to both send (i.e.
"write") and receive (i.e. "dispatch") values. It also opens a socket upon initialization and dispatches values whenever it receives them,
rather than being triggered at the same time the input plugins are read. You can think of the network receive part as working
asynchronously, if that helps.
In addition to the above, there are "logging plugins". Right now those are the "logfile plugin" and the "syslog plugin". With these plugins
collectd can provide information about issues and significant situations to the user. Several loglevels let you suppress uninteresting
messages.
Starting with version 4.3.0 collectd has support for monitoring. This is done by checking thresholds defined by the user. If a value is out
of range, a notification will be dispatched to "notification plugins". See collectd.conf(5) for more detailed information about threshold
checking.
Please note that some plugins, which provide other means of communicating with the daemon, have man pages of their own describing their
functionality in more detail. In particular, those are collectd-email(5), collectd-exec(5), collectd-perl(5), collectd-snmp(5), and
collectd-unixsock(5).
SIGNALS
collectd accepts the following signals:
SIGINT, SIGTERM
These signals cause collectd to shut down all plugins and terminate.
SIGUSR1
This signal causes collectd to signal all plugins to flush data from internal caches. E. g. the "rrdtool plugin" will write all pending
data to the RRD files. This is the same as using the "FLUSH -1" command of the "unixsock plugin".
SEE ALSO
collectd.conf(5), collectd-email(5), collectd-exec(5), collectd-perl(5), collectd-snmp(5), collectd-unixsock(5), types.db(5),
<http://collectd.org/>
AUTHOR
Florian Forster <octo@verplant.org>
5.1.0 2012-04-02 COLLECTD(1)