Hi quirkasaurus,
Thanks for your reply.
As per your reply, I would like to give some more details about my problem.
I have 500 different sets of arguments in a file (say list.txt). These arguments need to be passed to an executable (or an application, say "./applictn.out"), which runs and does its job for each argument set. That means I will get 500 sets of output once the whole run has executed.
If I execute these serially from a script (one after the other), execution will take a long time, but within a single system I can run them in parallel (i.e., as concurrent background processes) using xargs. Like:

Syntax: xargs <options> <utility>
Given: xargs -L 1 -P 4 ./applictn.out < list.txt

(Note that xargs reads its input from stdin, so list.txt has to be redirected in rather than passed as an argument.) This will be executed in parallel on a single system.
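For illustration, here is a minimal sketch of the single-machine case (assuming GNU xargs for the -P flag; the flag values and the out.* file names are just examples). Wrapping the call in sh -c also captures each run's output in its own file, since the goal is 500 separate output sets:

Code:
# Each line of list.txt is one complete argument set.
#   -L 1 : pass one input line per invocation
#   -P 4 : keep up to 4 invocations running at a time
# $$ is each subshell's PID, so every run writes its own out.* file.
xargs -L 1 -P 4 sh -c './applictn.out "$@" > "out.$$"' run < list.txt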
Now my problem is that I want to execute these sets (the 500 argument sets) across different Linux systems, all running in parallel, so that I get the output in less time.
By parallelism I mean something like the following (see the sketch after this list):
Machine 1 should take, say, 150 sets and start processing.
Machine 2 should take, say, 200 sets and start processing.
Machine 3 should take the remaining sets and start working on them.
All machines should work in parallel.
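To make this concrete, here is a rough sketch of the kind of dispatch I have in mind (the hostnames machine1..machine3, the /tmp paths, and the chunk sizes are all placeholders, and it assumes passwordless ssh with applictn.out already present on each machine):

Code:
#!/bin/sh
# Split list.txt (500 lines) into the example shares above.
sed -n '1,150p'   list.txt > chunk1.txt   # 150 sets for machine1
sed -n '151,350p' list.txt > chunk2.txt   # 200 sets for machine2
sed -n '351,$p'   list.txt > chunk3.txt   # the rest for machine3

# Ship each chunk out and start all three runs concurrently.
for i in 1 2 3
do
    scp "chunk$i.txt" "machine$i:/tmp/chunk.txt"
    ssh -n "machine$i" 'xargs -L 1 -P 4 ./applictn.out < /tmp/chunk.txt' &
done
wait    # return only after every machine has finished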
Thanks..
123an
Quote: Originally Posted by quirkasaurus
not enough info for me...
are you talking about the same script with 500 different sets of arguments?
do you have the arguments already somewhere or
will you generate them?
should the jobs run in series once on their individual machine?
or can they run concurrently with a maximum threshold?
is this just for benchmarking? or is this going to be a permanent run and
everything should take about the same time?
i'm thinking . . . . just create all the command lines....
dump them all into a file...
then have another script, read this file,
divide them equally into scripts for each machine....
rcp these scripts to the respective machines,
and kick 'em off.
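A minimal sketch of that plan (the file names cmds.txt/machines.txt, the hostlist, and the use of rcp/rsh are assumptions, and the round-robin -n r/N mode needs GNU split):

Code:
#!/bin/sh
# cmds.txt holds one complete command line per row;
# machines.txt lists one hostname per row.
nmach=$(wc -l < machines.txt)
split -d -n "r/$nmach" cmds.txt part.   # round-robin into part.00, part.01, ...

i=0
while read -r m
do
    printf '#!/bin/sh\n' > "run_$i.sh"
    cat "part.0$i" >> "run_$i.sh"       # assumes fewer than 10 machines
    rcp "run_$i.sh" "$m:/tmp/run.sh"    # copy its share to each machine
    rsh -n "$m" sh /tmp/run.sh &        # kick 'em off concurrently
    i=$((i + 1))
done < machines.txt
wait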