Monitoring processes in parallel and processing each log file after its process exits
I am writing a script that kicks off log-gathering processes on multiple nodes in parallel using "&". Each process creates its own log file, which I would like to filter and convert to CSV format once that process is complete. I am facing the following issue:
1. Monitor all processes in parallel: whichever process completes first, I would like to convert its output log file to CSV format right away. A process can sometimes take more than 30 minutes to complete. The problem with my code is that it ends up converting the files serially, instead of handling whichever process finishes (and releases its file) first. Please help.
I have the following code:
Code:
#!/bin/sh -x
infile="$1"
WORKDIR=$(pwd)      # renamed: do not overwrite the shell's own PWD variable
shift
exec 2>&1

# Kick off one collector per node, in parallel, and record pid -> logfile.
for IPADD in $(awk '{print $1}' "$infile" | tr -d '\015')
do
    OPFILE=${IPADD}_syslog.txt
    nohup "${WORKDIR}/collect_log.sh" "$IPADD" > "$OPFILE" &
    DPID=$!
    echo "$DPID $OPFILE" >> pid_pfile.txt
done

##### Parse and rename each file after its data collection is complete.
check_palive()
{
    # kill -0 tests for process existence without the fragile ps|grep chain
    if ! kill -0 "$DPID" 2>/dev/null; then
        HNAME=$(grep -i hostname "$OPFILE" | awk '{print $NF}')  # grep needs the file argument
        # sed '$d' drops the trailing "exit" line (portable, unlike head -n -1)
        awk '/name/,/exit/' "$OPFILE" | sed '$d' | awk '{print $1,$2,$3,$4,$5}' > "${HNAME}.txt"
    fi
}

PCOUNT=$(pgrep -f collect_log.sh | wc -l)   # -f matches the full command line
while [ "$PCOUNT" -gt 0 ]; do
    # Iterate the RECORDED pids, not pgrep output: pgrep only returns pids
    # that are still alive, so the "is it dead yet?" test could never succeed.
    for DPID in $(awk '{print $1}' pid_pfile.txt)
    do
        if ! ps -p "$DPID" > /dev/null 2>&1; then
            wait "$DPID" 2>/dev/null
            OPFILE=$(grep "^$DPID " pid_pfile.txt | awk '{print $2}')
            check_palive
            # sed "/pid/d" file > file truncates the file first; go via a temp file
            grep -v "^$DPID " pid_pfile.txt > pid_pfile.tmp && mv pid_pfile.tmp pid_pfile.txt
        fi
    done
    sleep 120                                  # one pause per pass, not per still-running pid
    PCOUNT=$(pgrep -f collect_log.sh | wc -l)  # was pgrep "junk": a leftover test value
done
rm -f pid_pfile.txt
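One way to avoid the polling loop entirely is to let each background job convert its own log the instant its collection step exits: wrap the collector and the converter together in one backgrounded subshell, then `wait` once at the end. Below is a minimal, self-contained sketch of that idea. Note the assumptions: `collect_stub` is an invented stand-in for `collect_log.sh`, and the demo log format (hostname on line 1, data between `name` and `exit` marker lines) is made up for illustration, so adapt the parsing to your real logs.

```shell
#!/bin/sh
# Each subshell converts its OWN log the moment collection finishes,
# so the fastest job is converted first -- no polling, no pid file.

collect_stub() {                      # fake collector: $1=delay  $2=host
    sleep "$1"
    printf 'host %s\nname\ndata %s\nexit\n' "$2" "$2"
}

convert() {                           # filter one finished log into <host>.txt
    opfile=$1
    hname=$(awk 'NR==1 {print $NF}' "$opfile")   # hostname from line 1 (demo format)
    awk '/^name/,/^exit/' "$opfile" | sed '$d' > "${hname}.txt"
}

# Launch every job in parallel; collection and conversion are chained
# inside one backgrounded subshell per node.
for spec in '1 slowhost' '0 fasthost'; do
    set -- $spec                      # deliberate word splitting: $1=delay $2=host
    opfile="${2}_syslog.txt"
    ( collect_stub "$1" "$2" > "$opfile"; convert "$opfile" ) &
done
wait    # returns only after every log has been collected AND converted
```

With this structure `fasthost.txt` appears roughly a second before `slowhost.txt`, which is exactly the "convert whichever finishes first" behaviour you describe. If you are on bash 5.1+ rather than plain sh, `wait -n -p pid` is another option for reacting to jobs one at a time from the parent shell.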