Kill phantom jobs automatically?
Post 7112 by LivinFree on Wednesday 19th of September 2001 01:27:43 AM
I read an interesting description of zombies a while back. I can't remember where - maybe someone else can.
Have you ever seen the movie "The Princess Bride"? If so, you remember that the main character was not dead - just "mostly dead". Zombie processes are like that: you can't kill them because they don't really exist as processes any more, only as entries in the process table waiting for their parent to collect their exit status.

The only way of killing them that I'm aware of is to deal with the parent - and not with -9 either. Once the parent exits or finally calls wait(), the zombie is reaped (if the parent dies, init adopts the zombie and cleans it up). Usually, when a server shuts down (from what I've seen), it sends various kill signals to all processes, starting with a simple request and then progressing (eventually) to SIGKILL (-9) to make sure all processes stop. Is this happening on your server? Also, are you running any sort of large database or disk-intensive programs? If so, it can take 20-30 minutes or more to write all data to disk before it will exit. Make sure you are not kill -9'ing those!
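
For what it's worth, here is a rough sketch of that approach (Linux-style ps output assumed; column names and kill syntax may need tweaking on other systems). It lists each zombie with its parent PID, nudges the parent with SIGCHLD so it gets another chance to wait() on the dead child, and keeps SIGTERM on the parent commented out as the last resort:

#!/bin/sh
# Sketch only: find zombies, show their parents, and nudge the parents.
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/ { print $1, $2, $4 }' |
while read -r zpid ppid zcomm; do
    echo "zombie $zpid ($zcomm), parent is $ppid"
    kill -s CHLD "$ppid" 2>/dev/null    # harmless nudge; parent may reap the child
    # kill -s TERM "$ppid"              # last resort: if the parent dies, init
                                        # adopts the zombie and reaps it
done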
 

8 More Discussions You Might Find Interesting

1. AIX

I created a phantom file now i can't delete it?

OK, somehow I've managed to create two .ksh files with the same name. Impossible, I know, but somehow I did it by mistake... I was actually copying a file and renaming it as something else, but as I was typing the copy name I hit the delete key by mistake and got the ^? characters in the file name... (9 Replies)
Discussion started by: Jazmania

2. AIX

w shows phantom user

Hi. When I use the "w" command, it lists some users with "-" as the command. That means these users have already logged out but are still in the system somewhere: no process, yet still listed by the "w" and "who" commands. How can I get rid of these users? Can anybody help me out? Thanks a lot, xiko (2 Replies)
Discussion started by: xiko

3. Shell Programming and Scripting

background jobs exit status and limit the number of jobs to run

I need to execute 5 jobs at a time in the background and need to get the exit status of all the jobs. I wrote the small script below, but I'm not sure this is the right way to do it. Any ideas? Please help. $ cat run_job.ksh #!/usr/bin/ksh #################################### typeset -u SCHEMA_NAME=$1 ... (1 Reply)
Discussion started by: GrepMe

4. Shell Programming and Scripting

waiting on jobs in bash, allowing limited parallel jobs at one time, and then for all to finish

Hello, I am running GNU bash, version 3.2.39(1)-release (x86_64-pc-linux-gnu). I have a specific question pertaining to waiting on jobs run in sub-shells, based on the max number of parallel processes I want to allow, and then wait... (1 Reply)
Discussion started by: srao
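
Discussions 3 and 4 above both ask the same thing: cap the number of background jobs running at once and still collect every job's exit status. Here is a minimal bash sketch of that pattern (the do_work command and the task names are hypothetical placeholders; wait -n would be neater, but it needs a newer bash than the 3.2 mentioned above, so this polls instead):

#!/usr/bin/env bash
# Run at most MAX background jobs at a time, then wait for each and record failures.
MAX=5
pids=()
for item in task1 task2 task3 task4 task5 task6 task7; do
    while [ "$(jobs -rp | wc -l)" -ge "$MAX" ]; do
        sleep 1                       # throttle until a slot frees up
    done
    do_work "$item" &                 # hypothetical worker command
    pids+=("$!")
done
overall=0
for pid in "${pids[@]}"; do
    wait "$pid" || overall=1          # wait returns that specific job's exit status
done
echo "overall status: $overall"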

5. Red Hat

Phantom space usage in /

Hi everyone, got an interesting one (well, interesting to me). I have a box with a 5GB / mount point. Checking for large files I found nothing, and in fact when I did a full du I found that there was only 1.6GB in use! And yet / shows 100% used, so there's an unaccounted 3.4GB somewhere! The... (3 Replies)
Discussion started by: keefbaker

6. UNIX for Dummies Questions & Answers

Phantom Protocol Configuration File

I've been trying to set up the phantom protocol just to try it out. I compiled it fine, but when I ran it I got an error that the configuration file wouldn't load. I found that the file didn't exist, so I created it as a blank file, but got this: ./phantom Loading configuration file... (4 Replies)
Discussion started by: Azrael

7. Programming

Phantom Arrays in PL/SQL

Having an issue with an Oracle stored procedure that fetches a 5k array size to a downstream application using the Oracle client interface. It is creating phantom arrays and keeps sending arrays that do not exist to begin with, congesting the connections. This happened when we upgraded from Oracle... (1 Reply)
Discussion started by: mrn6430

8. Shell Programming and Scripting

Shell script to run multiple jobs and it's dependent jobs

I have multiple jobs, and each job depends on another job. Each job generates a log; if the job completed successfully the log file ends with a JOB ENDED SUCCESSFULLY message, and if it failed it ends with JOB ENDED with FAILURE. I need help on how to start. Attaching the JOB dependency... (3 Replies)
Discussion started by: santoshkumarkal
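
Discussion 8 describes a log convention (each job's log ends with JOB ENDED SUCCESSFULLY or JOB ENDED with FAILURE) rather than a finished script, so here is a small sketch of one way to chain dependent jobs on that convention; the job scripts and log names are placeholders:

#!/bin/sh
# Run a job, capture its log, and succeed only if the last log line says so.
run_and_check() {
    job=$1
    log=$2
    "$job" > "$log" 2>&1
    if tail -n 1 "$log" | grep -q "JOB ENDED SUCCESSFULLY"; then
        return 0
    fi
    echo "$job failed, see $log" >&2
    return 1
}
# job_b runs only if job_a succeeded, job_c only if job_b succeeded.
run_and_check ./job_a.sh job_a.log &&
run_and_check ./job_b.sh job_b.log &&
run_and_check ./job_c.sh job_c.log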
SYSTEMD.KILL(5)                          systemd.kill                          SYSTEMD.KILL(5)

NAME
       systemd.kill - Process killing procedure configuration

SYNOPSIS
       service.service, socket.socket, mount.mount, swap.swap, scope.scope

DESCRIPTION
       Unit configuration files for services, sockets, mount points, swap devices and scopes share a subset of configuration options which define the killing procedure of processes belonging to the unit.

       This man page lists the configuration options shared by these five unit types. See systemd.unit(5) for the common options shared by all unit configuration files, and systemd.service(5), systemd.socket(5), systemd.swap(5), systemd.mount(5) and systemd.scope(5) for more information on the configuration file options specific to each unit type.

       The kill procedure configuration options are configured in the [Service], [Socket], [Mount] or [Swap] section, depending on the unit type.

OPTIONS
       KillMode=
           Specifies how processes of this unit shall be killed. One of control-group, process, mixed, none.

           If set to control-group, all remaining processes in the control group of this unit will be killed on unit stop (for services: after the stop command is executed, as configured with ExecStop=). If set to process, only the main process itself is killed. If set to mixed, the SIGTERM signal (see below) is sent to the main process while the subsequent SIGKILL signal (see below) is sent to all remaining processes of the unit's control group. If set to none, no process is killed. In this case, only the stop command will be executed on unit stop, but no process will be killed otherwise. Processes remaining alive after stop are left in their control group, and the control group continues to exist after stop unless it is empty.

           Processes will first be terminated via SIGTERM (unless the signal to send is changed via KillSignal=). Optionally, this is immediately followed by a SIGHUP (if enabled with SendSIGHUP=). If then, after a delay (configured via the TimeoutStopSec= option), processes still remain, the termination request is repeated with the SIGKILL signal (unless this is disabled via the SendSIGKILL= option). See kill(2) for more information. Defaults to control-group.

       KillSignal=
           Specifies which signal to use when killing a service. This controls the signal that is sent as the first step of shutting down a unit (see above), and is usually followed by SIGKILL (see above and below). For a list of valid signals, see signal(7). Defaults to SIGTERM.

           Note that, right after sending the signal specified in this setting, systemd will always send SIGCONT, to ensure that even suspended tasks can be terminated cleanly.

       SendSIGHUP=
           Specifies whether to send SIGHUP to remaining processes immediately after sending the signal configured with KillSignal=. This is useful to indicate to shells and shell-like programs that their connection has been severed. Takes a boolean value. Defaults to "no".

       SendSIGKILL=
           Specifies whether to send SIGKILL to remaining processes after a timeout, if the normal shutdown procedure left processes of the service around. Takes a boolean value. Defaults to "yes".

SEE ALSO
       systemd(1), systemctl(1), journalctl(8), systemd.unit(5), systemd.service(5), systemd.socket(5), systemd.swap(5), systemd.mount(5), systemd.exec(5), systemd.directives(7), kill(2), signal(7)

systemd 237                                                                    SYSTEMD.KILL(5)
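
To make the options above concrete, here is a minimal sketch of a drop-in that tunes the kill behaviour of a hypothetical example.service (the unit name, drop-in path and timeout value are assumptions, not something the man page prescribes):

sudo mkdir -p /etc/systemd/system/example.service.d
sudo tee /etc/systemd/system/example.service.d/kill.conf >/dev/null <<'EOF'
[Service]
# SIGTERM goes to the main process; SIGKILL later to anything left in the control group
KillMode=mixed
# first signal sent on "systemctl stop" (SIGTERM is already the default)
KillSignal=SIGTERM
# also tell shell-like children that their connection has been severed
SendSIGHUP=yes
# escalate to SIGKILL if processes survive the stop timeout
SendSIGKILL=yes
TimeoutStopSec=90
EOF
sudo systemctl daemon-reload
sudo systemctl restart example.service

With KillMode=mixed, a slow shutdown gets the full TimeoutStopSec to flush its data before SIGKILL is sent, which is exactly the concern raised in the post above about databases being kill -9'd too early.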