SUDO error in Solaris: auth.error] fork
Post 302970102 by DukeNuke2 on Friday 1st of April 2016 07:20:03 PM
Some more information would be useful... It looks like a cron job (running every 10 minutes) is going bad. What is it you are trying to do (or better, what is happening every 10 minutes)? Which Solaris version are you using? The more information, the better...
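If it helps, here is a quick sketch of how to find what actually fires every 10 minutes, assuming the stock Solaris cron layout (adjust the paths if your site differs):

    # Scan every user's crontab for a 10-minute schedule; classic Solaris
    # cron spells that as "0,10,20,30,40,50" in the minute field.
    for f in /var/spool/cron/crontabs/*; do
        printf '== %s ==\n' "${f##*/}"
        grep '^0,10,20,30,40,50' "$f"
    done

    # cron's own log shows what it actually executed, and when:
    tail -50 /var/cron/log

The auth.error entries in syslog should line up time-wise with one of those jobs.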
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

fork error

Hi, I have UNIX 5.05 on a Compaq ML330 server. It works OK in single-user mode, but after installing the Specialix 30-port serial card driver it gives the following message on the server: FORK ERROR TOO MANY PROCESS (1 Reply)
Discussion started by: santoshsonawane
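On a box like that, the first thing to check is how full the process table is; a generic sketch (the hard limit itself is a kernel tunable such as NPROC, which varies by release):

    # Count running processes against the kernel's process-table limit:
    ps -ef | wc -l

    # See which user or daemon owns most of them -- a respawn loop on the
    # new serial ports (one getty per port) usually dominates this list:
    ps -ef | awk '{print $1}' | sort | uniq -c | sort -rn | head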

2. UNIX for Advanced & Expert Users

Solaris 10 auth issue

A very strange one: we've got a recently built server (Sol10 via JET flash). Basically you can ssh to it fine, but telnet will allow entry of the username and then feed a carriage return into the passwd field. This also happens on any auth-type command, i.e. passwd on a user account will also... (4 Replies)
Discussion started by: itsupplies

3. AIX

Sudo error

I want to give a user sar permission, so I modified the sudoers file (unix1 is the group for users who can use the sar command):
Cmnd_Alias RUN_SAR = /usr/sbin/sar
User_Alias UNIX1_USERS = %unix1
UNIX1_USERS ALL = NOPASSWD:RUN_SAR
However, when I run the sar command, it shows:
$ sar 1 4
sar: The... (1 Reply)
Discussion started by: rainbow_bean
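For comparison, a minimal sudoers fragment along the same lines (group name and path taken from the post; always edit with visudo):

    # Let members of group unix1 run sar without a password:
    Cmnd_Alias RUN_SAR = /usr/sbin/sar
    %unix1 ALL = NOPASSWD: RUN_SAR

Note that the command still has to go through sudo, e.g. sudo /usr/sbin/sar 1 4; running sar 1 4 bare, as in the quote above, never consults sudoers at all, which is the usual cause of an error like that.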

4. UNIX for Advanced & Expert Users

Problem due to Fork Error

Hi, I have developed a DataStage job which has many processes running in parallel, but because of a fork error my job is not working. Can anybody help me solve this fork error problem? My OS is SunOS. Is there any setting in Unix, through admin, where if I set some parameter... (8 Replies)
Discussion started by: Amey Joshi

5. HP-UX

error : can not fork new process

Hi, today we came across the error "can not fork new process". When I checked, there were 400 ksh processes running for that particular user (due to a kernel parameter setting, the number of processes was restricted to 400), and the reason was that somebody executed a shell script which had "*" ( only *... (3 Replies)
Discussion started by: zedex

6. UNIX for Advanced & Expert Users

sudo su error

Hello, I am logging in to a server as user 'test'. I want to execute some commands as user test2. When I try to run `sudo su - test2 -c 'ls'` it gives the error: user 'test' is not allowed to run sudo on host. But when I log in to the account 'test2' using sudo su - test2, all these... (6 Replies)
Discussion started by: karayan
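The usual cause is a sudoers entry that matches the command su - test2 exactly, so the extra -c 'ls' arguments fail the match. A sketch of a rule that grants the intent directly (usernames from the post; the NOPASSWD tag is illustrative):

    # Allow user "test" to run any command as user "test2":
    test ALL = (test2) NOPASSWD: ALL

With that in place, sudo -u test2 ls does the job without wrapping a shell in su at all.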

7. UNIX for Dummies Questions & Answers

error message when use fork with open file

I get this message when I run my myshell.c program: "VM pagefault: SIGSEGV bad addr 0x0 err 0x4 nopage read myshell PM: coredump signal 11 for 1725 /myshell memory fault (core dumped)" /* RCS information: $Id: myshell.c,v 1.2 2006/04/05 22:46:33 elm Exp $ */ #include <stdio.h> #include <unistd.h>... (1 Reply)
Discussion started by: rosecomp

8. UNIX for Dummies Questions & Answers

Error in Sudo

Hi, I have installed sudo on Solaris 10 (SPARC). When I try to add a user I get the following:
-bash-3.00$ sudo addusr scarlet sudo
sudo: /usr/local/etc/sudoers.d is owned by uid 2, should be 0
Password:
I entered a password, thinking it was for the sudo user, but it failed. Then I entered the... (3 Replies)
Discussion started by: Scarlet
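That message is sudo's own sanity check: it refuses to read configuration not owned by root. A sketch of the fix, run as root (path taken from the error; modes are the conventional ones):

    # sudoers files and directories must be owned by uid 0:
    chown -R root /usr/local/etc/sudoers.d
    chmod 755 /usr/local/etc/sudoers.d

(Separately, note that the native Solaris command is useradd, not addusr.)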

9. UNIX for Dummies Questions & Answers

Fork resource unavailable error, max # filehandles open?

I wrote a Perl program that simultaneously reads in data from 691 tar.gz files using zcat. I can run one instance of the program without any issues, and the memory and swap sizes are negligible. However, when I attempt to run more than one instance, I start to get fork: resource unavailable messages. Are... (6 Replies)
Discussion started by: aquinom85
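fork returning "resource unavailable" (EAGAIN) here is usually the process limit rather than file handles: each zcat pipe is a whole process, so one instance already spawns roughly 691 of them. A quick check on Solaris 10 (resource-control names as shipped there; $$ is the current shell):

    prctl -n project.max-lwps $$              # per-project LWP/process ceiling
    prctl -n process.max-file-descriptor $$   # fd limit, for comparison
    ulimit -u                                 # per-user process limit in this shell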

10. Shell Programming and Scripting

Strange fork error while running script

more run.sh
#!/bin/bash
input="data.txt"
while IFS= read -r var
do
    startdir="/web/logs"
    searchterm=$(echo $var | awk -F'=' '{print $1}')
    replaceterm=$(echo $var | awk -F'=' '{print $2}')
    find "$startdir" -type f -exec grep -l "$searchterm" {} + | while read file
    do
        if sed -e... (1 Reply)
Discussion started by: mohtashims
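Each iteration of that loop forks awk twice plus a find/grep pipeline, which is what exhausts the process table. A sketch of the same field split done by the shell itself, with no awk forks (assumes one '=' per input line):

    #!/bin/bash
    input="data.txt"
    startdir="/web/logs"

    # Let read split on '=' instead of forking awk twice per line:
    while IFS='=' read -r searchterm replaceterm; do
        find "$startdir" -type f -exec grep -l "$searchterm" {} + |
            while IFS= read -r file; do
                : # sed replacement logic from the original script goes here
            done
    done < "$input"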
queuedefs(4)                     File Formats                    queuedefs(4)

NAME
       queuedefs - queue description file for at, batch, and cron

SYNOPSIS
       /etc/cron.d/queuedefs

DESCRIPTION
       The queuedefs file describes the characteristics of the queues
       managed by cron(1M). Each non-comment line in this file describes
       one queue. The format of the lines is as follows:

           q.[njobj][nicen][nwaitw]

       The fields in this line are:

       q       The name of the queue. a is the default queue for jobs
               started by at(1); b is the default queue for jobs started
               by batch (see at(1)); c is the default queue for jobs run
               from a crontab(1) file.

       njob    The maximum number of jobs that can be run simultaneously
               in that queue; if more than njob jobs are ready to run,
               only the first njob jobs will be run, and the others will
               be run as jobs that are currently running terminate. The
               default value is 100.

       nice    The nice(1) value to give to all jobs in that queue that
               are not run with a user ID of super-user. The default
               value is 2.

       nwait   The number of seconds to wait before rescheduling a job
               that was deferred because more than njob jobs were running
               in that job's queue, or because the system-wide limit of
               jobs executing has been reached. The default value is 60.

       Lines beginning with # are comments, and are ignored.

EXAMPLES
       Example 1: A sample file.

           #
           #
           a.4j1n
           b.2j2n90w

       This file specifies that the a queue, for at jobs, can have up to
       4 jobs running simultaneously; those jobs will be run with a nice
       value of 1. As no nwait value was given, if a job cannot be run
       because too many other jobs are running, cron will wait 60 seconds
       before trying again to run it.

       The b queue, for batch(1) jobs, can have up to 2 jobs running
       simultaneously; those jobs will be run with a nice(1) value of 2.
       If a job cannot be run because too many other jobs are running,
       cron(1M) will wait 90 seconds before trying again to run it.

       All other queues can have up to 100 jobs running simultaneously;
       they will be run with a nice value of 2, and if a job cannot be
       run because too many other jobs are running, cron will wait 60
       seconds before trying again to run it.

FILES
       /etc/cron.d/queuedefs    queue description file for at, batch,
                                and cron.

SEE ALSO
       at(1), crontab(1), nice(1), cron(1M)

SunOS 5.10                       1 Mar 1994                      queuedefs(4)
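Tying this back to the thread above: if a 10-minute cron job overlaps itself and jobs pile up, the c queue's njob field is what caps how many crontab jobs cron will run at once. A sample line for /etc/cron.d/queuedefs (values illustrative, per the format described above):

    # Cap crontab jobs (queue c) at 2 concurrent, nice 2, retry after 60s:
    c.2j2n60w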