This may or may not end up being a strictly 'scripting' issue, but I'll start here.
I'm looking for a way to establish the 'max' or 'high water' usage of a given mount point over a period of a week. My first thought was a cron job, as:
which works fine as far as it goes, but it would be nice to append the current date/time to each sample. Ultimately what I need is the high and low usage points during the week, but it would also be nice to see the pattern.
If a completely different approach will work, I'm all ears, but I'm not the SA and don't have root access.
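For what it's worth, a minimal sketch of the kind of cron job described above, assuming a POSIX shell; the mount point, log path, and date format are all placeholders to adjust (no root access needed):

```shell
#!/bin/sh
# Append one timestamped df sample for a mount point to a log file.
# MOUNT and LOG are illustrative -- point them at the filesystem of interest.
MOUNT="/tmp"
LOG="${TMPDIR:-/tmp}/df_watch.log"
printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M')" "$(df -k "$MOUNT" | tail -1)" >> "$LOG"
```

Run it from crontab every few minutes; at the end of the week, sorting the log on the used-blocks column (e.g. with sort or awk) gives the high and low water marks, and the timestamps show the pattern.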
Hi,
I was applying patches; after rebooting I got these messages. I did not do anything other than that. Now I am unable to start my Oracle instance. How do I solve this?
These are the error messages:
forceload of /drv/rdriver failed
/drv/rdmexus failed
... (7 Replies)
Hi people,
I'm trying to create a mount point, but am having no success at all, with the following:
mount -F ufs /dev/dsk/diskname /newdirectory
but I keep getting: mount-point /newdirectory doesn't exist.
What am I doing wrong/missing?
Thanks
Rc (1 Reply)
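The usual cause of that error is simply that the target directory does not exist yet; mount will not create it for you. A hedged sketch (the device name is the poster's placeholder, and the mount itself needs root, so it is shown commented out; /tmp/newdirectory is used here only so the sketch runs unprivileged):

```shell
# Create the mount point directory first -- mount does not create it.
mkdir -p /tmp/newdirectory
# Then, as root on Solaris (diskname is a placeholder device):
# mount -F ufs /dev/dsk/diskname /tmp/newdirectory
```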
Hi,
I am seeing very high kernel usage and very high load averages on my system (although we are not loading much data into our database). Here is the output of top... does anyone know what I should be looking at?
Thanks,
Lorraine
last pid: 13144; load averages: 22.32, 19.81, 16.78 ... (4 Replies)
Hi,
On Solaris 5.10, I have a following mount point:
/dev/dsk/emcpower0a 492G 369G 118G 76% /u02
In /u02, from the du -h command, I can see that only 110G is used by a couple of directories. I am wondering where the rest of the 259G has gone? Any ideas please?
How can I check... (17 Replies)
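A common explanation for du and df disagreeing like this is space held by files that were deleted but are still open by some process, or data hidden underneath a mount point. A hedged sketch of the usual first checks, using /tmp as a stand-in for /u02 (a complete lsof listing generally needs root):

```shell
# Compare the filesystem's own accounting with a directory-tree walk.
df -k /tmp | tail -1                # what the filesystem reports
du -sk /tmp 2>/dev/null || true     # what the directory tree adds up to
# Deleted-but-still-open files keep their blocks until the process exits:
# lsof +L1 /u02                     # run as root; lists open, unlinked files
```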
Dear All,
Can anyone help with this?
I need to change a mount point in AIX 6:
/opt/OM should become /usr/lpp/OM. How do I do it?
Please help me, urgent issue. (2 Replies)
Hi Guys,
I have Solaris 9 and RHEL 5 boxes. I implemented a script to send me an email when a mount point is > 90% full.
Now the output looks like this:
/dev/dsk/emcpower20a 1589461168 1509087840 64478720 96% /data1
/dev/dsk/emcpower21a 474982909 451894234 18338846 97% /data2... (2 Replies)
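For completeness, a minimal sketch of the threshold check such a script typically performs, assuming POSIX df and awk (90 is the poster's threshold; the mail step is left out):

```shell
# Print every filesystem whose Use% exceeds the threshold.
THRESH=90
df -kP | awk -v t="$THRESH" 'NR > 1 {
    use = $5; sub(/%/, "", use)       # strip the % sign from column 5
    if (use + 0 > t) print $6, $5     # mount point and usage
}'
```

Piping the result to mailx (or checking whether it is non-empty first) turns this into the alert the poster describes.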
How do I create a new mount point with 600 GB, and add 350 GB to an existing mount point?
It would be best if there were steps I can follow or execute before I mount or add disk space in AIX.
Thanks (2 Replies)
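On AIX, creating and growing JFS2 filesystems is usually done with crfs and chfs. A hedged sketch of the steps; the volume group and mount point names are placeholders, and these commands need root on an AIX box, so they are shown as a commented command list rather than a runnable script:

```shell
# 1. Check the volume group has enough free physical partitions:
#      lsvg datavg
# 2. Create a 600 GB JFS2 filesystem and its mount point in one step:
#      crfs -v jfs2 -g datavg -a size=600G -m /newmnt -A yes
#      mount /newmnt
# 3. Grow an existing filesystem by 350 GB (chfs extends it online):
#      chfs -a size=+350G /existingmnt
```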
Hi there,
I have a mount point that is locked.
How do I unlock it? (1 Reply)
Discussion started by: alvinoo
LEARN ABOUT OPENDARWIN
queuedefs(4)                       File Formats                   queuedefs(4)

NAME
queuedefs - queue description file for at, batch, and cron
SYNOPSIS
/etc/cron.d/queuedefs
DESCRIPTION
The queuedefs file describes the characteristics of the queues managed by cron(1M). Each non-comment line in this file describes one queue.
The format of each line is as follows:
q.[njobj][nicen][nwaitw]
The fields in this line are:
q The name of the queue. a is the default queue for jobs started by at(1); b is the default queue for jobs started by batch (see
at(1)); c is the default queue for jobs run from a crontab(1) file.
njob The maximum number of jobs that can be run simultaneously in that queue; if more than njob jobs are ready to run, only the first
njob jobs will be run, and the others will be run as jobs that are currently running terminate. The default value is 100.
nice The nice(1) value to give to all jobs in that queue that are not run with a user ID of super-user. The default value is 2.
nwait The number of seconds to wait before rescheduling a job that was deferred because more than njob jobs were running in that job's
queue, or because the system-wide limit of jobs executing has been reached. The default value is 60.
Lines beginning with # are comments, and are ignored.
EXAMPLES
Example 1: A sample file.
#
#
a.4j1n
b.2j2n90w
This file specifies that the a queue, for at jobs, can have up to 4 jobs running simultaneously; those jobs will be run with a nice value
of 1. As no nwait value was given, if a job cannot be run because too many other jobs are running cron will wait 60 seconds before trying
again to run it.
The b queue, for batch(1) jobs, can have up to 2 jobs running simultaneously; those jobs will be run with a nice(1) value of 2. If a job
cannot be run because too many other jobs are running, cron(1M) will wait 90 seconds before trying again to run it. All other queues can
have up to 100 jobs running simultaneously; they will be run with a nice value of 2, and if a job cannot be run because too many other jobs
are running cron will wait 60 seconds before trying again to run it.
FILES
/etc/cron.d/queuedefs queue description file for at, batch, and cron.
SEE ALSO
at(1), crontab(1), nice(1), cron(1M)
SunOS 5.10                         1 Mar 1994                     queuedefs(4)