02-14-2020
Exactly, Victor.
It's not a big deal because the spikes last just a minute, 4 to 6 times a day; the problem is when (potentially) all the "bad things" align at once (bots, DB, system loads), and the one-minute problem becomes a two- or three-minute problem (it's possible, of course).
I'm going to refine my instrumentation and see if I can figure it out. If so, great. Normally I can solve most any system-level computer problem, and with (more than) a bit of uncertainty in the new COVID-19 biohazard around these parts, I'm not so keen on going out with so many tourists here now (none of the foreign tourists are wearing masks, as far as I can see today, and there are a LOT of tourists now); so this little spike problem is keeping me busy inside, avoiding a potential virus from Chinese wildlife markets.
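The instrumentation in question is simple enough to sketch. Here is a minimal example (the baseline, threshold factor, and polling interval are assumptions, not anything from my actual setup) that samples /proc/loadavg and prints a timestamped line whenever the one-minute load spikes:

```python
import time

def parse_loadavg(text):
    """Parse the first three fields of a /proc/loadavg line into floats."""
    fields = text.split()
    return tuple(float(f) for f in fields[:3])

def is_spike(one_min, baseline, factor=3.0):
    """Flag a sample whose 1-minute load exceeds `factor` times the baseline."""
    return one_min > baseline * factor

def watch(baseline=0.5, interval=5, samples=None):
    """Poll /proc/loadavg every `interval` seconds and report spikes."""
    n = 0
    while samples is None or n < samples:
        with open("/proc/loadavg") as f:
            one, five, fifteen = parse_loadavg(f.read())
        if is_spike(one, baseline):
            print(time.strftime("%H:%M:%S"), one, five, fifteen)
        time.sleep(interval)
        n += 1
```

Correlating the printed timestamps against cron entries and web server logs is usually enough to identify which of the "bad things" lines up with each spike.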
10 More Discussions You Might Find Interesting
1. UNIX for Advanced & Expert Users
We have a Unix system whose
load average is normally about 20,
but while I am running a particular Unix batch job which performs heavy
operations on the filesystem and database, the load average
drops to 15.
How can we explain this situation?
While running that batch job, idle CPU time is about 60-65%... (0 Replies)
Discussion started by: gfhgfnhhn
2. UNIX for Dummies Questions & Answers
Hello all, I have a question about load averages.
I've read the man pages for the uptime and w command for two or three different flavors of Unix (Red Hat, Tru64, Solaris). All of them agree that in the output of the 2 aforementioned commands, you are given the load average for the box, but... (3 Replies)
Discussion started by: Heathe_Kyle
3. UNIX for Dummies Questions & Answers
Hello, here is the output of the top command. My understanding is that
the load average is 0.03 over the last 1 min, 0.02 over the last 5 min, and 0.00 over the last 15 min.
Looking at these numbers, when can we say that the system load average is too high?
When can we say that the load average is medium/low?... (8 Replies)
Discussion started by: govindts
4. Solaris
Hi,
I have installed Solaris 10 on a T-5120 SPARC Enterprise.
I am a little surprised to see a load average of 2 or so on this OS.
When checked with the ps command, the following process is using the highest CPU. It looks like it has been running for a long time and does not want to stop, but I do not know... (5 Replies)
Discussion started by: upengan78
5. UNIX for Dummies Questions & Answers
Hello all,
I would like the experts to help me, as my load average has increased and I don't know where the problem is!
This is my top result:
root@a4s # top
top - 11:30:38 up 40 min, 1 user, load average: 3.06, 2.49, 4.66
Mem: 8168788k total, 2889596k used, 5279192k free, 47792k... (3 Replies)
Discussion started by: black-code
6. UNIX for Advanced & Expert Users
Hi ,
I am using 48 CPU sunOS server at my work.
The application has facility to check the current load average before starting a new process to control the load.
Right now it is configured as 48, so that means each CPU can take at most one process and no process is waiting.
... (2 Replies)
Discussion started by: kumaran_5555
7. Solaris
NPROC  USERNAME  SWAP   RSS    MEMORY  TIME       CPU
  320  oracle    23G    22G    69%     582:55:11  85%
   47  root      148M   101M   0.3%    99:29:40   0.3%
   53  rafmsdb   38M    60M    0.2%    0:46:17    0.1%
    1  smmsp     1296K  5440K  0.0%    0:00:08    0.0%
    7  daemon ... (2 Replies)
Discussion started by: snjksh
8. UNIX for Dummies Questions & Answers
Hi,
I am getting a high load average, around 7, once an hour. It lasts for about 4 minutes and makes things fairly unusable during that time.
How do I find out what is causing this? Looking at top, the only thing running at the time is md5sum.
I have looked at the crontab and there is nothing... (10 Replies)
Discussion started by: sm9ai
9. UNIX for Dummies Questions & Answers
How is the load average calculated, and what exactly is it?
What is the difference between CPU % and the load average? (9 Replies)
Discussion started by: robo
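On the "how is it calculated" question: on Linux, the load averages are exponentially damped moving averages of the run-queue length, updated every 5 seconds. A back-of-the-envelope sketch (the kernel itself uses fixed-point arithmetic, and also counts tasks in uninterruptible sleep; this floating-point version just illustrates the damping):

```python
import math

# Damping factors for the 1-, 5-, and 15-minute averages:
# exp(-5s/60s), exp(-5s/300s), exp(-5s/900s).
EXP_1 = math.exp(-5.0 / 60.0)
EXP_5 = math.exp(-5.0 / 300.0)
EXP_15 = math.exp(-5.0 / 900.0)

def damped_average(prev, active, factor):
    """One 5-second update step: prev*factor + active*(1-factor)."""
    return prev * factor + active * (1.0 - factor)

# Example: with 2 runnable tasks sustained for an hour, each average
# converges toward 2, the short windows faster than the long one.
load1 = load5 = load15 = 0.0
for _ in range(12 * 60):  # one hour of 5-second ticks
    load1 = damped_average(load1, 2.0, EXP_1)
    load5 = damped_average(load5, 2.0, EXP_5)
    load15 = damped_average(load15, 2.0, EXP_15)
```

This also answers the CPU % question indirectly: CPU % measures how busy the processors are right now, while the load average measures how many tasks want to run (or are stuck in uninterruptible I/O wait), smoothed over time.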
10. Programming
Here we go....
Preface:
..... so in a galaxy far, far, far away from commercial, data sharing corporations.....
For this project, I used the ESP-WROOM-32 as an MQTT (publish/subscribe) client which receives Linux server "load averages" as published MQTT messages.... (6 Replies)
Discussion started by: Neo
LEARN ABOUT CENTOS
wd_keepalive
WD_KEEPALIVE(8) System Manager's Manual WD_KEEPALIVE(8)
NAME
wd_keepalive - a simplified software watchdog daemon
SYNOPSIS
wd_keepalive [-c filename|--config-file filename]
DESCRIPTION
This is a simplified version of the watchdog daemon. If configured to do so, it only opens /dev/watchdog and keeps writing to it often
enough to keep the kernel from resetting, at least once per minute. Each write delays the reboot time another minute. After a minute of
inactivity the watchdog hardware will cause a reset. In the case of the software watchdog, the ability to reboot will depend on the
state of the machine and its interrupts.
The wd_keepalive daemon can be stopped without causing a reboot if the device /dev/watchdog is closed correctly, unless your kernel is
compiled with the CONFIG_WATCHDOG_NOWAYOUT option enabled.
Under high system load, wd_keepalive might be swapped out of memory and may fail to be swapped back in in time. Under these circumstances
the Linux kernel will reset the machine. To avoid unnecessary reboots, make sure you have the variable realtime set to yes in the
configuration file watchdog.conf. This adds real-time support to wd_keepalive: it will lock itself into memory, and there should be no
problem even under the highest of loads.
On a system running out of memory, the kernel will try to free enough memory by killing processes. The wd_keepalive daemon itself is
exempted from this so-called out-of-memory killer.
OPTIONS
Available command line options are the following:
-c config-file, --config-file config-file
Use config-file as the configuration file instead of the default /etc/watchdog.conf.
FILES
/dev/watchdog
The watchdog device.
/var/run/wd_keepalive.pid
The pid file of the running wd_keepalive.
SEE ALSO
watchdog.conf(5)
watchdog(8)
4th Berkeley Distribution January 2005 WD_KEEPALIVE(8)
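The keepalive loop the man page describes is essentially "write a byte, sleep, repeat". A minimal user-space sketch of that idea follows; the device path is parameterized so the loop can be exercised against an ordinary file, since writing to the real /dev/watchdog arms a reset timer:

```python
import time

def keepalive(device="/dev/watchdog", interval=30, beats=None):
    """Pet the watchdog once per `interval` seconds.

    `beats` limits the number of writes (None means run forever, as the
    real daemon does). Unbuffered binary mode ensures each write reaches
    the device immediately.
    """
    written = 0
    with open(device, "wb", buffering=0) as dev:
        while beats is None or written < beats:
            dev.write(b"\0")  # any write resets the watchdog timer
            written += 1
            time.sleep(interval)
        dev.write(b"V")  # "magic close": lets a supporting driver disarm
    return written
```

The final `b"V"` write is the watchdog "magic close" convention: drivers that support it disarm the timer when the device is closed cleanly, which is exactly the behavior the man page describes being defeated by CONFIG_WATCHDOG_NOWAYOUT.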