Nearly Random, Uncorrelated Server Load Average Spikes
Post 303044134 by Neo on Friday 14th of February 2020, 10:51 PM
Quote:
Originally Posted by stomp
Glad you figured it out already.
Not yet.

Last night's data did not confirm the "rogue bots are the cause" hypothesis (see the post above). There were two more spikes, with no correlation to an increase in bot count or to network I/O. But I'm still looking into it.

Regarding instrumentation, I prefer to build my own, like I have done with Node-RED and MQTT.

I like instrumentation that works for me, not instrumentation designed by others. Believe me, I have used plenty of other people's packages in the past, over decades.

Web-based packages that run on the server being observed start having problems when the server itself is having problems, so I do not use them.

That is why I use MQTT: the only additional load on the server under stress is publishing a short message to the network (off platform). Installing monitoring packages on the same server being tested, especially web-based programs resident on servers that are primarily web servers, is not a good way to build instrumentation, in my view, so I don't do it and only recommend it in the simplest cases.

MQTT is ideal for this kind of instrumentation. MQTT is free. MQTT is very easy to operate and maintain; and MQTT permits a wide variety of ways to store the data (on any node in the network running MQTT) and to visualize it (with MQTT-supported apps anywhere on the network).
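
Here is a minimal sketch of the kind of off-platform publisher I mean, assuming a Python environment with the paho-mqtt client installed; the broker hostname, topic, and 10-second interval are placeholder values for illustration, not my actual setup.

Code:
#!/usr/bin/env python3
# Minimal load-average publisher: read the 1/5/15-minute load averages
# and publish them as one short MQTT message to an off-platform broker.
# Assumes "pip install paho-mqtt"; the broker host and topic are placeholders.
import os
import time
import json
import paho.mqtt.publish as publish

BROKER = "mqtt.example.lan"      # hypothetical broker on another machine
TOPIC = "servers/www1/loadavg"   # hypothetical topic name

while True:
    one, five, fifteen = os.getloadavg()  # same values reported by uptime / /proc/loadavg
    payload = json.dumps({"ts": int(time.time()), "1m": one, "5m": five, "15m": fifteen})
    # publish.single() connects, sends one message, and disconnects,
    # so the extra load on the monitored server stays tiny.
    publish.single(TOPIC, payload, hostname=BROKER)
    time.sleep(10)

A Node-RED flow subscribed to that topic can then store and chart the values on any other node in the network, so the web server itself does nothing but publish.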

So, I do not have an instrumentation problem. The issue I have is trying to decide, based on evidence and strong correlation, what to monitor.

At the moment, I am testing apache2 mod_pagespeed (I have turned it off temporarily). I may turn off XCache later, after the mod_pagespeed test, and see if that changes things.
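
For anyone who wants to run the same kind of test, mod_pagespeed does not have to be uninstalled to be ruled out; its own on/off switch is enough. A sketch for a Debian-style Apache layout (the exact file the directive lives in may differ on your system):

Code:
# e.g. /etc/apache2/mods-available/pagespeed.conf (location varies by distro)
# Turn off mod_pagespeed's rewriting without unloading the module,
# then reload Apache and watch whether the load spikes change.
ModPagespeed off

followed by something like "apachectl graceful" to pick up the change.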

I am also very happy with Node-RED. In fact, I am extremely impressed with it.

Let me close by saying that I use MQTT and Node-RED by choice and do not want any other packages (I have used many of them over the decades). I really like MQTT and Node-RED. These tools fit my style and work great for me. For others, please use whatever instrumentation and monitoring tools work for you and/or are supported by your organization.
 
