Location: Asia Pacific, Cyberspace, in the Dark Dystopia
Posts: 19,118
Thanks Given: 2,351
Thanked 3,359 Times in 1,878 Posts
Update:
Quote:
Originally Posted by Neo
So, as a sanity check, I have disabled apache2 mod pagespeed (just now) to see if there is any effect at all.
This is just a "shot in the dark" (disabling mod pagespeed), but at least we will know something. If the spikes continue, I will turn it back on, of course.
It did not help at all; it slowed the site down a bit and did not stop any spikes.
Next:
I have some old "cyberspace situational awareness" PHP code from a visualization project a few years ago, which captures and stores detailed information on web session activity; this code has proven handy for identifying rogue bots in the past.
So, I have modified that code to capture and store detailed session information, including the number of hits per IP address, the user agent string, country code, etc., whenever the one minute load average is above 20 and below 50.
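The trigger logic can be sketched roughly as follows (a minimal Python sketch of the idea, not the actual PHP code; the thresholds match the 20-50 window described above):

```python
# Sketch: sample the 1-minute load average (Linux) and decide whether to
# capture session details. The real code is PHP; this just shows the logic.

def one_minute_load(path="/proc/loadavg"):
    """Return the 1-minute load average as a float (Linux-only)."""
    with open(path) as f:
        return float(f.read().split()[0])

def should_capture(load, low=20.0, high=50.0):
    """Capture session details only while the load sits inside the window."""
    return low < load < high
```

The upper bound keeps the capture code itself from piling more work onto a box that is already thrashing.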
So, let's see what happens the next time we get a spike... this should be interesting.
Update:
Just noticed, after digging around in the DB logs from my MQTT instrumentation, that the last spike correlated with a jump in data transferred out of the network interface:
Typical values are much lower (see below), so this would seem to support the "rogue bots" hypothesis, currently the leading candidate explanation for these periodic spikes:
This is also the first hard correlation of a spike with network interface I/O stats, so let's see if my code from the previous post will trap the next big spike.
Update:
There were two spikes three hours apart; both were captured by my HTTP session logging program, which logs session details aggregated by IP address. In this case, the code starts logging (kicks off) when the one minute load average exceeds 20 and stops when the same load average exceeds 50. So, in a spike we record a very short snapshot in time of the traffic (on the way up and on the way down, though I may change this in the future to only capture on the way up).
The results were as follows:
In both spikes, there were at least four Chinese IP addresses present at the top of the "hit count" chart (the DB table):
116.232.49.231
116.232.48.112
117.144.138.130
117.135.187.18
All four of these IP addresses were present during the 4AM and 7AM (Bangkok Time) spikes, and all four identified with the same user agent string:
This indicates these IP addresses (in China) are running the same bot software; that is only an indication, but a fairly strong one.
However, there is no denying that my "trap the bots" code has identified four Chinese IP addresses running bot software which is more than likely contributing to the cause of the spikes.
In addition, during the same two spikes spaced three hours apart (as mentioned), there was one US-based IP address running with a blank user agent string:
208.91.198.98
Keep in mind that in this capture, the code only recorded session information when the one minute load average was above 20 and below 50, and there were two spikes spaced almost exactly three hours apart:
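Pulling the top talkers out of such a table is a simple GROUP BY; here is a hypothetical sketch with sqlite3 (the table name, columns, and sample rows are my assumptions for illustration, not the actual schema):

```python
import sqlite3

# Hypothetical schema: one row per captured HTTP request during the load window.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE session_log (ip TEXT, user_agent TEXT, country TEXT)")
rows = [
    ("116.232.49.231", "bot-ua", "CN"),
    ("116.232.49.231", "bot-ua", "CN"),
    ("208.91.198.98", "", "US"),
]
con.executemany("INSERT INTO session_log VALUES (?, ?, ?)", rows)

# Hit count aggregated by IP address, highest first.
top = con.execute(
    "SELECT ip, COUNT(*) AS hits FROM session_log "
    "GROUP BY ip ORDER BY hits DESC"
).fetchall()
print(top)  # [('116.232.49.231', 2), ('208.91.198.98', 1)]
```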
So, having recorded the events above, I have just now emptied that DB table and have "reset the trap" for the next spikes.
Now, turning our attention to my instrumentation log, where I use MQTT to log all application and system cron (batch) events (start and end times) as well as a number of system metrics, we see a correlation (during the first spike) at 1581800045: a spike in traffic out of the network interface, along with correlating spikes in Apache2 processes and CPU.
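As a side note for readers decoding the raw timestamps: 1581800045 is a Unix epoch value, and converting it (quick Python check; Bangkok is UTC+7 with no DST) lands it right around the 4AM Bangkok spike mentioned earlier:

```python
from datetime import datetime, timedelta, timezone

ts = 1581800045  # epoch seconds from the MQTT instrumentation log

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
ict = utc.astimezone(timezone(timedelta(hours=7)))  # Indochina Time, UTC+7

print(utc.strftime("%Y-%m-%d %H:%M:%S"))  # 2020-02-15 20:54:05
print(ict.strftime("%Y-%m-%d %H:%M:%S"))  # 2020-02-16 03:54:05 (Bangkok)
```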
Looking at the second event (spike 2), there is a similar pattern; of particular interest is that preceding both spikes there is an hourly application cron function:
This seems to indicate that the cause of the spikes, in this case, is a combination of aggressive bot activity coinciding with an hourly cron / batch process.
To be more certain of this, I am going to change the "update attachment view" cron process from kicking off at the 53 minute mark of every hour to the 23 minute mark, and see if the times of the spikes shift as well.
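Assuming the job lives in a standard crontab, that shift is a one-field change (the script path here is a placeholder, not the actual path on my server):

```
# Before: runs at minute 53 of every hour
# 53 * * * * /path/to/update_attachment_views
# After: runs at minute 23 of every hour
23 * * * * /path/to/update_attachment_views
```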
So, let's try this:
Empty the "trap" again and block two Chinese subnetworks showing rogue, unidentified bot activity.
Honestly, this is starting to annoy me: these performance hits, and all the valuable time in life I am spending to find their cause, may well come down to rogue, unidentified bots from Chinese networks.
If this continues, I am going to start blocking Chinese networks at the /16 and /8 levels (entire networks).
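Before blocking whole /16s or /8s, it is worth sanity-checking which of the trapped addresses a candidate block would actually cover; a small sketch using Python's ipaddress module (the subnet choices here are illustrative guesses, not the exact ranges I am blocking):

```python
import ipaddress

# Candidate blocks covering the source ranges seen in the trap (illustrative).
blocked = [ipaddress.ip_network("116.232.0.0/16"),
           ipaddress.ip_network("117.128.0.0/10")]

def is_blocked(ip: str) -> bool:
    """True if the address falls inside any blocked subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in blocked)

for ip in ["116.232.49.231", "117.144.138.130", "208.91.198.98"]:
    print(ip, is_blocked(ip))
```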
First, let's see if this is indeed the main source of these spikes, following situational awareness theory and the famous OODA loop by John Boyd:
OBSERVE
ORIENT
DECIDE
ACT
Already, we have enough information to ACT; but let's continue to OBSERVE.
The loop goes on ... and on ....
Please note that we cannot trust apache2 modules and other third-party software to automatically block IPs, because this can result in blocking the "good bots" which are important for search engine optimization and site traffic.
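One cheap safeguard for any automated blocker is a pre-filter that refuses to block anything claiming to be a known search-engine crawler. This is only a sketch of that pre-filter, under my own assumptions; user agent strings can be spoofed, so a production version should confirm the claim with a reverse-DNS lookup (the verification method Google and Bing document) rather than trust the string alone:

```python
# Naive good-bot pre-filter: never auto-block requests whose user agent
# claims to be a known crawler. UA strings can be spoofed, so this only
# prevents accidental blocking of good bots; it does not authenticate them.

GOOD_BOT_TOKENS = ("Googlebot", "Bingbot", "DuckDuckBot", "YandexBot")

def claims_good_bot(user_agent: str) -> bool:
    """True if the UA string contains any known crawler token."""
    ua = user_agent.lower()
    return any(tok.lower() in ua for tok in GOOD_BOT_TOKENS)
```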
That means, if it is confirmed that these kinds of bots continue to be the cause of problems, then I will need to DECIDE how to deal with this situation moving forward. At this point in time, I am going to continue to "trap and trace" before making a decision. However, it does seem, at this point, that rogue, unidentified bots from Chinese networks are causing performance issues and need to be "dealt with".
If anyone else has experienced similar issues and has an interesting potential solution to this problem, please reply and share your ideas.
Thanks!
PS: I may consider automating this, as follows:
Capture network session activity when one minute load average exceeds a threshold (as I am doing now).
Filter the results captured in the DB based on "hitcount" and "country".
If the "hitcount" exceeds a certain threshold and the "country" is in an array of countries known for rogue bots, automatically block that IP address.
Here we go....