Based on prior experience, a SYN flood message in dmesg represents a tiny fraction of the site's traffic, so I don't think that is the issue; it's noise, not signal.
There is no network I/O spike, as mentioned earlier.
There is zero correlation between network I/O and the load spike.
I do not think it is network I/O related.
The site gets tons of bot traffic from wayward bots globally; if the bots were the cause there would be a correlation, but there is no correlation between the bots, network I/O, etc. None.
We have a Unix system whose load average is normally about 20,
but while I am running a particular batch job that performs heavy
operations on the filesystem and database, the load average
drops to 15.
How can we explain this situation?
While that batch is running, idle CPU time is about 60-65%... (0 Replies)
Hello all, I have a question about load averages.
I've read the man pages for the uptime and w command for two or three different flavors of Unix (Red Hat, Tru64, Solaris). All of them agree that in the output of the 2 aforementioned commands, you are given the load average for the box, but... (3 Replies)
Hello, here is the output of the top command. My understanding is that
the load average was 0.03 over the last 1 minute, 0.02 over the last 5 minutes, and 0.00 over the last 15 minutes.
Looking at these numbers, when can we say that the system load average is too high?
And when can we say that the load average is medium or low?... (8 Replies)
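A common rule of thumb is to judge the load average relative to the number of CPUs: per-CPU load well under 1.0 is comfortable, and above 1.0 means processes are queuing. A small Python sketch under that assumption; the 0.7 and 1.0 cutoffs are a heuristic, not an official threshold:

```python
import os

def classify_load(load1: float, ncpus: int) -> str:
    """Classify a 1-minute load average relative to CPU count.
    The 0.7 and 1.0 cutoffs are a common heuristic, not a standard."""
    per_cpu = load1 / ncpus
    if per_cpu < 0.7:
        return "low"
    if per_cpu <= 1.0:
        return "medium"
    return "high"

# The load averages from the top output above are clearly low on any box:
print(classify_load(0.03, os.cpu_count() or 1))  # prints "low"
```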
Hi,
I have installed Solaris 10 on a T5120 SPARC Enterprise server.
I am a little surprised to see a load average of around 2 on this OS.
When I checked with the ps command, the following process was using the most CPU. It looks like it has been running for a long time and does not want to stop, but I do not know... (5 Replies)
Hello all,
I would like the experts to help me: my load average has increased and I don't know where the problem is!
This is my top result:
root@a4s # top
top - 11:30:38 up 40 min, 1 user, load average: 3.06, 2.49, 4.66
Mem: 8168788k total, 2889596k used, 5279192k free, 47792k... (3 Replies)
Hi,
I am using a 48-CPU SunOS server at work.
The application can check the current load average before starting a new process, in order to control the load.
Right now the threshold is configured as 48. So does that mean each CPU can take at most one process and no process is waiting?
... (2 Replies)
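The kind of admission check described above can be sketched in a few lines; this is a hypothetical illustration, not the poster's actual application code. Note that a ceiling equal to the CPU count only means about one runnable process per CPU on average over the sampling window; it does not guarantee that no process ever waits:

```python
import os

def ok_to_start(max_load: float = 48.0) -> bool:
    """Return True when the 1-minute load average is below the
    configured ceiling (48 on the 48-CPU box described above).
    Hypothetical illustration of such an admission check."""
    load1, _, _ = os.getloadavg()
    return load1 < max_load

if ok_to_start():
    print("load below ceiling; OK to start a new process")
```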
Hi,
I am getting a high load average, around 7, once an hour. It lasts for about 4 minutes and makes things fairly unusable during that time.
How do I find out what is causing this? Looking at top, the only thing running at the time is md5sum.
I have looked at the crontab and there is nothing... (10 Replies)
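One way to catch an intermittent spike like this is to poll the load average and, once it crosses a threshold, snapshot the top CPU consumers. A rough Python sketch assuming a Linux box with GNU ps; the threshold value is a per-box judgment call:

```python
import os
import subprocess

def snapshot_if_busy(threshold: float = 5.0, top_n: int = 5) -> list:
    """If the 1-minute load average meets the threshold, return the
    top CPU consumers so the spike can be attributed to a process.
    Assumes a Linux system with GNU ps available."""
    load1, _, _ = os.getloadavg()
    if load1 < threshold:
        return []
    out = subprocess.run(
        ["ps", "-eo", "pcpu,pid,comm", "--sort=-pcpu"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return out[1 : top_n + 1]  # drop the header line

# Run this once a minute from cron around the time of the spike;
# a threshold of 0.0 here forces a snapshot for demonstration.
for line in snapshot_if_busy(threshold=0.0):
    print(line)
```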
Here we go....
Preface:
..... so in a galaxy far, far, far away from commercial, data-sharing corporations.....
For this project, I used the ESP-WROOM-32 as an MQTT (publish/subscribe) client that receives Linux server load averages as published MQTT messages.... (6 Replies)
Discussion started by: Neo
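The server side of a setup like this needs to turn load averages into publishable messages. A stdlib-only sketch of building such a payload; the field names and topic are illustrative, not the project's actual schema, and the paho-mqtt publish step is shown only as a comment:

```python
import json
import os
import socket
import time

def load_message() -> str:
    """Build a JSON payload carrying the 1-, 5- and 15-minute load
    averages. Field names are illustrative, not the project's schema."""
    one, five, fifteen = os.getloadavg()
    return json.dumps({
        "host": socket.gethostname(),
        "ts": int(time.time()),
        "load": {"1m": one, "5m": five, "15m": fifteen},
    })

msg = load_message()
print(msg)
# Publishing would look roughly like this with paho-mqtt (not run here):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("broker.example.com")
#   client.publish("server/loadavg", msg)
```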
LEARN ABOUT REDHAT
lsraid
LSRAID(8) Linux md Utilities LSRAID(8)
NAME
lsraid - List and query Linux md devices.
SYNOPSIS
lsraid -A [-g|-s|-f] {-a <device> | -d <device>} ...
lsraid -A -p
lsraid -D [-l] {-a <device> | -d <device>} ...
lsraid -D -p
lsraid -R {-a <device> | -d <device>} ...
lsraid -R -p
lsraid -h
lsraid -V
DESCRIPTION
lsraid is a program for querying Linux md devices. It can describe the composite device and the block devices that belong to it. It can
also provide a description of the md device suitable for including in the /etc/raidtab configuration file.
lsraid also has the ability to operate on online and offline devices. It can read an online device via the kernel interface and provide
information about it. When a device is offline, lsraid can look at any of the block devices that are a part of the md device and read the
persistent md superblock for information.
OPTIONS
-A Selects array-based operation. lsraid will query the given devices and output a short listing of the referenced md devices.
-a <device>
Adds md device <device> to the list of devices to query. If the device is online, lsraid will discover all of the block devices that
belong to it via the kernel interface. Otherwise lsraid will only be able to verify that the device exists.
-D Selects disk-based operation. lsraid will query the given devices and then output a description of all the member disks requested.
-d <device>
Adds block device <device> to the list of devices to query. lsraid will read the md superblock off of <device> and use it to discover
the associated md device and block devices.
-f Displays only failed block devices in array-based mode (-A).
-g Displays only good block devices in array-based mode (-A).
-h, --help
Displays a short usage message, then exits.
-l Displays a long dump of block device superblocks in disk-based mode (-D). This output is verbatim from the on-disk md superblock, and
reflects the state on the specific disk, not the state the md device currently considers authoritative.
-p Scans all block devices in /proc/partitions for RAID arrays. This can be slow in the presence of network block devices and the like.
This option is mutually exclusive with the -a and -d options.
-R Selects raidtab operation. lsraid will query all the devices specified and output a description of the referenced md devices in a
format suitable for placing in a raidtab(5) file.
-s Displays only spare block devices in array-based mode (-A).
NOTES
lsraid cannot discover the block devices that make up an offline md device. Providing one of the member devices with the -d option allows
lsraid to discover the rest of the information about the offline md device.
Disk-based operation only displays the block devices specified on the command line. Specify the md device on the command line to see
information about all of the member disks. If the md device is offline, specify both the md device and one of the member disks.
lsraid does not do any special handling of md devices composed of other md devices (e.g. RAID 1+0). The member devices are merely treated as
block devices while in the context of the parent device. This is only an issue for raidtab-based operation. The raidtab(5) output will be
printed in the order the md devices are queried. This means that a command creating a raidtab(5) for a RAID 1+0 device should list the
member devices first on the command line.
EXAMPLES
lsraid -A -a /dev/md0
Display a short listing of the md0 device.
lsraid -A -d /dev/sda1
Display a short listing of the array that sda1 belongs to.
lsraid -A -f -a /dev/md0
Display the failed devices belonging to the md0 device.
lsraid -D -l -a /dev/md0
Display a long dump of the on-disk md superblock of every disk in md0.
lsraid -D -a /dev/md0 -d /dev/sda1
Display a short description of the disks in md0 as well as a short description of the disk sda1. sda1 will only be described once if
it belongs to md0.
lsraid -R -a /dev/md0 -a /dev/md1 -a /dev/md2
Display a description of the arrays in an output format suitable for using in raidtab(5) files. Note that if md0 and md1 are raid0
arrays and md2 is a raid1 created from md0 and md1, this command will output the information in the correct order.
lsraid -R -p
Scan all block devices in /proc/partitions and display all discovered md devices in a format suitable for using in raidtab(5) files.
BUGS
Probably.
SEE ALSO
mkraid(8), raidtab(5), raidstart(8), raidstop(8)
VERSION
lsraid version 0.7.0 (26 March 2002)
HISTORY
Version 0.7.0
Added scanning of active block device partitions.
Version 0.4.0
Initial documented version. Functionally complete.
AUTHOR
Joel Becker <joel.becker@oracle.com>
COPYRIGHT
Copyright (C) Oracle Corporation, Joel Becker. All rights reserved.
This program is free software; see the file COPYING in the source distribution for the terms under which it can be redistributed and/or
modified.
3rd Berkeley Distribution 2002-03-26 LSRAID(8)