I am looking at the configuration settings for multipath.conf on RHEL 5.5.
In particular I was looking at the max_fds setting. Can anyone tell me the maximum number of open file descriptors that a RHEL 5.5 system can have?
It has previously been set to 8192, and I was wondering whether this value is correct. Should it be higher or lower, and what are the implications of setting it either way?
My current defaults are as below:
and my device-specific settings are as follows:
If anyone has a good amount of experience with this and can provide insight into any issues, I would be grateful to hear it. The SAN it is attached to is an HP24000 disk array.
Quote:
In particular I was looking at the max_fds setting. Can anyone tell me the maximum number of open file descriptors that a RHEL 5.5 system can have?
That depends on what you have configured; it's a tunable kernel parameter. You can see the currently configured value by reading the sysctl:
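For example (these are the standard /proc entries on a Linux box; `fs.file-max` is the system-wide limit, and `fs.file-nr` gives a quick usage check):

```shell
# System-wide maximum number of open file descriptors
# (same value as `sysctl fs.file-max`)
cat /proc/sys/fs/file-max

# allocated / free / maximum -- a quick starvation check
cat /proc/sys/fs/file-nr
```

If the first number in file-nr is creeping up toward file-max, something is chewing through descriptors.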
Quote:
It has previously been set to 8192, and I was wondering whether this value is correct.
It's a tunable parameter precisely because there isn't one correct number. If there were only one correct answer, or only one correct way to arrive at it, the kernel/multipathd would have been designed to just do that and leave you out of the mix. If you're not running into any starvation issues, then you're OK. I've seen 8192 in plenty of places; it's an acceptable number (though I don't know your workload, and that number is actually quite low if you're going to install Oracle, for instance).
The same is true for the multipath.conf setting. The defaults try to be sane, but ultimately it's a tunable because, for a lot of things, humans are simply better at picking configuration values than machines are. Symptoms of multipathd descriptor starvation include warnings thrown to /var/log/messages, not all paths being detected by multipathd, and so on. AFAIK there aren't many drawbacks to a high descriptor limit with multipath; the cap just gives the administrator a way to keep multipathd from eating up all the file descriptors if some NetApp appliance (or whatever) misbehaves and starts presenting a million paths to the same LUN.
Long and short of it: play around with it, find something that works well on that server, and save/document it.
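For reference, this is roughly where the setting lives in /etc/multipath.conf (a minimal sketch, not your full config; 8192 is the value from the original post, and I believe later RHEL 5 updates also accept the keyword `max` to mean the system limit, but check `man multipath.conf` on your box):

```
defaults {
    user_friendly_names yes
    # Cap on file descriptors multipathd may hold open.
    max_fds             8192
}
```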
Quote:
should it be higher or lower and what are the implications of setting this value either higher or lower?
Basically, this parameter controls how the relevant data structures are allocated in the kernel. A very high descriptor limit can slow down read/write operations, while too low a limit can lead to descriptor starvation. Playing with this number (while not damaging in any lasting way) isn't the only way to alleviate starvation: you might use limits.conf to set default limits for most users and give only the process that needs the most descriptors a higher limit (which is SOP when installing Oracle on a RHEL box, specifically to address starvation).
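As a sketch of that approach, in /etc/security/limits.conf (the `oracle` user and the specific numbers here are purely illustrative, not recommendations):

```
# /etc/security/limits.conf
# modest default nofile cap for everyone
*       soft    nofile  1024
*       hard    nofile  4096
# the one descriptor-hungry account gets a higher ceiling
oracle  soft    nofile  65536
oracle  hard    nofile  65536
```

The limits take effect at the next PAM login, so you verify with `ulimit -n` in a fresh session as that user.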
Last edited by thmnetwork; 03-06-2012 at 11:38 AM.
Thanks for your reply; it's more than I was hoping for. I know that if you don't set max_fds, the default is just the maximum the system can handle.
I haven't had much involvement with multipathing, so I am trying to understand as much as I can in order to provide the best performance and reliability.
For reliability, that's mostly down to the type of failover you want. I wouldn't try to get too fancy with the multipathd tuning; advanced configurations are hard to troubleshoot. Sometimes you need to, but without knowing what particular end you're going for, it's probably better to stay as close to the defaults as possible. It's been a while since I've run multipath in production, but I'm pretty sure the only performance hit from multipath is its failover time, so you're better off concentrating elsewhere for performance tuning (I/O schedulers and all that).
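If you do go down the I/O-scheduler road, the per-device scheduler is visible and switchable at runtime through sysfs (the `sda` in the comment is purely illustrative; on RHEL 5 the choices are typically noop, anticipatory, deadline and cfq):

```shell
# Show the available schedulers for each block device;
# the active one is printed in square brackets.
for dev in /sys/block/*/queue/scheduler; do
    [ -r "$dev" ] || continue
    printf '%s: ' "$dev"
    cat "$dev"
done

# To switch one device (e.g. sda) to deadline, as root:
#   echo deadline > /sys/block/sda/queue/scheduler
```

The echo takes effect immediately but does not survive a reboot; to make it permanent you'd add `elevator=deadline` to the kernel line in grub.conf.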
Yeah, most of the settings are the defaults. However, this was set up by a colleague who is no longer working on Linux, and I have taken full control, so I thought it better to understand it now rather than when we have issues in the future. I have already come across a fair few other problems which I've had to sort out, and the SAN disks and their configurations seemed to be the cause. Not the multipathing settings, but elsewhere: udev rules were changed with no record of what they affected, and I have the joyous job of picking through it all, conf file by conf file, to see what needs resolving.