Full Discussion: IPMP Configuration
Post 302344860 by incredible on Monday 17th of August 2009 10:18:02 PM
IPMP does not load-balance traffic across links the way trunking does: an IPMP group gives you failover (plus some outbound load spreading), while link aggregation (trunking) balances traffic in both directions over the aggregated ports.
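For contrast, a minimal sketch of the two approaches on Solaris 10 (interface names, the aggregation key, and all addresses are placeholders; trunking also needs matching switch-side configuration):

    # Link aggregation (trunking): one logical link, traffic balanced by policy
    dladm create-aggr -P L4 -d e1000g0 -d e1000g1 1
    ifconfig aggr1 plumb 192.168.10.5 netmask 255.255.255.0 up

    # Link-based IPMP: failover group; no inbound load balancing
    ifconfig e1000g0 plumb 192.168.10.5 netmask 255.255.255.0 group prod0 up
    ifconfig e1000g1 plumb group prod0 up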
 

10 More Discussions You Might Find Interesting

1. Solaris

IPMP

Hi All, kindly help me in configuring IPMP, or guide me to a link; I tried googling but had no luck. Thanks in anticipation. (1 Reply)
Discussion started by: kumarmani

2. Solaris

IPMP config

Hi All, I have unplumbed one interface. After that, ifconfig -a shows:
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 ... (7 Replies)
Discussion started by: jegaraman

3. Solaris

IPMP questions

Probe-based IPMP active-active; probe-based IPMP active-passive; link-based active-standby. What are the differences between probe-based IPMP active-passive and link-based active-standby? For example, in active-active probe-based IPMP there are, let's... (6 Replies)
Discussion started by: samar
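The practical difference shows up in the configuration: probe-based failure detection needs a dedicated test address on each interface for its ICMP probes, while link-based detection relies on the driver's link state alone. A hedged sketch (group name and addresses are placeholders):

    # Probe-based: a test address marked deprecated -failover drives probing
    ifconfig ce0 plumb 192.168.1.10 netmask + broadcast + group prod0 up
    ifconfig ce0 addif 192.168.1.11 netmask + broadcast + deprecated -failover up

    # Link-based: group membership only; no test address needed
    ifconfig ce1 plumb group prod0 up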

4. Solaris

setting up IPMP

Guide me in setting up IPMP on a Solaris 10 box. My system has two physical interfaces: bge0 (not used) and bge1 (10.10.10.10), plus the logical bge1:1 (10.10.10.11). Please guide me: how can I implement IPMP on this server, and how many additional test IPs are required, given that two IPs are already configured on the bge1 interface? ... (3 Replies)
Discussion started by: skamal4u
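For a probe-based active-active group over bge0 and bge1, the usual answer is one test address per interface, i.e. two additional IPs. A sketch reusing the addresses from the question (the .12 and .13 test addresses are placeholders):

    /etc/hostname.bge1
        10.10.10.10 netmask + broadcast + group prod0 up \
        addif 10.10.10.11 netmask + broadcast + up \
        addif 10.10.10.12 netmask + broadcast + deprecated -failover up

    /etc/hostname.bge0
        10.10.10.13 netmask + broadcast + group prod0 deprecated -failover up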

5. Solaris

IPMP/HA: most common configuration

Hi, based on your experience, what is the most common IPMP configuration on a two-node cluster for HA failover / scalable services? Active-active, active-standby with probe-based failure detection, etc.? Thanks (3 Replies)
Discussion started by: gxmsgx

6. Solaris

IPMP configuration and detach problem

Hi, I've a problem when I try to detach NIC e1000g1. IPMP configuration:
    # ifconfig -a
    ...cut...
    e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
            inet 0.0.0.0 netmask 0
            ether 0:c:29:67:16:ef
    e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4>... (3 Replies)
Discussion started by: gxmsgx
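A NIC that belongs to an IPMP group is normally taken out of service with if_mpadm(1M), which moves its addresses to the surviving group members, rather than by unplumbing it directly. A sketch using the interface from the excerpt:

    # Offline e1000g1; its addresses fail over to the rest of the group
    if_mpadm -d e1000g1

    # Return it to service
    if_mpadm -r e1000g1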

7. Solaris

IPMP Configuration

Dear All, I have configured IPMP on one of my Sun machines, which runs Solaris 9. I have updated all the files (/etc/hosts, /etc/hostname.ce0 and /etc/hostname.ce1) and rebooted the server, but the IPs I put in /etc/hosts are not coming up. Please find the below ifconfig... (6 Replies)
Discussion started by: lbreddy
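On Solaris 9 the /etc/hostname.* files may contain hostnames instead of addresses, but every name must then resolve through /etc/hosts; a mismatch there is the usual reason nothing comes up after a reboot. A hedged sketch (all names and addresses are placeholders):

    /etc/hosts
        192.168.1.10  myhost
        192.168.1.11  myhost-test0
        192.168.1.12  myhost-test1

    /etc/hostname.ce0
        myhost netmask + broadcast + group prod0 up \
        addif myhost-test0 netmask + broadcast + deprecated -failover up

    /etc/hostname.ce1
        myhost-test1 netmask + broadcast + group prod0 deprecated -failover standby up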

8. Solaris

IPMP active-standby configuration

Hi All, I need your help explaining why, when I configure a two-interface IPMP active-standby group, the INACTIVE and DEPRECATED flags appear. Detailed screenshot below:
    root@machine01 # ifconfig -a
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
            inet... (6 Replies)
Discussion started by: Wong_Cilacap
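Both flags are expected in a healthy active-standby group: DEPRECATED marks a test address that the probes use but applications should not source traffic from, and INACTIVE marks the standby interface, which carries no data address until a failover occurs. A sketch of commands that produce such a pair (addresses are placeholders):

    # Active interface carries the data address
    ifconfig e1000g0 plumb 192.168.20.10 netmask + broadcast + group prod0 up

    # Standby joins the group; the kernel flags it INACTIVE until failover
    ifconfig e1000g1 plumb group prod0 standby up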

9. Solaris

Solaris IPMP

Can anyone please explain the concept behind IPMP in Solaris clustering? A basic explanation would be really appreciated... Thanks in advance, vks (2 Replies)
Discussion started by: vks47

10. Solaris

IPMP Configuration Question/Problem

Hi, I have two physical interfaces connected to a Solaris box: 1. e1000g1, 2. e1000g2. I have added these interfaces to the same IPMP group, "IPMP1". After that, I configured a test address for e1000g1 like below: ifconfig e1000g1 addif <ip-address> netmask + broadcast + -failover... (1 Reply)
Discussion started by: praveensharma21
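The truncated command normally ends with deprecated and up, and the second interface needs a test address of its own; a sketch with the group from the excerpt (addresses are placeholders):

    ifconfig e1000g1 addif 192.168.30.11 netmask + broadcast + deprecated -failover up
    ifconfig e1000g2 addif 192.168.30.12 netmask + broadcast + deprecated -failover up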
scsi_max_qdepth(5)						File Formats Manual						scsi_max_qdepth(5)

NAME
scsi_max_qdepth - maximum number of I/Os that target will queue up for execution (OBSOLETE)

VALUES
Failsafe
Default
Allowed values
Recommended values

Most SCSI-2 and above devices accept multiple commands and have enough internal memory to support the default queue depth set by HP. You may change the default value to tune devices for higher throughput or load balancing.

DESCRIPTION
Note: This tunable is obsolete and is replaced by an attribute which can be set through the scsimgr command. See scsimgr(1M).

Some SCSI devices support tagged queuing, which means that they can have more than one SCSI command outstanding at any point in time. The number of commands that can be outstanding varies by device, and is not known to HP-UX. To avoid overflowing this queue, HP-UX will not send more than a certain number of outstanding commands to any SCSI device. This tunable sets the default value for that limit. The default value can be overridden for specific devices using scsictl.

Queue depth is synonymous with tagged queuing. When supported by a target, it allows the target to accept multiple SCSI commands for execution. Some targets can allow up to 256 commands to be stored from different initiators. This mechanism can help optimization for better performance. Once the target command queue is full, the target terminates any additional I/O and returns a QUEUE FULL status to the initiator. Targets may support fewer than 256 queued commands, hence the lower factory default.

If the system has a combination of devices that support small and larger queue depths, then a queue depth can be set to a value which would work for most devices. For specific devices, the system administrator can change the queue depth on a per-device basis using scsictl. See scsictl(1M) for more on how to use it. The values for both 32-bit and 64-bit kernels are the same.

Who Is Expected to Change This Tunable?
    Anyone.

Restrictions on Changing
    Changes to this tunable take effect immediately.

When Should the Value of This Tunable Be Raised?
    When the connected SCSI devices have enough memory to support a higher queue depth than the default set by HP. Such devices may offer better performance if the queue depth is set to a higher value.

What Are the Side Effects of Raising the Value of This Tunable?
    The queue depth applies to all the SCSI devices that support tagged queuing. Setting the queue depth to a value larger than the disk can handle will result in I/Os being held off once a QUEUE FULL condition exists on the disk. A mechanism exists that will lower the queue depth of the device in case of a QUEUE FULL condition, avoiding endless QUEUE FULL conditions on that device. Nevertheless, this mechanism will periodically try higher queue depths, and QUEUE FULL conditions will arise again.

When Should the Value of This Tunable Be Lowered?
    When the connected SCSI devices support a smaller queue depth, or for load balancing.

What Are the Side Effects of Lowering the Value of This Tunable?
    Devices that support a higher queue depth may not deliver optimal performance when a lower queue depth value is set.

What Other Tunables Should Be Changed at the Same Time?
    None.

WARNINGS
All HP-UX kernel tunable parameters are release specific. This parameter has been obsoleted for HP-UX 11i Version 3.

Installation of optional kernel software, from HP or other vendors, may cause changes to tunable parameter values. After installation, some tunable parameters may no longer be at the default or recommended values. For information about the effects of installation on tunable values, consult the documentation for the kernel software being installed. For information about optional kernel software that was factory installed on your system, see the release documentation.

AUTHOR
scsi_max_qdepth was developed by HP.

SEE ALSO
scsictl(1M), ioctl(2), scsi(7).

OBSOLETE                                        Tunable Kernel Parameters                                       scsi_max_qdepth(5)
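As a hedged illustration of the mechanisms described above (device paths are placeholders; verify the exact options against scsictl(1M) and scsimgr(1M) on your release):

    # Show, then raise, the queue depth for one device (legacy HP-UX)
    scsictl -m queue_depth /dev/rdsk/c2t0d0
    scsictl -m queue_depth=16 /dev/rdsk/c2t0d0

    # On HP-UX 11i v3, where this tunable is obsolete, use scsimgr instead
    scsimgr set_attr -D /dev/rdisk/disk4 -a max_q_depth=16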