IPMP configuration and detach problem

# 1  
Old 08-11-2009
IPMP configuration and detach problem

Hi,
I have a problem when I try to detach NIC e1000g1.

IPMP configuration:
Code:
# ifconfig -a
...cut...
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
        inet 0.0.0.0 netmask 0
        ether 0:c:29:67:16:ef
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
        inet 0.0.0.0 netmask 0
        ether 0:c:29:67:16:f9 

# cat /etc/hosts
...cut...
192.168.72.30   clab-cl
192.168.72.33   clab-e1000g1 
192.168.72.34   clab-e1000g2 

# eeprom local-mac-address?
local-mac-address?=true

# cat /etc/hostname.e1000g1
clab-e1000g1 netmask + broadcast + \
group ipmp0 deprecated -failover up \
addif clab-cl netmask + broadcast + failover up

# cat /etc/hostname.e1000g2
ether 0:c:29:67:16:ef
clab-e1000g2 netmask + broadcast + \
group ipmp0 deprecated -failover up \
addif clab-cl netmask + broadcast + failover up
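# Note: the two hostname files above set up probe-based IPMP: each interface gets a
# deprecated, -failover test address, and both add the shared data address clab-cl
# with "addif ... failover up" (in the ifconfig output below it is UP on only one
# interface at a time). A rough by-hand equivalent of the e1000g1 file, as a sketch
# of standard ifconfig usage rather than the exact boot-time processing
# ("netmask +" assumes a matching /etc/netmasks entry):
ifconfig e1000g1 plumb
ifconfig e1000g1 clab-e1000g1 netmask + broadcast + \
    group ipmp0 deprecated -failover up
ifconfig e1000g1 addif clab-cl netmask + broadcast + failover up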

# svcadm restart physical:default
/var/adm/messages
clab2 Cluster.PNM: [ID 185191 daemon.error] MAC addresses are not unique per subnet.

# ifconfig -a
...cut...
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 11
        inet 192.168.72.33 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 11
        inet 192.168.72.30 netmask ffffff00 broadcast 192.168.72.255
e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 12
        inet 192.168.72.34 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g2:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 12
        inet 192.168.72.30 netmask ffffff00 broadcast 192.168.72.255
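# Note: in the output above both e1000g1 and e1000g2 report ether 0:c:29:67:16:ef,
# which matches the "MAC addresses are not unique per subnet" message.
# A quick way to spot such a collision (both commands already appear in this thread):
ifconfig -a | grep ether        # duplicate ether lines indicate a MAC collision
eeprom local-mac-address?       # should the NICs be using their own factory MACs?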

Detach test (e1000g2, e1000g1)
Code:
# if_mpadm -d e1000g2
/var/adm/messages
clab2 in.mpathd[475]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g2 to NIC e1000g1

# ifconfig -a
...cut...
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 15
        inet 192.168.72.33 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 15
        inet 192.168.72.30 netmask ffffff00 broadcast 192.168.72.255
e1000g1:2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 15
        inet 192.168.72.30 netmask ffffff00 broadcast 192.168.72.255
e1000g2: flags=89040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,OFFLINE> mtu 1500 index 16
        inet 192.168.72.34 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 

# if_mpadm -r e1000g2
/var/adm/messages
clab2 in.mpathd[475]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g2

# if_mpadm -d e1000g1

# ifconfig -a
...cut...
e1000g1: flags=89040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,OFFLINE> mtu 1500 index 15
        inet 192.168.72.33 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 16
        inet 192.168.72.34 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g2:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 16
        inet 192.168.72.30 netmask ffffff00 broadcast 192.168.72.255
e1000g2:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 16
        inet 192.168.72.30 netmask ffffff00 broadcast 192.168.72.255

/var/adm/messages
clab2 in.mpathd[475]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g1 to NIC e1000g2

...and after six seconds...
clab2 in.mpathd[475]: [ID 594170 daemon.error] NIC failure detected on e1000g2 of group ipmp0
clab2 Cluster.PNM: [ID 890413 daemon.notice] ipmp0: state transition from OK to DOWN.

Where am I wrong?

# 2  
Old 08-11-2009
Disable IPMP and use a normal IP configuration. Bring the first interface up and try to ping your default router. If successful, try the same with the second interface; one possible command sequence is sketched below.

Can you reach the router from both interfaces?
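
For reference, that test might look something like this (a sketch only; the interface names and test addresses come from the thread, but the router address and the unplumb/plumb steps are assumptions, and they will interrupt networking on those NICs):
Code:
# take the interfaces out of IPMP temporarily and test them one at a time
ifconfig e1000g1 unplumb
ifconfig e1000g2 unplumb
ifconfig e1000g1 plumb 192.168.72.33 netmask 255.255.255.0 up
ping 192.168.72.1            # replace with your default router (assumed address)
ifconfig e1000g1 unplumb
ifconfig e1000g2 plumb 192.168.72.34 netmask 255.255.255.0 up
ping 192.168.72.1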
# 3  
Old 08-11-2009
Quote:
Originally Posted by gxmsgx
ipmp0: state transition from OK to DOWN.
Besides the above error, what else is seen in the messages file?
# 4  
Old 08-11-2009
SOLVED

1) Delete the line "setprop local-mac-address? 'true'" from /boot/solaris/bootenv.rc
2) Delete the line "ether 0:c:29:67:16:ef" from /etc/hostname.e1000g2
3) Reboot
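
One way to make the two file edits above from a shell, using only the paths quoted in the steps (a sketch; back the files up first, and any editor works just as well as grep -v):
Code:
# step 1: drop the local-mac-address? override from bootenv.rc
cp /boot/solaris/bootenv.rc /boot/solaris/bootenv.rc.bak
grep -v "local-mac-address?" /boot/solaris/bootenv.rc.bak > /boot/solaris/bootenv.rc
# step 2: drop the hard-coded ether line from the e1000g2 hostname file
cp /etc/hostname.e1000g2 /etc/hostname.e1000g2.bak
grep -v "^ether " /etc/hostname.e1000g2.bak > /etc/hostname.e1000g2
# step 3: reboot
init 6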

Code:
# ifconfig -a
...cut...
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
        inet 192.168.72.33 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.72.40 netmask ffffff00 broadcast 192.168.72.255
e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 192.168.72.34 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:f9 
e1000g2:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.72.40 netmask ffffff00 broadcast 192.168.72.255

Detach test (e1000g2, e1000g1)
Code:
# if_mpadm -d e1000g2
/var/adm/messages
clab2 in.mpathd[470]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g2 to NIC e1000g1

# ifconfig -a
...cut...
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
        inet 192.168.72.33 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.72.40 netmask ffffff00 broadcast 192.168.72.255
e1000g1:2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.72.40 netmask ffffff00 broadcast 192.168.72.255
e1000g2: flags=89040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,OFFLINE> mtu 1500 index 4
        inet 192.168.72.34 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:f9 

# if_mpadm -r e1000g2
/var/adm/messages
clab2 in.mpathd[470]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g2

# if_mpadm -d e1000g1
/var/adm/messages
clab2 in.mpathd[470]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g1 to NIC e1000g2

# ifconfig -a
...cut...
e1000g1: flags=89040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,OFFLINE> mtu 1500 index 3
        inet 192.168.72.33 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:ef 
e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 192.168.72.34 netmask ffffff00 broadcast 192.168.72.255
        groupname ipmp0
        ether 0:c:29:67:16:f9 
e1000g2:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.72.40 netmask ffffff00 broadcast 192.168.72.255
e1000g2:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.72.40 netmask ffffff00 broadcast 192.168.72.255

# if_mpadm -r e1000g1
/var/adm/messages
clab2 in.mpathd[470]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g1
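
# After the final failback, a quick sanity check might look like the following
# (a sketch; 192.168.72.1 as the default router is an assumption, substitute your own):
ifconfig -a | egrep 'e1000g|inet|ether'   # data address (192.168.72.40 above) back on e1000g1, MACs distinct
ping 192.168.72.1                         # data path still reaches the router (assumed address)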

Thanks to all.