03-03-2013
PowerHA HACMP on VIOS servers
A few questions regarding PowerHA (previously known as HACMP) and VIOS PowerVM IVM (IBM Virtual I/O Server):
[1] Is it possible to create an HACMP cluster between two LPARs served by two VIOS servers?
Physical Machine_1
  VIOS_SERVER_1
    LPAR_1
      SHARED_DISK_XX
Physical Machine_2
  VIOS_SERVER_2
    LPAR_2
      SHARED_DISK_XX
In this scenario, you have to assign the shared disk to the VIOS server first, and the VIOS then maps it to the LPAR as a virtual SCSI device.
Would HACMP work on this shared disk?
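For what it's worth, presenting the same SAN LUN to both LPARs through their respective VIOS usually looks roughly like the following, run from the VIOS padmin shell on each side. This is only a sketch: the hdisk, vhost, and device names are assumptions and will differ on your system.

```shell
# On VIOS_SERVER_1 (repeat the same on VIOS_SERVER_2 for its copy of the LUN):

# Release the SCSI reservation so both VIOS can present the same LUN
chdev -dev hdisk4 -attr reserve_policy=no_reserve

# Map the shared LUN to the virtual SCSI server adapter facing the LPAR
mkvdev -vdev hdisk4 -vadapter vhost0 -dev shared_disk_xx

# Verify the mapping
lsmap -vadapter vhost0
```

With reserve_policy=no_reserve set on both VIOS, both LPARs can see the LUN, which is the usual prerequisite for PowerHA to manage it (typically as an enhanced concurrent volume group).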
[2] Can you have DLPAR (Dynamic LPAR) capabilities in IVM, that is, assign one physical Fibre Channel adapter card directly to an LPAR? (IVM means you are running the VIOS without an HMC, the Hardware Management Console.)
In that scenario, could you give the LPAR the shared disk directly from the storage, provided the feature is supported?
[3] Do I need an HMC for NPIV, or can it run on IVM only?
What is NPIV?
N_Port ID Virtualization (NPIV) is a standardized method for virtualizing a physical fibre channel port. An NPIV-capable fibre channel HBA can have multiple N_Ports, each with a unique identity. NPIV coupled with the Virtual I/O Server (VIOS) adapter-sharing capabilities allows a physical fibre channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER logical partitions (LPARs) to have virtual fibre channel HBAs, each with a dedicated worldwide port name (WWPN). Each virtual fibre channel HBA has a unique SAN identity similar to that of a dedicated physical HBA.
The minimum firmware level for the 8 Gigabit Dual Port Fibre Channel adapter, feature code 5735, to support NPIV is 110304.
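As a rough illustration of the NPIV setup described above, the VIOS-side mapping and a firmware check might look like this (the vfchost and fcs device names are assumptions for your configuration):

```shell
# On the VIOS: list physical FC ports and whether they are NPIV-capable
lsnports

# Bind the LPAR's virtual FC server adapter to an NPIV-capable physical port
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the NPIV mappings and the client WWPNs
lsmap -npiv -all

# On AIX (or from the VIOS root shell via oem_setup_env):
# check the adapter VPD, including the firmware (ZA) level
lscfg -vl fcs0
```

If lsnports shows no available ports, the adapter or its firmware does not support NPIV, which ties back to the minimum-level requirement for feature code 5735.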
fp(7d) Devices fp(7d)
NAME
fp - Sun Fibre Channel port driver
DESCRIPTION
The fp driver is a Sun fibre channel nexus driver that enables fibre channel topology discovery, device discovery, fibre channel adapter port management and other capabilities through well-defined fibre channel adapter driver interfaces.
The fp driver requires the presence of a fabric name server in fabric and public loop topologies to discover fibre channel devices. In private loop topologies, the driver discovers devices by performing PLOGI to all valid AL_PAs, provided that devices do not participate in LIRP and LILP stages of loop initialization.
CONFIGURATION
The fp driver is configured by defining properties in the fp.conf file. The fp driver supports the following properties:
mpxio-disable
Solaris I/O multipathing is enabled or disabled on fibre channel devices with the mpxio-disable property. Specifying mpxio-disable="no"
activates I/O multipathing, while mpxio-disable="yes" disables the feature.
Solaris I/O multipathing may be enabled or disabled on a per port basis. Per port settings override the global setting for the specified
ports. The following example shows how to disable multipathing on port 0 whose parent is /pci@8,600000/SUNW,qlc@4:
name="fp" parent="/pci@8,600000/SUNW,qlc@4" port=0
mpxio-disable="yes";
manual_configuration_only
Automatic configuration of SCSI devices in the fabric is enabled by default and thus allows all devices discovered in the SAN zone to be enumerated in the kernel's device tree automatically. The manual_configuration_only property may be configured to disable the default behavior and force the manual configuration of the devices in the SAN. Specifying manual_configuration_only=1 disables the automatic configuration of devices.
FILES
/kernel/drv/fp
32-bit ELF kernel driver (x86)
/kernel/drv/amd64/fp
64-bit ELF kernel driver (x86)
/kernel/drv/sparcv9/fp
64-bit ELF kernel driver (SPARC)
/kernel/drv/fp.conf
fp driver configuration file.
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|mpxio-disable |Unstable |
+-----------------------------+-----------------------------+
|manual_configuration_only |Obsolete |
+-----------------------------+-----------------------------+
|Availability |SUNWfctl |
+-----------------------------+-----------------------------+
SEE ALSO
cfgadm_fp(1M), prtconf(1M), stmsboot(1M), driver.conf(4), attributes(5), fcp(7D), fctl(7D), scsi_vhci(7D)
Writing Device Drivers
Fibre Channel Physical and Signaling Interface (FC-PH) ANSI X3.230: 1994
Fibre Channel Generic Services (FC-GS-2) Project 1134-D
Fibre Channel Arbitrated Loop (FC-AL) ANSI X3.272-1996
Fibre Channel Protocol for SCSI (FCP) ANSI X3.269-1996
SCSI-3 Architecture Model (SAM) Fibre Channel Private Loop SCSI Direct Attach (FC-PLDA) ANSI X3.270-1996
SCSI Direct Attach (FC-PLDA) ANSI X3.270-1996
SCSI Direct Attach (FC-PLDA) NCITS TR-19:1998
Fabric Loop Attachment (FC-FLA), NCITS TR-20:1998
SunOS 5.10 2 Dec 2004 fp(7d)