Quote:
Originally Posted by
filosophizer
Few questions regarding PowerHA (previously known as HACMP) and VIOS PowerVM IVM (IBM Virtual I/O Server):
[1] Is it possible to create HACMP cluster between two VIOS servers
Physical Machine_1
VIOS_SERVER_1
LPAR_1
SHARED_DISK_XX
VIOS_SERVER_2
Physical Machine_2
LPAR_2
SHARED_DISK_XX
In this scenario, you will have to assign the shared disk first to the VIOS server, and from there it will be assigned to the LPAR.
Would HACMP work on this shared disk?
Short answer: yes. On both systems you need to make sure the reserve_policy attribute is set to no_reserve. It is the same process that you would use to set up MPIO with dual VIO servers in one system:
From an
article I wrote years ago:
Quote:
Setup Disks
On my system the first four disks are local, and I have 8 LUNs that I want to make MPIO-ready for the clients. So, with both VIO servers having the disks offline, or only one VIO server active, it is simple to set the disks up for no_reserve. I also add that I want the VIO server to be aware of the PVID the clients put on, or modify on, these disks. Finally, I make sure the client sees the disk as "clean".
# for i in 4 5 6 7 8 9 10 11
> do
> chdev -l hdisk$i -a pv=yes
> chdev -l hdisk$i -a reserve_policy=no_reserve
> chpv -C hdisk$i   # this "erases" the disk, so be careful!
> done
Now I can activate the disks again using:
# cfgmgr
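As a quick sanity check before handing the disks to HACMP, you can verify that the attributes actually took effect. A sketch (the hdisk numbers 4 through 11 are from my example above and will differ on your system):

```shell
# Verify reserve_policy on each shared LUN (hdisk4..hdisk11 here)
for i in 4 5 6 7 8 9 10 11
do
  lsattr -El hdisk$i -a reserve_policy   # should report no_reserve
done

# The PVIDs the clients wrote should now be visible on the VIO server
lspv
```

Run this on both VIO servers; if either side still shows single_path or another reserve policy, HACMP disk takeover will not work reliably.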
Quote:
Originally Posted by
filosophizer
[2] Can you have DLPAR (Dynamic LPAR) capabilities in IVM, that is, assign one physical Fibre Channel adapter card to an LPAR directly? (IVM VIOS means you are running VIOS without an HMC = Hardware Management Console.)
In this scenario, can you give the LPAR the shared disk directly from the storage (provided the feature is supported)?
I am not sure of the current status; there have been rumors that it might be supported, but I do not see it (assigning a physical adapter) in the IVM user interface. I use IVM a lot with my test systems, as I do not have an HMC.
So I assume no: no dynamic resource allocation of physical adapters, only processors, memory, and virtual adapters.
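For the resources IVM does let you change dynamically, the IVM restricted shell provides HMC-style commands. A sketch (the managed-system name Server-9110-51A-SN1234567 and partition id 2 are made-up placeholders; substitute your own from lssyscfg output):

```shell
# List current memory and processor allocation per partition
lshwres -r mem --level lpar -m Server-9110-51A-SN1234567
lshwres -r proc --level lpar -m Server-9110-51A-SN1234567

# Dynamically add 512 MB of memory to partition id 2
# (-o a = add; on IVM there is only one managed system to name with -m)
chhwres -r mem -m Server-9110-51A-SN1234567 -o a --id 2 -q 512
```

Note these only cover processors, memory, and virtual adapters, which matches what the IVM web interface exposes.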
Quote:
Originally Posted by
filosophizer
[3] Do I need an HMC for NPIV, or can it run on IVM only?
Quote:
Originally Posted by
filosophizer
What is NPIV?
N_Port ID Virtualization (NPIV) is a standardized method for virtualizing a physical fibre channel port. An NPIV-capable fibre channel HBA can have multiple N_Ports, each with a unique identity. NPIV coupled with the Virtual I/O Server (VIOS) adapter sharing capabilities allows a physical fibre channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER logical partitions (LPARs) to have virtual fibre channel HBAs, each with a dedicated world wide port name. Each virtual fibre channel HBA has a unique SAN identity similar to that of a dedicated physical HBA.
The minimum requirement for the 8 Gigabit Dual Port Fibre Channel adapter, feature code 5735, to support NPIV is microcode level 110304.
NPIV can be used with IVM. I will need more time to find the command syntax to create an NPIV adapter; again, I am limited by my test systems (POWER5). On POWER6 and POWER7 I would expect it to be visible in the IVM web interface. Just remember the other minimum requirements: POWER6 at a high enough firmware level, and a SAN switch that supports NPIV. The switch does not have to be 8G speed (in the beginning many customers worked with 8G cards connected to 2G ports; not any more these days).
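Once the hardware requirements are met, the VIOS side of NPIV comes down to a few padmin commands. A sketch (vfchost0 and fcs0 are example device names; yours will differ):

```shell
# Check which physical FC ports are NPIV-capable
# (the fabric column must be 1, meaning the switch port supports NPIV)
lsnports

# Map a virtual FC server adapter to a physical NPIV-capable port
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the mapping and the client partition's virtual WWPNs
lsmap -npiv -vadapter vfchost0
```

The virtual WWPNs shown by lsmap are what you zone and LUN-mask on the SAN, exactly as you would for a dedicated physical HBA.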