PowerHA HACMP on VIOS servers - Post 302774883 by dukessd on Sunday 3rd of March 2013, 06:16:21 PM
POWER7 information
Dynamically managing physical adapters

IBM

You can use the Integrated Virtualization Manager to change the physical adapters that a running logical partition uses.

Before you start, ensure that the Integrated Virtualization Manager is at version 1.5 or later.
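
For reference, IVM exposes an HMC-style command line for this kind of dynamic LPAR operation. The sketch below is illustrative only: the syntax follows the HMC/IVM lshwres and chhwres commands, but the partition names and the DRC index are made-up examples, so verify them against your own IVM level before use.

Code:
  # As padmin on the IVM (VIOS) partition: list physical I/O slots,
  # their DRC indexes, and their current owners.
  lshwres -r io --rsubtype slot -F drc_index,description,lpar_name

  # Move one adapter slot to a running LPAR: release it from its current
  # owner, then assign it to the target partition (DRC index is an example).
  chhwres -r io --rsubtype slot -o r -p old_lpar -l 21010002
  chhwres -r io --rsubtype slot -o a -p new_lpar -l 21010002

  # Inside the target AIX LPAR, run cfgmgr so the new adapter is configured.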

HTH
 

10 More Discussions You Might Find Interesting

1. AIX

Duplicate IP address brings PowerHA (HACMP) down

Hello, I would like to know if anyone has faced this problem. Whenever there is a duplicate IP address, HACMP goes down; in fact, HACMP (PowerHA) takes the whole system down. Does anyone know how to solve this problem? (3 Replies)
Discussion started by: filosophizer
3 Replies

2. AIX

HACMP does not start db2 after failover (db2nodes not getting modified by hacmp)

Hi, when I do a failover, HACMP always starts DB2, but recently it fails to start DB2. I noticed the issue is that db2nodes.cfg is not modified by HACMP and still shows the primary node. I manually changed the node name to the secondary, after which DB2 started immediately. Unable to figure out why HACMP is... (4 Replies)
Discussion started by: gkr747
4 Replies
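
The manual fix described in that thread usually comes down to pointing db2nodes.cfg at the takeover node. A minimal sketch, assuming a single-partition instance owned by a hypothetical user db2inst1 and hypothetical hostnames nodeA and nodeB:

Code:
  # db2nodes.cfg holds one line per database partition: <partition> <hostname> <port>
  # e.g. still naming the primary after takeover:   0 nodeA 0
  CFG=/home/db2inst1/sqllib/db2nodes.cfg
  cp "$CFG" "$CFG.bak"                      # keep a copy of the old file
  sed 's/nodeA/nodeB/' "$CFG.bak" > "$CFG"  # point the partition at the standby
  # db2start should then bring the instance up on nodeB.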

3. AIX

Interoperability Oracle Clusterware - PowerHA/HACMP

I am planning on building a new database server using AIX 6.1 and Oracle 11.2 with ASM. As I have learned, starting with Oracle 11.2, ASM can only be used in conjunction with Clusterware, which is Oracle's HA software. As it is the company's policy, we do intend to use PowerHA as the HA solution instead... (1 Reply)
Discussion started by: bakunin
1 Replies

4. AIX

Increase LUN size in AIX with VIOS and HACMP

Hello! I have this infrastructure: - 1 POWER7 with a single VIOS on Site A. - 1 POWER6 with a single VIOS on Site B. - 1 LPAR called NodeA as the primary node for PowerHA 6.1 on Site A. - 1 LPAR called NodeB as the secondary (cold) node for PowerHA 6.1 on Site B. - 1 DS4700 storage unit on Site A. - 1... (8 Replies)
Discussion started by: enzote
8 Replies
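
For the resize question above, the usual sequence once the LUN has been grown on the storage side looks roughly like the sketch below. The commands are standard VIOS/AIX ones, but the hdisk and VG names are hypothetical, and growing a VG that PowerHA holds in concurrent mode has its own restrictions, so treat this as an outline rather than a procedure.

Code:
  # On each VIOS (as padmin): rescan devices so the larger LUN size is picked up.
  cfgdev

  # On the client LPAR: rediscover devices and confirm the new size (reported in MB).
  cfgmgr
  bootinfo -s hdisk2

  # Tell LVM that the physical volume grew, so the shared VG gains the new PPs.
  chvg -g sharedvg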

5. AIX

PowerHA Disk on VIO Server

Hello AIX gurus, can anybody tell me the steps to create a shared VG (enhanced concurrent) for my LPARs from the VIO server? My questions are: 1. Should I create the enhanced concurrent VG in VIO and map it using virtual SCSI to the LPAR? or 2. Can I just create a virtual SCSI device in VIO, map it to the LPAR and... (1 Reply)
Discussion started by: Vit0_Corleone
1 Replies
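
The usual PowerHA pattern matches the second option in that thread: the VIOS only maps the raw disk through virtual SCSI, and the enhanced concurrent VG is created on the client LPARs. A minimal sketch, with hypothetical device and VG names:

Code:
  # On each VIOS (as padmin): clear the SCSI reserve so both nodes can open the
  # disk, then map it whole to the client's vhost adapter.
  chdev -dev hdisk10 -attr reserve_policy=no_reserve
  mkvdev -vdev hdisk10 -vadapter vhost0 -dev nodea_shared

  # On one client LPAR (AIX): create the enhanced concurrent-capable VG.
  mkvg -C -y sharedvg hdisk2

  # Import it on the other node, then let PowerHA/C-SPOC manage varyon
  # in concurrent mode.
  importvg -y sharedvg hdisk2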

6. AIX

VIOS IP address - separate VLAN for VIOS servers?

Hello, let's say for simplicity that I do not use any VLAN config inside my server - one LPAR group uses HEA physical port 1, another group HEA physical port 2. Physical port 1 is configured as VLAN 1 on the external switch, physical port 2 as VLAN 2. What is the common practice - should I isolate my VIOS... (0 Replies)
Discussion started by: vilius
0 Replies

7. AIX

PowerHA(HACMP) full vg loss - cluster hangs on release_vg_fs event

Hello, AIX 6.1 TL7 SP6, PowerHA 6.1 SP10. I was experimenting with a new HACMP build. It's a 3-node cluster built on AIX 6.1 LPARs. It contains Ethernet and diskhb networks. The shared VG disk is a SAN disk. Two nodes see the disk using vSCSI; the third node sees it using NPIV. The application is a DB2 server. ... (4 Replies)
Discussion started by: vilius
4 Replies

8. AIX

PowerHA on AIX 6.1.9

Hello, I've installed PowerHA 7.1.3 on two AIX 6.1.9 (6100-09-03-1415) servers working with DMX4 EMC storage. After syncing the cluster (terminated with OK) I see that the repository disk is up on only one machine: hdiskpower60 00c7f6b59fc60d9d caavg_private active... (1 Reply)
Discussion started by: ariec
1 Replies

9. AIX

VxVM in PowerHA 6.1

I have created a VxVM disk group on AIX 7.1 and tried to add this VxVM disk group to PowerHA 6.1, but the VxVM DGs are not listed in the cluster. Is there any other procedure to add a VxVM disk group to HACMP? Please share the steps for adding a VxVM disk group to HACMP. (6 Replies)
Discussion started by: sunnybee
6 Replies

10. AIX

Upgrading PowerHA from 7.1.0 to 7.1.3

Hi all, as per the IBM upgrade/support matrix (IBM Techdocs Technote: PowerHA for AIX Version Compatibility Matrix), we can't do an online upgrade or rolling migration from PowerHA v7.1.0 to v7.1.3 on AIX61_TL9_SP4, so we are following the steps below... 1) Bring down the cluster 2)... (2 Replies)
Discussion started by: linux.amrit
2 Replies
HYPER-V(4)                    BSD Kernel Interfaces Manual                    HYPER-V(4)

NAME
     hv_vmbus -- Hyper-V Virtual Machine Bus (VMBus) Driver

SYNOPSIS
     To compile this driver into the kernel, place the following lines in the system
     kernel configuration file:

           device hyperv

DESCRIPTION
     The hv_vmbus driver provides a high-performance communication interface between
     guest and root partitions in Hyper-V. Hyper-V is a hypervisor-based virtualization
     technology from Microsoft. Hyper-V supports isolation in terms of a partition. A
     partition is a logical unit of isolation, supported by the hypervisor, in which
     operating systems execute. The Microsoft hypervisor must have at least one parent,
     or root, partition, running the Windows Server operating system. The virtualization
     stack runs in the parent partition and has direct access to the hardware devices.
     The root partition then creates the child partitions which host the guest operating
     systems. Child partitions do not have direct access to other hardware resources and
     are presented a virtual view of the resources, as virtual devices (VDevs). Requests
     to the virtual devices are redirected either via the VMBus or the hypervisor to the
     devices in the parent partition, which handles the requests.

     The VMBus is a logical inter-partition communication channel. The parent partition
     hosts Virtualization Service Providers (VSPs) which communicate over the VMBus to
     handle device access requests from child partitions. Child partitions host
     Virtualization Service Consumers (VSCs) which redirect device requests to VSPs in
     the parent partition via the VMBus. The Hyper-V VMBus driver defines and implements
     the interface that facilitates high-performance bi-directional communication between
     the VSCs and the VSPs. All VSCs utilize the VMBus driver.

SEE ALSO
     hv_ata_pci_disengage(4), hv_netvsc(4), hv_storvsc(4), hv_utils(4)

HISTORY
     Support for hv_vmbus first appeared in FreeBSD 10.0. The driver was developed
     through a joint effort between Citrix Incorporated, Microsoft Corporation, and
     Network Appliance Incorporated.

AUTHORS
     FreeBSD support for hv_vmbus was first added by the Microsoft BSD Integration
     Services Team <bsdic@microsoft.com>.

BSD                              September 10, 2013                              BSD