01-13-2009
Sometimes we do micropartitioning in our production and failover environments; sometimes the partitions are on separate hosts.
My preference is to keep related DB/app servers on the same host. That way we can create virtual network devices, and any app-to-DB communication happens over the backplane rather than the physical network. The real drawback of this strategy is cost: it is generally cheaper to buy two systems with half the memory each than one system with double the memory. This is slightly less of a problem with POWER6, as IBM has added more RAM slots per module.
When we get into VIO-style virtualization, we keep that to our development and QA environments. If we put that into production, I'd spend half of every week on conference calls determining who is slowing whom down, or proving that it isn't happening.
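As a rough illustration of the same-host setup (a sketch only: the adapter names ent1/en1 and the 192.168.100.0/24 addressing are hypothetical, and these AIX commands assume an LPAR that already has a virtual Ethernet adapter defined through the HMC):

```shell
# On the app-server LPAR: confirm a virtual Ethernet adapter exists.
# Virtual adapters appear as "Virtual I/O Ethernet Adapter" in lsdev output.
lsdev -Cc adapter | grep -i "virtual i/o ethernet"

# Put the app<->DB traffic on the virtual adapter's interface (ent1 -> en1).
# Traffic to the DB LPAR on the same frame then stays on the hypervisor
# backplane instead of crossing the physical network.
chdev -l en1 -a netaddr=192.168.100.10 -a netmask=255.255.255.0 -a state=up

# Sanity checks: reach the DB LPAR and watch the virtual adapter's counters.
ping -c 3 192.168.100.20
entstat -d ent1 | grep -i packets
```

If the packet counters on ent1 climb while the physical adapter's stay flat, the app-to-DB traffic is staying inside the frame as intended.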
LEARN ABOUT FREEBSD
hv_vmbus
HYPER-V(4) BSD Kernel Interfaces Manual HYPER-V(4)
NAME
hv_vmbus -- Hyper-V Virtual Machine Bus (VMBus) Driver
SYNOPSIS
To compile this driver into the kernel, place the following lines in the system kernel configuration file:
device hyperv
DESCRIPTION
The hv_vmbus driver provides a high-performance communication interface between guest and root partitions in Hyper-V. Hyper-V is a hypervisor-based
virtualization technology from Microsoft. Hyper-V supports isolation in terms of partitions. A partition is a logical unit of isolation,
supported by the hypervisor, in which operating systems execute.
The Microsoft hypervisor must have at least one parent, or root, partition running a Windows Server operating system. The virtualization
stack runs in the parent partition and has direct access to the hardware devices. The root partition then creates the child partitions that
host the guest operating systems.
Child partitions do not have direct access to other hardware resources and are presented a virtual view of the resources, as virtual devices
(VDevs). Requests to the virtual devices are redirected either via the VMBus or the hypervisor to the devices in the parent partition, which
handles the requests.
The VMBus is a logical inter-partition communication channel. The parent partition hosts Virtualization Service Providers (VSPs), which
communicate over the VMBus to handle device access requests from child partitions. Child partitions host Virtualization Service Consumers
(VSCs), which redirect device requests to VSPs in the parent partition via the VMBus. The Hyper-V VMBus driver defines and implements the
interface that facilitates high-performance bi-directional communication between the VSCs and VSPs. All VSCs utilize the VMBus driver.
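As an alternative to compiling the driver into the kernel, the Hyper-V drivers can be loaded as modules at boot via loader.conf (a sketch; the exact module names, and whether each ships as a loadable module in a given FreeBSD release, are assumptions based on the driver names in this page):

```
# /boot/loader.conf -- load the Hyper-V drivers as kernel modules at boot
# (module names assumed from the drivers named in this manual page)
hv_vmbus_load="YES"      # VMBus inter-partition communication channel
hv_utils_load="YES"      # time sync, heartbeat, shutdown integration services
hv_netvsc_load="YES"     # paravirtualized network VSC
hv_storvsc_load="YES"    # paravirtualized storage VSC
```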
SEE ALSO
hv_ata_pci_disengage(4), hv_netvsc(4), hv_storvsc(4), hv_utils(4)
HISTORY
Support for hv_vmbus first appeared in FreeBSD 10.0. The driver was developed through a joint effort between Citrix Incorporated, Microsoft
Corporation, and Network Appliance Incorporated.
AUTHORS
FreeBSD support for hv_vmbus was first added by Microsoft BSD Integration Services Team <bsdic@microsoft.com>.
BSD                                                       September 10, 2013                                                       BSD