05-12-2015
Computron, please tell me if I am explaining something you already know, but I sense the problem is in fact in understanding how virtualisation the IBM way works.
We'll start with the basics: what you get from IBM is a "managed system", which is the equivalent of a "hardware node" in most Linux-centric virtualisation platforms (VirtualBox, KVM, OpenVZ, etc.). The difference between these Linux systems and the IBM hardware is that there is no OS you have to install at the base. Everything is handled in (or near) the hardware.
With, e.g., VirtualBox you install an OS on the system, install the virtualisation software on top of it, create several virtual machine profiles and start these as programs in the base system.
With IBM you have hardware which already supports virtualisation (controlled by the HMC, which is a mere management platform, not a direct part of the virtualised system), and you install directly into the LPARs (= virtual machines) you create.
For this to work you need to create "profiles", in which you hand out parts of the system's resources to the various LPARs. Your system might come with 2 processors and you might decide to give 0.1 to one system and 1.2 to another, keeping the remaining 0.7 in reserve. So a good idea would be to first meticulously describe your system here:
# of Processors
amount of memory installed
# and types of network adapters
# and types of built-in disks
etc.
This will give you a basis for planning what to give to whom and when.
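To make the planning step above concrete, here is a minimal sketch of such a resource plan as a small script. The LPAR names and the memory total are invented for illustration (the processor numbers mirror the 0.1 / 1.2 / 0.7 example above); substitute your real inventory.

```python
# Hypothetical resource plan for a 2-processor managed system.
# Names and memory figures are made up; adjust to your real box.

TOTAL_PROC = 2.0      # processors installed in the managed system
TOTAL_MEM_GB = 32     # assumed memory total (illustrative)

plan = {
    # lpar-name: (processing units, memory in GB)
    "lpar01": (0.1, 8),
    "lpar02": (1.2, 16),
}

used_proc = sum(p for p, _ in plan.values())
used_mem = sum(m for _, m in plan.values())

print(f"processors: {used_proc:.1f} of {TOTAL_PROC} allocated,"
      f" {TOTAL_PROC - used_proc:.1f} in reserve")
print(f"memory:     {used_mem} GB of {TOTAL_MEM_GB} GB allocated")

# refuse a plan that over-commits the managed system
assert used_proc <= TOTAL_PROC, "plan over-commits processors"
assert used_mem <= TOTAL_MEM_GB, "plan over-commits memory"
```

A plain table on paper works just as well; the point is only that allocations plus reserve must add up to what the managed system actually has.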
Now, virtualisation works well for "anonymous" resources like CPU and memory. It gets a little more tricky with less anonymous resources, namely network adapters and (internal) disks. Therefore the first thing to do (usually) is to create a special LPAR which acts as a man-in-the-middle between the physical hardware and the (other) virtualised systems. This special type of LPAR is called a "VIOS" (Virtual I/O Server), and one usually has 2 of them (or several pairs) per managed system (a stripped-down two-node HACMP cluster is part of the VIOS software package).
Here is a typical way to create a system:
After discovering the MS and getting the HMC to manage it, you create two LPAR profiles for your VIOSes and use "installios" to install the VIOS software onto them. Typically these are the only systems in the whole MS which have physical disks to boot from. In a typical setting the boot disks for all the other LPARs come from SAN and are connected to the LPARs via the VIOS as vSCSI LUNs.
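When planning those vSCSI mappings it pays to check that every client LPAR's boot LUN is handed down by both VIOSes, so the client sees two paths and survives one VIOS going down. Here is a hypothetical sketch of such a sanity check; all LPAR, VIOS and LUN names are invented for illustration.

```python
# Hypothetical check: each client boot LUN should be mapped via BOTH VIOSes,
# and both VIOSes must agree on which LUN a client gets. Names are made up.

vscsi_maps = {
    # vios-name: {client-lpar: lun-id}
    "vios1": {"lpar01": "lun_0a", "lpar02": "lun_0b"},
    "vios2": {"lpar01": "lun_0a", "lpar02": "lun_0b"},
}

clients = set()
for mapping in vscsi_maps.values():
    clients.update(mapping)

for client in sorted(clients):
    paths = sorted(v for v, m in vscsi_maps.items() if client in m)
    luns = {m[client] for m in vscsi_maps.values() if client in m}
    assert len(paths) == 2, f"{client}: only mapped via {paths}"
    assert len(luns) == 1, f"{client}: VIOSes disagree on the LUN: {luns}"
    print(f"{client}: {luns.pop()} via {', '.join(paths)}")
```

On the real systems you would gather the same information with the VIOS mapping commands instead of typing it in by hand; the script only illustrates the invariant to check for.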
You also give each VIOS one (or more) real network adapter(s) to connect to the outside. Network traffic for the other LPARs goes through the VIOSes, which act as a (virtual) bridge. You will have to plan thoroughly how to route what through this. Non-VIOS LPARs usually get one virtual network device, which is in turn created on the VIOS and mapped to the system. (This is quite similar to how the Linux systems handle things, except that there is no specialised virtual system there; you define this on the hardware node instead.)
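The "plan thoroughly" part boils down to one invariant: every VLAN a client LPAR's virtual adapter sits on must be bridged to a real adapter by at least one VIOS. A minimal sketch of that check, with all names and VLAN IDs invented for illustration:

```python
# Hypothetical network plan check: every client VLAN needs a VIOS bridge.
# LPAR names, VIOS names and VLAN ids are made up for illustration.

client_vlans = {"lpar01": 10, "lpar02": 20}           # virtual adapters
sea_bridges = {"vios1": {10, 20}, "vios2": {10, 20}}  # VLANs each VIOS bridges

for lpar, vlan in sorted(client_vlans.items()):
    bridging = sorted(v for v, vlans in sea_bridges.items() if vlan in vlans)
    assert bridging, f"VLAN {vlan} of {lpar} is not bridged by any VIOS"
    print(f"{lpar}: VLAN {vlan} bridged by {', '.join(bridging)}")
```

Having each VLAN bridged by both VIOSes (as in the made-up table here) is what lets network traffic keep flowing when one VIOS is down for maintenance.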
The VIOS can pass information about handed-down LUNs and the virtualised network to another VIOS (even one in another managed system), and it is therefore possible to move (running!) LPARs around between different MSes. This is called "Live Partition Mobility", and few other virtualisation systems offer anything comparable.
The built-in hypervisor is also very fast: it offers huge bandwidth, and in practice you will have little to nothing to do to tune it.
So I suggest you start by posting your exact system data, then plan what you want to do and how.
I hope this helps.
bakunin