Many thanks - a consultant actually listening to what technicians have to say is balm on the tortured soul of the latter. ;-))
Quote:
Originally Posted by
Irishango
Clear now. I can guess that the productivity of p-Series-type virtualization is better, since there are fewer layers of software.
But what are the drawbacks of p-Series-type virtualization? What are the things that the first type of virtualization can do, but the p-Series type cannot?
This is a bit tough to explain: basically there are two concepts of virtualization, called "full virtualization" and "para-virtualization". What I described in my last post as "Linux-type virtualization" is in fact full virtualization: the OS of the virtual machine is not aware that it is virtual at all. If, for example, the disk driver has to write some data to the disk, it writes to the standard interfaces as if there were a real disk. Only then does the underlying program in the host OS intercept this, decode it, create some real disk operation out of it and execute that. In para-virtualization the disk driver "knows" that it is only virtual and therefore does a lot less, skipping tasks a real disk driver would do (because these are done by the real driver anyway).
Paravirtualization, then, is virtualization with virtual systems which are aware of their own virtuality. It is obvious that paravirtualization needs a lot fewer resources, hence is "more productive" than full virtualization - the "price" being that you need these paravirtualized software layers (network drivers, disk drivers, filesystem layers, etc.).
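To make the difference concrete, here is a toy sketch in Python. None of this is real driver or hypervisor code - the class and method names are invented purely to illustrate the two I/O paths:

```python
# Toy model of full vs. para-virtualization. Names are made up for
# illustration; real drivers and hypervisors look nothing like this.

class Host:
    """Stands in for the host OS / hypervisor owning the real disk."""
    def __init__(self):
        self.disk = {}
        self.steps = 0          # count the work done per write

    # Full virtualization: the guest emitted raw "hardware" commands
    # as if a real disk existed; the host must trap and decode them.
    def trap_io(self, raw_cmds):
        for cmd in raw_cmds:
            self.steps += 1     # decode each emulated controller command
            if cmd[0] == "WRITE":
                _, block, data = cmd
                self.disk[block] = data
                self.steps += 1 # perform the real disk operation

    # Paravirtualization: the guest knows it is virtual and calls the
    # host directly with one high-level request (a "hypercall").
    def hypercall_write(self, block, data):
        self.disk[block] = data
        self.steps += 1         # one real operation, nothing to decode

host = Host()

# Full-virt guest: goes through the motions of driving real hardware.
raw = [("SELECT", 0), ("SEEK", 7), ("WRITE", 7, b"hello"), ("FLUSH",)]
host.trap_io(raw)
full_steps = host.steps

host.steps = 0
# Para-virt guest: skips the emulated hardware dance entirely.
host.hypercall_write(7, b"hello")
para_steps = host.steps

print(full_steps, para_steps)   # the para-virt path does far less work
```

Both paths end with the same byte on the same block; the paravirtualized one just gets there without the emulate-trap-decode detour.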
To come back to your question: the "drawback" of p-Series virtualization is its limited scope: you can only have AIX or Linux as the OS of an LPAR. Compare this to VMware, where you have *some* host OS (Windows, Linux, ...) and some (or even several) guest OS(es): Linux, Windows, BSD, ...
On the other hand there are paravirtualized environments in Linux too: OpenVZ, for instance, or KVM (which is paravirtualized in parts). What exactly the "performance per unit of money" is in these systems is hard to tell - when you calculate the TCO of a system in the data center there are so many aspects - hardware cost, power consumption, footprint (space is a limited resource in data centers), service level of hardware support, etc. - that "virtual systems served per processor" or some such figure plays only a very limited role.
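A back-of-the-envelope sketch of why that is. Every number below is invented for illustration only - the point is merely that "VMs per processor" is one divisor among many cost terms:

```python
# Hypothetical TCO comparison; all figures are made up to illustrate
# why "VMs per processor" alone tells you very little.

def yearly_tco(hw_cost, years, power_kw, eur_per_kwh, rack_units,
               eur_per_u_year, support_per_year):
    """Cost of ownership per year: hardware write-off plus power,
    floor space and hardware support."""
    power = power_kw * 24 * 365 * eur_per_kwh
    space = rack_units * eur_per_u_year
    return hw_cost / years + power + space + support_per_year

# Two hypothetical boxes: B hosts more VMs per CPU, but costs more to run.
box_a = yearly_tco(60000, 4, 1.5, 0.30, 4, 800, 5000)
box_b = yearly_tco(90000, 4, 3.0, 0.30, 8, 800, 9000)

vms_a, vms_b = 20, 35
print(f"box A: {box_a / vms_a:.0f} EUR per VM and year")
print(f"box B: {box_b / vms_b:.0f} EUR per VM and year")
```

With these (invented) numbers the box with the better "VMs per processor" figure ends up only marginally cheaper per virtual system - power, space and support eat most of the difference.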
Quote:
Originally Posted by
Irishango
Hmm... You're saying that I can (in principle) move my app (and its LPAR) from box to box easily. And can I (again, in principle) decouple my app from my boxes entirely?
In fact this was one of the expressed goals of this system. It is called "Live Partition Mobility" (you can google this term) and there is an IBM redbook about it, in case you are interested in the gory technical details:
RedBook on Live Partition Mobility
Quote:
Originally Posted by
Irishango
I.e. have a bunch of IBM boxes and some "hyper-LPAR" that aggregates all of them into one big machine? And my app that runs on that "hyper-LPAR" and doesn't really care whether it in reality executed on box A, box B, or even both boxes A and B? Do people do that?
Not that way, but the other way round: you have a big box A with LPARs A1, A2, A3, ... A10 and a big box B with LPARs B1, B2, ... B10. You can (if resources allow it) now move partition B1 so that its definition (the part on the HMC which says how much memory, how many processors, ...) is transferred, the running system is also transferred to box A, and - this is the real highlight! - even the virtual disks and network adapters, which were served by the VIOS on box B, are transferred to the VIOS on box A. All this can be done while the B1 system is running and under load.
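The bookkeeping involved can be sketched like this. This is a toy model only - it has no relation to actual HMC or VIOS internals, and all the names are invented:

```python
# Toy model of what Live Partition Mobility moves between boxes.
# Invented names; the real HMC/VIOS machinery is far more involved.

class Box:
    def __init__(self, name, mem_gb):
        self.name, self.mem_gb = name, mem_gb
        self.lpars = {}          # LPAR definitions (the HMC part)
        self.vios = {}           # virtual disks/adapters served here

    def free_mem(self):
        return self.mem_gb - sum(l["mem"] for l in self.lpars.values())

def migrate(lpar, src, dst):
    """Move definition, running state and VIOS-served resources -
    but only if the target has room ('if resources allow it')."""
    if dst.free_mem() < src.lpars[lpar]["mem"]:
        raise RuntimeError(f"not enough memory on box {dst.name}")
    dst.lpars[lpar] = src.lpars.pop(lpar)    # definition + running state
    dst.vios[lpar] = src.vios.pop(lpar)      # vdisks + vnet adapters

box_a = Box("A", mem_gb=64)
box_b = Box("B", mem_gb=64)
box_b.lpars["B1"] = {"mem": 10, "running": True}
box_b.vios["B1"] = ["vdisk0", "vnet0"]

migrate("B1", box_b, box_a)
print("B1" in box_a.lpars, box_a.vios["B1"])
```

Note that the definition, the running state *and* the VIOS-served resources all travel together - that is the whole trick, and why the guest never notices the move.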
There are a few considerations for the procurement of hardware, though: to be really flexible you need systems as big as possible, because you can divide and distribute your resources inside a box but not across boxes. If you want to create an LPAR with 10G of memory but have 5G left in box A and another 5G in box B, you are stuck. This means that, for maximum flexibility, you are better off with a few big systems instead of many small ones.
(Corollary: this is why Blade Centers are a really bad idea when you want to do virtualization - you are better off using 3 p570-(p770) systems instead of 3 BladeCenters full of small units.)
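The 10G example in three lines of Python, just to make the point unmistakable (numbers invented):

```python
# Why free memory does not add up across boxes: an LPAR must fit
# entirely inside one physical system. Illustrative numbers only.

free = {"box A": 5, "box B": 5}      # GB left on each box
need = 10                            # GB wanted for the new LPAR

total_free = sum(free.values())
fits_somewhere = any(f >= need for f in free.values())

print(total_free >= need)   # True  - 10G free in total ...
print(fits_somewhere)       # False - ... but no single box can host it
```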
On the other hand, every piece of hardware breaks sometime. That means you have to choose your hardware small enough that you can easily free one box up to perform maintenance on it. If you have only two very big boxes, each will have to be big enough to take all the LPARs from the other one while you do maintenance on it, and making them twice as big as necessary can be costly. If you have 10 smaller boxes instead, it is a lot easier to free one up for maintenance, because its LPARs can be distributed over the remaining 9.
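The trade-off follows from a one-line formula: with n boxes, the load of any one box must fit on the remaining n-1, so each box needs 1/(n-1) of its own load as spare capacity. A quick sketch:

```python
# Spare capacity needed so that any single box can be emptied onto
# the others for maintenance. Pure arithmetic, no invented data.

def overhead_factor(n_boxes):
    """Fraction of its own load each box must keep free: 1/(n-1)."""
    return 1 / (n_boxes - 1)

print(f"2 boxes:  {overhead_factor(2):.0%} spare per box")
print(f"10 boxes: {overhead_factor(10):.0%} spare per box")
```

With two boxes each must be double-sized (100% spare); with ten boxes roughly 11% spare per box suffices - which is exactly the argument for many smaller boxes once maintenance enters the picture.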
I hope this helps.
bakunin