We're just about to migrate a large application from an Enterprise 6900 to a pair of T5220s utilising LDOMs. Does anyone have any experience of LDOMs on this kit and can provide any recommendations or pitfalls to avoid?
I've heard that use of LDOMs can have an impact on I/O speeds as it's all virtualised and routes through the control LDOM - again, does anyone know any more?
Take a look here for a description of the differences between LDOMs and containers.
We think that we have good reasons to use LDOMs. There are some logical breaks in functionality in our application - online processes, user front end support, overnight batch etc - that lend themselves to splitting onto separate machines, and these could be LDOMs or zones. We also have third party software that's licensed on a per core basis, so by running this in an LDOM we could reduce our licence costs from 8 cores to 1. As zones share an OS and LDOMs each have their own, there are patching considerations. Also, using LDOMs allows us to share a production and development environment on the same hardware. The question really is, are there positive reasons NOT to use LDOMs?
As for the I/O part, it depends on how you set up the LDOMs.
Disk-wise, you can give an LDOM its own physical disk that it will not share with any other domain.
Network-wise, you can create one virtual switch per physical interface and assign each LDOM its own physical interface.
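For example, on LDoms 1.x the control-domain commands to dedicate a physical disk and a per-NIC virtual switch to a guest look roughly like this (a sketch: the device paths, the service names and the guest name ldom1 are placeholders for your own setup):

```shell
# Export a whole physical disk to the guest via the virtual disk service
ldm add-vds primary-vds0 primary
ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0
ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1

# Create one virtual switch per physical NIC and plumb a vnet into the guest
ldm add-vsw net-dev=e1000g1 primary-vsw1 primary
ldm add-vnet vnet1 primary-vsw1 ldom1
```

Because the vdsdev maps to a whole physical disk and the vswitch sits on its own NIC, the guest's I/O is not contending with other domains for the same backend device, which is the main way to limit the virtualisation overhead.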
I have been testing LDoms in a lab environment for a few weeks now. I started my testing on a T5220 and then moved to a T2000. I had to upgrade the firmware and install some patches to meet the minimum requirements for LDoms 1.1.
I am using LDoms without security hardening, so I have not tested that part. I also had some problems with SVM, so I moved to hardware RAID mirroring. I am using a virtual switch and have seen no problems so far.
One of the issues I faced is sharing the DVD drive between different domains (which I am still working on). I am also working on a backup strategy: a full-disk backup and a per-domain backup with minimum service disruption.
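One common workaround for the DVD problem is to export the drive read-only through the virtual disk service and attach it to one guest at a time. A sketch, assuming the DVD device is c0t0d0 and the guest is called ldom1 (both placeholders):

```shell
# Export the physical DVD drive read-only through the virtual disk service
ldm add-vdsdev options=ro /dev/dsk/c0t0d0s2 dvd@primary-vds0

# Attach it to one guest; detach it again before giving it to another domain
ldm add-vdisk dvd dvd@primary-vds0 ldom1
ldm rm-vdisk dvd ldom1
```

The drive can only be bound to one domain at a time, so for installs it is often simpler to export an ISO image file the same way instead of the physical drive.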
Also had some problem with SVM so I moved to HW RAID mirroring.
On my T5220s I've ended up using raidctl to set up hardware mirroring for the boot drive. Then I use ZFS to pool the other drives. I've stopped using SVM, or any of its previous renditions, although I used to use it fairly extensively in Solaris 8. ZFS is just amazingly cool (and simple to use). I haven't yet tried using it for boot drive mirroring, but I did see a document on transitioning to it using luupgrade, so I have a clear path to that option.
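The setup above boils down to two commands; this is a sketch, and the disk names (c1t0d0 etc.) and pool name datapool are placeholders for your own layout:

```shell
# Mirror the two internal boot disks on the onboard LSI controller
raidctl -c c1t0d0 c1t1d0

# Pool the remaining drives with ZFS as a mirrored pool
zpool create datapool mirror c1t2d0 c1t3d0
zfs create datapool/ldoms
```

Note that raidctl destroys the data on the secondary disk when it creates the volume, so the mirror is best set up before installing Solaris on the boot drive.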