Reasons for NOT using LDOMs? reliability?


 
# 1  
Old 08-27-2013

Dear Solaris Experts,

We are upgrading from sun4u to T4 systems and one proposal is to use LDOMs and also zones within LDOMs.

Someone advised using only zones and not LDOMs because the new machines have fewer chips and if a chip or a core fails then it doesn't impact the zones, but impacts the corresponding LDOMs.

What's the failure rate / probability of core failure on sun4v?

What are your experiences with the sun4v systems?

Do you avoid using LDOM for this or for any other reason?

Is a system with LDOMs inherently less reliable than one without, especially since all guest LDOMs depend on the primary?

If each LDOM is given at least 2 cores, does that mitigate the total loss of the LDOM if one core fails?

Thanks in advance for your help.
# 2  
Old 08-29-2013
I use this exact configuration: LDOMs with zones installed inside the LDOMs. It really comes down to money and complexity. Money, in the sense that you can use your LDOM configs to limit your licensing exposure through physical resource limitations in the LDOM config (you can't do this with zones alone). Complexity, in that it's essentially a double VM, a VM running in a VM, with all the complexity that brings. Add mounted storage from a SAN and you can see where it gets complicated very fast.
# 3  
Old 08-30-2013
I would just like to correct os2mac: zones can indeed be used for physical resource limitation.
As per document : http://www.oracle.com/technetwork/da...wp-1911914.pdf

If you use a resource pool with a processor set bound to specific zones, Oracle recognizes it as a hard partition for software licensing.
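For illustration, a hard partition along those lines can be set up with the resource pools facility. The pool, pset, and zone names and the CPU count below are examples, not anything from the document:

```shell
# Enable and save the resource pools facility
pooladm -e
pooladm -s

# Create a processor set pinned to 4 CPUs and a pool that uses it
poolcfg -c 'create pset db-pset (uint pset.min = 4; uint pset.max = 4)'
poolcfg -c 'create pool db-pool'
poolcfg -c 'associate pool db-pool (pset db-pset)'

# Commit the configuration to the running system
pooladm -c

# Bind the zone to the pool so it only ever runs on those CPUs
zonecfg -z dbzone 'set pool=db-pool'
```

With this in place the zone is confined to the processor set, which is the hard-partition arrangement the licensing document describes.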

As for LDOMs and high availability, you can use the migrate option and/or manually import the LDOM configuration on another server (this requires the root disk to be on FC or iSCSI storage visible to multiple nodes).
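Both routes use the ldm command on the control domain. A minimal sketch, where the domain name ldg1 and the target host name are examples:

```shell
# Option 1: live-migrate the guest domain to another control domain
ldm migrate-domain ldg1 root@target-host

# Option 2: export the domain constraints and re-create them on the target
ldm list-constraints -x ldg1 > ldg1.xml
# ...copy ldg1.xml to the target control domain, then there:
ldm add-domain -i ldg1.xml
ldm bind-domain ldg1
ldm start-domain ldg1
```

Option 2 only works because the guest's root disk sits on shared FC/iSCSI storage; the XML carries the configuration, not the data.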

Another option is to have, for instance, two LDOMs on local disks, with the zone inside on an FC/iSCSI disk.
If one host goes down, you can always attach and boot the zone(s) on another LDOM (node).
Since the zpool holding the zone will warn you that it is active on another node (LDOM), you will not be able to accidentally attach the zone on both LDOMs at the same time.
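The failover described above comes down to a few commands on the surviving LDOM. The pool name zonepool and zone name appzone here are illustrative:

```shell
# Import the shared pool holding the zone's zonepath
# (zpool import refuses a pool it thinks is active elsewhere,
#  which is the accidental-double-attach protection mentioned above)
zpool import zonepool

# Register the existing zone from its zonepath, then attach and boot it
zonecfg -z appzone create -a /zonepool/appzone
zoneadm -z appzone attach -u   # -u updates the zone's packages to match this host
zoneadm -z appzone boot
```

The -u on attach matters when the two LDOMs are not at identical patch levels.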

And of course you can always buy Solaris Cluster, which will do the above things for you with HA agents (but that is rather expensive).

Hope that helps

Regards
Peasant.
# 4  
Old 08-30-2013
Oracle Enterprise Manager Ops Center can also help with uptime by monitoring LDOMs and restarting them as needed on another host.