Available design options for a cluster hosting many different virtualized Solaris versions (reply by Peasant, 05-20-2014)
One option is LDOMs (Oracle VM Server for SPARC) with the Solaris Cluster HA-LDOM agent running between the physical nodes. It will cost you, since Solaris Cluster is licensed per core.

The trouble is: if a node fails for some reason, the cluster will only boot the LDOM on the next host, so you get a couple of minutes of downtime, but downtime nonetheless.
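
As a rough sketch only (resource and group names below are made up, and property names can differ between Solaris Cluster versions, so check the HA for Oracle VM Server for SPARC agent documentation), the failover setup looks roughly like this:

  # register the LDom agent resource type and build a failover group
  clresourcetype register SUNW.ldom
  clresourcegroup create ldom-rg
  clresource create -g ldom-rg -t SUNW.ldom \
      -p Domain_name=ldg1 ldg1-rs
  clresourcegroup online -M ldom-rg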

You can migrate the old machines into LDOM guests using a flar archive or a similar method.
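
For the flar route, something like this on the old box plus a JumpStart profile on the install side is the usual shape of it (the server and paths below are just examples):

  # on the legacy machine
  flarcreate -n sol10-legacy -c /net/installserver/export/flar/sol10-legacy.flar

  # JumpStart profile snippet for installing the new guest domain from it
  install_type     flash_install
  archive_location nfs installserver:/export/flar/sol10-legacy.flar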

Another option is using LDOMs and running Solaris Cluster between the virtual machines themselves. Depending on your resource (application), you can configure it active/active or active/passive.

If a node dies, the other node and its LDOM running Solaris Cluster will take over, and you can simply import the remaining machines (LDOMs) onto the working node by hand, which is what the HA-LDOM agent in the first option does automatically.
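
The manual import is not much work if you keep the guest constraints exported somewhere shared; the names below (ldg2, its pool, the config path) are only placeholders:

  # done ahead of time, on the node normally hosting the guest
  ldm list-constraints -x ldg2 > /shared/configs/ldg2.xml

  # after that node dies, on the surviving node
  zpool import ldg2-pool          # pool backing the guest's virtual disks
  ldm add-domain -i /shared/configs/ldg2.xml
  ldm bind-domain ldg2
  ldm start-domain ldg2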

Be sure to set auto-boot?=false for all LDOMs that live on shared storage.
You don't want to boot the same root zpool on two nodes at once :)
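
Something like this per guest (the ? needs escaping from the shell, and ldg1 is again just a placeholder name):

  ldm set-variable auto-boot\?=false ldg1
  ldm list-variable auto-boot\? ldg1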

You might also use bare metal with branded zones (doubtful that Solaris 8 will work that way) and Solaris Cluster with the HA zone agent, or a Solaris Cluster zone cluster, for high availability.
Or just import the zone manually if the node dies; the zpool holding the zone will not import on another node if configured properly (it will complain that the pool is active on another system).
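
Manually it is just a pool import plus an attach; the zone and pool names below are placeholders, and whether you need attach -u depends on how close the patch levels of the two nodes are:

  zpool import zonepool                                     # refuses if still active elsewhere
  zonecfg -z legacyzone -f /shared/configs/legacyzone.cfg   # only needed the first time
  zoneadm -z legacyzone attach -u
  zoneadm -z legacyzone boot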

Hope that helps
Regards
Peasant.