04-14-2011
SVM metaset on 2 node Solaris cluster storage replicated to non-clustered Solaris node
Hi,
Is it possible to have a 2-node Solaris cluster at SITE-A using SVM, creating a metaset from, say, 2 LUNs (on SAN), then replicating these 2 LUNs to a remote site, SITE-B, via storage-based replication, and then using these LUNs by importing them as a metaset on a server at SITE-B which is not running Solaris Cluster? The Solaris OS version on all 3 nodes (2 nodes at SITE-A and 1 node at SITE-B) is the same.
In other words, can I have a solution with 2 nodes in a Solaris cluster at SITE-A and provide DR through a single non-clustered node at SITE-B, using an SVM metaset replicated via storage-based replication?
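For reference, here is a rough sketch of how the two sides could look, assuming Solaris 10 with metaimport(1M) support and that the replicated LUNs are visible to the SITE-B host; the set name, hostnames, and device names below are hypothetical:

```shell
# SITE-A (cluster nodes): create the disk set on the two cluster
# hosts and add the two SAN LUNs (DID devices) to it
metaset -s drset -a -h siteA-node1 siteA-node2
metaset -s drset -a /dev/did/rdsk/d4 /dev/did/rdsk/d5

# SITE-B (standalone host): after suspending or splitting the storage
# replication so the copies are consistent, report importable sets
metaimport -r -v

# import the replicated disk set on the non-clustered host
metaimport -s drset
```

metaimport regenerates device IDs on import, which matters here because the replicated LUNs will have different device IDs on the SITE-B host; the standalone host needs only SVM, not Sun Cluster, for this step.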
Thanks.
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hello,
Under ksh I have to run a script on one of the nodes of a Solaris 8 cluster which at some time must execute a command on the alternate node:
# rsh <name> "command"
I have to implement this script on all the clusters of my company (a lot of...).
Fortunately, the names of the two nodes... (11 Replies)
Discussion started by: heartwork
2. Solaris
I'm not able to copy the text present on the tree's node to a terminal or other text editor in Solaris. I'm using the <Shift><Control> C and V commands for this, but the text is not being copied and pasted into the text pad or the terminal window.
While the same is possible in Windows using Ctrl+C... (3 Replies)
Discussion started by: friendanoop
3. HP-UX
Need help guys!
When running cmrunnode in batch, I'm getting this error:
cmrunnode : Waiting for cluster to... (1 Reply)
Discussion started by: Tris
4. High Performance Computing
All-
I am new to these forums so please excuse me if this post is in the wrong place.
I had a node crash in a 4-node cluster, and management has determined this node will not be part of the cluster when rebuilt. I am researching how to remove it from the cluster information on the other 3 nodes and... (2 Replies)
Discussion started by: bluescreen
5. High Performance Computing
Hi,
I am trying to set up a 2-node cluster environment. Following is what I have:
1. 2 x sun ultra60 - 450MHz procs, 1GB RAM, 9GB HDD, solaris 10
2. 2 x HBA cards
3. 2 x Connection leads to connect ultra60 with D1000
4. 1 x D1000 storage box.
5. 3 x 9GB HDD + 2 x 36GB HDD
first of all,... (1 Reply)
Discussion started by: solman17
6. Solaris
Dear all,
I have created a shared metaset (500 GB) with 3 hosts, of which 2 hosts are in a cluster and 1 is non-clustered. I have taken ownership on the cluster node from the non-cluster node, but the problem is I am unable to mount the file system; it is giving the error "/dev/md/eccdb-ds/d100 or /eccdb-ds... (1 Reply)
Discussion started by: spandhan
7. Solaris
I have one disk that is reporting I/O errors, but the same LUN mounted on a different node is accessible without issue. Is there a way to identify which HBA is being used for the LUN without swapping each one out at a time? (4 Replies)
Discussion started by: thmnetwork
8. UNIX for Advanced & Expert Users
We have a two-node cluster with the OS disk mirrored under SVM. There is a slight disk problem on one of the mirror disks, causing the cluster to panic.
Failure of one mirror disk is causing VCS to panic the node. Why is VCS not able to write to the /var filesystem, since one of the disks is healthy?
... (1 Reply)
Discussion started by: amlanroy
9. AIX
Setting up HACMP 6.1 on a two-node cluster. One node works fine and can start properly in STABLE state (VGs varied on, FSs mounted, service IP aliased). However, the other node is always stuck in ST_JOINING state. It's taking forever, and you can't stop the cluster or recover from script... (2 Replies)
Discussion started by: depam
10. HP-UX
Hi,
We have an HP-UX Serviceguard cluster on 11.23. Recently 40+ LUNs were presented to both nodes by the SAN team, but I was asked to mount them on only one node. I created the required VGs/LVs, created VxFS file systems, and mounted all of them; they are working fine. Now the client has requested those FSs on the 2nd node as... (4 Replies)
Discussion started by: prvnrk
LEARN ABOUT OPENSOLARIS
scconf_dg_svm
scconf_dg_svm(1M) System Administration Commands scconf_dg_svm(1M)
NAME
scconf_dg_svm - change Solaris Volume Manager device group configuration.
SYNOPSIS
scconf -c -D [generic_options]
DESCRIPTION
Note -
Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software
still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.
The following information is specific to the scconf command. To use the equivalent object-oriented commands, see the cldevicegroup(1CL)
man page.
A Solaris Volume Manager device group is defined by a name, the nodes upon which this group can be accessed, a global list of devices in
the disk set, and a set of properties used to control actions such as potential primary preference and failback behavior.
For Solaris Volume Manager device groups, only one disk set can be assigned to a device group, and the group name must always match the
name of the disk set itself.
In Solaris Volume Manager, a multihosted or shared device is a grouping of two or more hosts and disk drives that are accessible by all
hosts, and that have the same device names on all hosts. This identical device naming requirement is achieved by using the raw disk devices
to form the disk set. The device ID pseudo driver (DID) allows multihosted devices to have consistent names across the cluster. Only hosts
already configured as part of a disk set itself can be configured into the nodelist of a Solaris Volume Manager device group. At the time
drives are added to a shared disk set, they must not belong to any other shared disk set.
The Solaris Volume Manager metaset command creates the disk set, which also initially creates and registers it as a Solaris Volume Manager
device group. Next, you must use the scconf command to set the node preference list and the preferenced, failback, and numsecondaries suboptions.
If you want to change the order of node preference list or the failback mode, you must specify all the nodes that currently exist in the
device group in the nodelist. In addition, if you are changing the order of node preference, you must also set the preferenced suboption to
true.
If you do not specify the preferenced suboption with the "change" form of the command, the already established true or false setting is
used.
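As a concrete illustration of the rule above (hostnames and set name hypothetical): to reorder the node preference list, every node currently in the device group must appear in the nodelist, and preferenced must be set to true:

```shell
# reorder the node preference list; preferenced=true is required
# whenever the nodelist order is being changed
scconf -c -D name=diskset1,nodelist=host1:host2,preferenced=true
```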
You cannot use the scconf command to remove the Solaris Volume Manager device group from the cluster configuration. Use the Solaris Volume
Manager metaset command instead. You remove a device group by removing the Solaris Volume Manager disk set.
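A minimal sketch of that removal sequence with metaset, per the paragraph above (set, host, and device names hypothetical):

```shell
# remove the drives from the disk set first
metaset -s diskset1 -d /dev/did/rdsk/d4 /dev/did/rdsk/d5

# then remove the hosts; removing the last host deletes the disk
# set, which also removes the corresponding device group from the
# cluster configuration
metaset -s diskset1 -d -h host1 host2
```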
OPTIONS
See scconf(1M) for the list of supported generic options. See metaset(1M) for the list of metaset related commands to create and remove
disk sets and device groups.
Only one action option is allowed in the command. The following action options are supported.
-c Change the ordering of the node preference list, change preference and failback policy, and change the desired number
of secondaries.
EXAMPLES
Example 1 Creating and Registering a Disk Set
The following metaset command creates the disk set diskset1 and registers it as a Solaris Volume Manager device group.
Next, the scconf command is used to specify the order of the potential primary nodes for the device group, change the preferenced and failback options, and change the desired number of secondaries.
host1# metaset -s diskset1 -a -h host1 host2
host1# scconf -c -D name=diskset1,nodelist=host2:host1,
preferenced=true,failback=disabled,numsecondaries=1
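For comparison, a rough object-oriented equivalent of the scconf command above, using the Sun Cluster 3.2 command set (treat this as a sketch and verify the exact syntax against cldevicegroup(1CL)):

```shell
# set node preference order, failback policy, and secondary count
# via cldevicegroup instead of scconf
cldevicegroup set -p preferenced=true -p failback=false \
    -p numsecondaries=1 -n host2,host1 diskset1
```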
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Availability |SUNWsczu |
+-----------------------------+-----------------------------+
|Interface Stability |Evolving |
+-----------------------------+-----------------------------+
SEE ALSO
Intro(1CL), cldevicegroup(1CL), scconf(1M), metaset(1M)
Sun Cluster 3.2 10 Jul 2006 scconf_dg_svm(1M)