SVM metaset on 2-node Solaris cluster, storage replicated to non-clustered Solaris node


 
Posted 04-14-2011

Hi,

Is it possible to have a two-node Solaris cluster at SITE-A that uses SVM, with a metaset built on, say, 2 SAN LUNs? These 2 LUNs would then be replicated to a remote site, SITE-B, via storage-based replication, and finally imported as a metaset on a server at SITE-B that is not running Solaris Cluster. The Solaris OS version is the same on all 3 nodes (2 at SITE-A and 1 at SITE-B).
In other words, can I build a DR setup with 2 nodes in a Solaris cluster at SITE-A and a single non-clustered node at SITE-B, with the SVM metaset carried to SITE-B by storage replication?
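
To make the intent concrete, here is a rough sketch of the commands I have in mind. The commands themselves (metaset, metainit, metaimport, metastat) are standard SVM, but the set name, host names, and device paths are made-up placeholders, and I have not tested this against replicated LUNs:

    # SITE-A: create the shared metaset on the cluster (placeholder names)
    metaset -s drset -a -h nodeA1 nodeA2
    metaset -s drset -a /dev/did/rdsk/d10 /dev/did/rdsk/d11

    # SITE-A: build a metadevice on the two LUNs, e.g. a two-way stripe
    metainit -s drset d100 1 2 /dev/did/rdsk/d10s0 /dev/did/rdsk/d11s0

    # SITE-B, non-clustered node: once replication is quiesced/split and
    # the replicated LUNs are visible to the host
    metaimport -r -v                    # report disk sets that can be imported
    metaimport -s drset c2t1d0 c2t2d0   # import the set from the replicated disks
    metastat -s drset                   # verify state databases and metadevices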

Thanks.
scgdevs(1M)						  System Administration Commands					       scgdevs(1M)

NAME
     scgdevs - global devices namespace administration script

SYNOPSIS
     /usr/cluster/bin/scgdevs

DESCRIPTION
     Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software
     includes an object-oriented command set. Although Sun Cluster software
     still supports the original command set, Sun Cluster procedural
     documentation uses only the object-oriented command set. For more
     information about the object-oriented command set, see the Intro(1CL)
     man page.

     The scgdevs command manages the global devices namespace. The global
     devices namespace is mounted under the /global directory and consists
     of a set of logical links to physical devices. As the /dev/global
     directory is visible to each node of the cluster, each physical device
     is visible across the cluster. This means that any disk, tape, or
     CD-ROM that is added to the global-devices namespace can be accessed
     from any node in the cluster.

     The scgdevs command enables you to attach new global devices (for
     example, tape drives, CD-ROM drives, and disk drives) to the
     global-devices namespace without requiring a system reboot. You must
     run the devfsadm command before you run the scgdevs command.
     Alternatively, you can perform a reconfiguration reboot to rebuild the
     global namespace and attach new global devices. See the boot(1M) man
     page for more information about reconfiguration reboots.

     You must run this command from a node that is a current cluster member.
     If you run this command from a node that is not a cluster member, the
     command exits with an error code and leaves the system state unchanged.

     You can use this command only in the global zone.

     You need solaris.cluster.system.modify RBAC authorization to use this
     command. See the rbac(5) man page. You must also be able to assume a
     role to which the Sun Cluster Commands rights profile has been assigned
     to use this command. Authorized users can issue privileged Sun Cluster
     commands on the command line from the pfsh, pfcsh, or pfksh profile
     shell. A profile shell is a special kind of shell that enables you to
     access privileged Sun Cluster commands that are assigned to the Sun
     Cluster Commands rights profile. A profile shell is launched when you
     run the su command to assume a role. You can also use the pfexec
     command to issue privileged Sun Cluster commands.
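
     For example, to make a newly attached disk available in the
     global-devices namespace without a reboot, following the sequence
     described above (a minimal sketch; the pfexec form assumes the caller
     has the Sun Cluster Commands rights profile):

          # update the local /devices and /dev entries first
          devfsadm

          # then attach the new device to the global-devices namespace
          pfexec /usr/cluster/bin/scgdevs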
EXIT STATUS
     The following exit values are returned:

     0        The command completed successfully.

     nonzero  An error occurred. Error messages are displayed on the
              standard output.
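
     Because errors are signaled through the exit value, a calling script
     can test it directly (a minimal sketch):

          /usr/cluster/bin/scgdevs
          if [ $? -ne 0 ]; then
              echo "scgdevs failed; see its standard output for errors" >&2
          fi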
FILES
     /devices          Device nodes directory

     /global/.devices  Global devices nodes directory

     /dev/md/shared    Solaris Volume Manager metaset directory

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     | Availability                | SUNWsczu                    |
     +-----------------------------+-----------------------------+
     | Interface Stability         | Evolving                    |
     +-----------------------------+-----------------------------+

SEE ALSO
     pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevice(1CL),
     boot(1M), devfsadm(1M), su(1M), did(7)

     Sun Cluster System Administration Guide for Solaris OS

NOTES
     The scgdevs command, called from the local node, performs its work on
     remote nodes asynchronously. Therefore, command completion on the
     local node does not necessarily mean that the command has completed
     its work clusterwide.

     This document does not constitute an API. The /global/.devices
     directory and the /devices directory might not exist or might have
     different contents or interpretations in a future release. The
     existence of this notice does not imply that any other documentation
     that lacks this notice constitutes an API. This interface should be
     considered an unstable interface.

Sun Cluster 3.2                   10 Apr 2006                      scgdevs(1M)