Post 302513862 by dn2011 on Thursday 14th of April 2011 08:34:58 AM
SVM metaset on 2 node Solaris cluster storage replicated to non-clustered Solaris node

Hi,

Is it possible to have a two-node Solaris cluster at SITE-A that uses SVM and builds a metaset from, say, 2 SAN LUNs, then replicate those 2 LUNs to a remote site SITE-B via storage-based replication, and finally use the replicated LUNs by importing them as a metaset on a SITE-B server that is not running Solaris Cluster? The Solaris OS version is the same on all 3 nodes (2 nodes at SITE-A and 1 node at SITE-B).

In other words, can I have 2 nodes in a Solaris cluster at SITE-A and provide DR with a single non-clustered node at SITE-B, using an SVM metaset carried over by storage replication?

Thanks.
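
For illustration, a minimal sketch of the sequence being described. The host names (nodeA1, nodeA2), disk names, the set name dr-ds, and the mount point are placeholders rather than details from the post, and the SITE-B import assumes the storage replication carries the SVM state database replicas on the two LUNs over intact:

    # --- SITE-A (cluster nodes nodeA1 and nodeA2): build the metaset on the two SAN LUNs ---
    metaset -s dr-ds -a -h nodeA1 nodeA2                        # create the disk set with both cluster nodes as hosts
    metaset -s dr-ds -a /dev/did/rdsk/d10 /dev/did/rdsk/d11     # add the two LUNs (DID names are placeholders)
    metainit -s dr-ds d100 1 2 /dev/did/rdsk/d10s0 /dev/did/rdsk/d11s0   # e.g. one stripe across both LUNs
    newfs /dev/md/dr-ds/rdsk/d100                               # put a UFS file system on the metadevice

    # --- SITE-B (non-clustered node), after the replicated LUNs have been presented ---
    metaimport -r -v                                            # report which disk sets are importable from the attached disks
    metaimport -s dr-ds c3t0d0 c3t1d0                           # import the replicated set (ctd names are placeholders)
    mount /dev/md/dr-ds/dsk/d100 /mnt                           # mount the replicated file system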
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

The other node name of a SUN cluster

Hello, Under ksh I have to run a script on one of the nodes of a Solaris 8 cluster which at some time must execute a command on the alternate node: # rsh <name> "command" I have to implement this script on all the clusters of my company (a lot of...). Fortunately, the names of the two nodes... (11 Replies)
Discussion started by: heartwork
11 Replies

2. Solaris

Not able to copy tree node text in Solaris, while easily done in Windows

I'm not able to copy the text on a tree node to a terminal or other text editor in Solaris. I'm using the <Shift><Control> C and V commands, but the text is not copied and pasted into the text pad or the terminal window. The same is possible in Windows using ctrl+c... (3 Replies)
Discussion started by: friendanoop
3 Replies

3. HP-UX

Node can't join cluster

Need help, guys! When running the cmrunnode batch I'm getting this error: cmrunnode : Waiting for cluster to... (1 Reply)
Discussion started by: Tris
1 Replies

4. High Performance Computing

Removed crashed node from Solaris Cluster 3.0

All - I am new to these forums, so please excuse me if this post is in the wrong place. I had a node crash in a 4-node cluster, and management has determined this node will not be part of the cluster when rebuilt. I am researching how to remove it from the cluster information on the other 3 nodes and... (2 Replies)
Discussion started by: bluescreen
2 Replies

5. High Performance Computing

Setting up a 2-node cluster using Solaris 10

Hi, I am trying to set up a 2-node cluster environment. Following is what I have: 1. 2 x Sun Ultra 60 - 450MHz procs, 1GB RAM, 9GB HDD, Solaris 10; 2. 2 x HBA cards; 3. 2 x connection leads to connect the Ultra 60s to the D1000; 4. 1 x D1000 storage box; 5. 3 x 9GB HDD + 2 x 36GB HDD. First of all,... (1 Reply)
Discussion started by: solman17
1 Replies

6. Solaris

Unable to mount metaset on cluster node

Dear all, I have created a shared metaset (500 GB) with 3 hosts, of which 2 hosts are in the cluster and 1 is non-clustered. I have taken ownership on a cluster node from the non-cluster node, but the problem is I am unable to mount the file system; it gives the error "/dev/md/eccdb-ds/d100 or /eccdb-ds... (1 Reply)
Discussion started by: spandhan
1 Replies

7. Solaris

Tracing node to a particular HBA in Solaris 9

I have one disk that is reporting I/O errors, but the same LUN mounted on a different node is accessible without issue. Is there a way to identify which HBA is being used for the LUN without swapping each one out at a time? (4 Replies)
Discussion started by: thmnetwork
4 Replies

8. UNIX for Advanced & Expert Users

VCS triggering panic on 1 node, root disk under SVM

We have a two-node cluster with the OS disk mirrored under SVM. There is a slight disk problem on one of the mirror disks causing the cluster to panic. Failure of one mirror disk is causing VCS to panic the node. Why is VCS not able to write the /var filesystem, as one of the disks is healthy? ... (1 Reply)
Discussion started by: amlanroy
1 Replies

9. AIX

Cluster node not starting

Setting up HACMP 6.1 on a two-node cluster. One node works fine and starts properly into STABLE state (VGs varied on, FS mounted, service IP aliased). However, the other node is always stuck in ST_JOINING state. It's taking forever, and you can't stop the cluster either or recover from script... (2 Replies)
Discussion started by: depam
2 Replies

10. HP-UX

Mount file systems from node-1 onto node-2

Hi, we have an HP-UX Serviceguard cluster on OS 11.23. Recently 40+ LUNs were presented to both nodes by the SAN team, but I was asked to mount them on only one node. I created the required VGs/LVs, created VxFS file systems, and mounted all of them; they are working fine. Now the client has requested those FS on the 2nd node as... (4 Replies)
Discussion started by: prvnrk
4 Replies
scdpm(1M)						  System Administration Commands						 scdpm(1M)

NAME
     scdpm - manage disk path monitoring daemon

SYNOPSIS
     scdpm [-a] {node | all}

     scdpm -f filename

     scdpm -m {[node | all][:/dev/did/rdsk/]dN | [:/dev/rdsk/]cNtXdY | all}

     scdpm -n {node | all}

     scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

     scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

DESCRIPTION
     Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster
     software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For
     more information about the object-oriented command set, see the Intro(1CL) man page.

     The scdpm command manages the disk path monitoring daemon in a cluster. You use this command to monitor and unmonitor disk paths. You
     can also use this command to display the status of disk paths or nodes. All of the accessible disk paths in the cluster or on a
     specific node are printed on the standard output. You must run this command on a cluster node that is online and in cluster mode.

     You can specify either a global disk name or a UNIX path name when you monitor a new disk path. Additionally, you can force the
     daemon to reread the entire disk configuration.

     You can use this command only in the global zone.
OPTIONS
     The following options are supported:

     -a
          Enables the automatic rebooting of a node when all monitored disk paths fail, provided that the following conditions are met:

          o    All monitored disk paths on the node fail.

          o    At least one of the disks is accessible from a different node in the cluster.

          You can use this option only in the global zone.

          Rebooting the node restarts all resource and device groups that are mastered on that node on another node.

          If all monitored disk paths on a node remain inaccessible after the node automatically reboots, the node does not automatically
          reboot again. However, if any monitored disk paths become available after the node reboots but then all monitored disk paths
          again fail, the node automatically reboots again.

          You need solaris.cluster.device.admin role-based access control (RBAC) authorization to use this option. See rbac(5).

     -F
          If you specify the -F option with the -p option, scdpm also prints the faulty disk paths in the cluster. The -p option prints the
          current status of a node or a specified disk path from all the nodes that are attached to the storage.

     -f filename
          Reads a list of disk paths to monitor or unmonitor in filename.

          You can use this option only in the global zone.

          The following example shows the contents of filename.

               u schost-1:/dev/did/rdsk/d5
               m schost-2:all

          Each line in the file must specify whether to monitor or unmonitor the disk path, the node name, and the disk path name. You
          specify the m option for monitor and the u option for unmonitor. You must insert a space between the command and the node name.
          You must also insert a colon (:) between the node name and the disk path name.

          You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

     -m
          Monitors the new disk path that is specified by node:diskpath.

          You can use this option only in the global zone.

          You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

     -n
          Disables the automatic rebooting of a node when all monitored disk paths fail.

          You can use this option only in the global zone.

          If all monitored disk paths on the node fail, the node is not rebooted.

          You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

     -p
          Prints the current status of a node or a specified disk path from all the nodes that are attached to the storage.

          You can use this option only in the global zone.

          If you also specify the -F option, scdpm prints the faulty disk paths in the cluster.

          Valid status values for a disk path are Ok, Fail, Unmonitored, or Unknown.

          The valid status value for a node is Reboot_on_disk_failure. See the description of the -a and the -n options for more
          information about the Reboot_on_disk_failure status.

          You need solaris.cluster.device.read RBAC authorization to use this option. See rbac(5).

     -u
          Unmonitors a disk path. The daemon on each node stops monitoring the specified path.

          You can use this option only in the global zone.

          You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).
EXAMPLES
     Example 1  Monitoring All Disk Paths in the Cluster Infrastructure

     The following command forces the daemon to monitor all disk paths in the cluster infrastructure.

          # scdpm -m all

     Example 2  Monitoring a New Disk Path

     The following command monitors a new disk path. All nodes monitor /dev/did/dsk/d3 where this path is valid.

          # scdpm -m /dev/did/dsk/d3

     Example 3  Monitoring New Disk Paths on a Single Node

     The following command monitors new paths on a single node. The daemon on the schost-2 node monitors paths to the /dev/did/dsk/d4 and
     /dev/did/dsk/d5 disks.

          # scdpm -m schost-2:d4 -m schost-2:d5

     Example 4  Printing All Disk Paths and Their Status

     The following command prints all disk paths in the cluster and their status.

          # scdpm -p
          schost-1:reboot_on_disk_failure     enabled
          schost-2:reboot_on_disk_failure     disabled
          schost-1:/dev/did/dsk/d4            Ok
          schost-1:/dev/did/dsk/d3            Ok
          schost-2:/dev/did/dsk/d4            Fail
          schost-2:/dev/did/dsk/d3            Ok
          schost-2:/dev/did/dsk/d5            Unmonitored
          schost-2:/dev/did/dsk/d6            Ok

     Example 5  Printing All Failed Disk Paths

     The following command prints all of the failed disk paths on the schost-2 node.

          # scdpm -p -F all
          schost-2:/dev/did/dsk/d4            Fail

     Example 6  Printing the Status of All Disk Paths From a Single Node

     The following command prints the disk path and the status of all disks that are monitored on the schost-2 node.

          # scdpm -p schost-2:all
          schost-2:reboot_on_disk_failure     disabled
          schost-2:/dev/did/dsk/d4            Fail
          schost-2:/dev/did/dsk/d3            Ok
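
     The examples above do not show the -f form. As a hedged supplement (the file path /var/tmp/dpm-list is a hypothetical name; the node
     names are the schost-1/schost-2 used above):

          # Build a monitor/unmonitor list in the format described under the -f option.
          cat > /var/tmp/dpm-list <<'EOF'
          u schost-1:/dev/did/rdsk/d5
          m schost-2:all
          EOF
          # Apply the whole list in one invocation (needs solaris.cluster.device.admin RBAC authorization).
          scdpm -f /var/tmp/dpm-list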
EXIT STATUS
     The following exit values are returned:

     0    The command completed successfully.

     1    The command failed completely.

     2    The command failed partially.

     Note - The disk path is represented by a node name and a disk name. The node name must be the host name or all. The disk name must be
     the global disk name, a UNIX path name, or all. The disk name can be either the full global path name or the disk name:
     /dev/did/dsk/d3 or d3. The disk name can also be the full UNIX path name: /dev/rdsk/c0t0d0s0.

     Disk path status changes are logged with the syslogd LOG_INFO facility level. All failures are logged with the LOG_ERR facility level.
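
     As a small, hedged illustration (not part of the man page) of how these exit values might be consumed from a script:

          #!/bin/sh
          # Query every monitored disk path and branch on scdpm's documented exit values.
          scdpm -p -F all
          case $? in
              0) echo "scdpm completed successfully" ;;
              1) echo "scdpm failed completely" >&2 ;;
              2) echo "scdpm failed partially"  >&2 ;;
          esac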
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWsczu                     |
     +-----------------------------+-----------------------------+
     |Stability                    |Evolving                     |
     +-----------------------------+-----------------------------+
SEE ALSO
     Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

     Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                                                22 Jun 2006                                                       scdpm(1M)