Solaris Cluster Device Problem


 
# 1  
Old 12-07-2016

I built a two-node cluster (node1, node2) in VirtualBox and attached five shared disks to both nodes (each node also has its own OS disk), roughly as sketched after the list below:

1 shared disk for the VTOC
2 shared disks for the NFS resource group
2 shared disks for the WEB resource group
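
For what it's worth, this is roughly how I attached the shared disks (a sketch from memory; the file name, controller name, size and port numbers are examples):

Code:
# create a fixed-size image (shareable disks must be fixed-size)
VBoxManage createmedium disk --filename shared1.vdi --size 1024 --variant Fixed
VBoxManage modifymedium shared1.vdi --type shareable

# attach the same image to both nodes
VBoxManage storageattach node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium shared1.vdi
VBoxManage storageattach node2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium shared1.vdi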

When I finished, both nodes were fine and the shared disks were working. After a cluster shutdown and restart, the shared disk status is now:

Code:
root@node1:/> cldev status

=== Cluster DID Devices ===

Device Instance               Node              Status
---------------               ----              ------
/dev/did/rdsk/d1              node1             Ok
                              node2             Ok

/dev/did/rdsk/d3              node2             Ok

/dev/did/rdsk/d4              node1             Ok
                              node2             Ok

/dev/did/rdsk/d5              node1             Ok
                              node2             Ok

/dev/did/rdsk/d7              node1             Ok

root@node1:/> cldev show

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                                node1:/dev/rdsk/c1t0d0
  Full Device Path:                                node2:/dev/rdsk/c1t1d0
  Full Device Path:                                node1:/dev/rdsk/c1t1d0
  Full Device Path:                                node2:/dev/rdsk/c1t0d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                                node2:/dev/rdsk/c0t0d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                                node1:/dev/rdsk/c1t4d0
  Full Device Path:                                node2:/dev/rdsk/c1t4d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d5
  Full Device Path:                                node2:/dev/rdsk/c1t3d0
  Full Device Path:                                node1:/dev/rdsk/c1t3d0
  Full Device Path:                                node2:/dev/rdsk/c1t2d0
  Full Device Path:                                node1:/dev/rdsk/c1t2d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d7
  Full Device Path:                                node1:/dev/rdsk/c0t0d0
  Replication:                                     none
  default_fencing:                                 global

root@node1:/> cldev list -v
DID Device          Full Device Path
----------          ----------------
d1                  node2:/dev/rdsk/c1t0d0
d1                  node1:/dev/rdsk/c1t1d0
d1                  node2:/dev/rdsk/c1t1d0
d1                  node1:/dev/rdsk/c1t0d0
d3                  node2:/dev/rdsk/c0t0d0
d4                  node2:/dev/rdsk/c1t4d0
d4                  node1:/dev/rdsk/c1t4d0
d5                  node1:/dev/rdsk/c1t2d0
d5                  node2:/dev/rdsk/c1t2d0
d5                  node1:/dev/rdsk/c1t3d0
d5                  node2:/dev/rdsk/c1t3d0
d7                  node1:/dev/rdsk/c0t0d0


The cluster resource groups (NFS and WEB) are offline now. I can't figure out how the d1 and d5 shared disks ended up like this. They should be separate instances, like d1/d2 and d5/d6, but each pair is now combined into one. Can you help me solve this problem?
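
For reference, these are the DID maintenance commands I know of from the cldev(1CL) man page; I am not sure they are safe to run in this state, so this is only a sketch of what I would try:

Code:
# remove DID instances whose underlying devices are gone
cldev clear
# rescan devices and update the DID namespace on all nodes
cldev refresh
# recreate the device nodes under /dev/did
cldev populate
# verify that each instance maps to one disk per node again
cldev list -v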



Moderator's Comments:
Please use CODE tags as required by forum rules!

Last edited by RudiC; 12-07-2016 at 07:10 AM. Reason: Added CODE tags.
# 2  
Old 12-08-2016
Have you worked through the whitepaper?

https://blogs.oracle.com/TF/resource...Box-extern.pdf
# 3  
Old 12-08-2016
I didn't use that one. I used "Setup a Oracle Solaris Cluster on Solar...alBox (part 1) _ Benjamin Allot's Blog" and the "Oracle Solaris Cluster Administration Activity Guide". But the document you suggested is better than these, so I will read it and look for a solution. If I can figure out whether I did something wrong, I'll write it down here. Thanks for your reply.
# 4  
Old 12-08-2016
The most important part is to use the iSCSI protocol instead of shared VirtualBox devices. Shared VirtualBox disk images can present identical device IDs to the guests, and the DID driver folds every path that reports the same device ID into a single instance, which is likely what happened to your d1 and d5.

Get a box to act as the iSCSI target, while your cluster nodes are the initiators. This is your disk subsystem, from which you will make failover zpools or metasets.
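
A rough sketch of what that looks like with COMSTAR on a Solaris target box (the volume name and IP address are made up; older releases use iscsitadm instead of itadm):

Code:
# --- on the target box ---
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
zfs create -V 10G rpool/lun0                    # backing store for one LUN
stmfadm create-lu /dev/zvol/rdsk/rpool/lun0     # turn it into a logical unit
stmfadm add-view <lu-guid>                      # GUID printed by create-lu
itadm create-target

# --- on each cluster node (initiator) ---
iscsiadm add discovery-address 192.168.56.10    # IP of the target box
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                               # create the disk device nodes

Repeat the zfs/stmfadm lines once per shared LUN you need.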

Also, the document is outdated for current releases; you might want to keep that in mind.

Once you have the iSCSI setup done, you can just follow the regular documentation for your release, keeping any iSCSI-specific notes in mind where they exist.

Hope that helps
Best regards
Peasant.
# 5  
Old 12-08-2016
Thank you for your reply, Peasant.

As for the document, it suits me just fine, because I am also using old versions of the cluster and Solaris (Cluster 3.3 and Solaris 10) in my environment.

I used Solaris Volume Manager (SVM) for the file systems; I chose SVM because it supports the global file system. I'm guessing that is most probably why I had this issue, but I want to be sure. DukeNuke2's document uses ZFS instead, so now I'll start over and try ZFS, along the lines of the sketch below. I wonder whether I'll hit the same problem with ZFS or not.
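
From the admin guide, I understand the ZFS variant to look roughly like this (a sketch; the pool, group and resource names are made up, and c1t4d0 is the path that maps to d4 in my listing above):

Code:
# on one node: create the pool on a shared disk
zpool create -f nfspool c1t4d0

# register the resource type once, then wrap the pool in a failover resource
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create nfs-rg
clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=nfspool nfs-hasp-rs
clresourcegroup online -M nfs-rg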

After that, I'll also look into the iSCSI protocol for the VirtualBox environment.

I'll share my experience here.

Thank you again.