Solaris Cluster Device Problem
#1  12-07-2016, sonofsunra (Registered User):

I built a two-node cluster (node1, node2) in VirtualBox. For these two nodes I added 5 shared disks (each node also has its own OS disk):

1 shared disk for the vtoc
2 shared disks for the NFS resource group
2 shared disks for the WEB resource group

When I finished my work, both nodes were OK and the shared disks were working successfully. After a cluster shutdown and restart, the shared disk status is now:


Code:
root@node1:/> cldev status

=== Cluster DID Devices ===

Device Instance               Node              Status
---------------               ----              ------
/dev/did/rdsk/d1              node1             Ok
                              node2             Ok

/dev/did/rdsk/d3              node2             Ok

/dev/did/rdsk/d4              node1             Ok
                              node2             Ok

/dev/did/rdsk/d5              node1             Ok
                              node2             Ok

/dev/did/rdsk/d7              node1             Ok

root@node1:/> cldev show

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                                node1:/dev/rdsk/c1t0d0
  Full Device Path:                                node2:/dev/rdsk/c1t1d0
  Full Device Path:                                node1:/dev/rdsk/c1t1d0
  Full Device Path:                                node2:/dev/rdsk/c1t0d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                                node2:/dev/rdsk/c0t0d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                                node1:/dev/rdsk/c1t4d0
  Full Device Path:                                node2:/dev/rdsk/c1t4d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d5
  Full Device Path:                                node2:/dev/rdsk/c1t3d0
  Full Device Path:                                node1:/dev/rdsk/c1t3d0
  Full Device Path:                                node2:/dev/rdsk/c1t2d0
  Full Device Path:                                node1:/dev/rdsk/c1t2d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d7
  Full Device Path:                                node1:/dev/rdsk/c0t0d0
  Replication:                                     none
  default_fencing:                                 global

root@node1:/> cldev list -v
DID Device          Full Device Path
----------          ----------------
d1                  node2:/dev/rdsk/c1t0d0
d1                  node1:/dev/rdsk/c1t1d0
d1                  node2:/dev/rdsk/c1t1d0
d1                  node1:/dev/rdsk/c1t0d0
d3                  node2:/dev/rdsk/c0t0d0
d4                  node2:/dev/rdsk/c1t4d0
d4                  node1:/dev/rdsk/c1t4d0
d5                  node1:/dev/rdsk/c1t2d0
d5                  node2:/dev/rdsk/c1t2d0
d5                  node1:/dev/rdsk/c1t3d0
d5                  node2:/dev/rdsk/c1t3d0
d7                  node1:/dev/rdsk/c0t0d0


and the cluster resource groups (NFS and WEB) are now offline. I can't figure out how the d1 and d5 shared disks ended up like this. They should be separate, like d1/d2 and d5/d6, but they are now combined into one DID instance each. Can you help me solve this problem?
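When shared-disk paths change between boots (which VirtualBox shared disks are prone to), the DID namespace can end up mapping several physical paths onto one DID instance. A hedged sketch of how one might re-inspect and rebuild the DID mappings on Solaris Cluster 3.3 follows; these are standard `cldevice`/`scdidadm` subcommands, but verify each against the man pages for your release before running them on your cluster.

```shell
# Hedged sketch: clean up and repopulate the DID namespace after
# shared-disk paths have changed. Run as root on one cluster node.

# Show the current DID-to-physical-path mappings on every node
cldevice list -v

# Remove DID references to device paths that are no longer attached
cldevice clear

# Rescan attached devices and update the DID namespace cluster-wide
cldevice refresh
cldevice populate

# Verify the result
cldevice status
```

If the stale mappings persist, comparing `cldevice list -v` output against `format` on each node can show which controller/target numbers VirtualBox reassigned across the restart.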



#2  12-08-2016, DukeNuke2 (Forum Staff):
Have you worked through the whitepaper?

https://blogs.oracle.com/TF/resource...Box-extern.pdf
#3  12-08-2016, sonofsunra (Registered User):
I didn't use that one. I used "Setup a Oracle Solaris Cluster on Solar...alBox (part 1) _ Benjamin Allot's Blog" and the "Oracle Solaris Cluster Administration Activity Guide". But the document you suggested is better than these; I will read it for a solution. If I can figure out whether I did something wrong, I'll write it down here. Thanks for your reply.
#4  12-08-2016, Peasant (Forum Advisor):
The most important part is to use the iSCSI protocol instead of shared VirtualBox devices.

Set up a box to act as the iSCSI target, with your cluster nodes as initiators. This becomes your disk subsystem, from which you will make failover zpools or metasets.

Also keep in mind that the document is outdated for current releases.

Once your iSCSI setup is done, you can just follow the regular documentation for your release, taking note of any iSCSI-specific remarks where they exist.

Hope that helps,
Best regards,
Peasant.
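The target/initiator split described above can be sketched roughly as follows, assuming a Solaris 11-style COMSTAR target. The pool, volume, and host names (`tank/clusterlun`, the storage box IP) are made-up examples; on Solaris 10 the older `iscsitadm` target tooling differs, so treat this as an outline to check against your release's documentation.

```shell
# --- on the storage box (iSCSI target), COMSTAR style ---
zfs create -V 10g tank/clusterlun                  # backing ZFS volume (example name)
stmfadm create-lu /dev/zvol/rdsk/tank/clusterlun   # make it a logical unit
stmfadm add-view 600144F0...                       # expose the LU (use the GUID printed above)
itadm create-target                                # create an IQN target

# --- on each cluster node (iSCSI initiator) ---
iscsiadm add discovery-address 192.168.56.10       # storage box IP (example)
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                                  # create device nodes for the new LUNs
format                                             # the shared LUN should now be visible
```

Because every node sees the same LUN through the network rather than through VirtualBox's shared-disk emulation, the DID mappings stay stable across restarts.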
#5  12-08-2016, sonofsunra (Registered User):
Thank you for your reply, Peasant.

The document actually suits me, because I am also using old versions of the cluster and the OS (Cluster 3.3 and Solaris 10) in my environment.

I used Solaris Volume Manager (SVM) for the file systems. I am guessing that this is most probably the cause of my issue, but I want to be sure. DukeNuke2's document uses ZFS instead, so now I'll start over and try ZFS; I wonder whether I will have the same problem with ZFS or not. I had used SVM because SVM supports the global file system.

After that, I'll also look into the iSCSI protocol for the VirtualBox environment.

I'll share my experience here.

Thank you again.
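For reference, the ZFS route mentioned above usually means a failover zpool placed under cluster control with the SUNW.HAStoragePlus resource type. A hedged sketch follows; the group, resource, and pool names (`nfs-rg`, `nfs-hasp`, `nfspool`) and the DID slices are illustrative examples, not values from this thread.

```shell
# Create the pool on the shared DID devices (on one node only), then
# export it so the cluster framework can import it on whichever node
# hosts the resource group.
zpool create nfspool /dev/did/dsk/d4s2 /dev/did/dsk/d5s2
zpool export nfspool

# Register the resource type once, then put the pool under cluster control.
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create nfs-rg
clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=nfspool nfs-hasp
clresourcegroup online -M nfs-rg
```

Unlike SVM metasets, the zpool is imported and exported as a unit during failover, which sidesteps the global-device bookkeeping that SVM relies on.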