I built a two-node cluster (node1, node2) in VirtualBox. Besides its own OS disk, each node sees 5 shared disks:
1 shared disk for vtoc
2 shared disks for the NFS resource group
2 shared disks for the WEB resource group
When I finished my work, both nodes were fine and the shared disks were working. But after a cluster shutdown and restart, the shared disks and the cluster resource groups (NFS and WEB) are now offline. I can't figure out how the d1 and d5 shared disks ended up like this: they should be separate DID devices, like d1/d2 and d5/d6, but they are now combined into one. Can you help me solve this problem?
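For what it's worth, this is roughly how I would inspect and rebuild the DID namespace on Solaris Cluster 3.3 (a sketch only; run the checks before the destructive steps, and note that DID will merge two disks into one instance if they present identical device identities, which VirtualBox shared disks can do):

```shell
# List each DID instance and the physical paths mapped to it (either node).
# If d1 now carries paths that used to belong to d1 AND d2, the mapping collapsed.
scdidadm -L

# Newer-style equivalent of the listing above:
cldevice list -v

# Compare the DID configuration against the devices actually seen by this node
cldevice check

# Remove DID references to devices that no longer exist
cldevice clear

# Re-scan and create DID instances for new or changed devices
cldevice populate
```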
I didn't use that one. I used "Setup a Oracle Solaris Cluster on Solar...alBox (part 1) _ Benjamin Allot's Blog" and the "Oracle Solaris Cluster Administration Activity Guide". But the document you suggested is better than these; I will read it looking for a solution. If I figure out what I did wrong, I'll write it down here. Thanks for your reply.
The most important part is to use the iSCSI protocol instead of shared VirtualBox devices.
Get a box to act as the iSCSI target, while your cluster nodes are the initiators.
That becomes your disk subsystem, from which you will make failover zpools or metasets.
Also, the document is outdated for current releases; you might want to keep that in mind.
Once the iSCSI setup is done, just follow the regular documentation for your release, keeping in mind any iSCSI-specific notes if they exist.
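Since the poster is on Solaris 10, a minimal iSCSI sketch would use the old `iscsitadm` target daemon rather than COMSTAR. The pool name, volume name, and discovery address below are made-up examples for a host-only VirtualBox network:

```shell
# --- On the storage box (Solaris 10 acting as iSCSI target) ---
# Back the LUN with a ZFS volume (2 GB example)
zfs create -V 2g tank/iscsi/nfsdisk1
iscsitadm create target -b /dev/zvol/rdsk/tank/iscsi/nfsdisk1 nfsdisk1
iscsitadm list target -v

# --- On each cluster node (initiator) ---
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 192.168.56.100:3260
devfsadm -i iscsi     # create device nodes for the newly discovered LUNs
format                # verify the LUNs are visible

# Finally, rebuild the cluster's DID namespace so the LUNs get DID instances
cldevice populate
```

Because each iSCSI LUN carries a unique identity, the DID layer should no longer merge two disks into one instance the way shared VirtualBox devices can.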
The document just suits me, because I am also using old versions of Solaris Cluster and Solaris (Cluster 3.3 and Solaris 10) in my environment.
I used Solaris Volume Manager (SVM) for the file systems. I'm guessing that this is most probably the cause of my issue, but I want to be sure. DukeNuke2's document uses ZFS instead. Now I'll start over and try ZFS; I'm wondering whether I will hit the same problem with ZFS or not.
I chose SVM because it supports global file systems; now I'll try ZFS.
After that, I'll also look into the iSCSI protocol for the VirtualBox environment.
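For the ZFS attempt mentioned above, a failover zpool is normally handed to the cluster through the SUNW.HAStoragePlus resource type. A minimal sketch for Cluster 3.3 (pool, group, and resource names are invented; the DID slices must be the actual shared devices):

```shell
# Register the HAStoragePlus resource type (once per cluster)
clresourcetype register SUNW.HAStoragePlus

# Create the pool on the shared devices (DID names are examples;
# cXtYdZ device names work here as well)
zpool create nfspool mirror /dev/did/dsk/d2s0 /dev/did/dsk/d3s0

# Create a failover resource group and put the pool under cluster control
clresourcegroup create nfs-rg
clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=nfspool nfs-hasp-rs

# Bring the group online; the pool is imported on the node hosting the group
clresourcegroup online -M nfs-rg
```

Note that a zpool managed this way is a failover file system, not a global one, so this does give up the global-file-system behavior that motivated the SVM choice.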