02-10-2013
Using ZFS with Veritas Cluster Server
Until I really began to explore the practical implications of using ZFS with VCS, I had not realised the obstacles that would be put in my path. Data integrity is a must-have for storage in a shared-host environment, so it surprised me to learn, as I opened this particular Pandora's box, that VCS provides no mechanism at all for ensuring data integrity on any of the ZFS pools that are part of your cluster. The 'Zpool' agent is dumb: it imports the pool; it exports the pool. "What about SCSI-3 persistent reservations?", I hear you ask. What about them, indeed. ZFS is a competing product, so I didn't expect a solution to that problem to come from the Symantec camp. I therefore took up the gauntlet on a mission to add SCSI-3 PR support to the Zpool agent for my client, and I succeeded. I have written up some notes that may help direct others should they stumble across the same obstacles, and along the way I discovered benefits of Solaris MPxIO that make it superior to VxDMP.
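To give a flavour of what a PR-aware Zpool agent has to do, here is a minimal sketch of the pre-import logic, assuming sg3_utils (sg_persist) is available on the hosts and that the LUNs backing the pool support SPC-3 persistent reservations; the pool name, key and device path are illustrative placeholders, not my client's actual agent code:

    #!/bin/sh
    # Hypothetical pre-online hook for a PR-aware Zpool agent: register a
    # per-node SCSI-3 key, take a "write exclusive, registrants only"
    # (WERO, type 5) reservation on every LUN backing the pool, and only
    # then import it.
    POOL=tank
    KEY=0x8001                                    # unique per cluster node
    DEVICES="/dev/rdsk/c0t600A0B80002BE232d0s2"   # LUNs backing the pool

    for dev in $DEVICES; do
        # Register our key with the device
        sg_persist --out --register --param-sark=$KEY "$dev" || exit 1
        # Take the WERO reservation if nobody holds one yet
        sg_persist --out --reserve --param-rk=$KEY --prout-type=5 "$dev"
    done

    zpool import "$POOL"

The offline path does the inverse: export the pool, then release the reservation and unregister the key.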
Technical Prose: SCSI-3 PR with ZFS on Veritas Cluster Server
9 More Discussions You Might Find Interesting
1. High Performance Computing
Hello,
This might not be the right place to post my questions.
- I installed VCS 5.0 on the 2 nodes. What's next? I want to test the HA of NFS, i.e. that the shared disk stays accessible if one node goes down. How do I do that?
- The management console was not installed. This is the GUI to manage... (2 Replies)
Discussion started by: melanie_pfefer
2. Solaris
Hi
I want to install VCS 5 on Solaris 10. The product states it needs 3 NIC cards. How do I install it if I have only 2 cards (this is just for a demo)?
Thank you for your help. (3 Replies)
Discussion started by: melanie_pfefer
3. High Performance Computing
Dear All,
Can anyone explain the pros and cons of Sun Cluster and Veritas Cluster?
Any comparison chart is highly appreciated.
Regards,
RAA (4 Replies)
Discussion started by: RAA
4. High Performance Computing
I have just completed a first RTFM of "Veritas Cluster Server Management Console Implementation Guide" 5.1, with a view to assessing it to possibly make our working lives easier.
Unfortunately, at my organisation, getting a test installation would be worse than pulling teeth, so I can't just go... (2 Replies)
Discussion started by: Beast Of Bodmin
5. Solaris
Can I make a Veritas cluster on Sun VirtualBox or VMware? Please help me. (4 Replies)
Discussion started by: saga499
6. Solaris
Is it possible to configure Veritas Cluster Server using 2 LDoms on the same host? I just want to test and learn VCS. We can do a cluster-in-a-box (Sun Cluster 3.2) using 2 LDoms, but I'm not sure whether that's possible with Veritas Cluster or not. (1 Reply)
Discussion started by: fugitive
7. Solaris
Yesterday my customer told me to expect a VCS upgrade to happen in the future. He also plans to stop using HDS and move to EMC.
I am thinking about how to migrate to a Sun Cluster setup instead.
My plan is as follows: leave the existing VCS intact as a fallback plan,
then install and build Sun Cluster on... (5 Replies)
Discussion started by: sparcguy
8. UNIX for Advanced & Expert Users
Hello,
Usually I use "vxresize" to grow a VxFS file system on a stand-alone server without any problems, but I have just been told to grow VxFS file systems on Veritas Cluster nodes.
Since I have never done this before, I would like to ask all the experts here to make sure the concept and steps will be fine... (1 Reply)
Discussion started by: sunnychen98
9. UNIX for Beginners Questions & Answers
Hi Experts,
I want to extend a Veritas file system which is running on a Veritas cluster and is mounted on node2.
# hastatus -sum
-- System    State      Frozen
A  node1     running    0
A  node2     running    0
-- GROUP STATE
-- Group  System  Probed ... (1 Reply)
Discussion started by: Skmanojkum
LEARN ABOUT CENTOS
fence_scsi
fence_scsi(8) System Manager's Manual fence_scsi(8)
NAME
fence_scsi - I/O fencing agent for SCSI persistent reservations
SYNOPSIS
fence_scsi [OPTION]...
DESCRIPTION
fence_scsi is an I/O fencing agent that uses SCSI-3 persistent reservations to control access to shared storage devices. These devices must
support SCSI-3 persistent reservations (SPC-3 or greater) as well as the "preempt-and-abort" subcommand.
The fence_scsi agent works by having each node in the cluster register a unique key with the SCSI device(s). Once registered, a single node
becomes the reservation holder by creating a "write exclusive, registrants only" reservation on the device(s). The result is that only
registered nodes may write to the device(s). When a node failure occurs, the fence_scsi agent removes the key belonging to the failed
node from the device(s); the failed node will no longer be able to write to the device(s), and a manual reboot is then required. In a cluster
environment the unfence action should also be configured.
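The register/reserve/preempt cycle described above can be reproduced by hand with sg_persist(8); a sketch, with device names and keys as placeholders:

    # Node A registers its key and becomes the reservation holder
    sg_persist --out --register --param-sark=0x1 /dev/sdc
    sg_persist --out --reserve --param-rk=0x1 --prout-type=5 /dev/sdc   # type 5 = WERO

    # Node B registers too; as a registrant it may also write
    sg_persist --out --register --param-sark=0x2 /dev/sdc

    # Fencing node B: preempt-and-abort removes its key, cutting off its writes
    sg_persist --out --preempt-abort --param-rk=0x1 --param-sark=0x2 --prout-type=5 /dev/sdc

    # Inspect current registrations and the reservation
    sg_persist --in --read-keys /dev/sdc
    sg_persist --in --read-reservation /dev/sdc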
Keys can either be specified manually (see -k option) or generated automatically (see -n option). Automatic key generation requires that
cman be running. Keys will then be generated using the cluster ID and node ID such that each node has a unique key that can be determined
by any other node in the cluster.
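As a rough illustration of how such a key might be derived (the exact layout is an implementation detail of fence_scsi; this sketch simply packs a 16-bit cluster ID and a 16-bit node ID into one hex key):

    # Hypothetical key derivation: cluster ID in the high 16 bits,
    # node ID in the low 16 bits.
    CLUSTER_ID=3042   # cluster ID as reported by cman
    NODE_ID=2         # this node's ID
    KEY=$(printf '0x%04x%04x' "$CLUSTER_ID" "$NODE_ID")
    echo "$KEY"       # -> 0x0be20002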
Devices can either be specified manually (see -d option) or discovered automatically. Multiple devices can be specified manually by using a
comma-separated list. If no devices are specified, the fence_scsi agent will attempt to discover devices by looking for cluster volumes and
extracting the underlying devices. Devices may be device-mapper multipath devices or raw devices. If using a device-mapper multipath
device, the fence_scsi agent will find the underlying devices (paths) and create registrations for each path.
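In other words, for a multipath device the key must land on every underlying path, roughly like this (sdc and sdd stand in for the paths behind /dev/dm-3, which you could list with e.g. multipath -ll; the key and names are placeholders):

    # Register the node's key on each path of the multipath device
    KEY=0x8001
    for path in /dev/sdc /dev/sdd; do
        sg_persist --out --register --param-sark=$KEY "$path"
    done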
OPTIONS
-o action
Fencing action. This value can be "on", "off", "status", or "metadata". The "on", "off", and "status" actions require either a key
(see -k option) or node name (see -n option). For "on", the agent will attempt to register with the device(s) and create a reservation
if none exists. The "off" action will attempt to remove a node's key from the device(s). The "status" action will report
whether or not a node's key is currently registered with one or more of the devices. The "metadata" action will display the XML
metadata. The default action is "off".
-d devices
List of devices to use for the current operation. Devices can be a comma-separated list of raw devices (e.g. /dev/sdc) or device-mapper
multipath devices (e.g. /dev/dm-3). Each device must support SCSI-3 persistent reservations.
-f logfile
Log output to file.
-n nodename
Name of the node to be fenced. The node name is used to generate the key value used for the current operation. This option will be
ignored when used with the -k option.
-k key Key to use for the current operation. This key should be unique to a node. For the "on" action, this specifies the key used to
register the local node. For the "off" action, this key specifies the key to be removed from the device(s).
-H delay
Wait the given number of seconds before fencing is started (default value: 0).
-a Use the APTPL flag for registrations. This option is only used for the "on" action.
-h Print out a help message describing available options, then exit.
-v Verbose output.
-V Print out a version message, then exit.
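Putting the options together, a typical set of manual invocations might look like this (node and device names are placeholders):

    # Unfence: register node2's key and create the reservation if none exists
    fence_scsi -n node2 -d /dev/dm-3 -o on

    # Check whether node2's key is currently registered with the device
    fence_scsi -n node2 -d /dev/dm-3 -o status

    # Fence: remove node2's key from the device ("off" is the default action)
    fence_scsi -n node2 -d /dev/dm-3 -o off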
STDIN PARAMETERS
agent = "param"
This option is used by fence_node(8) and is ignored by fence_scsi.
nodename = "param"
Same as -n option.
action = "param"
Same as -o option.
devices = "param"
Same as -d option.
logfile = "param"
Same as -f option.
key = "param"
Same as -k option.
delay = "param"
Same as -H option.
aptpl = "1"
Enable the APTPL flag. Default is 0 (disable).
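When driven by fence_node(8), these parameters arrive as key=value lines on standard input; run by hand, the equivalent of the "off" invocation shown earlier would be:

    printf 'nodename=node2\ndevices=/dev/dm-3\naction=off\n' | fence_scsi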
SEE ALSO
fence(8), fence_node(8), sg_persist(8), vgs(8), cman_tool(8), cman(5)
fence_scsi(8)