Using ZFS with Veritas Cluster Server


 
# 1 - posted 02-10-2013

Until I began to explore the practical implications of using ZFS with VCS, I had not appreciated the obstacles that would be put in my path. Data integrity is a must-have for storage in a shared-host environment, so it surprised me to learn, as I opened this particular Pandora's box, that VCS provides no mechanism at all for ensuring data integrity on the ZFS pools that are part of your cluster. The 'Zpool' agent is dumb: it imports the pool, and it exports the pool. "What about SCSI-3 persistent reservations?", I hear you ask. What about them, indeed. ZFS competes with Symantec's own storage stack, so I did not expect a solution to that problem to come from their camp. I therefore took up the gauntlet on a mission to add SCSI-3 PR support to the Zpool agent for my client, and I succeeded. I have written up some notes that may help direct others who stumble across the same obstacles; along the way I also discovered several respects in which Solaris MPxIO is superior to VxDMP.
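To make that concrete, here is the shape of the "online" entry point I ended up with, reduced to a sketch. Everything in it is illustrative rather than lifted from the production agent: the pool name, the key, and the use of sg_persist (the sg3_utils tool; on the Solaris boxes the equivalent operations went through the fencing utilities shipped with VCS). The DISKS list is deliberately left empty so the sketch can be run harmlessly:

```shell
# Sketch of a PR-aware "online" entry point for the Zpool agent.
# POOL, KEY and DISKS are illustrative, not from the real agent.
POOL=datapool
KEY=0x42420001     # one distinct key per cluster node
DISKS=""           # e.g. "/dev/rdsk/c0t60...d0s2"; empty so the sketch is inert

for d in $DISKS; do
    # register our key on the LUN, then take a Write Exclusive -
    # Registrants Only reservation (PROUT type 5) so that only
    # registered nodes can write to the pool's disks
    sg_persist --out --register --param-sark="$KEY" "$d" || exit 1
    sg_persist --out --reserve --param-rk="$KEY" --prout-type=5 "$d" || exit 1
done

# import only once the reservations are held
echo "would run: zpool import -f $POOL"
```

With real disks in DISKS, the offline and clean entry points would mirror this in reverse: release the reservation, unregister the key, then zpool export.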

Technical Prose: SCSI-3 PR with ZFS on Veritas Cluster Server
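The monitor side has to answer two questions: is the pool imported and healthy, and, for the PR-aware agent, is our key still registered on the disks? Below is a minimal sketch of the decision logic, with the health value stubbed in so it runs without a real pool (in the agent it would come from `zpool list -H -o health`); the 110/100 exit codes are the conventional VCS monitor return values for online/offline:

```shell
# Sketch of the Zpool "monitor" decision logic (names illustrative).
POOL=datapool
# In the real agent: health=$(zpool list -H -o health "$POOL" 2>/dev/null)
health="ONLINE"        # stubbed here so the sketch is runnable
# A PR-aware monitor would also confirm the registration, e.g.:
#   sg_persist --in --read-keys "$disk" | grep -q "$KEY" || health=FAULTED
case "$health" in
    ONLINE|DEGRADED) status=110 ;;   # VCS monitor: resource online
    *)               status=100 ;;   # VCS monitor: resource offline
esac
echo "$status"
```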

ZFS-FUSE(8)                                [FIXME: manual]                                ZFS-FUSE(8)

NAME
       zfs-fuse - ZFS filesystem daemon

SYNOPSIS
       zfs-fuse [--pidfile filename] [--no-daemon] [--no-kstat-mount] [--disable-block-cache]
                [--disable-page-cache] [--fuse-attr-timeout SECONDS] [--fuse-entry-timeout SECONDS]
                [--log-uberblocks] [--max-arc-size MB] [--fuse-mount-options OPT,OPT,OPT...]
                [--min-uberblock-txg MIN] [--stack-size=size] [--enable-xattr] [--help]

DESCRIPTION
       This manual page briefly documents the zfs-fuse command. zfs-fuse is a daemon which provides
       support for the ZFS filesystem via FUSE. Ordinarily this daemon will be invoked from the
       system boot scripts.

OPTIONS
       This program follows the usual GNU command-line syntax, with long options starting with two
       dashes (`--'). A summary of options is included below. For a complete description, see the
       Info files.

       -h, --help
              Show a summary of options.

       -p filename, --pidfile filename
              Write the daemon's PID to filename after daemonizing. Ignored if --no-daemon is
              passed. filename should be a fully-qualified path.

       -n, --no-daemon
              Stay in the foreground; do not daemonize.

       --no-kstat-mount
              Do not mount kstats in /zfs-kstat.

       --disable-block-cache
              Enable direct I/O for disk operations. Completely disables caching of reads and
              writes in the kernel block cache. Also breaks mmap() in ZFS datasets.

       --disable-page-cache
              Disable the page cache for files residing within ZFS filesystems. Not recommended,
              as it slows down I/O operations considerably.

       -a SECONDS, --fuse-attr-timeout SECONDS
              Sets the timeout for caching FUSE attributes in the kernel. Defaults to 0.0. Higher
              values give a 40% performance boost.

       -e SECONDS, --fuse-entry-timeout SECONDS
              Sets the timeout for caching FUSE entries in the kernel. Defaults to 0.0. Higher
              values give a 10000% performance boost but cause file-permission-checking security
              issues.

       --log-uberblocks
              Log uberblocks of any mounted filesystem to syslog.

       -m MB, --max-arc-size MB
              Forces the maximum ARC size (in megabytes). Range: 16 to 16384.

       -o OPT,OPT,OPT..., --fuse-mount-options OPT,OPT,OPT...
              Sets FUSE mount options for all filesystems. Format: a comma-separated string.

       -u MIN, --min-uberblock-txg MIN
              Skip uberblocks with a TXG < MIN when mounting any filesystem.

       -v MB, --vdev-cache-size MB
              Adjust the size of the vdev cache. Default: 10.

       --zfs-prefetch-disable
              Disable the high-level prefetch cache in ZFS. This can consume up to 150 MB of RAM,
              possibly more.

       --stack-size=size
              Limit the stack size of threads (in KB). Default: no limit (8 MB on Linux).

       -x, --enable-xattr
              Enable support for extended attributes. Not generally recommended, because it
              currently carries a significant performance penalty for many small IOPS.

REMARKS ON PRECEDENCE
       Note that parameters passed on the command line take precedence over those supplied through
       /etc/zfs/zfsrc.

BUGS/CAVEATS
       The path to the configuration file (/etc/zfs/zfsrc) cannot at this time be configured. Most
       existing packages suggest that settings can be set at the top of their init script, but
       these are frequently overridden by a (distribution-specific) /etc/default/zfs-fuse file, if
       it exists. Be sure to look in these places if you want your changes to take effect.
       /etc/zfs/zfsrc is going to be the recommended approach in the future, so packagers, please
       refrain from passing command-line parameters within the init script (except for --pidfile).

SEE ALSO
       zfs(8), zpool(8), zdb(8), zstreamdump(8), /etc/zfs/zfsrc

AUTHOR
       This manual page was written by Bryan Donlan <bdonlan@gmail.com> for the Debian(TM) system
       (but may be used by others). Permission is granted to copy, distribute and/or modify this
       document under the terms of the GNU General Public License, Version 2 or any later version
       published by the Free Software Foundation, or the Common Development and Distribution
       License. Revised by Seth Heeren <zfs-fuse@sehe.nl>. On Debian systems, the complete text of
       the GNU General Public License can be found in /usr/share/common-licenses/GPL. The text of
       the Common Development and Distribution License may be found at
       /usr/share/doc/zfs-fuse/copyright.

COPYRIGHT
       Copyright (C) 2010 Bryan Donlan

[FIXME: source]                               2010-06-09                                  ZFS-FUSE(8)
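Given the precedence note above (command-line flags beat /etc/zfs/zfsrc), the cleanest place for persistent settings is the zfsrc file itself. A sketch of one follows, assuming the key names mirror the long command-line options without the leading dashes; check the example zfsrc shipped with your package before relying on any of these names or values:

```
# /etc/zfs/zfsrc -- illustrative values; key names assumed to mirror the
# long option names above, so verify against your package's example file
max-arc-size = 512        # cap the ARC at 512 MB
vdev-cache-size = 10      # MB; the documented default
fuse-attr-timeout = 1.0   # seconds; >0 trades cache coherence for speed
```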