11-12-2013
Upgrading Solaris - what happens to zpool/zfs versions
Hi everyone,
I'm hoping someone can help me out here. I've googled lots and don't think I can find an easy answer to this.
We're in the process of upgrading Solaris 10 from the 5/08 release to 9/10. The zpools on our LUNs are currently at version 10, and I understand Solaris 10 9/10 supports ZFS version 4 and zpool versions up to 22.
What I'm trying to find out is: after upgrading via Live Upgrade and booting into the new Solaris 10 9/10 environment, will it attempt to upgrade all zpools automatically? I'm hoping not, as that would prevent me from rolling back to the original environment (the older release can't import a zpool at a version higher than the one it supports).
Logic tells me that it shouldn't, but could anyone please confirm?
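For reference, the version checks themselves are non-destructive; a rough sketch of the commands that would show where the pools stand before and after booting the new environment (the pool name is an example):

```shell
# List pools still at an older on-disk format (read-only, changes nothing)
zpool upgrade

# Show a specific pool's version property
zpool get version rpool

# Filesystem (zfs) versions are tracked separately
zfs upgrade

# The actual upgrade is explicit and one-way -- only run it once you are
# committed to the new boot environment:
# zpool upgrade -a
```

Running `zpool upgrade` with no arguments only reports versions, which makes it safe to use from either boot environment.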
7 More Discussions You Might Find Interesting
1. Solaris
Hi All,
I am trying to read the zpool.cache file to find out pool information such as the pool name, the devices it uses, and all properties.
The file seems to be in a packed format and I am not sure how to unpack it.
From the OpenSolaris code base we can see that libz is used for uncompressing this file, but... (0 Replies)
Discussion started by: shailesh_111
2. Solaris
Hi, my root pool is as follows. How can I create a metadb if I want to create SVM volumes?
zpool status
pool: rpool1
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool1 ONLINE 0 0 0
c4t1d0s0 ... (10 Replies)
Discussion started by: incredible
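SVM state databases live on raw slices rather than in ZFS, so one common approach is to carve out a small slice and place the replicas there; a sketch, with the slice name purely hypothetical:

```shell
# Create (force) state database replicas on a dedicated small slice;
# -c 3 places three copies for quorum purposes
metadb -a -f -c 3 c4t1d0s7

# Verify the replicas
metadb -i
```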
3. BSD
I am trying to test basic ZFS functionality on a FreeBSD 8.2 VM. When I run 'zpool create' I receive the following error:
# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset
# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring
4. Solaris
Running Oracle VM Server for SPARC on a Solaris 11 server. I have a guest LDOM that had two separate zpools, one for the zones and one for the OS. The OS was corrupted and had to be replaced. The zones' ZFS file system is intact, I think. I still have access to the disk and can still see it in... (3 Replies)
Discussion started by: os2mac
5. Solaris
Hi,
Is there any GUI or web UI to administer ZFS/zpools?
I want to monitor/expand/migrate ZFS from one machine to another machine. (0 Replies)
Discussion started by: bentech4u
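On the migration point specifically, the command-line route is ZFS replication; a minimal sketch, with pool, dataset, and host names made up for illustration:

```shell
# Snapshot the dataset, then stream it to another machine over ssh
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh otherhost zfs receive backup/data

# Incremental follow-up after the first full send
zfs snapshot tank/data@migrate2
zfs send -i tank/data@migrate tank/data@migrate2 | ssh otherhost zfs receive backup/data
```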
6. Solaris
So,
We have a NetApp storage solution, with SPARC T4-4s running LDOMs and client zones inside the LDOMs, using FC for storage connectivity. Here's the basic setup:
FC LUNs are exported to the primary domain on the SPARC box; using ldm, they are then exported to the LDOM as vdisks. At the... (4 Replies)
Discussion started by: os2mac
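The primary-to-guest export path described there typically goes through the virtual disk service; a sketch of the ldm commands involved, with all device paths and names hypothetical:

```shell
# Register the FC LUN as a backend device with the primary's virtual disk service
ldm add-vdsdev /dev/dsk/c0t5000ABCD12345678d0s2 fclun0@primary-vds0

# Present it to the guest domain as a virtual disk
ldm add-vdisk vdisk0 fclun0@primary-vds0 guest-ldom
```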
7. Solaris
Hello,
I am upgrading Veritas from 5.1 to 6.0.1/6.0.5 on a Solaris 10 u8 server with the OS mirrored (rpool) in a ZFS/zpool configuration.
I need to split it to have a quick way to back out in case of failure (make the split mirror side bootable for a quick revert, i.e. booting from it). I remember... (3 Replies)
Discussion started by: feroccimx
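For the split itself, sufficiently recent ZFS versions offer zpool split, which detaches one side of a mirror into a new, importable pool; a sketch with hypothetical pool and device names:

```shell
# Split the mirror: c1t1d0s0 leaves rpool and becomes the new pool rpoolbackup
zpool split rpool rpoolbackup c1t1d0s0

# The older fallback is a plain detach, which keeps the disk's data
# but does not create a separately named pool:
# zpool detach rpool c1t1d0s0
```

Note that zpool split requires a recent enough pool version, so check the `zpool upgrade` output first.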
LEARN ABOUT DEBIAN
groupd
GROUPD(8) cluster GROUPD(8)
NAME
groupd - compatibility daemon for fenced, dlm_controld and gfs_controld
SYNOPSIS
groupd [OPTIONS]
DESCRIPTION
The groupd daemon and libgroup library are used by the fenced, dlm_controld and gfs_controld daemons when they are operating in cluster2-compatible mode to perform a rolling cluster upgrade from cluster2 to cluster3.
See cman(5) for more information on the upgrading configuration option needed to perform a rolling upgrade.
When the upgrading option is enabled, cman adds the following to the online configuration:
<group groupd_compat="1"/>
This setting causes the cman init script to start the groupd daemon, and causes the groupd, fenced, dlm_controld and gfs_controld daemons
to operate in the old cluster2 mode so they will be compatible with cluster2 nodes in the cluster that have not yet been upgraded.
The upgrading setting, including the groupd_compat setting, cannot be changed in a running cluster. The entire cluster must be taken offline to change these because the new cluster3 default modes are not compatible with the old cluster2 modes. The upgrading/compat settings
cause the new cluster3 daemons to run the old cluster2 code and protocols.
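For orientation, the groupd_compat element shown above sits directly under the cluster element in cluster.conf; a minimal sketch (cluster and node names are invented, and the upgrading option itself is documented in cman(5)):

```
<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <group groupd_compat="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
  </clusternodes>
</cluster>
```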
OPTIONS
Command line options override a corresponding setting in cluster.conf.
-D Enable debugging to stderr and don't fork.
-L Enable debugging to log file.
-g num groupd compatibility mode, 0 off, 1 on. Default 0.
-h Print a help message describing available options, then exit.
-V Print program version information, then exit.
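A typical invocation while troubleshooting, combining the options above:

```shell
# Run in the foreground, send debug output to stderr, force cluster2 compat mode
groupd -D -g 1
```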
SEE ALSO
cman(5), fenced(8), dlm_controld(8), gfs_controld(8)
cluster 2009-01-19 GROUPD(8)