02-04-2009
Does anyone have experience doing this without a cluster, i.e. just by extending the LUN and growing the mirrored zpool?
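For reference, a minimal sketch of growing a mirrored zpool after its underlying LUNs have been extended. The pool name tank and the devices c0t0d0/c0t1d0 are illustrative assumptions, not taken from the post; substitute your own layout:

```shell
# Let the pool grow automatically when its devices report a larger size
zpool set autoexpand=on tank

# Expand onto the newly available space on each side of the mirror;
# a mirror only grows once ALL of its member devices have been expanded
zpool online -e tank c0t0d0
zpool online -e tank c0t1d0

# Verify the new capacity
zpool list tank
```

Note that on a mirrored vdev the pool capacity stays at the old size until every device in the mirror has been grown, so both `zpool online -e` calls are needed.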
cmdeleteconf(1m) cmdeleteconf(1m)
NAME
cmdeleteconf - Delete either the cluster or the package configuration
SYNOPSIS
cmdeleteconf [-f] [-v] [-c cluster_name] [[-p package_name]...]
DESCRIPTION
cmdeleteconf deletes either the entire cluster configuration, including all its packages, or only the specified
package configuration. If neither cluster_name nor package_name is specified, cmdeleteconf will delete the local
cluster's configuration and all its packages. If the local node's cluster configuration is outdated, cmdeleteconf
without any argument will only delete the local node's configuration. If only the package_name is specified, the
configuration of package_name in the local cluster is deleted. If both cluster_name and package_name are
specified, the package must be configured in the cluster_name, and only the package package_name will be deleted.
cmdeleteconf with only cluster_name specified will delete the entire cluster configuration on all the nodes in
the cluster, regardless of the configuration version. The local cluster is the cluster that the node running the
cmdeleteconf command belongs to.
Only a superuser, whose effective user ID is zero (see id(1) and su(1)), can delete the configuration.
To delete the cluster configuration, halt the cluster first. To delete a package configuration you must halt the package first, but you do
not need to halt the cluster (it may remain up or be brought down). To delete the package VxVM-CVM-pkg (HP-UX only), you must first delete
all packages with STORAGE_GROUP defined.
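Putting the halting rules above together, a typical deletion sequence might look like the following sketch. The names pkg1 and clusterA are illustrative:

```shell
# Halt the package first; the cluster itself may remain up
cmhaltpkg pkg1

# Delete only that package's configuration from the local cluster
cmdeleteconf -f -p pkg1

# To delete the entire cluster configuration instead,
# halt the cluster first, then delete it and all its packages
cmhaltcl -f
cmdeleteconf -f -c clusterA
```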
While deleting the cluster, if any of the cluster nodes are powered down, the user can choose to continue deleting
the configuration. In this case, the cluster configuration on the down node will remain in place and, therefore,
be out of sync with the rest of the cluster. If the powered-down node ever comes up, the user should execute the
cmdeleteconf command with no argument on that node to clean up the configuration before doing any other
Serviceguard command.
Options
cmdeleteconf supports the following options:
-f Force the deletion of either the cluster configuration or the package configuration.
-v Verbose output will be displayed.
-c cluster_name
Name of the cluster to delete. The cluster must already be halted before its configuration can be deleted.
-p package_name
Name of an existing package to delete from the cluster. The package must already be halted. Before
deleting the package VxVM-CVM-pkg (HP-UX only), all packages with STORAGE_GROUP defined must first be deleted.
RETURN VALUE
Upon completion, cmdeleteconf returns one of the following values:
0 Successful completion.
1 Command failed.
EXAMPLES
The high availability environment contains the cluster, clusterA, and a package, pkg1.
To delete package pkg1 in clusterA, do the following:
cmdeleteconf -f -c clusterA -p pkg1
To delete the cluster clusterA and all its packages, do the following:
cmdeleteconf -f -c clusterA
AUTHOR
cmdeleteconf was developed by HP.
SEE ALSO
cmcheckconf(1m), cmapplyconf(1m), cmgetconf(1m), cmmakepkg(1m), cmquerycl(1m).
Requires Optional Serviceguard Software cmdeleteconf(1m)