09-08-2011
Yeah, there isn't much point in setting up your metadb on your ZFS disks; ZFS doesn't use SVM state database replicas. You should keep the metadb on the disks you're actually using with SVM (Solaris Volume Manager, sometimes called Solaris LVM).
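A minimal sketch of what that looks like, assuming the SVM disks are c1t0d0 and c1t1d0 with slice 7 set aside for replicas (the device names here are hypothetical):

```shell
# Add two state database replicas on each SVM disk's slice 7
# (-a = add, -f = force creation of the first replicas, -c 2 = two copies per slice)
metadb -a -f -c 2 c1t0d0s7 c1t1d0s7

# Verify the replicas
metadb -i
```

Keeping at least three replicas spread across separate disks is the usual practice, so SVM can still reach a replica quorum if one disk fails.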
10 More Discussions You Might Find Interesting
1. Solaris
Hi All,
If Solaris has metadb replicas on a disk, does that mean the machine has a hardware RAID controller, or something else?
Thanks in advance. (1 Reply)
Discussion started by: itik
2. Solaris
Hi All,
I have to set up disk mirroring, and for that I have to create a metadb, since mirroring is done with SVM. However, I do not have any free slice for the metadb.
What are my options? Please suggest. (4 Replies)
Discussion started by: kumarmani
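One common answer to the "no free slice" problem (a sketch, assuming a disk c1t1d0 with unallocated space; the device name is hypothetical) is to carve out a small slice with format and place the replicas there:

```shell
# First use format/partition to free a small slice (~50 MB is plenty), then:
metadb -a -f -c 2 c1t1d0s7   # place two replicas on the freed slice
metadb -i                    # confirm the replicas are active
```

Shrinking swap to reclaim a slice is another frequently suggested option when every slice on the boot disk is already in use.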
3. Solaris
Hello,
I have a 520GB SAN resource connected to the server via FC (4Gbit).
In the next step he wants to create the storage pool:
After 15 hours the pool has still not been created...
Is such a long wait normal? Is there something I should have done?
Regards (1 Reply)
Discussion started by: bieszczaders
4. Solaris
I have a Solaris zone of 12 GB and I have to increase the / filesystem to 31 GB as requested. Earlier I had expanded filesystems other than / by setting the quota to a new value, like "zfs set quota=<new value> <mountpoint>", but I am not sure whether that is good practice in zfs, because by default in my... (5 Replies)
Discussion started by: vikkash
5. BSD
I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error:
# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset
# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring
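For this FreeBSD symptom, a first thing to check (a sketch, not a confirmed diagnosis) is whether the ZFS kernel module is actually loaded and enabled:

```shell
# Check whether the zfs module is loaded
kldstat | grep zfs

# Load it for the current boot if it is missing
kldload zfs

# Enable it at boot
echo 'zfs_enable="YES"' >> /etc/rc.conf

# Then retry, giving zpool the whole disk rather than a slice
zpool create zfspool da0
```

On FreeBSD it is also generally preferable to hand zpool the bare disk (da0) and let ZFS manage it, rather than a BSD partition like da0s1a.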
6. Solaris
We are getting this alert: "Check Solaris VM Databases: metadb does not have enough information about logical volumes. Current value is 0%."
I have checked the SVM status; all disks are in a good state and synced perfectly, and there are no errors in metadb -i.
What exactly does this alert mean? What value should we be checking?
Please... (1 Reply)
Discussion started by: Naveen.6025
7. Solaris
Hi everyone,
I'm hoping someone can help me out here. I've googled a lot and don't think I can find an easy answer to this.
We're in the process of upgrading Solaris from v10 5/08 to v10 9/10. The zpools for the LUNs are currently at version 10, and I understand Solaris v10 9/10 has support for... (3 Replies)
Discussion started by: badoshi
8. UNIX for Advanced & Expert Users
Dear all,
recently, I migrated a Solaris zone from one host to another. The zone was inside a zpool. The zpool contains two volumes.
I did the following:
host1:
$ zlogin zone1 shutdown -y -g0 -i0 #Zone status changes from running to installed
$ zpool export zone1
host2:
$ zpool... (2 Replies)
Discussion started by: custos
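The full migration sequence being described might look like this (a sketch, assuming the pool and zone are both named zone1 and the LUNs are visible on host2; the zonepath /zones/zone1 and the attach steps are assumptions, not from the post):

```shell
# host1: halt the zone and release the pool
zlogin zone1 shutdown -y -g0 -i0   # zone goes from running to installed
zpool export zone1

# host2: bring the pool in and attach the zone
zpool import zone1
zonecfg -z zone1 create -a /zones/zone1   # recreate the config from the detached zonepath
zoneadm -z zone1 attach
zoneadm -z zone1 boot
```

If the source and target hosts run different patch levels, "zoneadm attach -u" can update the zone's packages during the attach.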
9. Solaris
Hi all,
I added a new disk slice to the current metadb.
Below is what I see
bash-3.2# metadb -i
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            ... (3 Replies)
Discussion started by: javanoob
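For reference, adding a replica slice like the one shown above is typically done with (a sketch; the slice name c0t1d0s7 is hypothetical):

```shell
metadb -a c0t1d0s7   # add replicas on the new slice
metadb -i            # -i also prints a legend explaining the flags ('a' = active, 'm' = master, etc.)
```
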
10. Solaris
How do I recover a metadb?
Thanks! (1 Reply)
Discussion started by: dzung
LEARN ABOUT NETBSD
vgreduce
VGREDUCE(8) System Manager's Manual VGREDUCE(8)
NAME
vgreduce - reduce a volume group
SYNOPSIS
       vgreduce [-a|--all] [-A|--autobackup y|n] [-d|--debug] [-h|-?|--help] [--removemissing] [-t|--test] [-v|--verbose] VolumeGroupName [PhysicalVolumePath...]
DESCRIPTION
vgreduce allows you to remove one or more unused physical volumes from a volume group.
OPTIONS
See lvm for common options.
       -a, --all
              Removes all empty physical volumes if none are given on the command line.
       --removemissing
              Removes all missing physical volumes from the volume group, if there are no logical volumes allocated on them. This resumes normal operation of the volume group (new logical volumes may again be created, changed and so on).
              If this is not possible (there are logical volumes referencing the missing physical volumes) and you cannot or do not want to remove them manually, you can run this option with --force to have vgreduce remove any partial LVs.
              Any logical volumes and dependent snapshots that were partly on the missing disks get removed completely. This includes those parts that lie on disks that are still present.
              If your logical volumes spanned several disks including the ones that are lost, you might want to try to salvage data first by activating your logical volumes with --partial as described in lvm(8).
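A typical invocation (a sketch; the volume group name vg00 and the device /dev/sd1e are hypothetical, not taken from this page) removes an unused physical volume from a group:

```shell
# Remove one empty physical volume from vg00
vgreduce vg00 /dev/sd1e

# Or remove all empty physical volumes at once
vgreduce -a vg00
```

Note that vgreduce only removes physical volumes with no allocated extents; data must first be moved off the volume (e.g. with pvmove) before it can be removed.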
SEE ALSO
lvm(8), vgextend(8)
Sistina Software UK LVM TOOLS 2.02.44-cvs (02-17-09) VGREDUCE(8)