11-16-2012
Quote:
Originally Posted by bartus11
The question is: why would you want to do that?
You might want to put your new BE on a different ZFS pool to avoid creating snapshots and clones. On a system with many non-sparse zones that is upgraded every three months or so by using LU on a new BE, all of those accumulated snapshots and clones become very unwieldy to manage.
If, on the other hand, you create each new BE on a different ZFS pool than the current live BE, you get a simple, clean copy.
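A minimal sketch of that workflow, assuming Solaris 10 with Live Upgrade installed; the pool name "rpool2" and BE name "newBE" are hypothetical placeholders:

```shell
# Hypothetical names -- substitute your own second root pool and BE name.
NEW_POOL=rpool2
NEW_BE=newBE

# -p places the new boot environment on a different ZFS root pool,
# so LU performs a plain file copy instead of snapshotting and
# cloning the live BE's datasets.
lucreate -n "$NEW_BE" -p "$NEW_POOL"

# After patching/upgrading the new BE, activate it and verify.
luactivate "$NEW_BE"
lustatus
```

Because the copy lives on its own pool, deleting the old BE later with ludelete leaves no snapshot/clone dependencies behind.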
THIN_METADATA_SIZE(8) System Manager's Manual THIN_METADATA_SIZE(8)
NAME
thin_metadata_size - thin provisioning metadata device/file size calculator.
SYNOPSIS
thin_metadata_size [options]
DESCRIPTION
thin_metadata_size calculates the size of the thin provisioning metadata based on the block size of the thin provisioned devices, the size
of the thin provisioning pool and the maximum number of all thin provisioned devices and snapshots. Because thin provisioning pools hold
widely variable contents, this tool is needed to provide a sensible initial default size.
-b, --block-size BLOCKSIZE[bskKmMgGtTpPeEzZyY]
Block size of thin provisioned devices in units of bytes, sectors, kilobytes, kibibytes, ... respectively. Default is in sectors
without a block size unit specifier. Size/number option arguments can be followed by unit specifiers in short one-character and long
form (e.g. -b1m or -b1megabytes).
-s, --pool-size POOLSIZE[bskKmMgGtTpPeEzZyY]
Thin provisioning pool size in units of bytes, sectors, kilobytes, kibibytes, ... respectively. Default is in sectors without a pool
size unit specifier.
-m, --max-thins #[bskKmMgGtTpPeEzZyY]
Maximum sum of all thin provisioned devices and snapshots. A unit identifier is supported to allow for convenient entry of large
quantities, e.g. 1000000 = 1M. Default is an absolute quantity without a number unit specifier.
-u, --unit {bskKmMgGtTpPeEzZyY}
Output unit specifier in units of bytes, sectors, kilobytes, kibibytes, ... respectively. Default is in sectors without an output
unit specifier.
-n, --numeric-only [short|long]
Limit output to just the size number with the optional unit specifier character/string.
-h, --help
Print help and exit.
-V, --version
Output version information and exit.
EXAMPLES
Calculates the thin provisioning metadata device size for a block size of 64 kilobytes, a pool size of 1 terabyte, and a maximum number of
thin provisioned devices and snapshots of 1000, in units of sectors with long output:
thin_metadata_size -b64k -s1t -m1000
Or (using the long options instead) for a block size of 1 gigabyte, a pool size of 1 petabyte, and a maximum number of thin provisioned
devices and snapshots of 1 million, with numeric-only output in units of gigabytes:
thin_metadata_size --block-size=1g --pool-size=1p --max-thins=1M --unit=g --numeric-only
Same as before (1g,1p,1M,numeric-only) but with unit specifier character appended:
thin_metadata_size --block-size=1giga --pool-size=1petabytes --max-thins=1mebi --unit=g --numeric-only=short
Or with unit specifier string appended:
thin_metadata_size --block-size=1giga --pool-size=1petabytes --max-thins=1mebi --unit=g -nlong
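As a hedged sketch of how the numeric-only output can be consumed, the value can be captured and passed to lvcreate's --poolmetadatasize option when creating a thin pool; the volume group name "vg0" and pool name "tpool" are hypothetical placeholders:

```shell
# Compute the metadata size in mebibytes for a 64k block size, 1t pool,
# and up to 1000 thins/snapshots; --numeric-only emits just the number.
META_MIB=$(thin_metadata_size --block-size=64k --pool-size=1t \
           --max-thins=1000 --unit=m --numeric-only)

# Create a thin pool sized to match, using the computed metadata size.
# "vg0" and "tpool" are placeholder names for this sketch.
lvcreate --type thin-pool -L 1t --poolmetadatasize "${META_MIB}m" \
         -n tpool vg0
```

This keeps the metadata LV sized from the same parameters the pool itself is created with, rather than relying on lvcreate's built-in default.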
DIAGNOSTICS
thin_metadata_size returns an exit code of 0 for success or 1 for error.
SEE ALSO
thin_dump(8) thin_check(8) thin_repair(8) thin_restore(8) thin_rmap(8)
AUTHOR
Joe Thornber <ejt@redhat.com>
Heinz Mauelshagen <HeinzM@RedHat.com>
Red Hat, Inc. Thin Provisioning Tools THIN_METADATA_SIZE(8)