Operating Systems > BSD
Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset
Post 302700577 by DukeNuke2 on Thursday 13th of September 2012 04:49:46 PM

10 More Discussions You Might Find Interesting

1. Solaris

ZFS Pool Mix-up

Hi all, I plan to install Solaris 10U6 on a SPARC server using ZFS as the root pool, and I would like to keep the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slice for Live... (1 Reply)
Discussion started by: blicki
1 Reply

2. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive
7 Replies

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

4. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? (See the hedged sketch after this entry.) The current pool is 6 x 36GB disks (202GB capacity) and I am migrating to 5 x 72GB disks (340GB capacity). (2 Replies)
Discussion started by: jac
2 Replies
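
One common approach, sketched under assumptions (the pool and snapshot names below are made up; adjust to the real layout): a recursive replication stream carries mount points and other dataset properties to the new pool.

    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -u -F newpool
    # -R replicates all descendant datasets, snapshots and properties;
    # -u leaves the received datasets unmounted until the mount points have been reviewed.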

5. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (See the sketch after this entry.) (2 Replies)
Discussion started by: verdepollo
2 Replies
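
As far as I know there is no zpool rename subcommand; the export/import cycle mentioned in the question is the supported route. A minimal sketch with hypothetical pool names a and b:

    zpool export a
    zpool import a b    # import pool "a" under the new name "b"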

6. Solaris

flarecreate for zfs root dataset and ignore multiple dataset

Hi All, I want to write a script to create flar images on multiple servers. On non-ZFS filesystems I am using the -X option to refer to a file that excludes mounts on different servers, but on ZFS the -X option is not working. I want multiple mounts to be ignored on a ZFS-based system during flarecreate. I... (0 Replies)
Discussion started by: uxravi
0 Replies

7. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies

8. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send ... receive. So I got the following: zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have a "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies

9. Solaris

Zpool with 3 2-way mirrors in a pool

I have a single zpool with 3 2-way mirrors (3 x 2-way vdevs). It has a degraded disk in mirror-2. I know I can suffer a single drive failure, but looking at this, how many drive failures can it suffer before it is no good? On the face of it, I thought that I could lose a further 2 drives in each... (4 Replies)
Discussion started by: fishface
4 Replies

10. Solaris

How to clear a removed single-disk pool from being listed by zpool import?

On an OmniOS server, I removed a single-disk pool I was using for testing. Now, when I run zpool import it will show it as FAULTED, since that single disk is not available anymore. # zpool import pool: fido id: 7452075738474086658 state: FAULTED status: The pool was last... (11 Replies)
Discussion started by: priyadarshan
11 Replies
ZPOOL-FEATURES(7)				       BSD Miscellaneous Information Manual					 ZPOOL-FEATURES(7)

NAME
zpool-features -- ZFS pool feature descriptions

DESCRIPTION
ZFS pool on-disk format versions are specified via "features" which replace the old on-disk format numbers (the last supported on-disk format number is 28). To enable a feature on a pool use the upgrade subcommand of the zpool(8) command, or set the feature@feature_name property to enabled.

The pool format does not affect file system version compatibility or the ability to send file systems between pools.

Since most features can be enabled independently of each other, the on-disk format of the pool is specified by the set of all features marked as active on the pool. If the pool was created by another software version this set may include unsupported features.

  Identifying features
Every feature has a guid of the form com.example:feature_name. The reverse DNS name ensures that the feature's guid is unique across all ZFS implementations. When unsupported features are encountered on a pool they will be identified by their guids. Refer to the documentation for the ZFS implementation that created the pool for information about those features.

Each supported feature also has a short name. By convention a feature's short name is the portion of its guid which follows the ':' (e.g. com.example:feature_name would have the short name feature_name), however a feature's short name may differ across ZFS implementations if following the convention would result in name conflicts.

  Feature states
Features can be in one of three states:

active    This feature's on-disk format changes are in effect on the pool. Support for this feature is required to import the pool in read-write mode. If this feature is not read-only compatible, support is also required to import the pool in read-only mode (see "Read-only compatibility").

enabled   An administrator has marked this feature as enabled on the pool, but the feature's on-disk format changes have not been made yet. The pool can still be imported by software that does not support this feature, but changes may be made to the on-disk format at any time which will move the feature to the active state. Some features may support returning to the enabled state after becoming active. See feature-specific documentation for details.

disabled  This feature's on-disk format changes have not been made and will not be made unless an administrator moves the feature to the enabled state. Features cannot be disabled once they have been enabled.

The state of supported features is exposed through pool properties of the form feature@short_name.

  Read-only compatibility
Some features may make on-disk format changes that do not interfere with other software's ability to read from the pool. These features are referred to as "read-only compatible". If all unsupported features on a pool are read-only compatible, the pool can be imported in read-only mode by setting the readonly property during import (see zpool(8) for details on importing pools).

  Unsupported features
For each unsupported feature enabled on an imported pool a pool property named unsupported@feature_guid will indicate why the import was allowed despite the unsupported feature. Possible values for this property are:

inactive  The feature is in the enabled state and therefore the pool's on-disk format is still compatible with software that does not support this feature.

readonly  The feature is read-only compatible and the pool has been imported in read-only mode.

  Feature dependencies
Some features depend on other features being enabled in order to function properly.
Enabling a feature will automatically enable any features it depends on.
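
As a hedged illustration of the commands described above (the pool name tank is hypothetical), features are typically inspected and enabled like this:

    zpool get all tank | grep feature@               # show the state of every feature@ property
    zpool set feature@async_destroy=enabled tank     # enable a single feature explicitly
    zpool upgrade tank                               # or enable every feature supported by this system
    zpool import -o readonly=on tank                 # read-only import when all unsupported features are read-only compatible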
FEATURES
The following features are supported on this system:

async_destroy
        GUID                    com.delphix:async_destroy
        READ-ONLY COMPATIBLE    yes
        DEPENDENCIES            none

        Destroying a file system requires traversing all of its data in order to return its used space to the pool. Without async_destroy the file system is not fully removed until all space has been reclaimed. If the destroy operation is interrupted by a reboot or power outage the next attempt to open the pool will need to complete the destroy operation synchronously.

        When async_destroy is enabled the file system's data will be reclaimed by a background process, allowing the destroy operation to complete without traversing the entire file system. The background process is able to resume interrupted destroys after the pool has been opened, eliminating the need to finish interrupted destroys as part of the open operation. The amount of space remaining to be reclaimed by the background process is available through the freeing property.

        This feature is only active while freeing is non-zero.

empty_bpobj
        GUID                    com.delphix:empty_bpobj
        READ-ONLY COMPATIBLE    yes
        DEPENDENCIES            none

        This feature increases the performance of creating and using a large number of snapshots of a single filesystem or volume, and also reduces the disk space required.

        When there are many snapshots, each snapshot uses many Block Pointer Objects (bpobj's) to track blocks associated with that snapshot. However, in common use cases, most of these bpobj's are empty. This feature allows us to create each bpobj on-demand, thus eliminating the empty bpobjs.

        This feature is active while there are any filesystems, volumes, or snapshots which were created after enabling this feature.

filesystem_limits
        GUID                    com.joyent:filesystem_limits
        READ-ONLY COMPATIBLE    yes
        DEPENDENCIES            extensible_dataset

        This feature enables filesystem and snapshot limits. These limits can be used to control how many filesystems and/or snapshots can be created at the point in the tree on which the limits are set.

        This feature is active once either of the limit properties has been set on a dataset. Once activated the feature is never deactivated.

lz4_compress
        GUID                    org.illumos:lz4_compress
        READ-ONLY COMPATIBLE    no
        DEPENDENCIES            none

        lz4 is a high-performance real-time compression algorithm that features significantly faster compression and decompression as well as a higher compression ratio than the older lzjb compression. Typically, lz4 compression is approximately 50% faster on compressible data and 200% faster on incompressible data than lzjb. It is also approximately 80% faster on decompression, while giving approximately 10% better compression ratio.

        When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature. Booting off of lz4-compressed root pools is supported.

        This feature becomes active as soon as it is enabled and will never return to being enabled.

multi_vdev_crash_dump
        GUID                    com.joyent:multi_vdev_crash_dump
        READ-ONLY COMPATIBLE    no
        DEPENDENCIES            none

        This feature allows a dump device to be configured with a pool comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or raidz configuration.
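
A hedged sketch of the async_destroy and lz4_compress behaviour described above (pool and dataset names are hypothetical):

    zpool set feature@lz4_compress=enabled tank
    zfs set compression=lz4 tank/data        # turn on lz4 compression for a dataset via zfs(8)

    zfs destroy -r tank/scratch              # with async_destroy enabled, space is reclaimed in the background
    zpool get freeing tank                   # space left to reclaim; the feature is active while this is non-zero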
spacemap_histogram
        GUID                    com.delphix:spacemap_histogram
        READ-ONLY COMPATIBLE    yes
        DEPENDENCIES            none

        This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

extensible_dataset
        GUID                    com.delphix:extensible_dataset
        READ-ONLY COMPATIBLE    no
        DEPENDENCIES            none

        This feature allows more flexible use of internal ZFS data structures, and exists for other features to depend on.

        This feature will be active when the first dependent feature uses it, and will be returned to the enabled state when all datasets that use this feature are destroyed.

bookmarks
        GUID                    com.delphix:bookmarks
        READ-ONLY COMPATIBLE    yes
        DEPENDENCIES            extensible_dataset

        This feature enables use of the zfs bookmark subcommand.

        This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be listed by running zfs list -t bookmark -r poolname.

enabled_txg
        GUID                    com.delphix:enabled_txg
        READ-ONLY COMPATIBLE    yes
        DEPENDENCIES            none

        Once this feature is enabled ZFS records the transaction group number in which new features are enabled. This has no user-visible impact, but other features may depend on this feature.

        This feature becomes active as soon as it is enabled and will never return to being enabled.

hole_birth
        GUID                    com.delphix:hole_birth
        READ-ONLY COMPATIBLE    no
        DEPENDENCIES            enabled_txg

        This feature improves performance of incremental sends ("zfs send -i") and receives for objects with many holes. The most common case of hole-filled objects is zvols.

        An incremental send stream from snapshot A to snapshot B contains information about every block that changed between A and B. Blocks which did not change between those snapshots can be identified and omitted from the stream using a piece of metadata called the 'block birth time', but birth times are not recorded for holes (blocks filled only with zeroes). Since holes created after A cannot be distinguished from holes created before A, information about every hole in the entire filesystem or zvol is included in the send stream.

        For workloads where holes are rare this is not a problem. However, when incrementally replicating filesystems or zvols with many holes (for example a zvol formatted with another filesystem) a lot of time will be spent sending and receiving unnecessary information about holes that already exist on the receiving side.

        Once the hole_birth feature has been enabled the block birth times of all new holes will be recorded. Incremental sends between snapshots created after this feature is enabled will use this new metadata to avoid sending information about holes that already exist on the receiving side.

        This feature becomes active as soon as it is enabled and will never return to being enabled.

embedded_data
        GUID                    com.delphix:embedded_data
        READ-ONLY COMPATIBLE    no
        DEPENDENCIES            none

        This feature improves the performance and compression ratio of highly-compressible blocks. Blocks whose contents can compress to 112 bytes or smaller can take advantage of this feature.

        When this feature is enabled, the contents of highly-compressible blocks are stored in the block "pointer" itself (a misnomer in this case, as it contains the compressed data, rather than a pointer to its location on disk). Thus the space of the block (one sector, typically 512 bytes or 4KB) is saved, and no additional i/o is needed to read and write the data block.

        This feature becomes active as soon as it is enabled and will never return to being enabled.
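
A sketch of the bookmarks feature described above (names are hypothetical); as I understand it, a bookmark can also serve as the incremental source of a send even after the snapshot it was created from has been destroyed:

    zfs snapshot tank/data@monday
    zfs bookmark tank/data@monday tank/data#monday
    zfs list -t bookmark -r tank             # list every bookmark in the pool

    zfs send -i tank/data#monday tank/data@tuesday | zfs receive backup/data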
large_blocks
        GUID                    org.open-zfs:large_block
        READ-ONLY COMPATIBLE    no
        DEPENDENCIES            extensible_dataset

        The large_block feature allows the record size on a dataset to be set larger than 128KB.

        This feature becomes active once a recordsize property has been set larger than 128KB, and will return to being enabled once all filesystems that have ever had their recordsize larger than 128KB are destroyed.

        Please note that booting from datasets that have recordsize greater than 128KB is NOT supported by the FreeBSD boot loader.
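
A sketch of enabling large records (names are hypothetical); per the boot-loader caveat above, avoid this on datasets you boot from:

    zpool set feature@large_blocks=enabled tank
    zfs create -o recordsize=1M tank/media   # any recordsize above 128KB moves the feature to active
    zpool get feature@large_blocks tank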
SEE ALSO
zpool(8)

AUTHORS
This manual page is a mdoc(7) reimplementation of the illumos manual page zpool-features(5), modified and customized for FreeBSD and licensed under the Common Development and Distribution License (CDDL). The mdoc(7) implementation of this manual page was initially written by Martin Matuska <mm@FreeBSD.org>.

BSD							November 10, 2014							BSD