UNIX for Beginners Questions & Answers
Post 303035461 by alphatron150 on Friday 24th of May 2019 05:46:25 PM
Opening up ZFS pool as writable

I have installed FreeBSD onto a raw image file using the QEMU emulator, formatting the image with the ZFS file system (a ZFS pool).
Using the commands below, I have successfully attached the image file and mapped its partitions, ready to be imported by zpool:
Code:
sudo losetup /dev/loop0 [path-to-file].img
sudo kpartx -l /dev/loop0
sudo kpartx -av /dev/loop0
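For reference, a quick way to confirm the mappings exist before importing (my assumption: a typical Linux host where kpartx places the mappings under /dev/mapper):
Code:
ls -l /dev/mapper     # partition mappings, e.g. loop0p1, loop0p2, ...
lsblk /dev/loop0      # partition layout of the loop device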

However, with the next command shown below...
Code:
sudo zpool import -R [MOUNT-PATH] -d /dev/mapper
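(A note on the form of the command: run without a pool name, zpool import -d /dev/mapper only lists the pools it can find; the actual import then names the pool. zroot below is my assumption for the pool name, the FreeBSD installer's default:)
Code:
sudo zpool import -d /dev/mapper                         # list importable pools
sudo zpool import -R [MOUNT-PATH] -d /dev/mapper zroot   # import by name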

Either way, I get the following error message:
Code:
The pool can only be accessed in read-only mode on this system. It
cannot be accessed in read-write mode because it uses the following
feature(s) not supported on this system:
        com.delphix:spacemap_v2 (Space maps representing large segments are more efficient.)
The pool cannot be imported in read-write mode. Import the pool with
"-o readonly=on", access the pool on a system that supports the
required feature(s), or recreate the pool from backup.

I cannot find anything online about the feature called 'spacemap_v2'. How do I install support for it, or how do I mount my ZFS pool as writable? I know I can import it read-only, but that defeats the purpose: I want to be able to write and copy data through the mounted filesystem.
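For reference, the two workarounds the error message offers would look roughly like this. This is only a sketch; the pool name zroot and the device name ada0 are my assumptions (zroot being the FreeBSD installer's default):
Code:
# Workaround 1: read-only import, as the error suggests
# (works, but read-only defeats my purpose):
sudo zpool import -o readonly=on -R [MOUNT-PATH] -d /dev/mapper zroot

# Workaround 2: recreate the pool inside the FreeBSD guest with the
# spacemap_v2 feature disabled, so an older host ZFS can import it
# read-write (costs a reinstall/restore):
zpool create -o feature@spacemap_v2=disabled zroot ada0

The remaining option, if I understand feature flags correctly, would be to upgrade the host's ZFS to a release that supports com.delphix:spacemap_v2 (ZFS on Linux 0.8 or later, I believe) and import read-write there.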
Does anyone know how to achieve this? I shall be grateful for a response.
Regards
 

10 More Discussions You Might Find Interesting

1. AIX

"Backup to pool 'default' waiting for 1 writable tape" when autostart disabled ?

This morning I started receiving an alert saying "Legato Storage Manager media (waiting) backup to pool 'Default' waiting for 1 writable tape", but when I go to check the status of the Legato autostart, it's disabled. So why is it asking for a tape? (1 Reply)
Discussion started by: Browser_ice
1 Reply

2. Solaris

ZFS Pool Mix-up

Hi all I plan to install Solaris 10U6 on some SPARC server using ZFS as root pool, whereas I would like to keep the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root-volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slices for Live... (1 Reply)
Discussion started by: blicki
1 Reply

3. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive
7 Replies

4. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

5. Solaris

ZFS pool question

I created a pool the other day. I created a 10 gig files just for a test, then deleted it. I proceeded to create a few files systems. But for some reason the pool shows 10% full, but the files systems are both at 1%? Both files systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies

6. Solaris

ZFS - list of disks used in a pool

Hi guys, We had created a pool as follows: zpool create filing_pool raidz c1t2d0 c1t3d0 ........ Due to some requirement, we need to destroy the pool and re-create another one. We wish to know now which disks have been included in the filing_pool, how do we list the disks used to create... (2 Replies)
Discussion started by: frum
2 Replies

7. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this. The current pool is 6x36GB disks 202GB capacity and I am migrating to 5x 72GB disks 340GB capacity. (2 Replies)
Discussion started by: jac
2 Replies

8. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies

9. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies

10. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send...receive So I got the following : zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies
AMZFS-SNAPSHOT(8)					  System Administration Commands					 AMZFS-SNAPSHOT(8)

NAME
       amzfs-snapshot - Amanda script to create zfs snapshot

DESCRIPTION
       amzfs-snapshot is an Amanda script implementing the Script API. It should not be run by users directly. It creates a zfs
       snapshot of the filesystem where the specified path is mounted. The PRE-DLE-* hooks create a snapshot and the POST-DLE-*
       hooks destroy it. *-DLE-AMCHECK, *-DLE-ESTIMATE and *-DLE-BACKUP must be set to execute on the client:

           execute-on  pre-dle-amcheck, post-dle-amcheck, pre-dle-estimate, post-dle-estimate, pre-dle-backup, post-dle-backup
           execute-where client

       The PRE-DLE-* script outputs a DIRECTORY property telling where the directory is located in the snapshot. The application
       must be able to use the DIRECTORY property; amgtar can do it.

       The script is run as the amanda user; it must have the privilege to create and destroy snapshots:

           zfs allow -ldu AMANDA_USER mount,snapshot,destroy FILESYSTEM

       Some systems don't have "zfs allow", but you can give the Amanda backup user the rights to manipulate ZFS filesystems by
       using the following command:

           usermod -P "ZFS File System Management,ZFS Storage Management" AMANDA_USER

       This requires running zfs under pfexec; set the PFEXEC property to YES.

       The format of the DLE must be one of:

           Description               Example
           -----------               -------
           Mountpoint                /data
           Arbitrary mounted dir     /data/interesting_dir
           ZFS pool name             datapool
           ZFS filesystem            datapool/database
           ZFS logical volume        datapool/dbvol

       The filesystem must be mounted.

PROPERTIES
       This section lists the properties that control amzfs-snapshot's functionality. See amanda-scripts(7) for information on the
       Script API and script configuration.

       DF-PATH
           Path to the 'df' binary; searched in $PATH by default.

       ZFS-PATH
           Path to the 'zfs' binary; searched in $PATH by default.

       PFEXEC-PATH
           Path to the 'pfexec' binary; searched in $PATH by default.

       PFEXEC
           If "NO" (the default), pfexec is not used; if set to "YES", pfexec is used.

EXAMPLE
       In this example, a dumptype is defined to use the amzfs-snapshot script to create a snapshot and amgtar to back up the
       snapshot.

           define script-tool amzfs_snapshot {
               comment "backup of zfs snapshot"
               plugin "amzfs-snapshot"
               execute-on pre-dle-amcheck, post-dle-amcheck, pre-dle-estimate, post-dle-estimate, pre-dle-backup, post-dle-backup
               execute-where client
               #property "DF-PATH" "/usr/sbin/df"
               #property "ZFS-PATH" "/usr/sbin/zfs"
               #property "PFEXEC-PATH" "/usr/sbin/pfexec"
               #property "PFEXEC" "NO"
           }

           define dumptype user-zfs-amgtar {
               dt_amgtar
               script "amzfs_snapshot"
           }

SEE ALSO
       amanda(8), amanda.conf(5), amanda-client.conf(5), amanda-scripts(7)

       The Amanda Wiki: http://wiki.zmanda.com/

AUTHORS
       Jean-Louis Martineau <martineau@zmanda.com>, Zmanda, Inc. (http://www.zmanda.com)
       Dustin J. Mitchell <dustin@zmanda.com>, Zmanda, Inc. (http://www.zmanda.com)

Amanda 3.3.3                                              01/10/2013                                           AMZFS-SNAPSHOT(8)