Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset
Post 302700615 by bstring on 09-13-2012, 06:46 PM
I found my problem: I only had one disk, and it seems it was entirely allocated to my existing UFS filesystem. Once I added a new disk, I could specify it as the device and successfully create a zpool and a ZFS filesystem on it:

Code:
[root@vm-fbsd82-64 ~]# egrep 'da[0-9]' /var/run/dmesg.boot
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device

da1 at mpt0 bus 0 scbus0 target 1 lun 0
da1: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device

Code:
[root@vm-fbsd82-64 ~]# zpool create zfspool /dev/da1
[root@vm-fbsd82-64 ~]# zfs create zfspool/test-zfs
[root@vm-fbsd82-64 ~]# df
Filesystem                         1K-blocks        Used     Avail Capacity  Mounted on
zfspool                              10257328         21  10257307     0%    /zfspool
zfspool/test-zfs                     10257328         21  10257307     0%    /zfspool/test-zfs
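The disk-discovery step above can be scripted. A minimal sketch, using the same `da[0-9]` pattern as the egrep command in the post; the sample text below stands in for /var/run/dmesg.boot so the script is self-contained, and the cut/sed/sort steps are my own addition to reduce the matches to unique device names:

```shell
#!/bin/sh
# Sample excerpt in the format of /var/run/dmesg.boot (taken from the
# transcript above); on a real system you would read the file itself.
sample='da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da1 at mpt0 bus 0 scbus0 target 1 lun 0
da1: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device'

# Match the daN probe lines, keep the first field, strip the trailing
# colon from "da0:"-style lines, and deduplicate.
printf '%s\n' "$sample" | grep -E 'da[0-9]' | cut -d' ' -f1 | sed 's/:$//' | sort -u
```

On the FreeBSD box itself you would feed /var/run/dmesg.boot through the same pipeline, then pass an unused device (here da1) to `zpool create`.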


Quote:
Originally Posted by DukeNuke2
Thank you for the link.
 

ZFS-FUSE(8)                                                        ZFS-FUSE(8)

NAME
       zfs-fuse - ZFS filesystem daemon

SYNOPSIS
       zfs-fuse [--pidfile filename] [--no-daemon] [--no-kstat-mount]
       [--disable-block-cache] [--disable-page-cache]
       [--fuse-attr-timeout SECONDS] [--fuse-entry-timeout SECONDS]
       [--log-uberblocks] [--max-arc-size MB]
       [--fuse-mount-options OPT,OPT,OPT...] [--min-uberblock-txg MIN]
       [--stack-size=size] [--enable-xattr] [--help]

DESCRIPTION
       This manual page documents briefly the zfs-fuse command. zfs-fuse is
       a daemon which provides support for the ZFS filesystem via FUSE.
       Ordinarily this daemon is invoked from system boot scripts.

OPTIONS
       This program follows the usual GNU command-line syntax, with long
       options starting with two dashes (`--'). A summary of options is
       included below. For a complete description, see the Info files.

       -h, --help
              Show summary of options.

       -p filename, --pidfile filename
              Write the daemon's PID to filename after daemonizing. Ignored
              if --no-daemon is passed. filename should be a fully qualified
              path.

       -n, --no-daemon
              Stay in the foreground; do not daemonize.

       --no-kstat-mount
              Do not mount kstats in /zfs-kstat.

       --disable-block-cache
              Enable direct I/O for disk operations. Completely disables
              caching reads and writes in the kernel block cache. Also
              breaks mmap() in ZFS datasets.

       --disable-page-cache
              Disable the page cache for files residing within ZFS
              filesystems. Not recommended, as it slows down I/O operations
              considerably.

       -a SECONDS, --fuse-attr-timeout SECONDS
              Sets the timeout for caching FUSE attributes in the kernel.
              Defaults to 0.0. Higher values give a 40% performance boost.

       -e SECONDS, --fuse-entry-timeout SECONDS
              Sets the timeout for caching FUSE entries in the kernel.
              Defaults to 0.0. Higher values give a 10000% performance boost
              but cause file permission checking security issues.

       --log-uberblocks
              Logs uberblocks of any mounted filesystem to syslog.

       -m MB, --max-arc-size MB
              Forces the maximum ARC size (in megabytes). Range: 16 to
              16384.

       -o OPT,OPT,OPT..., --fuse-mount-options OPT,OPT,OPT...
              Sets FUSE mount options for all filesystems. Format: a
              comma-separated string of options.

       -u MIN, --min-uberblock-txg MIN
              Skips uberblocks with a TXG < MIN when mounting any
              filesystem.

       -v MB, --vdev-cache-size MB
              Adjusts the size of the vdev cache. Default: 10.

       --zfs-prefetch-disable
              Disables the high-level prefetch cache in ZFS. This cache can
              use up to 150 MB of RAM, possibly more.

       --stack-size=size
              Limits the stack size of threads (in KB). Default: no limit
              (8 MB on Linux).

       -x, --enable-xattr
              Enables support for extended attributes. Not generally
              recommended, because it currently carries a significant
              performance penalty for many small IOPS.

REMARKS ON PRECEDENCE
       Note that parameters passed on the command line take precedence over
       those supplied through /etc/zfs/zfsrc.

BUGS/CAVEATS
       The path to the configuration file (/etc/zfs/zfsrc) cannot currently
       be configured. Most existing packages suggest that settings can be
       set at the top of their init script. These are frequently overridden
       by a (distribution-specific) /etc/default/zfs-fuse file, if it
       exists. Be sure to look at these places if you want your changes to
       the options to take effect. /etc/zfs/zfsrc is going to be the
       recommended approach in the future, so packagers should refrain from
       passing command-line parameters within the init script (except for
       --pidfile).

SEE ALSO
       zfs(8), zpool(8), zdb(8), zstreamdump(8), /etc/zfs/zfsrc

AUTHOR
       This manual page was written by Bryan Donlan <bdonlan@gmail.com> for
       the Debian(TM) system (but may be used by others). Permission is
       granted to copy, distribute and/or modify this document under the
       terms of the GNU General Public License, Version 2 or any later
       version published by the Free Software Foundation, or the Common
       Development and Distribution License. Revised by Seth Heeren
       <zfs-fuse@sehe.nl>.

       On Debian systems, the complete text of the GNU General Public
       License can be found in /usr/share/common-licenses/GPL. The text of
       the Common Development and Distribution License may be found at
       /usr/share/doc/zfs-fuse/copyright.

COPYRIGHT
       Copyright (C) 2010 Bryan Donlan

                                  2010-06-09                      ZFS-FUSE(8)
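The precedence remark in the man page implies that per-option defaults can live in /etc/zfs/zfsrc. A hypothetical fragment is sketched below; the key names are an assumption (mirroring the long option names), so check the zfsrc file shipped by your package for the exact spelling:

```
# /etc/zfs/zfsrc -- hypothetical example; key names assumed to mirror
# the long command-line options documented above.
max-arc-size = 512        # cap the ARC at 512 MB
vdev-cache-size = 10      # vdev cache size in MB (the documented default)
fuse-attr-timeout = 1.0   # cache FUSE attributes for one second
```

Per the precedence remark, any flag passed on the command line (for example `zfs-fuse -n --max-arc-size 512` for a foreground test run) overrides the corresponding setting in this file.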
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.