Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset
Post 302700615 by bstring, Operating Systems > BSD, 09-13-2012
I found my problem: I only had one disk, and it was entirely allocated to my existing UFS filesystem, so there was no free device for ZFS to use. Once I added a second disk, I was able to specify it as the pool's device and successfully create a zpool and a ZFS filesystem:

Code:
[root@vm-fbsd82-64 ~]# egrep 'da[0-9]' /var/run/dmesg.boot
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device

da1 at mpt0 bus 0 scbus0 target 1 lun 0
da1: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
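
If you only have one disk, a couple of standard FreeBSD commands show whether it is really fully spoken for before you go adding hardware. A rough sketch of checks I could have run first (the da0 device name is from my VM and will differ elsewhere):

Code:
# List every CAM/SCSI device the kernel sees (an alternative to grepping dmesg.boot)
[root@vm-fbsd82-64 ~]# camcontrol devlist

# Show the slice/partition layout of da0; if the UFS slice covers the whole
# disk, there is nothing left over for a zpool
[root@vm-fbsd82-64 ~]# gpart show da0

# Confirm which device the existing UFS filesystems are mounted from
[root@vm-fbsd82-64 ~]# mount -t ufs

With the second disk (da1) attached and unused, creating the pool and a filesystem on it worked: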

Code:
[root@vm-fbsd82-64 ~]# zpool create zfspool /dev/da1
[root@vm-fbsd82-64 ~]# zfs create zfspool/test-zfs
[root@vm-fbsd82-64 ~]# df
Filesystem                         1K-blocks        Used     Avail Capacity  Mounted on
zfspool                              10257328         21  10257307     0%    /zfspool
zfspool/test-zfs                     10257328         21  10257307     0%    /zfspool/test-zfs
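
Note that both datasets report the same Avail figure: ZFS filesystems draw on the pool's shared free space rather than each getting a fixed size. If you want the new dataset mounted somewhere other than under /zfspool, or want to cap its growth, its properties can be changed with zfs set. A quick sketch; the /data/test path and the 2G quota are made-up example values:

Code:
# Move the dataset's mountpoint; ZFS remounts it there automatically
[root@vm-fbsd82-64 ~]# zfs set mountpoint=/data/test zfspool/test-zfs

# Optionally limit how much pool space this dataset may consume
[root@vm-fbsd82-64 ~]# zfs set quota=2G zfspool/test-zfs

# Verify the properties took effect
[root@vm-fbsd82-64 ~]# zfs get mountpoint,quota zfspool/test-zfs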


Quote:
Originally Posted by DukeNuke2

Thank you for the link.
