Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset
Posted by bstring on Thursday, September 13, 2012, 01:21 PM

I am trying to test basic ZFS functionality on a FreeBSD 8.2 VM. When I run 'zpool create', I receive the following error:

Code:
[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset

[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or dataset
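
For reference, the first thing I plan to rule out is whether the ZFS kernel module is even loaded; my understanding (from the Handbook, so treat this as an assumption rather than a verified fix) is that a stock 8.2 install does not load it by default:

Code:
# check whether the ZFS kernel module is loaded
kldstat | grep zfs
# if it is missing, load it for the current boot
kldload zfs
# and enable it at boot so datasets get mounted automatically
echo 'zfs_enable="YES"' >> /etc/rc.conf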

I was not sure which device to specify, so I tried both da0s1a and da0, based on this output:

Code:
[root@vm-fbsd82-64 /]# egrep 'da[0-9]' /var/run/dmesg.boot
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a

Am I missing a step or doing something wrong?

edit: I'm starting to wonder if it's because I only have one 'disk' in this VM; perhaps the entire disk has been formatted as UFS, so I can't create a zpool on it? Do I need to add another disk in VMware?
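
In case it helps, this is how I intend to check how much of da0 is already spoken for and, as a fallback, to test ZFS on a file-backed memory disk instead of a second VMware disk (the image path, md unit, and da1 name below are just examples, not verified on this box):

Code:
# see how da0 is sliced/partitioned (is the whole disk already UFS?)
gpart show da0
mount

# fallback for testing: a file-backed vdev instead of a second virtual disk
truncate -s 1g /tmp/zfs-test.img
mdconfig -a -t vnode -f /tmp/zfs-test.img -u 0
zpool create testpool /dev/md0
zpool status testpool

# or, if I do attach a second disk in VMware, it should show up as da1:
# zpool create zfspool /dev/da1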



Also, does anyone know which filesystems are natively supported in FreeBSD 6.x and 8.x? I believe 6.x supports UFS and 8.x supports UFS and ZFS, but I am not positive.
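
Rather than guessing, it looks like lsvfs(1) will list whatever filesystems the running kernel actually supports, so I plan to run it on both the 6.x and 8.x boxes (the module names in the grep are just the ones I expect to see):

Code:
# filesystems registered with the running kernel
lsvfs
# loadable filesystem modules shipped alongside the kernel (UFS itself is built in)
ls /boot/kernel | grep -E '^(zfs|msdosfs|ext2fs|ntfs)'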

Thank you for any help

Last edited by bstring; 09-13-2012 at 03:27 PM.
 
