Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset

I am trying to test basic ZFS functionality on a FreeBSD 8.2 VM. When I try to run 'zpool create', I receive the following error:

Code:
[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset

[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or dataset

I was not sure what to put for the device, so I tried both da0s1a and da0, due to this output:

Code:
[root@vm-fbsd82-64 /]# egrep 'da[0-9]' /var/run/dmesg.boot
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
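
For reference, a rough checklist of things I could verify next (not run yet; da0 is just the name taken from the dmesg output above):

Code:
# list the device nodes the kernel actually created for the disk
ls /dev/da0*
# show the slice/partition layout, to see whether UFS already owns the whole disk
gpart show da0
# confirm which partitions are currently mounted
mount | grep da0
# make sure the ZFS kernel module is actually loaded
kldstat | grep zfs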

Just wondering if I am missing a step or doing something wrong.
edit: I'm starting to wonder if it's because I only have one 'disk' in this VM, and the entire disk has already been formatted as UFS, so I can't create a zpool on it. Do I need to add another disk in VMware?
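
If the single UFS disk really is the problem, one way I might be able to test ZFS without adding a disk in VMware is a file-backed md(4) device. This is only a sketch; the file path, size, and pool name are made up, and mdconfig prints the actual md unit to use:

Code:
# create a small backing file for a throwaway test pool (path and size are arbitrary)
truncate -s 256m /var/tmp/zfstest.img
# attach it as a memory disk; this prints the device name, e.g. md0
mdconfig -a -t vnode -f /var/tmp/zfstest.img
# create the test pool on the md device reported above
zpool create testpool /dev/md0
zpool status testpool
# clean up afterwards
zpool destroy testpool
mdconfig -d -u 0
rm /var/tmp/zfstest.img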



Also, does anyone know which filesystems are natively supported in FreeBSD 6.x and 8.x? I believe 6.x supports UFS and 8.x supports both UFS and ZFS, but I am not positive.
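
In case it helps anyone answer that, I assume the filesystems known to the running kernel can be listed with lsvfs (though that only shows what is currently available, not everything the release supports):

Code:
# list VFS types known to the running kernel (should include ufs; zfs only if loaded)
lsvfs
# check whether the zfs module is loaded
kldstat | grep zfs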

Thank you for any help.

