Unable to create ZFS zpool in FreeBSD 8.2: no such pool or dataset
Posted by bstring, 09-13-2012 01:21 PM

I am trying to test basic ZFS functionality on a FreeBSD 8.2 VM. When I run 'zpool create', I get the following error:

Code:
[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset

[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or dataset

I was not sure which device to specify, so I tried both da0s1a and da0, based on this output from dmesg.boot:

Code:
[root@vm-fbsd82-64 /]# egrep 'da[0-9]' /var/run/dmesg.boot
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a

Just wondering if I am missing a step or doing something wrong.
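One thing I still need to rule out is whether the ZFS kernel module is even loaded; I'm only guessing that this could cause the error. This is the check I plan to run (the loader.conf/rc.conf lines are what I understand is needed on 8.x, not something I've verified on this VM yet):

Code:
# Check whether ZFS support is loaded in the kernel
kldstat | grep zfs

# If nothing shows up, load it by hand for this session
kldload zfs

# And make it persistent across reboots
echo 'zfs_load="YES"' >> /boot/loader.conf
echo 'zfs_enable="YES"' >> /etc/rc.conf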
Edit: I'm starting to wonder if the problem is that this VM only has one disk, and the entire disk is already formatted as UFS (it holds the root filesystem), so I can't create a zpool on it. Do I need to add another disk in VMware?
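If it does turn out that da0 is entirely taken up by the UFS root, I imagine the fix is to add a second virtual disk in VMware and build the pool on that. A rough sketch of what I'd try (da1 is just my guess at the device name the new disk would get; I haven't tested this yet):

Code:
# See how da0 is currently partitioned (to confirm UFS owns the whole disk)
gpart show da0

# After adding a second blank disk in the VM settings and rebooting,
# check that it was detected (assuming it shows up as da1)
camcontrol devlist

# Create the pool on the whole new disk and verify it
zpool create zfspool /dev/da1
zpool status zfspool
df -h /zfspool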



Also, does anyone know which filesystems are natively supported in FreeBSD 6.x and 8.x? I believe 6.x supports only UFS, while 8.x supports both UFS and ZFS, but I am not positive.
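On that second question, I think the list of filesystem types the running kernel actually knows about can be dumped directly, which would settle it for my particular installs (just a sketch; I haven't run this on the 6.x box yet):

Code:
# List the VFS types the running kernel supports (UFS, ZFS, NFS, ...)
lsvfs

# Or check which filesystem modules are currently present
kldstat -v | grep -i -E 'zfs|ufs'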

Thank you for any help

Last edited by bstring; 09-13-2012 at 03:27 PM.