Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset

I am trying to test basic ZFS functionality on a FreeBSD 8.2 VM. When I run 'zpool create', I receive the following error:

Code:
[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0s1a
cannot create 'zfspool': no such pool or dataset

[root@vm-fbsd82-64 /]# zpool create zfspool /dev/da0
cannot create 'zfspool': no such pool or dataset
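
One thing I have not ruled out yet is whether the ZFS kernel module is even loaded on this VM (it is not compiled into my kernel as far as I know). If I understand it right, something along these lines should check for it and enable it, though I have not verified the exact sequence on 8.2:

Code:
# check whether zfs.ko is currently loaded
kldstat | grep zfs
# load it for this boot if it is not
kldload zfs
# have ZFS start on every boot from now on
echo 'zfs_enable="YES"' >> /etc/rc.conf
/etc/rc.d/zfs start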

I was not sure what to give as the device, so I tried both da0s1a and da0, based on this output (there is also a gpart check sketched just after it):

Code:
[root@vm-fbsd82-64 /]# egrep 'da[0-9]' /var/run/dmesg.boot
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 204800MB (419430400 512 byte sectors: 255H 63S/T 26108C)
Trying to mount root from ufs:/dev/da0s1a
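
Since I am unsure about the device name, I believe gpart should show how da0 is actually sliced up (I think gpart is available on 8.2, but I have not double-checked):

Code:
# show the MBR slices on the disk
gpart show da0
# show the BSD labels inside the first slice
gpart show da0s1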

Just wondering if I am missing a step or doing something wrong.
edit: I'm starting to wonder if it's because I only have one 'disk' in this VM; perhaps the entire disk is formatted as UFS, so I can't create a zpool on it? Do I need to add another disk in VMware?
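
If the single UFS-formatted disk turns out to be the problem, I suppose I could also test ZFS on a file-backed vdev instead of adding a second virtual disk. Something like this is what I have in mind (the file path is just an example, and I understand file vdevs are only meant for testing):

Code:
# create a 1 GB sparse file to act as a test vdev
truncate -s 1g /var/tmp/zfstest0
# build a throwaway pool on it (file vdevs need an absolute path)
zpool create testpool /var/tmp/zfstest0
zpool status testpool
# clean up afterwards
zpool destroy testpool
rm /var/tmp/zfstest0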



Also, does anyone know which filesystems are natively supported in FreeBSD 6.x and 8.x? I believe 6.x supports UFS and 8.x supports both UFS and ZFS, but I am not positive.
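
(For what it's worth, I think lsvfs(1) lists the filesystem types the running kernel actually knows about, so that might answer part of this, though I have not checked whether it behaves the same on 6.x:)

Code:
# list the filesystem types the running kernel currently supports
lsvfs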

Thank you for any help.

Last edited by bstring; 09-13-2012 at 03:27 PM..
 
