Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset
I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error:
I was not sure what to put for the device, so I tried both da0s1a and da0, due to this output:
Just wondering if I am missing a step or doing something wrong.
edit: I'm starting to wonder if it's because I only have one 'disk' in this VM; perhaps the entire disk has been formatted as UFS, so I can't create a zpool on it? Do I need to add another disk in VMware?
Also, does anyone know which filesystems are natively supported in FreeBSD 6.x and 8.x? I believe 6.x supports UFS and 8.x supports UFS and ZFS, but I am not positive.
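For what it's worth, if the only disk is already fully used by UFS, one way to experiment without adding a second VMware disk is a file-backed memory disk. A hedged sketch for FreeBSD 8.x (the file path, size, and md unit number are assumptions; requires root):

```shell
# Create a 1 GB backing file and attach it as a vnode-backed memory disk
truncate -s 1g /var/tmp/zfs-test.img
mdconfig -a -t vnode -f /var/tmp/zfs-test.img -u 1

# Load the ZFS module (if not already loaded) and create a test pool
kldload zfs 2>/dev/null || true
zpool create testpool /dev/md1
zpool status testpool
```

Note that da0s1a already holds the UFS root, so `zpool create` on it (or on da0, which contains that slice) will refuse or require `-f`; a spare whole disk such as da1 would be the cleaner option.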
Hi all
I plan to install Solaris 10U6 on a SPARC server using ZFS as the root pool, while keeping the current layout done by VxVM:
- 2 internal disks: c0t0d0 and c0t1d0
- bootable root-volume (mirrored, both disks)
- 1 non-mirrored swap slice
- 1 non-mirrored slice for Live... (1 Reply)
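A rough sketch of the ZFS-root equivalent of that layout (disk names taken from the post; slice numbers, sizes, and the swap zvol are assumptions — on Solaris 10 the root pool is normally created by the installer or Live Upgrade, and must sit on SMI-labeled slices):

```shell
# Mirrored root pool across both internal disks (slice 0 assumed)
zpool create rpool mirror c0t0d0s0 c0t1d0s0

# Non-mirrored swap as a zvol instead of a raw slice (4 GB assumed)
zfs create -V 4g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap
```

Unlike VxVM, swap on a ZFS root is usually a zvol inside the pool rather than a dedicated slice, so the single-slice parts of the old layout fold into the pool itself.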
# zpool import
pool: emcpool1
id: 5596268873059055768
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: Sun Message ID: ZFS-8000-3C
config:
emcpool1 ... (7 Replies)
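Once the missing EMC devices are visible to the host again, pointing the import at the right device directory is usually enough. A hedged sketch (the directory and the need for `-f` are assumptions):

```shell
# Rescan for pool labels in the device directory, then import by name
zpool import -d /dev/dsk emcpool1

# If the pool was last used on another host, a forced import may be needed
zpool import -f emcpool1
```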
I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this. The current pool is 6x36GB disks 202GB capacity and I am migrating to 5x 72GB disks 340GB capacity. (2 Replies)
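If the installed ZFS version supports replication streams, a recursive snapshot plus `zfs send -R` carries datasets, mount points, and properties across in one pass. A sketch under those assumptions (pool names, device names, and the snapshot name are placeholders):

```shell
# Build the new raidz pool on the larger disks (device names hypothetical)
zpool create newpool raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Take a recursive snapshot of everything in the old pool
zfs snapshot -r oldpool@migrate

# -R replicates all descendant datasets together with their properties;
# -d on the receive side recreates the dataset hierarchy under newpool
zfs send -R oldpool@migrate | zfs receive -F -d newpool
```

On older releases without `-R`, each dataset would have to be sent individually and its properties re-set by hand on the receiving side.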
Other than export/import, is there a cleaner way to rename a pool without unmounting the FS?
Something like, say "zpool rename a b"?
Thanks. (2 Replies)
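As far as I know there is no `zpool rename`; export/import is the supported route, and the import step is where the new name is given, so the unmount during export cannot be avoided:

```shell
zpool export a
zpool import a b   # re-imports pool "a" under the new name "b"
```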
Hi All,
I want to write a script to create flar images on multiple servers. On non-ZFS filesystems I am using the -X option to point to a file listing mounts to exclude on different servers.
But on ZFS the -X option is not working. I want multiple mounts to be ignored on a ZFS-based system during flarcreate.
I... (0 Replies)
I installed Solaris 11 Express on my server machine a while ago. I created a RAID-Z2 pool over five HDDs and created a few ZFS filesystems on it.
Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
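Deletes on a completely full pool can fail because copy-on-write needs a little free space even to remove a file. A commonly suggested workaround is to free blocks by truncating a large file in place first (the file and snapshot names below are placeholders):

```shell
# Truncating releases the file's blocks without allocating new ones
cp /dev/null /tank/data/bigfile
rm /tank/data/bigfile

# Destroying an old snapshot (or lowering a reservation) also frees space
zfs destroy tank/data@oldsnap
```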
I messed up my pool by doing zfs send ... receive, so I got the following:
zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 928G 17.3G 911G 1% 1.00x ONLINE -
tank1 928G 35.8G 892G 3% 1.00x ONLINE -
So I have "tank1" pool.
zfs get all... (8 Replies)
I have a single zpool with three 2-way mirrors (3 × 2-way vdevs); it has a degraded disk in mirror-2. I know it can survive a single drive failure, but looking at this, how many drive failures can it suffer before it is no good? On the face of it, I thought that I could lose a further 2 drives in each... (4 Replies)
On an OmniOS server, I removed a single-disk pool I was using for testing.
Now, when I run zpool import it shows the pool as FAULTED, since that single disk is not available anymore.
# zpool import
pool: fido
id: 7452075738474086658
state: FAULTED
status: The pool was last... (11 Replies)
Discussion started by: priyadarshan
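The FAULTED entry comes from `zpool import` scanning device labels, not from any cached state, so the fix is to clear the stale label. On illumos/OmniOS this can be done with `zpool labelclear` if the old device node still exists (the device path here is an assumption):

```shell
# Wipe the stale ZFS label so "zpool import" stops listing the dead pool
zpool labelclear -f /dev/rdsk/c2t1d0s0
```

If the disk itself is gone for good, the entry disappears on its own once no attached device carries that pool's label.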
LEARN ABOUT FREEBSD
gconcat
GCONCAT(8) BSD System Manager's Manual GCONCAT(8)
NAME
gconcat -- disk concatenation control utility
SYNOPSIS
gconcat create [-v] name prov ...
gconcat destroy [-fv] name ...
gconcat label [-hv] name prov ...
gconcat stop [-fv] name ...
gconcat clear [-v] prov ...
gconcat dump prov ...
gconcat list
gconcat status
gconcat load
gconcat unload
DESCRIPTION
The gconcat utility is used for device concatenation configuration. The concatenation can be configured using two different methods:
``manual'' or ``automatic''. When using the ``manual'' method, no metadata are stored on the devices, so the concatenated device has to be
configured by hand every time it is needed. The ``automatic'' method uses on-disk metadata to detect devices. Once devices are labeled,
they will be automatically detected and configured.
The first argument to gconcat indicates an action to be performed:
create Concatenate the given devices with specified name. This is the ``manual'' method. The kernel module geom_concat.ko will be loaded
if it is not loaded already.
label Concatenate the given devices with the specified name. This is the ``automatic'' method, where metadata are stored in every
device's last sector. The kernel module geom_concat.ko will be loaded if it is not loaded already.
stop Turn off an existing concatenated device by its name. This command does not touch on-disk metadata!
destroy Same as stop.
clear Clear metadata on the given devices.
dump Dump metadata stored on the given devices.
list See geom(8).
status See geom(8).
load See geom(8).
unload See geom(8).
Additional options:
-f Force the removal of the specified concatenated device.
-h Hardcode providers' names in metadata.
-v Be more verbose.
SYSCTL VARIABLES
The following sysctl(8) variables can be used to control the behavior of the CONCAT GEOM class. The default value is shown next to each
variable.
kern.geom.concat.debug: 0
Debug level of the CONCAT GEOM class. This can be set to a number between 0 and 3 inclusive. If set to 0 minimal debug information
is printed, and if set to 3 the maximum amount of debug information is printed.
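For example, to raise the debug level at runtime and keep it across reboots (assuming the usual sysctl.conf mechanism applies to this tunable):

```shell
sysctl kern.geom.concat.debug=3                        # runtime
echo 'kern.geom.concat.debug=3' >> /etc/sysctl.conf    # persist across reboots
```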
EXIT STATUS
Exit status is 0 on success, and 1 if the command fails.
EXAMPLES
The following example shows how to configure four disks for automatic concatenation, create a file system on it, and mount it:
gconcat label -v data /dev/da0 /dev/da1 /dev/da2 /dev/da3
newfs /dev/concat/data
mount /dev/concat/data /mnt
[...]
umount /mnt
gconcat stop data
gconcat unload
Configure concatenated provider on one disk only. Create file system. Add two more disks and extend existing file system.
gconcat label data /dev/da0
newfs /dev/concat/data
gconcat label data /dev/da0 /dev/da1 /dev/da2
growfs /dev/concat/data
SEE ALSO
geom(4), loader.conf(5), geom(8), growfs(8), gvinum(8), mount(8), newfs(8), sysctl(8), umount(8)
HISTORY
The gconcat utility appeared in FreeBSD 5.3.
AUTHORS
Pawel Jakub Dawidek <pjd@FreeBSD.org>
BSD May 21, 2004 BSD