Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset
Post 302704483 by AnbuBlack on Friday 21st of September 2012 05:24:05 PM
Quote:
Originally Posted by bstring
Also, does anyone know what filesystems are natively supported in FBSD 6.x and 8.x? I believe 6.x supports ufs and 8.x supports ufs and zfs, but I am not positive.
ZFS is a combined file system and logical volume manager originally designed by Sun Microsystems. It was ported to FreeBSD and has been part of the operating system since FreeBSD 7.0, so FreeBSD 8.x natively supports both UFS and ZFS; FreeBSD 6.x supports UFS only.

See the FreeBSD Handbook chapter "The Z File System" for setup details.
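If you want to confirm ZFS is usable on your 8.2 box, a minimal sketch looks roughly like this (ada1 is only a placeholder for a spare disk; substitute your own device):

    kldload zfs                                # load the ZFS kernel module for the current session
    echo 'zfs_enable="YES"' >> /etc/rc.conf    # start ZFS support automatically at boot
    zpool create tank ada1                     # create a simple single-disk pool named tank
    zpool status tank                          # verify the new pool shows ONLINE
    zfs create tank/data                       # create a dataset inside the pool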
 

10 More Discussions You Might Find Interesting

1. Solaris

ZFS Pool Mix-up

Hi all I plan to install Solaris 10U6 on some SPARC server using ZFS as root pool, whereas I would like to keep the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root-volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slices for Live... (1 Reply)
Discussion started by: blicki
1 Reply

2. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive
7 Replies

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies

4. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this. The current pool is 6x36GB disks 202GB capacity and I am migrating to 5x 72GB disks 340GB capacity. (2 Replies)
Discussion started by: jac
2 Replies

5. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting de FS? Something like, say "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies

6. Solaris

flarecreate for zfs root dataset and ignore multiple dataset

Hi All, I want to write a script to create flar images on multiple servers. In non zfs filesystem I am using -X option to refer a file to exclude mounts on different servers. but on ZFS -X option is not working. I want multiple mounts to be ignore on ZFS base system during flarecreate. I... (0 Replies)
Discussion started by: uxravi
0 Replies

7. Solaris

ZFS - overfilled pool

installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies

8. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send...recive So I got the following : zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs
8 Replies

9. Solaris

Zpool with 3 2-way mirrors in a pool

I have a single zpool with 3 2-way mirrors ( 3 x 2 way vdevs) it has a degraded disk in mirror-2, I know I can suffer a single drive failure, but looking at this how many drive failures can this suffer before it is no good? On the face of it, I thought that I could lose a further 2 drives in each... (4 Replies)
Discussion started by: fishface
4 Replies

10. Solaris

How to clear a removed single-disk pool from being listed by zpool import?

On an OmniOS server, I removed a single-disk pool I was using for testing. Now, when I run zpool import it will show it as FAULTED, since that single disk not available anymore. # zpool import pool: fido id: 7452075738474086658 state: FAULTED status: The pool was last... (11 Replies)
Discussion started by: priyadarshan
11 Replies
System Administration Commands			     beadm(1M)

NAME
     beadm - utility for managing zfs boot environments

SYNOPSIS
     /usr/sbin/beadm
     beadm create [-a] [-d description] [-e non-activeBeName | beName@snapshot] [-o property=value] ... [-p zpool] beName
     beadm create beName@snapshot
     beadm destroy [-fF] beName | beName@snapshot
     beadm list [-a | -ds] [-H] [beName]
     beadm mount beName mountpoint
     beadm unmount [-f] beName
     beadm rename beName newBeName
     beadm activate beName

DESCRIPTION
     The beadm command is the user interface for managing zfs Boot Environments (BEs). This utility is intended to be used by System Administrators who want to manage multiple Solaris Instances on a single system. The beadm command will support the following operations:

     - Create a new BE, based on the active BE.
     - Create a new BE, based on an inactive BE.
     - Create a snapshot of an existing BE.
     - Create a new BE, based on an existing snapshot.
     - Create a new BE, and copy it to a different zpool.
     - Activate an existing, inactive BE.
     - Mount a BE.
     - Unmount a BE.
     - Destroy a BE.
     - Destroy a snapshot of a BE.
     - Rename an existing, inactive BE.
     - Display information about your snapshots and datasets.

SUBCOMMANDS
     The beadm command has the subcommands and options listed below. Also see EXAMPLES below.

     beadm
         Displays command usage.

     beadm create [-a] [-d description] [-e non-activeBeName | beName@snapshot] [-o property=value] ... [-p zpool] beName
         Creates a new boot environment named beName. If the -e option is not provided, the new boot environment will be created as a clone of the currently running boot environment. If the -d option is provided, then the description is also used as the title for the BE's entry in the GRUB menu for x86 systems or in the boot menu for SPARC systems. If the -d option is not provided, beName will be used as the title.

         -a                   Activate the newly created BE upon creation. The default is to not activate the newly created BE.
         -d description       Create a new BE with a description associated with it.
         -e non-activeBeName  Create a new BE from an existing inactive BE.
         -e beName@snapshot   Create a new BE from an existing snapshot of the BE named beName.
         -o property=value    Create the datasets for the new BE with specific ZFS properties. Multiple -o options can be specified. See zfs(1M) for more information on the -o option.
         -p zpool             Create the new BE in the specified zpool. If this is not provided, the default behavior is to create the new BE in the same pool as the origin BE.

     beadm create beName@snapshot
         Creates a snapshot of the existing BE named beName.

     beadm destroy [-fF] beName | beName@snapshot
         Destroys the boot environment named beName or destroys an existing snapshot of the boot environment named beName@snapshot. Destroying a boot environment will also destroy all snapshots of that boot environment. Use this command with caution.

         -f  Forcefully unmount the boot environment if it is currently mounted.
         -F  Force the action without prompting to verify the destruction of the boot environment.

     beadm list [-a | -ds] [-H] [beName]
         Lists information about the existing boot environment named beName, or lists information for all boot environments if beName is not provided. The 'Active' field indicates whether the boot environment is active now, represented by 'N'; active on reboot, represented by 'R'; or both, represented by 'NR'. Each line in the machine-parsable output has the boot environment name as the first field. The 'Space' field is displayed in bytes and the 'Created' field is displayed in UTC format. The -H option used with no other options gives the boot environment's uuid in the second field. This field will be blank if the boot environment does not have a uuid. See the EXAMPLES section.

         -a  Lists all available information about the boot environment. This includes subordinate file systems and snapshots.
         -d  Lists information about all subordinate file systems belonging to the boot environment.
         -s  Lists information about the snapshots of the boot environment.
         -H  Do not list header information. Each field in the list information is separated by a semicolon.

     beadm mount beName mountpoint
         Mounts a boot environment named beName at mountpoint. mountpoint must be an already existing empty directory.

     beadm unmount [-f] beName
         Unmounts the boot environment named beName.

         -f  Forcefully unmount the boot environment even if it is currently busy.

     beadm rename beName newBeName
         Renames the boot environment named beName to newBeName.

     beadm activate beName
         Makes beName the active BE on next reboot.

EXAMPLES
     Example 1: Create a new BE named BE1, by cloning the current live BE.

         # beadm create BE1

     Example 2: Create a new BE named BE2, by cloning the existing inactive BE named BE1.

         # beadm create -e BE1 BE2

     Example 3: Create a snapshot named now of the existing BE named BE1.

         # beadm create BE1@now

     Example 4: Create a new BE named BE3, by cloning an existing snapshot of BE1.

         # beadm create -e BE1@now BE3

     Example 5: Create a new BE named BE4 based on the currently running BE. Create the new BE in rpool2.

         # beadm create -p rpool2 BE4

     Example 6: Create a new BE named BE5 based on the currently running BE. Create the new BE in rpool2, and create its datasets with compression turned on.

         # beadm create -p rpool2 -o compression=on BE5

     Example 7: Create a new BE named BE6 based on the currently running BE and provide a description for it.

         # beadm create -d "BE6 used as test environment" BE6

     Example 8: Activate an existing, inactive BE named BE3.

         # beadm activate BE3

     Example 9: Mount the BE named BE3 at /mnt.

         # beadm mount BE3 /mnt

     Example 10: Unmount the mounted BE named BE3.

         # beadm unmount BE3

     Example 11: Destroy the BE named BE3 without verification.

         # beadm destroy -f BE3

     Example 12: Destroy the snapshot named now of BE1.

         # beadm destroy BE1@now

     Example 13: Rename the existing, inactive BE named BE1 to BE3.

         # beadm rename BE1 BE3

     Example 14: List all existing boot environments.

         # beadm list
         BE    Active   Mountpoint   Space    Policy   Created
         --    ------   ----------   -----    ------   -------
         BE2   -        -            72.0K    static   2008-05-21 12:26
         BE3   -        -            332.0K   static   2008-08-26 10:28
         BE4   -        -            15.78M   static   2008-09-05 18:20
         BE5   NR       /            7.25G    static   2008-09-09 16:53

     Example 15: List all existing boot environments and list all dataset and snapshot information about those boot environments.

         # beadm list -d -s
         BE/Dataset/Snapshot      Active   Mountpoint   Space     Policy   Created
         -------------------      ------   ----------   -----     ------   -------
         BE2
            p/ROOT/BE2            -        -            36.0K     static   2008-05-21 12:26
            p/ROOT/BE2/opt        -        -            18.0K     static   2008-05-21 16:26
            p/ROOT/BE2/opt@now    -        -            0         static   2008-09-08 22:43
            p/ROOT/BE2@now        -        -            0         static   2008-09-08 22:43
         BE3
            p/ROOT/BE3            -        -            192.0K    static   2008-08-26 10:28
            p/ROOT/BE3/opt        -        -            86.0K     static   2008-08-26 10:28
            p/ROOT/BE3/opt/local  -        -            36.0K     static   2008-08-28 10:58
         BE4
            p/ROOT/BE4            -        -            15.78M    static   2008-09-05 18:20
         BE5
            p/ROOT/BE5            NR       /            6.10G     static   2008-09-09 16:53
            p/ROOT/BE5/opt        -        /opt         24.55M    static   2008-09-09 16:53
            p/ROOT/BE5/opt@bar    -        -            18.38M    static   2008-09-10 00:59
            p/ROOT/BE5/opt@foo    -        -            18.38M    static   2008-06-10 16:37
            p/ROOT/BE5@bar        -        -            139.44M   static   2008-09-10 00:59
            p/ROOT/BE5@foo        -        -            912.85M   static   2008-06-10 16:37

     Example 16: List all dataset and snapshot information about BE5.

         # beadm list -a BE5
         BE/Dataset/Snapshot      Active   Mountpoint   Space     Policy   Created
         -------------------      ------   ----------   -----     ------   -------
         BE5
            p/ROOT/BE5            NR       /            6.10G     static   2008-09-09 16:53
            p/ROOT/BE5/opt        -        /opt         24.55M    static   2008-09-09 16:53
            p/ROOT/BE5/opt@bar    -        -            18.38M    static   2008-09-10 00:59
            p/ROOT/BE5/opt@foo    -        -            18.38M    static   2008-06-10 16:37
            p/ROOT/BE5@bar        -        -            139.44M   static   2008-09-10 00:59
            p/ROOT/BE5@foo        -        -            912.85M   static   2008-06-10 16:37

     Example 17: List machine parsable information about all boot environments.

         # beadm list -H
         BE2;;;;55296;static;1211397974
         BE3;;;;339968;static;1219771706
         BE4;;;;16541696;static;1220664051
         BE5;215b8387-4968-627c-d2d0-f4a011414bab;NR;/;7786206208;static;1221004384
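     Because the machine-parsable output puts the BE name in the first field and the Active flags in the third (as in Example 17), a script can extract the BE that is active now. One possible sketch:

         # beadm list -H | awk -F';' '$3 ~ /N/ { print $1 }'

     For the listing shown in Example 17, this would print BE5.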
EXIT STATUS
     The following exit values are returned:

     0    Success
     >0   Failure
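     The exit value can be tested from a shell script in the usual way, for example:

         # beadm activate BE3 || echo "beadm activate failed"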
FILES
     /var/log/beadm/<beName>/create.log.<yyyymmdd_hhmmss>
         Log used for capturing beadm create output.

         yyyymmdd_hhmmss - 20071130_140558
         yy - year;   2007
         mm - month;  11
         dd - day;    30
         hh - hour;   14
         mm - minute; 05
         ss - second; 58
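     For example, the newest create log for a BE (BE5 here is only an illustrative name; the timestamp is the sample shown above) could be inspected with:

         # ls -t /var/log/beadm/BE5/create.log.* | head -1
         # tail /var/log/beadm/BE5/create.log.20071130_140558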
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     _____________________________________________________________
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE         |
    |_____________________________|_______________________________|
    | Availability                | SUNWbeadm                     |
    |_____________________________|_______________________________|
    | Interface Stability         | Uncommitted                   |
    |_____________________________|_______________________________|

SEE ALSO
     zfs(1M)

NOTES
Last change: 10 September 2008