Need to remove a disk from zfs pool
Posted by solaris_1977 on 07-18-2013, in Operating Systems / Solaris

I accidentally added a disk to a different zpool instead of the pool I intended.
Code:
root@prtdrd21:/# zpool status cvfdb2_app_pool
  pool: cvfdb2_app_pool
 state: ONLINE
 scan: none requested
config:
        NAME               STATE     READ WRITE CKSUM
        cvfdb2_app_pool    ONLINE       0     0     0
          emcpower62c      ONLINE       0     0     0
          emcpower63c      ONLINE       0     0     0
          emcpower64c      ONLINE       0     0     0
          emcpower65c      ONLINE       0     0     0
          emcpower77c      ONLINE       0     0     0
          emcpower78a      ONLINE       0     0     0

I need to remove the last disk, emcpower78a. It is presented from the SAN. Can somebody please suggest how to do this?
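A hedged sketch of what one might try (whether it works depends on the ZFS version: removing a top-level data vdev is only supported on newer releases; on older Solaris ZFS, zpool remove handles only hot spares, cache, and log devices, and the usual fallback is to back up the data, destroy the pool, and recreate it without the extra disk):

Code:
# Attempt the removal (only succeeds where top-level vdev removal is supported)
zpool remove cvfdb2_app_pool emcpower78a
# Verify the result (or read the error message)
zpool status cvfdb2_app_pool

Note that zpool detach applies only to mirror members, so it does not help with a plain striped pool like this one.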
 

System Administration Commands			     beadm(1M)

NAME
     beadm - utility for managing zfs boot environments

SYNOPSIS
     /usr/sbin/beadm
     beadm create [-a] [-d description]
          [-e non-activeBeName | beName@snapshot]
          [-o property=value] ... [-p zpool] beName
     beadm create beName@snapshot
     beadm destroy [-fF] beName | beName@snapshot
     beadm list [-a | -ds] [-H] [beName]
     beadm mount beName mountpoint
     beadm unmount [-f] beName
     beadm rename beName newBeName
     beadm activate beName

DESCRIPTION
     The beadm command is the user interface for managing ZFS Boot
     Environments (BEs). This utility is intended to be used by system
     administrators who want to manage multiple Solaris instances on a
     single system. The beadm command supports the following operations:

         - Create a new BE, based on the active BE.
         - Create a new BE, based on an inactive BE.
         - Create a snapshot of an existing BE.
         - Create a new BE, based on an existing snapshot.
         - Create a new BE, and copy it to a different zpool.
         - Activate an existing, inactive BE.
         - Mount a BE.
         - Unmount a BE.
         - Destroy a BE.
         - Destroy a snapshot of a BE.
         - Rename an existing, inactive BE.
         - Display information about your snapshots and datasets.
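     For orientation, the operations above combine naturally into an
     upgrade-and-fall-back workflow. A minimal sketch (the BE name
     patched-be is illustrative, not from this page; see SUBCOMMANDS and
     EXAMPLES below for exact semantics):

         # beadm create patched-be        (clone the currently running BE)
         # beadm mount patched-be /mnt    (mount the clone for offline changes)
                                          (... apply patches under /mnt ...)
         # beadm unmount patched-be
         # beadm activate patched-be      (boot into the clone on next reboot)

     If the patched BE misbehaves, activating the original BE again
     restores the previous system.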
SUBCOMMANDS
     The beadm command has the subcommands and options listed below. Also
     see EXAMPLES below.

     beadm
         Displays command usage.

     beadm create [-a] [-d description]
          [-e non-activeBeName | beName@snapshot]
          [-o property=value] ... [-p zpool] beName

         Creates a new boot environment named beName. If the -e option is
         not provided, the new boot environment is created as a clone of
         the currently running boot environment. If the -d option is
         provided, the description is also used as the title for the BE's
         entry in the GRUB menu on x86 systems or in the boot menu on SPARC
         systems. If the -d option is not provided, beName is used as the
         title.

         -a                    Activate the newly created BE upon creation.
                               The default is to not activate the newly
                               created BE.

         -d description        Create a new BE with a description
                               associated with it.

         -e non-activeBeName   Create a new BE from an existing inactive
                               BE.

         -e beName@snapshot    Create a new BE from an existing snapshot of
                               the BE named beName.

         -o property=value     Create the datasets for the new BE with
                               specific ZFS properties. Multiple -o options
                               can be specified. See zfs(1M) for more
                               information on the -o option.

         -p zpool              Create the new BE in the specified zpool. If
                               this is not provided, the default behavior
                               is to create the new BE in the same pool as
                               the origin BE.

     beadm create beName@snapshot

         Creates a snapshot of the existing BE named beName.

     beadm destroy [-fF] beName | beName@snapshot

         Destroys the boot environment named beName, or destroys an
         existing snapshot of the boot environment named beName@snapshot.
         Destroying a boot environment also destroys all snapshots of that
         boot environment. Use this command with caution.

         -f   Forcefully unmount the boot environment if it is currently
              mounted.

         -F   Force the action without prompting to verify the destruction
              of the boot environment.

     beadm list [-a | -ds] [-H] [beName]

         Lists information about the existing boot environment named
         beName, or lists information for all boot environments if beName
         is not provided. The 'Active' field indicates whether the boot
         environment is active now (represented by 'N'), active on reboot
         (represented by 'R'), or both (represented by 'NR'). Each line in
         the machine-parsable output has the boot environment name as the
         first field. The 'Space' field is displayed in bytes and the
         'Created' field is displayed in UTC format. The -H option used
         with no other options gives the boot environment's uuid in the
         second field; this field is blank if the boot environment does not
         have a uuid. See the EXAMPLES section.

         -a   Lists all available information about the boot environment,
              including subordinate file systems and snapshots.

         -d   Lists information about all subordinate file systems
              belonging to the boot environment.

         -s   Lists information about the snapshots of the boot
              environment.

         -H   Do not list header information. Each field in the list
              information is separated by a semicolon.

     beadm mount beName mountpoint

         Mounts a boot environment named beName at mountpoint. mountpoint
         must be an already existing empty directory.

     beadm unmount [-f] beName

         Unmounts the boot environment named beName.

         -f   Forcefully unmount the boot environment even if it is
              currently busy.

     beadm rename beName newBeName

         Renames the boot environment named beName to newBeName.

     beadm activate beName

         Makes beName the active BE on next reboot.
EXAMPLES
     Example 1: Create a new BE named BE1, by cloning the current live BE.

         # beadm create BE1

     Example 2: Create a new BE named BE2, by cloning the existing inactive
     BE named BE1.

         # beadm create -e BE1 BE2

     Example 3: Create a snapshot named now of the existing BE named BE1.

         # beadm create BE1@now

     Example 4: Create a new BE named BE3, by cloning an existing snapshot
     of BE1.

         # beadm create -e BE1@now BE3

     Example 5: Create a new BE named BE4 based on the currently running
     BE. Create the new BE in rpool2.

         # beadm create -p rpool2 BE4

     Example 6: Create a new BE named BE5 based on the currently running
     BE. Create the new BE in rpool2, and create its datasets with
     compression turned on.

         # beadm create -p rpool2 -o compression=on BE5

     Example 7: Create a new BE named BE6 based on the currently running BE
     and provide a description for it.

         # beadm create -d "BE6 used as test environment" BE6

     Example 8: Activate an existing, inactive BE named BE3.

         # beadm activate BE3

     Example 9: Mount the BE named BE3 at /mnt.

         # beadm mount BE3 /mnt

     Example 10: Unmount the mounted BE named BE3.

         # beadm unmount BE3

     Example 11: Destroy the BE named BE3 without verification. (Per the
     option descriptions above, -F suppresses the verification prompt.)

         # beadm destroy -F BE3

     Example 12: Destroy the snapshot named now of BE1.

         # beadm destroy BE1@now

     Example 13: Rename the existing, inactive BE named BE1 to BE3.

         # beadm rename BE1 BE3

     Example 14: List all existing boot environments.

         # beadm list
         BE    Active   Mountpoint   Space     Policy   Created
         --    ------   ----------   -----     ------   -------
         BE2   -        -            72.0K     static   2008-05-21 12:26
         BE3   -        -            332.0K    static   2008-08-26 10:28
         BE4   -        -            15.78M    static   2008-09-05 18:20
         BE5   NR       /            7.25G     static   2008-09-09 16:53

     Example 15: List all existing boot environments and list all dataset
     and snapshot information about those boot environments.

         # beadm list -d -s
         BE/Dataset/Snapshot      Active  Mountpoint  Space    Policy  Created
         -------------------      ------  ----------  -----    ------  -------
         BE2
            p/ROOT/BE2            -       -           36.0K    static  2008-05-21 12:26
            p/ROOT/BE2/opt        -       -           18.0K    static  2008-05-21 16:26
            p/ROOT/BE2/opt@now    -       -           0        static  2008-09-08 22:43
            p/ROOT/BE2@now        -       -           0        static  2008-09-08 22:43
         BE3
            p/ROOT/BE3            -       -           192.0K   static  2008-08-26 10:28
            p/ROOT/BE3/opt        -       -           86.0K    static  2008-08-26 10:28
            p/ROOT/BE3/opt/local  -       -           36.0K    static  2008-08-28 10:58
         BE4
            p/ROOT/BE4            -       -           15.78M   static  2008-09-05 18:20
         BE5
            p/ROOT/BE5            NR      /           6.10G    static  2008-09-09 16:53
            p/ROOT/BE5/opt        -       /opt        24.55M   static  2008-09-09 16:53
            p/ROOT/BE5/opt@bar    -       -           18.38M   static  2008-09-10 00:59
            p/ROOT/BE5/opt@foo    -       -           18.38M   static  2008-06-10 16:37
            p/ROOT/BE5@bar        -       -           139.44M  static  2008-09-10 00:59
            p/ROOT/BE5@foo        -       -           912.85M  static  2008-06-10 16:37

     Example 16: List all dataset and snapshot information about BE5.

         # beadm list -a BE5
         BE/Dataset/Snapshot      Active  Mountpoint  Space    Policy  Created
         -------------------      ------  ----------  -----    ------  -------
         BE5
            p/ROOT/BE5            NR      /           6.10G    static  2008-09-09 16:53
            p/ROOT/BE5/opt        -       /opt        24.55M   static  2008-09-09 16:53
            p/ROOT/BE5/opt@bar    -       -           18.38M   static  2008-09-10 00:59
            p/ROOT/BE5/opt@foo    -       -           18.38M   static  2008-06-10 16:37
            p/ROOT/BE5@bar        -       -           139.44M  static  2008-09-10 00:59
            p/ROOT/BE5@foo        -       -           912.85M  static  2008-06-10 16:37

     Example 17: List machine parsable information about all boot
     environments.

         # beadm list -H
         BE2;;;;55296;static;1211397974
         BE3;;;;339968;static;1219771706
         BE4;;;;16541696;static;1220664051
         BE5;215b8387-4968-627c-d2d0-f4a011414bab;NR;/;7786206208;static;1221004384
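     Because beadm list -H emits one semicolon-separated record per boot
     environment (see Example 17), the output can be filtered with standard
     tools. A minimal sketch that prints the name of every BE flagged
     active on reboot ('R' appears in the third field):

         # beadm list -H | awk -F';' '$3 ~ /R/ { print $1 }'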
EXIT STATUS
     The following exit values are returned:

     0     Success
     >0    Failure
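     The simple success/failure convention makes beadm usable in scripts.
     A minimal sketch (the BE name nightly-be is illustrative):

         # beadm create nightly-be || echo "beadm create failed" >&2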
FILES
     /var/log/beadm/<beName>/create.log.<yyyymmdd_hhmmss>

         Log used for capturing beadm create output.

         yyyymmdd_hhmmss - 20071130_140558

             yyyy - year;   2007
             mm   - month;  11
             dd   - day;    30
             hh   - hour;   14
             mm   - minute; 05
             ss   - second; 58
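     Given that naming scheme, the newest create log for a BE sorts first
     by modification time. A minimal sketch (the BE name BE1 is taken from
     the examples above):

         # ls -t /var/log/beadm/BE1/create.log.* | head -1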
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     ____________________________________________________________
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    |_____________________________|_____________________________|
    | Availability                | SUNWbeadm                   |
    |_____________________________|_____________________________|
    | Interface Stability         | Uncommitted                 |
    |_____________________________|_____________________________|
SEE ALSO
     zfs(1M)

                     Last change: 10 September 2008