Full Discussion: zfs pool migration
Operating Systems > Solaris > zfs pool migration
Post 302440010 by jac on Sunday 25th of July 2010 08:52:19 PM
zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks, and the mount points and attributes need to migrate as well. What is the best procedure to accomplish this? The current pool is 6x 36GB disks (202GB capacity) and I am migrating to 5x 72GB disks (340GB capacity).
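One approach that carries mount points and other dataset properties across is a recursive snapshot replicated with zfs send -R. This is a sketch, not from the thread: the pool names oldpool/newpool and the device names are placeholders, and the receive flags should be checked against your Solaris release.

```shell
# Build the new raidz pool from the larger disks (device names are examples).
zpool create newpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# Take a recursive snapshot of every dataset in the old pool.
zfs snapshot -r oldpool@migrate

# -R replicates the whole hierarchy, including properties and mount points;
# -F allows receiving over the new pool's root dataset, -d strips the old
# pool name from the received paths, and -u avoids mounting the received
# datasets on top of the still-live originals during the copy.
zfs send -R oldpool@migrate | zfs receive -Fdu newpool

# After verifying the copy, retire the old pool so its mount points are
# freed, then export/import the new pool to pick up the inherited mounts.
zpool destroy oldpool
zpool export newpool
zpool import newpool
```

Incremental follow-up sends (snapshot again, then zfs send -R -i) can shrink the final cutover window if the data is changing during the copy.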
 

10 More Discussions You Might Find Interesting

1. Solaris

ZFS Pool Mix-up

Hi all I plan to install Solaris 10U6 on some SPARC server using ZFS as root pool, whereas I would like to keep the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root-volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slices for Live... (1 Reply)
Discussion started by: blicki

2. Solaris

unable to import zfs pool

# zpool import pool: emcpool1 id: 5596268873059055768 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: Sun Message ID: ZFS-8000-3C config: emcpool1 ... (7 Replies)
Discussion started by: fugitive

3. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp

4. Solaris

ZFS pool question

I created a pool the other day. I created a 10 gig files just for a test, then deleted it. I proceeded to create a few files systems. But for some reason the pool shows 10% full, but the files systems are both at 1%? Both files systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance

5. Solaris

ZFS - list of disks used in a pool

Hi guys, We had created a pool as follows: zpool create filing_pool raidz c1t2d0 c1t3d0 ........ Due to some requirement, we need to destroy the pool and re-create another one. We wish to know now which disks have been included in the filing_pool, how do we list the disks used to create... (2 Replies)
Discussion started by: frum
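For what it's worth, the device listing asked about above is part of normal pool status output (filing_pool is the pool name from the post):

```shell
# Show the devices (and their raidz grouping) that make up the pool.
zpool status filing_pool

# If the pool has already been destroyed, -D can still list destroyed
# pools and their devices from the disk labels, provided the disks
# have not been reused.
zpool import -D
```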

6. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
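As far as I know there is no "zpool rename" subcommand; export/import is the supported path, and it does unmount the pool's filesystems for the duration (pool names a and b are from the post):

```shell
# Rename pool "a" to "b": filesystems are unmounted on export and
# remounted under the new pool name on import.
zpool export a
zpool import a b
```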

7. Solaris

ZFS - overfilled pool

installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD

8. Solaris

ZFS - Dataset / pool name are the same...cannot destroy

I messed up my pool by doing zfs send...recive So I got the following : zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 928G 17.3G 911G 1% 1.00x ONLINE - tank1 928G 35.8G 892G 3% 1.00x ONLINE - So I have "tank1" pool. zfs get all... (8 Replies)
Discussion started by: eladgrs

9. Solaris

Need to remove a disk from zfs pool

I accidently added a disk in different zpool instead of pool, where I want. root@prtdrd21:/# zpool status cvfdb2_app_pool pool: cvfdb2_app_pool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977

10. Solaris

Zfs send to compressed pool?

I have a newly created zpool, and I have set compression on, for the whole pool: # zfs set compression=on newPool Now I have zfs send | zfs receive lot of snapshots to my newPool, but the compression is gone. I was hoping that I would be able to send snapshots to the new pool (which is... (0 Replies)
Discussion started by: kebabbert
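One likely cause for the situation above, sketched with placeholder dataset names: a replicated stream (zfs send -R) carries the source's properties, so a received compression=off overrides the compression=on set on newPool. Clearing the received value restores inheritance, though it only affects blocks written afterwards:

```shell
# Check where each dataset's compression setting comes from
# (local, received, inherited, or default).
zfs get -r -o name,value,source compression newPool

# Drop the received/local value so the dataset inherits
# compression=on from newPool; already-written blocks stay as sent.
zfs inherit compression newPool/somefs
```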
vxpool(1M)

NAME
       vxpool - create and administer storage pools

SYNOPSIS
       vxpool [-g diskgroup] adddisk storage_pool dm=dm1[,dm2...]
       vxpool [-g diskgroup] assoctemplate storage_pool template=t1[,t2...]
       vxpool [-g diskgroup] assoctemplateset storage_pool template_set=ts1[,ts2...]
       vxpool [-g diskgroup] create storage_pool [dm=dm1[,dm2...]] [description=description] [autogrow=level] [selfsufficient=level]
              [pooldefinition=storage_pool_definition]
       vxpool [-g diskgroup] [-r] delete storage_pool
       vxpool [-g diskgroup] distemplate storage_pool template=t1[,t2...]
       vxpool [-g diskgroup] distemplateset storage_pool template_set=ts1[,ts2...]
       vxpool [-g diskgroup] getpolicy storage_pool
       vxpool help [keywords | options | attributes]
       vxpool [-g diskgroup] list
       vxpool listpoolset [pooldefn=p1[,p2...]]
       vxpool listpooldefinition
       vxpool [-g diskgroup] organize storage_pool_set
       vxpool [-g diskgroup] print [storage_pool [storage_pool ...]]
       vxpool printpooldefinition [storage_pool_definition [storage_pool_definition ...]]
       vxpool printpoolset [storage_pool_set [storage_pool_set ...]]
       vxpool [-g diskgroup] rename storage_pool new_pool_name
       vxpool [-g diskgroup] rmdisk storage_pool dm=dm1[,dm2...]
       vxpool [-g diskgroup] setpolicy storage_pool [autogrow=level] [selfsufficient=level]

DESCRIPTION
       The vxpool utility provides a command line interface for the creation and administration of storage pools that are used with the
       Veritas Intelligent Storage Provisioning (ISP) feature of Veritas Volume Manager (VxVM).

       The operations that can be performed by vxpool are selected by specifying the appropriate keyword on the command line. See the
       KEYWORDS section for a description of the available operations.

       Most operations can be applied to a single disk group only. If a disk group is not specified by using the -g option, and an
       alternate default disk group is not defined by specifying the diskgroup attribute on the command line or in a defaults file
       (usually /etc/default/allocator), the default disk group is determined using the rules given in the vxdg(1M) manual page.

KEYWORDS
       adddisk
              Adds one or more disks to a storage pool.

       assoctemplate
              Associates one or more templates with a storage pool.

       assoctemplateset
              Associates one or more template sets with a storage pool.

       create
              Creates a storage pool and associates it with a disk group. This operation allows disks to be added to the pool when it
              is created. Use the dm attribute to specify a comma-separated list of disk media names for these disks. Policies for the
              pool such as autogrow and selfsufficient can also be specified. By default, the values of autogrow and selfsufficient are
              set to 2 (diskgroup) and 1 (pool) respectively. If you specify a storage pool definition, the storage pool is created
              using this definition. Any other policies that you specify override the corresponding values in the definition.

              Note: Only a single data storage pool may be configured in a disk group. Any storage pools that you configure
              subsequently in a disk group are clone storage pools. A clone storage pool is used to hold instant full-sized snapshot
              copies of volumes in the data storage pool.

       delete
              Deletes a storage pool. If the -r option is specified, any disks in the pool are also dissociated from the pool provided
              that they are not allocated to volumes.

              Note: If any volumes are configured in the storage pool, the command fails and returns an error.

       distemplate
              Disassociates one or more templates from a storage pool.

       distemplateset
              Disassociates one or more template sets from a storage pool.

       getpolicy
              Displays the values of the policies that are set on a storage pool.

       help   Displays information on vxpool usage, keywords, options or attributes.

       list   Displays the storage pools (data and clone) that are configured in a disk group.

       listpoolset
              Lists all available storage pool sets. If a list of storage pool definitions is specified to the pooldefn attribute, only
              the pool sets that contain the specified pool definitions are listed.

       listpooldefinition
              Lists all available storage pool definitions.
       organize
              Creates data and clone storage pools using the storage pool definitions that are contained in a storage pool set. Unique
              storage pool names are generated by appending a number to the definition name. If required, you can use the rename
              operation to change these names.

       print  Displays the details of one or more storage pools. If no storage pool is specified, the details of all storage pools are
              displayed.

       printpooldefinition
              Displays the definitions for one or more storage pools. If no storage pool is specified, the definitions of all storage
              pools are displayed.

       printpoolset
              Displays the details of one or more storage pool sets. If no storage pool set is specified, the details of all storage
              pool sets are displayed.

       rename Renames a storage pool.

       rmdisk Removes one or more disks from a storage pool. The disks to be removed are specified as a comma-separated list of disk
              media names to the dm attribute.

              Note: A disk cannot be removed from a storage pool if it is currently allocated to a volume.

       setpolicy
              Sets the value of the autogrow and/or the selfsufficient policy for a storage pool. See the ATTRIBUTES section for a
              description of the policy level values that may be specified.

OPTIONS
       -g diskgroup
              Specifies a disk group by name or ID for an operation. If this option is not specified, and an alternate default disk
              group is not defined by specifying the diskgroup attribute on the command line or in a defaults file (usually
              /etc/default/allocator), the default disk group is determined using the rules given in the vxdg(1M) manual page.

       -r     Removes all disks from a storage pool as part of a delete operation.

ATTRIBUTES
       autogrow=[{1|pool}|{2|diskgroup}]
              A storage pool's autogrow policy determines whether the pool can be grown to accommodate additional storage. If set to 1
              or pool, the pool cannot be grown, and only storage that is currently configured in the pool can be used. If set to 2 or
              diskgroup, it can be grown by bringing in additional storage from the disk group outside the storage pool. The default
              value of autogrow is 2 (diskgroup).

       description=description
              Provides a short description of the pool that is being created.

       dm=dmname,...
              Specifies disks by their disk media names (for example, mydg01). The disks must have already been initialized by Veritas
              Volume Manager.

       pooldefinition=storage_pool_definition
              Specifies the name of the pool definition that is to be used for creating a storage pool.

       selfsufficient=[{1|pool}|{2|diskgroup}|{3|host}]
              A storage pool's selfsufficient policy determines whether the pool can use templates that are not currently associated
              with it. If set to 1 or pool, the pool can only use templates that have been associated with it. If set to 2 or
              diskgroup, the pool can use templates as necessary that are associated with the disk group. If set to 3 or host, the pool
              can use templates if required that are configured on the host system. The default value of selfsufficient is 1 (pool).

       template=t1[,t2...]
              Specifies one or more volume templates to an operation.

       template_set=ts1[,ts2...]
              Specifies one or more volume template sets to an operation.

EXAMPLES
       Create a storage pool called ReliablePool, in the disk group mydg, containing the disks mydg01 through mydg04, and with the
       autogrow and selfsufficient policies both set to diskgroup:

              vxpool -g mydg create ReliablePool dm=mydg01,mydg02,mydg03,mydg04 autogrow=diskgroup selfsufficient=diskgroup

       Delete the storage pool testpool from the disk group mydg, and also remove all disks from the pool:

              vxpool -g mydg -r delete testpool

       Rename the pool ReliablePool, in the disk group mydg, to HardwareReliablePool:

              vxpool -g mydg rename ReliablePool HardwareReliablePool

       Associate the templates DataMirroring and PrefabricatedDataMirroring with the storage pool HardwareReliablePool:

              vxpool -g mydg assoctemplate HardwareReliablePool template=DataMirroring,PrefabricatedDataMirroring

       Disassociate the template DataMirroring from the storage pool HardwareReliablePool:

              vxpool -g mydg distemplate HardwareReliablePool template=DataMirroring

       Add the disks mydg05, mydg06 and mydg07 to the storage pool datapool:

              vxpool -g mydg adddisk datapool dm=mydg05,mydg06,mydg07

       Remove the disks mydg05 and mydg06 from the storage pool datapool:

              vxpool -g mydg rmdisk datapool dm=mydg05,mydg06

       Set the autogrow and selfsufficient policies to pool for the storage pool mypool:

              vxpool -g mydg setpolicy mypool autogrow=pool selfsufficient=pool

       Display the policies that are associated with the storage pool mypool:

              vxpool -g mydg getpolicy mypool

       Display a list of all the storage pools in the disk group mydg:

              vxpool -g mydg list

       Obtain details of the storage pool HardwareReliablePool:

              vxpool -g mydg print HardwareReliablePool

EXIT STATUS
       The vxpool utility exits with a non-zero status if the attempted operation fails. A non-zero exit code is not a complete
       indicator of the problems encountered, but rather denotes the first condition that prevented further execution of the utility.

NOTES
       vxpool displays only disks that are in a pool, and which have at least one path available. Use the vxprint command to list full
       information about disks and their states.

SEE ALSO
       vxprint(1M), vxtemplate(1M), vxusertemplate(1M), vxvoladm(1M)

       Veritas Storage Foundation Intelligent Storage Provisioning Administrator's Guide

VxVM 5.0.31.1                                      24 Mar 2008                                      vxpool(1M)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.