Full Discussion: PowerHA Disk on VIO Server
Operating Systems AIX PowerHA Disk on VIO Server Post 302612537 by funksen on Monday 26th of March 2012 06:00:16 AM
Hi,

Map the disks to the LPAR(s), then create the concurrent VG there.

Are these local VIO server disks, or external storage?
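If the backing devices come through virtual SCSI, the mapping step on each VIO server looks roughly like this (a sketch using the VIOS restricted shell; the disk, adapter, and device names are examples, not taken from your setup):

```sh
# On each VIOS: identify the backing disk for the shared VG
$ lsdev -type disk

# For a disk shared between cluster nodes, disable SCSI reservations first
# (needed so both LPARs can access it; attribute name per VIOS docs)
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve

# Map the backing disk to the client LPAR's vhost adapter
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev lpar1_hacmp_disk

# Verify the mapping
$ lsmap -vadapter vhost0
```

Repeat the mapping on the second VIOS/LPAR pair so both cluster nodes see the same physical disk (check that the PVID matches on both clients).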




I would create the VG on one LPAR, create all the filesystems etc., and switch it to concurrent mode;
then varyoff the VG and import it on the other LPAR, using the same VG major number.

Just ask if you need more details.
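The steps above can be sketched with standard AIX LVM commands (VG name, disk names, filesystem, and major number are assumptions for illustration; check free major numbers on both nodes first):

```sh
# On LPAR A: list major numbers that are free, pick one free on BOTH nodes
lvlstmajor

# Create an enhanced concurrent-capable VG with an explicit major number
mkvg -y sharedvg -V 100 -C hdisk2 hdisk3

# Create the logical volumes / filesystems as needed, e.g.:
crfs -v jfs2 -g sharedvg -m /shared -a size=10G

# Release the VG so the second node can import it
varyoffvg sharedvg

# On LPAR B: import with the SAME major number, then leave it varied off
importvg -y sharedvg -V 100 hdisk2
varyoffvg sharedvg
```

PowerHA then handles the concurrent varyon of the VG on the cluster nodes; don't add it to the automatic varyon list on either node.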
vxpool(1M)																vxpool(1M)

NAME
vxpool - create and administer storage pools

SYNOPSIS
       vxpool [-g diskgroup] adddisk storage_pool dm=dm1[,dm2...]
       vxpool [-g diskgroup] assoctemplate storage_pool template=t1[,t2...]
       vxpool [-g diskgroup] assoctemplateset storage_pool template_set=ts1[,ts2...]
       vxpool [-g diskgroup] create storage_pool [dm=dm1[,dm2...]] [description=description]
              [autogrow=level] [selfsufficient=level] [pooldefinition=storage_pool_definition]
       vxpool [-g diskgroup] [-r] delete storage_pool
       vxpool [-g diskgroup] distemplate storage_pool template=t1[,t2...]
       vxpool [-g diskgroup] distemplateset storage_pool template_set=ts1[,ts2...]
       vxpool [-g diskgroup] getpolicy storage_pool
       vxpool help [keywords | options | attributes]
       vxpool [-g diskgroup] list
       vxpool listpoolset [pooldefn=p1[,p2...]]
       vxpool listpooldefinition
       vxpool [-g diskgroup] organize storage_pool_set
       vxpool [-g diskgroup] print [storage_pool [storage_pool ...]]
       vxpool printpooldefinition [storage_pool_definition [storage_pool_definition ...]]
       vxpool printpoolset [storage_pool_set [storage_pool_set ...]]
       vxpool [-g diskgroup] rename storage_pool new_pool_name
       vxpool [-g diskgroup] rmdisk storage_pool dm=dm1[,dm2...]
       vxpool [-g diskgroup] setpolicy storage_pool [autogrow=level] [selfsufficient=level]

DESCRIPTION
The vxpool utility provides a command line interface for the creation and administration of storage pools that are used with the Veritas Intelligent Storage Provisioning (ISP) feature of Veritas Volume Manager (VxVM). The operations that can be performed by vxpool are selected by specifying the appropriate keyword on the command line. See the KEYWORDS section for a description of the available operations.

Most operations can be applied to a single disk group only. If a disk group is not specified by using the -g option, and an alternate default disk group is not defined by specifying the diskgroup attribute on the command line or in a defaults file (usually /etc/default/allocator), the default disk group is determined using the rules given in the vxdg(1M) manual page.

KEYWORDS
adddisk
       Adds one or more disks to a storage pool.

assoctemplate
       Associates one or more templates with a storage pool.

assoctemplateset
       Associates one or more template sets with a storage pool.

create
       Creates a storage pool and associates it with a disk group. This operation allows disks to be added to the pool when it is created. Use the dm attribute to specify a comma-separated list of disk media names for these disks. Policies for the pool such as autogrow and selfsufficient can also be specified. By default, the values of autogrow and selfsufficient are set to 2 (diskgroup) and 1 (pool) respectively. If you specify a storage pool definition, the storage pool is created using this definition. Any other policies that you specify override the corresponding values in the definition.

       Note: Only a single data storage pool may be configured in a disk group. Any storage pools that you configure subsequently in a disk group are clone storage pools. A clone storage pool is used to hold instant full-sized snapshot copies of volumes in the data storage pool.

delete
       Deletes a storage pool. If the -r option is specified, any disks in the pool are also dissociated from the pool, provided that they are not allocated to volumes.

       Note: If any volumes are configured in the storage pool, the command fails and returns an error.

distemplate
       Disassociates one or more templates from a storage pool.

distemplateset
       Disassociates one or more template sets from a storage pool.

getpolicy
       Displays the values of the policies that are set on a storage pool.

help
       Displays information on vxpool usage, keywords, options or attributes.

list
       Displays the storage pools (data and clone) that are configured in a disk group.

listpoolset
       Lists all available storage pool sets. If a list of storage pool definitions is specified to the pooldefn attribute, only the pool sets that contain the specified pool definitions are listed.

listpooldefinition
       Lists all available storage pool definitions.
organize
       Creates data and clone storage pools using the storage pool definitions that are contained in a storage pool set. Unique storage pool names are generated by appending a number to the definition name. If required, you can use the rename operation to change these names.

print
       Displays the details of one or more storage pools. If no storage pool is specified, the details of all storage pools are displayed.

printpooldefinition
       Displays the definitions for one or more storage pools. If no storage pool is specified, the definitions of all storage pools are displayed.

printpoolset
       Displays the details of one or more storage pool sets. If no storage pool set is specified, the details of all storage pool sets are displayed.

rename
       Renames a storage pool.

rmdisk
       Removes one or more disks from a storage pool. The disks to be removed are specified as a comma-separated list of disk media names to the dm attribute.

       Note: A disk cannot be removed from a storage pool if it is currently allocated to a volume.

setpolicy
       Sets the value of the autogrow and/or the selfsufficient policy for a storage pool. See the ATTRIBUTES section for a description of the policy level values that may be specified.

OPTIONS
-g diskgroup
       Specifies a disk group, by name or ID, for an operation. If this option is not specified, and an alternate default disk group is not defined by specifying the diskgroup attribute on the command line or in a defaults file (usually /etc/default/allocator), the default disk group is determined using the rules given in the vxdg(1M) manual page.

-r
       Removes all disks from a storage pool as part of a delete operation.

ATTRIBUTES
autogrow=[{1|pool}|{2|diskgroup}]
       A storage pool's autogrow policy determines whether the pool can be grown to accommodate additional storage. If set to 1 or pool, the pool cannot be grown, and only storage that is currently configured in the pool can be used. If set to 2 or diskgroup, the pool can be grown by bringing in additional storage from the disk group outside the storage pool. The default value of autogrow is 2 (diskgroup).

description=description
       Provides a short description of the pool that is being created.

dm=dmname,...
       Specifies disks by their disk media names (for example, mydg01). The disks must have already been initialized by Veritas Volume Manager.

pooldefinition=storage_pool_definition
       Specifies the name of the pool definition that is to be used for creating a storage pool.

selfsufficient=[{1|pool}|{2|diskgroup}|{3|host}]
       A storage pool's selfsufficient policy determines whether the pool can use templates that are not currently associated with it. If set to 1 or pool, the pool can only use templates that have been associated with it. If set to 2 or diskgroup, the pool can use templates as necessary that are associated with the disk group. If set to 3 or host, the pool can use templates if required that are configured on the host system. The default value of selfsufficient is 1 (pool).

template=t1[,t2...]
       Specifies one or more volume templates to an operation.

template_set=ts1[,ts2...]
       Specifies one or more volume template sets to an operation.

EXAMPLES
Create a storage pool called ReliablePool in the disk group mydg, containing the disks mydg01 through mydg04, with the autogrow and selfsufficient policies both set to diskgroup:

       vxpool -g mydg create ReliablePool dm=mydg01,mydg02,mydg03,mydg04 autogrow=diskgroup selfsufficient=diskgroup

Delete the storage pool testpool from the disk group mydg, and also remove all disks from the pool:

       vxpool -g mydg -r delete testpool

Rename the pool ReliablePool in the disk group mydg to HardwareReliablePool:

       vxpool -g mydg rename ReliablePool HardwareReliablePool

Associate the templates DataMirroring and PrefabricatedDataMirroring with the storage pool HardwareReliablePool:

       vxpool -g mydg assoctemplate HardwareReliablePool template=DataMirroring,PrefabricatedDataMirroring

Disassociate the template DataMirroring from the storage pool HardwareReliablePool:

       vxpool -g mydg distemplate HardwareReliablePool template=DataMirroring

Add the disks mydg05, mydg06 and mydg07 to the storage pool datapool:

       vxpool -g mydg adddisk datapool dm=mydg05,mydg06,mydg07

Remove the disks mydg05 and mydg06 from the storage pool datapool:

       vxpool -g mydg rmdisk datapool dm=mydg05,mydg06

Set the autogrow and selfsufficient policies to pool for the storage pool mypool:

       vxpool -g mydg setpolicy mypool autogrow=pool selfsufficient=pool

Display the policies that are associated with the storage pool mypool:

       vxpool -g mydg getpolicy mypool

Display a list of all the storage pools in the disk group mydg:

       vxpool -g mydg list

Obtain details of the storage pool HardwareReliablePool:

       vxpool -g mydg print HardwareReliablePool

EXIT STATUS
The vxpool utility exits with a non-zero status if the attempted operation fails. A non-zero exit code is not a complete indicator of the problems encountered, but rather denotes the first condition that prevented further execution of the utility.

NOTES
vxpool displays only disks that are in a pool and that have at least one path available. Use the vxprint command to list full information about disks and their states.

SEE ALSO
vxprint(1M), vxtemplate(1M), vxusertemplate(1M), vxvoladm(1M)

Veritas Storage Foundation Intelligent Storage Provisioning Administrator's Guide

VxVM 5.0.31.1                     24 Mar 2008                        vxpool(1M)