Operating Systems > Solaris: new attached lun in solaris 10. Post 302553551 by christr on Thursday 8th of September 2011 01:14:21 AM.
There are a lot of different scenarios. Are you using EMC PowerPath and/or Veritas Storage Foundation (VxFS)? Most companies have their own procedures for how they want new LUNs configured.
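Absent those details, a generic starting point on Solaris 10 with the native FC (Leadville) drivers looks like the sketch below. This is a sketch under assumptions, not your site's procedure: PowerPath and VxVM stacks add their own steps (e.g. powermt config, vxdctl enable), the hardware-touching commands are shown as comments because they must run as root on the host, and all device names and paths are placeholders.

```shell
# Hedged sketch: making a newly mapped LUN visible on Solaris 10 with the
# native Leadville drivers. Site-specific multipathing (PowerPath, VxVM)
# adds its own steps. Hardware-touching commands are comments; device
# names are placeholders.
#
#   format </dev/null | awk '/^ *[0-9]+\./ {print $2}' | sort > /tmp/before
#   cfgadm -al -o show_FCP_dev   # list FC controllers and the LUNs behind them
#   devfsadm -Cv                 # build /dev links for new disks, prune stale ones
#   format </dev/null | awk '/^ *[0-9]+\./ {print $2}' | sort > /tmp/after

# Pure helper (runs anywhere): print device names present in the second
# listing but not the first. Both input files must be sorted, as above.
new_disks() {
    comm -13 "$1" "$2"
}

# usage:
#   new_disks /tmp/before /tmp/after   # prints only the newly seen disks
```

The before/after diff avoids eyeballing long `format` listings on hosts with dozens of existing LUNs.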
 

10 More Discussions You Might Find Interesting

1. Solaris

Solaris 9 or 10 LUN Limitations

Is there a limit to the number of LUNs that can be concatenated using Solaris Volume Manager with soft partitions? I have worked with some AIX admins in the past and there was such a limitation, which limited the size the filesystem could grow to. Is there such a limitation in Solaris 9... (6 Replies)
Discussion started by: BG_JrAdmin

2. Shell Programming and Scripting

Script to login to attached SUN Storage through Solaris m/c w/o user intervention

I want to create a shell script to CLI login to attached SUN 6140 storage from Sun Solaris 9 m/c (instead of using CAM ) but that prompts me for password despite the fact that i am adding them in script .. i am using "expect" feature for this .. however as i never used "expect " before .. so... (0 Replies)
Discussion started by: yogesh29sharma

3. Solaris

LUN allocation in solaris server

hi all can anyone tell me how to track a new attached LUN in a solaris server?? (3 Replies)
Discussion started by: raynu.sharma

4. Solaris

Largest LUN size in Solaris 10

What is the largest possible LUN size that can be presented to Solaris 10? I've been googling a lot about this. The new EFI labels (an alternative to VTOC) support LUNs greater than 2TB. I need to know the upper limit. Please help me find it. (4 Replies)
Discussion started by: pingmeback

5. Solaris

Set up iscsi LUN on solaris 9?

Hi, I need to set up iscsi LUN on Solaris 9. I've done it on Solaris 10 with iscsiadm. How do you do it on Solaris 9 though? Currently using Solaris 9 update 2. Your help is appreciated. Thanks, Sparcman (6 Replies)
Discussion started by: sparcman

6. Solaris

Problem with Solaris LUN and New FS

Hi All, I'm using Solaris server, SunOS 5.10 Generic_144488-08 sun4u sparc SUNW, SPARC-Enterprise. There is a newly created LUN of 250GB (EMC). I've scanned the system and able to see the new LUN. For example: 103. emcpower19a <DGC-VRAID-0430 cyl 48638 alt 2 hd 256 sec 16>... (4 Replies)
Discussion started by: superHonda123

7. Solaris

Solaris- How to scan newly attached NIC's

Hi folks, How can I scan newly attached network interfaces to server without reboot? Is there any command or something to scan without reboot. Thanks (5 Replies)
Discussion started by: snchaudhari2

8. Solaris

How to get LUN WWN in Solaris?

How to get LUN WWN (i.e LUN mapped from a storage box say Symmetrix or clariion) in Solaris. fcinfo command does give the Target port wwn but what i'm looking for is the LUN WWN. Any help is appreciated. (2 Replies)
Discussion started by: Manish00712

9. Solaris

Solaris 10, adding new LUN from SAN storage

Hello to all, Actually, currently on my Solaris box, I've a LUN (5TB space) from a EMC storage which is working fine, and a partition with ZFS filesystem is created for that LUN. as further you'll see in the logs, the "c4t6006016053802E00E6A9196B6506E211d0s2" is the current configured LUN in the... (4 Replies)
Discussion started by: Anti_Evil

10. Linux

Identify newly attached LUN from NetApp

Hi, I need to identify a newly attached LUN from NetApp on a Linux server (uname -o reports GNU/Linux). I first ran df -h and got the following: df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_outsystemdb-lv_root 50G 2.7G 45G ... (3 Replies)
Discussion started by: fretagi
vxpool(1M)

NAME
       vxpool - create and administer storage pools

SYNOPSIS
       vxpool [-g diskgroup] adddisk storage_pool dm=dm1[,dm2...]
       vxpool [-g diskgroup] assoctemplate storage_pool template=t1[,t2...]
       vxpool [-g diskgroup] assoctemplateset storage_pool template_set=ts1[,ts2...]
       vxpool [-g diskgroup] create storage_pool [dm=dm1[,dm2...]] [description=description]
              [autogrow=level] [selfsufficient=level] [pooldefinition=storage_pool_definition]
       vxpool [-g diskgroup] [-r] delete storage_pool
       vxpool [-g diskgroup] distemplate storage_pool template=t1[,t2...]
       vxpool [-g diskgroup] distemplateset storage_pool template_set=ts1[,ts2...]
       vxpool [-g diskgroup] getpolicy storage_pool
       vxpool help [keywords | options | attributes]
       vxpool [-g diskgroup] list
       vxpool listpoolset [pooldefn=p1[,p2...]]
       vxpool listpooldefinition
       vxpool [-g diskgroup] organize storage_pool_set
       vxpool [-g diskgroup] print [storage_pool [storage_pool ...]]
       vxpool printpooldefinition [storage_pool_definition [storage_pool_definition ...]]
       vxpool printpoolset [storage_pool_set [storage_pool_set ...]]
       vxpool [-g diskgroup] rename storage_pool new_pool_name
       vxpool [-g diskgroup] rmdisk storage_pool dm=dm1[,dm2...]
       vxpool [-g diskgroup] setpolicy storage_pool [autogrow=level] [selfsufficient=level]

DESCRIPTION
       The vxpool utility provides a command line interface for the creation and administration of storage
       pools that are used with the Veritas Intelligent Storage Provisioning (ISP) feature of Veritas Volume
       Manager (VxVM).

       The operations that can be performed by vxpool are selected by specifying the appropriate keyword on
       the command line. See the KEYWORDS section for a description of the available operations.

       Most operations can be applied to a single disk group only. If a disk group is not specified by using
       the -g option, and an alternate default disk group is not defined by specifying the diskgroup
       attribute on the command line or in a defaults file (usually /etc/default/allocator), the default disk
       group is determined using the rules given in the vxdg(1M) manual page.

KEYWORDS
       adddisk
              Adds one or more disks to a storage pool.

       assoctemplate
              Associates one or more templates with a storage pool.

       assoctemplateset
              Associates one or more template sets with a storage pool.

       create Creates a storage pool and associates it with a disk group. This operation allows disks to be
              added to the pool when it is created. Use the dm attribute to specify a comma-separated list of
              disk media names for these disks. Policies for the pool such as autogrow and selfsufficient can
              also be specified. By default, the values of autogrow and selfsufficient are set to 2
              (diskgroup) and 1 (pool) respectively. If you specify a storage pool definition, the storage
              pool is created using this definition. Any other policies that you specify override the
              corresponding values in the definition.

              Note: Only a single data storage pool may be configured in a disk group. Any storage pools that
              you configure subsequently in a disk group are clone storage pools. A clone storage pool is
              used to hold instant full-sized snapshot copies of volumes in the data storage pool.

       delete Deletes a storage pool. If the -r option is specified, any disks in the pool are also
              dissociated from the pool provided that they are not allocated to volumes.

              Note: If any volumes are configured in the storage pool, the command fails and returns an
              error.

       distemplate
              Disassociates one or more templates from a storage pool.

       distemplateset
              Disassociates one or more template sets from a storage pool.

       getpolicy
              Displays the values of the policies that are set on a storage pool.

       help   Displays information on vxpool usage, keywords, options or attributes.

       list   Displays the storage pools (data and clone) that are configured in a disk group.

       listpoolset
              Lists all available storage pool sets. If a list of storage pool definitions is specified to
              the pooldefn attribute, only the pool sets that contain the specified pool definitions are
              listed.

       listpooldefinition
              Lists all available storage pool definitions.
       organize
              Creates data and clone storage pools using the storage pool definitions that are contained in a
              storage pool set. Unique storage pool names are generated by appending a number to the
              definition name. If required, you can use the rename operation to change these names.

       print  Displays the details of one or more storage pools. If no storage pool is specified, the details
              of all storage pools are displayed.

       printpooldefinition
              Displays the definitions for one or more storage pools. If no storage pool is specified, the
              definitions of all storage pools are displayed.

       printpoolset
              Displays the details of one or more storage pool sets. If no storage pool set is specified, the
              details of all storage pool sets are displayed.

       rename Renames a storage pool.

       rmdisk Removes one or more disks from a storage pool. The disks to be removed are specified as a
              comma-separated list of disk media names to the dm attribute.

              Note: A disk cannot be removed from a storage pool if it is currently allocated to a volume.

       setpolicy
              Sets the value of the autogrow and/or the selfsufficient policy for a storage pool. See the
              ATTRIBUTES section for a description of the policy level values that may be specified.

OPTIONS
       -g diskgroup
              Specifies a disk group by name or ID for an operation. If this option is not specified, and an
              alternate default disk group is not defined by specifying the diskgroup attribute on the
              command line or in a defaults file (usually /etc/default/allocator), the default disk group is
              determined using the rules given in the vxdg(1M) manual page.

       -r     Removes all disks from a storage pool as part of a delete operation.

ATTRIBUTES
       autogrow=[{1|pool}|{2|diskgroup}]
              A storage pool's autogrow policy determines whether the pool can be grown to accommodate
              additional storage. If set to 1 or pool, the pool cannot be grown, and only storage that is
              currently configured in the pool can be used. If set to 2 or diskgroup, it can be grown by
              bringing in additional storage from the disk group outside the storage pool. The default value
              of autogrow is 2 (diskgroup).

       description=description
              Provides a short description of the pool that is being created.

       dm=dmname,...
              Specifies disks by their disk media names (for example, mydg01). The disks must have already
              been initialized by Veritas Volume Manager.

       pooldefinition=storage_pool_definition
              Specifies the name of the pool definition that is to be used for creating a storage pool.

       selfsufficient=[{1|pool}|{2|diskgroup}|{3|host}]
              A storage pool's selfsufficient policy determines whether the pool can use templates that are
              not currently associated with it. If set to 1 or pool, the pool can only use templates that
              have been associated with it. If set to 2 or diskgroup, the pool can use templates as necessary
              that are associated with the disk group. If set to 3 or host, the pool can use templates if
              required that are configured on the host system. The default value of selfsufficient is 1
              (pool).

       template=t1[,t2...]
              Specifies one or more volume templates to an operation.

       template_set=ts1[,ts2...]
              Specifies one or more volume template sets to an operation.

EXAMPLES
       Create a storage pool called ReliablePool, in the disk group mydg, containing the disks mydg01 through
       mydg04, and with the autogrow and selfsufficient policies both set to diskgroup:

              vxpool -g mydg create ReliablePool dm=mydg01,mydg02,mydg03,mydg04 \
                  autogrow=diskgroup selfsufficient=diskgroup

       Delete the storage pool testpool from the disk group mydg, and also remove all disks from the pool:

              vxpool -g mydg -r delete testpool

       Rename the pool ReliablePool, in the disk group mydg, to HardwareReliablePool:

              vxpool -g mydg rename ReliablePool HardwareReliablePool

       Associate the templates DataMirroring and PrefabricatedDataMirroring with the storage pool
       HardwareReliablePool:

              vxpool -g mydg assoctemplate HardwareReliablePool \
                  template=DataMirroring,PrefabricatedDataMirroring

       Disassociate the template DataMirroring from the storage pool HardwareReliablePool:

              vxpool -g mydg distemplate HardwareReliablePool template=DataMirroring

       Add the disks mydg05, mydg06 and mydg07 to the storage pool datapool:

              vxpool -g mydg adddisk datapool dm=mydg05,mydg06,mydg07

       Remove the disks mydg05 and mydg06 from the storage pool datapool:

              vxpool -g mydg rmdisk datapool dm=mydg05,mydg06

       Set the autogrow and selfsufficient policies to pool for the storage pool mypool:

              vxpool -g mydg setpolicy mypool autogrow=pool selfsufficient=pool

       Display the policies that are associated with the storage pool mypool:

              vxpool -g mydg getpolicy mypool

       Display a list of all the storage pools in the disk group mydg:

              vxpool -g mydg list

       Obtain details of the storage pool HardwareReliablePool:

              vxpool -g mydg print HardwareReliablePool

EXIT STATUS
       The vxpool utility exits with a non-zero status if the attempted operation fails. A non-zero exit code
       is not a complete indicator of the problems encountered, but rather denotes the first condition that
       prevented further execution of the utility.

NOTES
       vxpool displays only disks that are in a pool, and which have at least one path available. Use the
       vxprint command to list full information about disks and their states.

SEE ALSO
       vxprint(1M), vxtemplate(1M), vxusertemplate(1M), vxvoladm(1M)

       Veritas Storage Foundation Intelligent Storage Provisioning Administrator's Guide

VxVM 5.0.31.1                                   24 Mar 2008                                      vxpool(1M)
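Several vxpool keywords in the man page above (create, adddisk, rmdisk) take a comma-separated dm= list. When scripting vxpool, a small helper saves hand-building that string. This is only a sketch: the vxpool call itself is shown as a comment because it modifies real storage, and the pool, disk group, and disk names are the placeholder names from the EXAMPLES section.

```shell
# Hedged sketch: join disk media names into the dm= form that vxpool's
# create/adddisk/rmdisk keywords expect.
#   dm_arg mydg01 mydg02  ->  dm=mydg01,mydg02
dm_arg() {
    printf 'dm=%s' "$1"            # first name starts the list
    shift
    for d in "$@"; do
        printf ',%s' "$d"          # remaining names are comma-appended
    done
    printf '\n'
}

# On a VxVM host you would then run (commented out: touches real storage):
#   vxpool -g mydg adddisk datapool "$(dm_arg mydg05 mydg06 mydg07)"
```

Keeping the join in one function also gives a single place to add validation of disk media names if your site requires it.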
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.