Full Discussion: Mount EMC Clones in VxVM
Operating Systems > Solaris
Post 302583661 by npandith on Wednesday 21st of December 2011 02:23:33 AM
You can create a disk group and encapsulate the disk.

# vxdg init rootdg
# vxdctl add disk <DEVICE> type=sliced
# vxencap -g rootdg rootdisk=<DEVICE>
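
For context, here is a hedged sketch of the same sequence with a concrete device name; c2t1d0s2 is a placeholder only, not taken from the post, so substitute your own device and verify the result with vxdisk and vxprint afterwards:

# vxdg init rootdg                        (create the rootdg disk group)
# vxdctl add disk c2t1d0s2 type=sliced    (record the placeholder device c2t1d0s2 as a sliced disk)
# vxencap -g rootdg rootdisk=c2t1d0s2     (encapsulate the existing partitions on that device into rootdg)
# vxdisk list                             (confirm the disk is now under VxVM control)
# vxprint -g rootdg -ht                   (confirm the disk group contents)

Because encapsulation preserves the data already on the partitions, it is usually the right choice for a clone that already carries a filesystem; note that vxencap typically takes effect only after a reboot, after which the encapsulated volumes can be mounted.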
 

9 More Discussions You Might Find Interesting

1. HP-UX

Is anybody using EMC Symmetrix?

Hello, I'm looking for reference sites using HP-UX and EMC Symmetrix disks. May I ask you some questions? (6 Replies)
Discussion started by: cooldugong

2. Solaris

EMC

Dear gentlemen, kindly update me: how can I know the disks on EMC and get the size of all disks on EMC? (1 Reply)
Discussion started by: magasem

3. HP-UX

EMC

Dear gentlemen, kindly update me: how can I know the disks on EMC and get the size of all disks on EMC? (0 Replies)
Discussion started by: magasem

4. Solaris

Mount points of available VxVM disks

I have a situation where I want to know all the VxVM mount point information on my Solaris box, because after rebooting the server a few mount points were not mounted automatically, as they were not added to /etc/vfstab. Please help me out with this. :( (0 Replies)
Discussion started by: nimish_mehta
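
As a side note on the question above, a minimal sketch of one common way to compare what is currently mounted with what should go into /etc/vfstab on Solaris; these are standard Solaris/VxVM tools, but verify the exact options on your release:

# vxprint -ht        (list the VxVM disk groups, volumes and plexes that exist)
# df -F vxfs         (show the VxFS file systems that are currently mounted)
# mount -p           (print the current mounts in /etc/vfstab format, handy for adding the missing entries)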

5. Solaris

EMC Failover

Hi guys, I'm running VxDMP and PowerPath at the same time: VxDMP for internal disks and PowerPath for external ones. The problem is that during the failover tests, in which a fibre cable is removed, the system cannot recognize the disks. Any hints on how to configure PowerPath in order to... (4 Replies)
Discussion started by: glenioborges
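
On the failover question above, a rough, hedged starting point for checking path state after a cable pull, assuming EMC PowerPath is installed (verify against your PowerPath release):

# powermt display dev=all    (show every PowerPath-managed device and the state of each of its paths)
# powermt restore            (re-test dead paths and bring back any that have recovered)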

6. Programming

Wait for clones

Hi, I have a method creating several clones with the following flags: CLONE_VM | CLONE_THREAD | CLONE_SIGHAND. How do I tell the parent process to wait for the clones' completion? pid = clone(...); waitpid(pid, NULL, 0) always returns -1, and waitpid(-1, NULL, 0) does the same. (1 Reply)
Discussion started by: The-Forgotten

7. Solaris

Solaris/VxVM/EMC LUN configuration

Hello all, I am trying to allocate the same LUN to two servers (or more in the future). I use Solaris 10, VxVM (VxFS) for data and Solaris zones, and an EMC DMX-4, and I am trying to migrate Solaris zones between servers in case of a problem. This is what I want to do: - assign the LUN to srv00124 and srv10155 -... (5 Replies)
Discussion started by: mat_solaris
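
On the shared-LUN question above, one common VxVM pattern for moving data between two hosts that can both see the same LUN is to deport the disk group on one host and import it on the other. A minimal, hedged sketch; the disk group name appdg is a placeholder:

# vxdg deport appdg      (on the current owner: release the disk group)
# vxdisk scandisks       (on the target host: rescan for the shared LUN; available on newer VxVM releases)
# vxdg import appdg      (on the target host: take ownership of the disk group)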

8. UNIX for Dummies Questions & Answers

Updating git clones

Hi, I'm fairly new to the git command and I'm trying to figure out how to check if your local clone is up to date with the master. I know you can do the same thing on packages with apt-get by using update and then upgrade. Is there something similar with git? (0 Replies)
Discussion started by: silverdust
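
For the git question above, a minimal sketch of the usual way to check whether a local clone is behind its remote, assuming the remote is named origin and the branch has an upstream configured (both may differ in your repository):

$ git fetch origin    (download the remote state without touching your working tree)
$ git status          (reports whether your branch is ahead of or behind its upstream)
$ git pull            (fetch and merge the remote changes if you decide to update)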

9. Programming

Chinese Arduino UNO Clones - The Wavgat versus the generic UNO R3 Clone - The Winner Is?

While waiting for more fun Arduino parts from AliExpress, I decided to test two cheap Chinese Arduino UNO clones: the Arduino UNO R3 (CH340G) MEGA328P and the Wavgat UNO R3 (CH340G) MEGA328P. Both of these Chinese Arduino clones sell for about $3 USD, delivered to your door. The bottom line is... (0 Replies)
Discussion started by: Neo
volsetup(8)						      System Manager's Manual						       volsetup(8)

NAME
volsetup, lsmsetup - Initializes Logical Storage Manager (LSM) by creating the rootdg disk group

SYNOPSIS
      /usr/sbin/volsetup [-c] [-o force] [-n num] [-s] [diskname | partition...] [attribute...]

OPTIONS
The following options are recognized:

-c        Clears the lock protecting multiple nodes in a cluster from simultaneously running the volsetup command.
          After clearing the lock, it is taken out on behalf of the initiating node.

-n num    Specifies the approximate number of disks to be managed by LSM. This option is currently ignored and is only
          provided for compatibility with existing scripts.

-o force  Forces re-initialization if LSM has already been initialized.

-s        Synchronizes a node with cluster members.

DESCRIPTION
The volsetup script is an interactive script that should be run after installing LSM. The diskname or partition parameter
specifies the name of at least one disk or partition to be used in creating the rootdg disk group. If no disk or partition
name is given on the command line, the volsetup script prompts for this information. If more than one disk name or
partition name is given as input, all the disks and partitions are added to the rootdg disk group.

The -o force option can be used to remove an existing LSM configuration and reinitialize LSM.

The volsetup script starts the vold daemon and one voliod daemon per CPU by default. After volsetup has been run, LSM is
fully functional.

To configure LSM in a TruCluster Version 5.0 multi-member cluster, run the volsetup command from one of the cluster members
and run volsetup -s on the other cluster members. If additional members are later added to the cluster with the
clu_add_member utility, do not run the volsetup -s command on the new member. The clu_add_member utility automatically
synchronizes LSM on the new node.
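
A minimal sketch of the cluster workflow described above; the disk name dsk3 is a placeholder, not taken from this manual page:

      # /usr/sbin/volsetup dsk3      (on one cluster member: initializes LSM and creates rootdg on the placeholder disk dsk3)
      # /usr/sbin/volsetup -s        (on each of the other existing cluster members: synchronizes that node with the first)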

ATTRIBUTES
The following attributes can be specified to affect the layout strategy used by volsetup:

  o  Specifies the length of the public area to create on the disk. This defaults to the size of the disk minus the
     private area on the disk.

  o  Specifies the length of the private area to create on the disk. The default is 4096 sectors.

  o  Specifies the number of configuration copies and log copies to be initialized on the disk. The number of
     configuration copies will be the same as the number of log copies. This defaults to 1.

  o  Specifies the length in sectors of each configuration copy. The default values are calculated based on the value of
     nconfig.

  o  Specifies the length in sectors of each log copy. The default values are calculated based on the value of nconfig.

  o  Specifies a user-defined comment.
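
As a hedged illustration only: the attribute keywords themselves are not preserved in the text above (only nconfig is named), but assuming they are passed as keyword=value arguments in the usual LSM/VxVM style, a call requesting two configuration and log copies might look like:

      # /usr/sbin/volsetup dsk3 nconfig=2    (dsk3 is a placeholder disk; nconfig=2 requests two configuration/log copies per the description above)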

ERRORS
You may receive the following messages when using the volsetup command. LSM initialization fails if none of the disks
specified can be initialized by LSM.

The following message indicates that LSM is already initialized on the system. To reinitialize LSM, use the -o force
option, which removes the previous LSM configuration.

      A previous LSM configuration exists (err=22).
      Use the "-o force" option to reinitialize LSM. Stop.

The following message indicates that you tried to initialize an LSM disk on a partition or on a disk that is actively in
use. The partition could be a mounted UFS or AdvFS filesystem, a partition already initialized as an LSM disk, or a swap
device.

      special-device or an overlapping partition is open.

The following message indicates that you tried to initialize an LSM disk on a partition that is not currently in active
use, but is marked for use in the disk label's partition map. For example, the partition may be part of a UFS filesystem
(4.2BSD) or an AdvFS domain.

      special-device is marked in use for fstype in the disklabel.
      If you continue with the operation you can possibly destroy existing data. CONTINUE? [y/n]

If you know that the partition you specified to volsetup does not contain any data, you can choose to override the
warning. In this case, the fstype in the disk label is modified to an LSM fstype such as LSMsimp, LSMpubl or LSMpriv. The
exact fstype depends on whether a disk or a partition is given as an argument to voldisksetup. Note that you can use the
command disklabel -s to set the fstype in the disk label to unused for partitions that do not contain any valid data. See
disklabel(8) for more information.

The following message indicates that the partition you specified is not marked for use, but other, overlapping partitions
on the disk are marked for use.

      Partition(s) which overlap special-device are marked in use.
      If you continue with the operation you can possibly destroy existing data. CONTINUE? [y/n]

If you override this warning, the fstype in the disk's label is modified. The partition you specified to volsetup will be
marked as in use by LSM and all overlapping partitions will be marked UNUSED.

The following examples illustrate these messages:

Initializing an LSM disk on a partition that is open and actively in use:

      # /usr/sbin/volsetup dsk11c
      dsk11c or an overlapping partition is open.

Initializing an LSM sliced disk on a disk which has partition g marked for use by UFS (4.2BSD):

      # /usr/sbin/volsetup dsk11
      /dev/rdisk/dsk11g is marked in use for 4.2BSD in the disklabel.
      If you continue with the operation you can possibly destroy existing data. CONTINUE? [y/n]

Partition g of disk dsk11 is marked for use by UFS (4.2BSD). If UFS is not actively using this partition and the partition
does not contain any data, you may want to override this warning by answering y. In this case, partition g will be marked
as LSMpubl and partition h will be marked as LSMpriv in the disk label.

Initializing an LSM simple disk on a partition whose overlapping partitions are marked for use:

      # /usr/sbin/volsetup dsk11c
      Partition(s) which overlap /dev/rdisk/dsk11c are marked in use.
      If you continue with the operation you can possibly destroy existing data. CONTINUE? [y/n]

Partition c, which is being initialized into LSM, is not currently in use, but other partitions which overlap with
partition c are marked in use in the disk label. If you answer y, partition c on disk dsk11 will be marked LSMsimp in the
disk label and all partitions that overlap partition c will be marked UNUSED.
Initializing an LSM disk on a disk that has no disk label:

      # /usr/sbin/volsetup dsk11
      The disklabel for dsk11 does not exist or is corrupted. Quitting...

See disklabel(8) for information on installing a disk label on a disk.

EXAMPLES
The following is an example of volsetup usage:

      # /usr/sbin/volsetup dsk3 dsk8h

This will add disk dsk3 and partition dsk8h to the rootdg disk group.

SEE ALSO
disklabel(8), volintro(8), vold(8), voliod(8)