Operating Systems Solaris Exporting physical disk to ldom or ZFS volume Post 303016309 by Peasant on Tuesday 24th of April 2018 11:25:25 AM
It's not inherently bad, just slower.
Adding an extra layer also makes you more prone to bugs, and zvol performance bugs do exist.
A zvol has its uses, but probably not in the scenario you are in.

Keep it simple and as close to the hardware as you can get.
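As a sketch of the difference (the disk device, volume names, and guest name here are hypothetical, assuming an Oracle VM Server for SPARC control domain with a virtual disk server primary-vds0 already configured):

```shell
# Export a raw physical disk (slice 2 = whole disk) to the guest:
# closest to the hardware, fewest layers in the I/O path.
ldm add-vdsdev /dev/dsk/c0t5000CCA012345678d0s2 ldg1-disk0@primary-vds0
ldm add-vdisk vdisk0 ldg1-disk0@primary-vds0 ldg1

# The zvol alternative puts a ZFS layer in the service domain:
zfs create -V 40g rpool/ldg1vol
ldm add-vdsdev /dev/zvol/dsk/rpool/ldg1vol ldg1-zvol0@primary-vds0
ldm add-vdisk vdisk1 ldg1-zvol0@primary-vds0 ldg1
```

Both end up as a plain virtual disk inside the guest; the zvol route buys you snapshots and easy resizing at the cost of that extra layer.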

The question is: what is that ldom being used for, and which release are you running?

Regards
Peasant.
10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

physical volume and physical disk.

Hello, I need explanations about physical disks and physical volumes. What is the difference between these 2 things? In fact, i am trying to understand what the AIX lspv command does. Thank you in advance. (2 Replies)
Discussion started by: VeroL

2. UNIX for Dummies Questions & Answers

Physical volume- no free physical partitions

I was in smit, checking on disc space, etc. and it appears that one of our physical volumes that is part of a large volume group, has no free physical partitions. The server is running AIX 5.1. What would be the advisable step to take in this instance? (9 Replies)
Discussion started by: markper

3. AIX

Basic Filesystem / Physical Volume / Logical Volume Check

Hi! Can anyone help me on how I can do a basic check on the Unix filesystems / physical volumes and logical volumes? What items should I check, like where do I look at in smit? Or are there commands that I should execute? I need to do this as I was informed by IBM that there seems to be... (1 Reply)
Discussion started by: chipahoys

4. HP-UX

Unmount and remove all Logical vol.Volume group and physical disk

Hi, Someone please help me with how i can unmount and remove all the files systems from a cluster. This is being shared by two servers that are active_standby. (3 Replies)
Discussion started by: joeli

5. Solaris

Ldom OS on SAN based zfs volume

Is it possible to use zvol from SAN LUN to install LDOM OS ? I 'm using following VDS from my service domain VDS NAME LDOM VOLUME DEVICE primary-vds0 primary iso sol-10-u6-ga1-sparc-dvd.iso cdrom ... (16 Replies)
Discussion started by: fugitive

6. UNIX for Dummies Questions & Answers

Confusion Regarding Physical Volume,Volume Group,Logical Volume,Physical partition

Hi, I am new to unix. I am working on Red Hat Linux and side by side on AIX also. After reading the concepts of Storage, I am now really confused regarding the terminologies 1)Physical Volume 2)Volume Group 3)Logical Volume 4)Physical Partition Please help me to understand these concepts. (6 Replies)
Discussion started by: kashifsd17

7. Solaris

ZFS rpool physical disk move

I'd like to finish setting up this system and then move the secondary or primary disk to another system that is the exact same hardware. I've done things like this in the past with ufs and disk suite mirroring just fine. But I have yet to do it with a zfs root pool mirror. Are there any... (1 Reply)
Discussion started by: Metasin

8. Linux

Logical Volume to physical disk mapping

When installing Linux, I choose some default setting to use all the disk space. My server has a single internal 250Gb SCSI disk. By default the install appears to have created 3 logical volumes lv_root, lv_home and lv_swap. fdisk -l shows the following lab3.nms:/dev>fdisk -l Disk... (2 Replies)
Discussion started by: jimthompson

9. AIX

Trouble removing Physical Disk from Volume Group

I want to remove hdisk1 from volume group diskpool_4 and migrate PV from hdisk1 to hdisk2 , but facing problems, so what is the quickest way to migratepv and remove hdisk1 -- # lspv | grep diskpool_4 hdisk1 00c7780e2e21ec86 diskpool_4 active hdisk2 ... (2 Replies)
Discussion started by: filosophizer

10. Solaris

Sharing a physical disk with an LDOM

I have a guest LDOM running Solaris 10U11 on a Sun T4-1 host running Solaris 11.4. The host has a disk named bkpool that I'd like to share with the LDOM so both can read and write it. The host is hemlock, the guest is sol10. root@hemlock:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP ... (3 Replies)
Discussion started by: Michele31416
vxrootmir(1M)

NAME
vxrootmir - create a mirror of a Veritas Volume Manager root disk

SYNOPSIS
/etc/vx/bin/vxrootmir [-g diskgroup] [-t tasktag] [-p Pool_1,Pool_2,...] [[-v] [-b] [-R] root_mirror]

DESCRIPTION
The vxrootmir command creates mirrors of all of the volumes on a Veritas Volume Manager (VxVM) rootable boot disk, and makes the new disk bootable. A disk to be used as a mirror can be specified either by its VM disk name (disk media name) or by its device name (disk access name).

If a disk media name is specified, it is validated to make sure that it exists and that it has been properly initialized. This validation includes making sure that the private region is at the same location and has the same length as the private region on the primary root disk, and that the sum of the lengths of all of the subdisks located on the primary root disk will fit within the available space in the public region of the specified disk.

If a disk access name is specified, it is validated to make sure it exists and is not in use, and that the total length of all the subdisks on the primary root disk will fit within the public region. The disk is then initialized to contain a private region with the same offset and length as the private region on the primary root disk. A new disk media name is assigned to the disk, formed from the prefix rootdisk followed by the next available number (for example, rootdisk02, rootdisk03, and so on).

All volumes that have a subdisk on the primary VxVM root disk are mirrored on the specified disk. When the root volume (rootvol) is mirrored, the vxassist command executes vxbootsetup to set up the new disk as a boot disk.

The -p option allows you to mirror the volumes on the root disk as stripe columns across several disks. The argument to this option is a list of disks that are to be used for the stripe column mirrors. If not enough disks are specified, vxrootmir prints a message to the standard error output, including information on how many disks are required, and then exits. When initialized for VxVM use, these stripe column disks are named with the prefix rootaux followed by the next available number (for example, rootaux01, rootaux02, and so on).
OPTIONS
-b            If the system was booted from the VxVM root disk that is being mirrored, this option uses the setboot command to set the alternate boot disk to the specified mirror. If the system was booted from another root disk (such as an LVM root disk), an alternate root disk is not set. If the -v option is also specified, information is displayed on the current setboot settings, and on whether the alternate boot disk is set to the specified mirror.

-g diskgroup  Specifies the boot disk group.

-p Pool_1,Pool_2,...
              Specifies the disks that are to be used for stripe column targets when mirroring the VxVM root disk. The disks can be specified either as disk access names, or as disk media names if they have previously been initialized for use with VxVM. If specified as disk access names, the disks are checked for existence, correct size, and availability for use.

-R            Indicates that only the volumes required to boot successfully from the new mirror are to be mirrored.

-t tasktag    Marks any tasks that are registered to track the progress of an operation with the tag tasktag. This option is passed to vxassist when mirroring volumes, so any child tasks are also tagged with tasktag.

-v            Displays verbose output, including timestamps for operations in progress. This option is useful, as mirroring large volumes can take a long time.

ARGUMENTS
daname        Specifies the disk to be used as a mirror by its disk access name (such as c0t2d0).

dmname        Specifies the disk to be used as a mirror by its disk media name (such as rootdisk03).

EXAMPLES
This example shows the vxrootmir command being invoked in its simplest form:

    /etc/vx/bin/vxrootmir c5t1d0

The next example shows how to use the -R option with vxrootmir:

    # /etc/vx/bin/vxrootmir -v -b -R c5t10d0
    vxrootmir: 10:10: Gathering information on the current VxVM root configuration
    vxrootmir: 10:10: Checking specified disk(s) for usability
    vxrootmir: 10:10: Preparing disk c5t10d0 as a VxVM disk
    vxrootmir: 10:10: Adding disk c5t10d0 to rootdg as rootdisk02
    vxrootmir: 10:10: Mirroring only volumes required for root mirror boot
    vxrootmir: 10:10: Mirroring volume standvol
    vxrootmir: 10:11: Mirroring volume swapvol
    vxrootmir: 10:18: Mirroring volume rootvol
    vxrootmir: 10:20: Current setboot values:
    vxrootmir: 10:20:   Primary:   0/4/0/1.11.0
    vxrootmir: 10:20:   Alternate: 0/4/0/1.13.0
    vxrootmir: 10:20: Making c5t10d0 (0/4/0/1.10.0) the alternate boot disk
    vxrootmir: 10:20: Disk c5t10d0 is now a mirrored root disk

The final example shows how to specify a list of disks for use as stripe column mirrors:

    # /etc/vx/bin/vxrootmir -v -p c5t11d0,c5t12d0,c5t13d0 c5t10d0
    vxrootmir: 12:11: Gathering information on the current VxVM root configuration
    vxrootmir: 12:11: Checking specified disk(s) for usability
    vxrootmir: 12:11: Preparing disk c5t10d0 as a VxVM disk
    vxrootmir: 12:11: Adding disk c5t10d0 to rootdg as rootdisk02
    vxrootmir: 12:11: Preparing disk c5t11d0 as a VxVM disk
    vxrootmir: 12:11: Adding disk c5t11d0 to rootdg as DM rootstpm01
    vxrootmir: 12:11: Preparing disk c5t12d0 as a VxVM disk
    vxrootmir: 12:11: Adding disk c5t12d0 to rootdg as DM rootstpm02
    vxrootmir: 12:11: Preparing disk c5t13d0 as a VxVM disk
    vxrootmir: 12:11: Adding disk c5t13d0 to rootdg as DM rootstpm03
    vxrootmir: 12:11: Mirroring all volumes on root disk
    vxrootmir: 12:11: Mirroring volume standvol
    vxrootmir: 12:12: Mirroring volume swapvol
    vxrootmir: 12:19: Mirroring volume rootvol
    vxrootmir: 12:21: Mirroring volume optvol
    vxrootmir: 12:24: Mirroring volume usrvol
    vxrootmir: 12:27: Mirroring volume homevol
    vxrootmir: 12:28: Mirroring volume tmpvol
    vxrootmir: 12:28: Mirroring volume varvol
    vxrootmir: 12:36: Disk c5t10d0 is now a mirrored root disk

NOTES
If the vxrootmir command aborts for any reason, or if you interrupt the command during execution, an attempt is made to clean up the VxVM objects that had been generated up to the time of the abort or interruption. All mirror plexes that had already been added, or that were in the process of being added when the interruption occurred, are removed. All disk media (DM) objects that were created are also removed. If a plex or a DM object cannot be removed, an explanatory message is displayed.

SEE ALSO
setboot(1M), vxassist(1M), vxbootsetup(1M), vxintro(1M), vxmirror(1M), vxtask(1M)

VxVM 5.0.31.1                    24 Mar 2008                    vxrootmir(1M)