Operating Systems > Solaris: Exporting physical disk to ldom or ZFS volume
Post 303016292 by psychocandy, Tuesday 24th of April 2018, 04:37 AM
Quote:
Originally Posted by Peasant
In the second example, are you using ZVOLs?

Using zvols is not so good performance-wise; you are adding additional layers without need.

If not, can you explain in a bit more detail how you are exporting the mirrored zpool from the primary to the ldom?

In the first example, you are doing things properly.

The output of the following commands would be useful:
Code:
ldm list -l <slow_perf_ldom>
ldm list-services

Yes, ZVOLs.
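For context, the export we were moving away from looked roughly like this. This is a sketch from memory; the pool, volume, and domain names are made up:

Code:
# On the primary domain: carve a volume out of the mirrored zpool...
zfs create -V 100g tank/ldom1-disk0
# ...export its block device through the virtual disk server...
ldm add-vdsdev /dev/zvol/dsk/tank/ldom1-disk0 ldom1-disk0@primary-vds0
# ...and attach it to the guest as a virtual disk
ldm add-vdisk disk0 ldom1-disk0@primary-vds0 ldom1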

We've managed to get rid of them now. Here's what we did:

1. Added a new LUN to the ldom as a vdisk and attached it as a mirror of the existing (exported zvol) disk. (It took a LONG time to resilver, which shows how slow that disk was.)

2. Added another LUN as a vdisk to create a three-way mirror (this resilver was faster).

3. Dropped the old zvol-backed disk from the mirror.

Performance is now MASSIVELY faster. (A rough sketch of the commands is at the end of this post.)

I understand it's not ideal to use a zvol, but I can't see anything in the manual which says it's this bad.
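For anyone hitting the same problem, the steps above look roughly like this. Again, only a sketch: the LUN path, pool name, and guest device names are invented, so substitute your own.

Code:
# On the primary domain: export the new LUN as a raw, full-disk backend
ldm add-vdsdev /dev/dsk/c0t<LUN-WWN>d0s2 ldom1-disk1@primary-vds0
ldm add-vdisk disk1 ldom1-disk1@primary-vds0 ldom1

# Inside the guest: attach the new disk as a mirror of the slow zvol-backed disk
zpool attach datapool c0d0 c0d1
zpool status datapool     # wait for the resilver to finish

# Repeat with a second LUN for the three-way mirror, then drop the zvol-backed disk
zpool attach datapool c0d0 c0d2
zpool detach datapool c0d0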
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

physical volume and physical disk.

Hello, I need explanations about physical disks and physical volumes. What is the difference between these two things? In fact, I am trying to understand what the AIX lspv command does. Thank you in advance. (2 Replies)
Discussion started by: VeroL

2. UNIX for Dummies Questions & Answers

Physical volume- no free physical partitions

I was in smit, checking on disk space, etc., and it appears that one of our physical volumes that is part of a large volume group has no free physical partitions. The server is running AIX 5.1. What would be the advisable step to take in this instance? (9 Replies)
Discussion started by: markper

3. AIX

Basic Filesystem / Physical Volume / Logical Volume Check

Hi! Can anyone help me with how to do a basic check on the Unix filesystems / physical volumes and logical volumes? What items should I check, and where do I look in smit? Or are there commands that I should execute? I need to do this as I was informed by IBM that there seems to be... (1 Reply)
Discussion started by: chipahoys

4. HP-UX

Unmount and remove all Logical vol.Volume group and physical disk

Hi, can someone please help me with how I can unmount and remove all the file systems from a cluster? This is being shared by two servers that are active_standby. (3 Replies)
Discussion started by: joeli

5. Solaris

Ldom OS on SAN based zfs volume

Is it possible to use a zvol from a SAN LUN to install the LDOM OS? I'm using the following VDS from my service domain: VDS NAME LDOM VOLUME DEVICE primary-vds0 primary iso sol-10-u6-ga1-sparc-dvd.iso cdrom ... (16 Replies)
Discussion started by: fugitive

6. UNIX for Dummies Questions & Answers

Confusion Regarding Physical Volume,Volume Group,Logical Volume,Physical partition

Hi, I am new to Unix. I am working on Red Hat Linux and, side by side, on AIX also. After reading about storage concepts, I am now really confused by the terminology: 1) Physical Volume, 2) Volume Group, 3) Logical Volume, 4) Physical Partition. Please help me to understand these concepts. (6 Replies)
Discussion started by: kashifsd17

7. Solaris

ZFS rpool physical disk move

I'd like to finish setting up this system and then move the secondary or primary disk to another system that is the exact same hardware. I've done things like this in the past with ufs and disk suite mirroring just fine. But I have yet to do it with a zfs root pool mirror. Are there any... (1 Reply)
Discussion started by: Metasin

8. Linux

Logical Volume to physical disk mapping

When installing Linux, I choose some default setting to use all the disk space. My server has a single internal 250Gb SCSI disk. By default the install appears to have created 3 logical volumes lv_root, lv_home and lv_swap. fdisk -l shows the following lab3.nms:/dev>fdisk -l Disk... (2 Replies)
Discussion started by: jimthompson

9. AIX

Trouble removing Physical Disk from Volume Group

I want to remove hdisk1 from volume group diskpool_4 and migrate the PV from hdisk1 to hdisk2, but I am facing problems, so what is the quickest way to migratepv and remove hdisk1? -- # lspv | grep diskpool_4 hdisk1 00c7780e2e21ec86 diskpool_4 active hdisk2 ... (2 Replies)
Discussion started by: filosophizer

10. Solaris

Sharing a physical disk with an LDOM

I have a guest LDOM running Solaris 10U11 on a Sun T4-1 host running Solaris 11.4. The host has a disk named bkpool that I'd like to share with the LDOM so both can read and write it. The host is hemlock, the guest is sol10. root@hemlock:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP ... (3 Replies)
Discussion started by: Michele31416
scconf_dg_vxvm(1M)					  System Administration Commands					scconf_dg_vxvm(1M)

NAME
       scconf_dg_vxvm - add, change, or update VxVM device group configuration

SYNOPSIS
       scconf -a -D type=vxvm,devicegroup-options[,localonly=true|false]

       scconf -c -D devicegroup-options[,sync]

       scconf -r -D name=devicegroupname

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

       The following information is specific to the scconf command. To use the equivalent object-oriented commands, see the cldevicegroup(1CL) man page.

       The scconf_dg_vxvm command is used to add, change, and remove VERITAS Volume Manager (VxVM) device groups in the Sun Cluster device-groups configuration.

OPTIONS
       See the scconf(1M) man page for the list of supported generic device-group options.

       The following action options describe the actions that the command performs. Only one action option is allowed in the command.

       -a      Add a VxVM device group to the cluster configuration.

               The -a (add) option adds a new VxVM device group to the Sun Cluster device-groups configuration. With this option you define a name for the new device group, specify the nodes on which this group can be accessed, and specify a set of properties used to control actions.

               For VxVM device groups, you can only assign one VxVM disk group to a device group, and the device-group name must always match the name of the VxVM disk group. You cannot create a VxVM device group unless you first import the corresponding VxVM disk group on one of the nodes in that device's node list.

               Before you can add a node to a VxVM device group, every physical disk in the disk group must be physically ported to that node. After you register the disk group as a VxVM device group, you must first deport the disk group from the current node owner and turn off the auto-import flag for the disk group. To create a VxVM device group for a disk group, you must run the scconf command from the same node where the disk group was created.

       -c      Change the ordering of the node preference list, change the preference and failback policy, and change the desired number of secondaries.

               Use the scconf -c (change) option to change the order of the potential primary node preference, to enable or disable failback, to add more global devices to the device group, and to change the desired number of secondaries.

               The sync suboption is used to synchronize the clustering software with VxVM disk-group volume information. The sync suboption is only valid with the change form of the command. Use the sync suboption whenever you add or remove a VxVM volume from a VxVM device group or change any volume attribute, such as owner, group, or access permissions. Also use the sync suboption to change a device-group configuration to a replicated or non-replicated configuration.

               For device groups that contain disks that use Hitachi TrueCopy data replication, this sync suboption synchronizes the device-group configuration and the replication configuration. This synchronization makes Sun Cluster software aware of disks that are configured for data replication and enables the software to handle failover or switchover as necessary.

               After you create a Solaris Volume Manager disk set that contains disks that are configured for replication, you must run the sync suboption for the corresponding svm or sds device group. A Solaris Volume Manager disk set is automatically registered with Sun Cluster software as an svm or sds device group, but replication information is not synchronized at that time.

               For newly created vxvm and rawdisk device-group types, you do not need to manually synchronize replication information for the disks. When you register a VxVM disk group or a raw-disk device group with Sun Cluster software, the software automatically discovers any replication information on the disks.

               To change the order-of-node preference list from false to true, you must specify in the nodelist all the nodes that currently exist in the device group. You must also set the preferenced suboption to true. If you do not specify the preferenced suboption with the change form of the command, the already established true or false setting is used.
               If a disk group should be accessed by only one node, it should be configured with the localonly property set to true. This property setting puts the disk group outside the control of Sun Cluster software. Only one node can be specified in the node list to create a localonly disk group. To change a local-only disk group to a regular VxVM disk group, set the localonly property to false.

       -r      Remove the specified VxVM device group from the cluster.

               The -r (remove) option removes a VxVM device group from the Sun Cluster device-groups configuration. You can also use this form of the command to remove nodes from the VxVM device-group configuration.
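       As a hypothetical illustration of the localonly property described above (the disk-group and node names are invented, following the synopsis), a single-node registration would look like:

               host1# scconf -a -D type=vxvm,name=localdg,nodelist=host1,localonly=true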
EXAMPLES
       Example 1   Using scconf Commands

       The following scconf commands create a VxVM device group, change the order of the potential primary nodes, change the preference and failback policy for the device group, change the desired number of secondaries, and remove the VxVM device group from the cluster configuration.

         host1# scconf -a -D type=vxvm,name=diskgrp1,nodelist=host1:host2:host3,preferenced=false,failback=enabled

         host1# scconf -c -D name=diskgrp1,nodelist=host2:host1:host3,preferenced=true,failback=disabled,numsecondaries=2,sync

         host1# scconf -r -D name=diskgrp1,nodelist=node1
ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------+-----------------------+
       |    ATTRIBUTE TYPE     |    ATTRIBUTE VALUE    |
       +-----------------------+-----------------------+
       | Availability          | SUNWscu               |
       +-----------------------+-----------------------+
       | Interface Stability   | Evolving              |
       +-----------------------+-----------------------+
SEE ALSO
       Intro(1CL), cldevicegroup(1CL), scconf(1M), attributes(5)

Sun Cluster 3.2                   2 Aug 2006                  scconf_dg_vxvm(1M)