Solaris: Sharing a physical disk with an LDOM
Post 303042254 by Michele31416 on Thursday 19th of December 2019 05:17:31 PM
OK, I'm glad I asked then. So I have to mount the /bkpool disk in the LDOM as an NFS share? Can you give me a pointer on how to do that? Is this what Oracle calls "virtual disk multipathing"? There's an example of that further down in the link in the OP but I'm not quite sure how to do it. Also, do I first need to undo the add-vdsdev and add-vdisk commands I gave earlier? I don't want to mess up my disk.
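(Side note, in case the earlier export does need to be undone: that should just be the matching ldm remove commands, run from the control domain. The disk, volume, service, and domain names below are placeholders for whatever was used in the original add-vdsdev/add-vdisk commands, so adjust accordingly.)
Code:
root@hemlock:/# ldm remove-vdisk vdisk0 myldom
root@hemlock:/# ldm remove-vdsdev bkpool_vol@primary-vds0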

UPDATE

Well, as usual, the Oracle documentation was overly complex and ambiguous, but I figured it out thanks to the suggestion above:

On the host, assuming the IP of the LDOM is 192.168.0.78, do:
Code:
root@hemlock:/# share -F nfs -o rw,root=192.168.0.78 /bkpool/
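Note that a share created with the share command does not survive a reboot of the host. Assuming /bkpool is actually a ZFS pool (the name suggests it, but that's a guess), setting the sharenfs property makes the share persistent; the exact option syntax varies a bit between Solaris releases:
Code:
root@hemlock:/# zfs set sharenfs='rw,root=192.168.0.78' bkpool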

Then, in the LDOM (the host hemlock is at 192.168.0.183), do:
Code:
# cd /
# mkdir bkpool
# mount -F nfs -o vers=3 192.168.0.183:/bkpool /bkpool
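To have the LDOM remount it automatically at boot, an /etc/vfstab entry along these lines should work (fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options):
Code:
192.168.0.183:/bkpool  -  /bkpool  nfs  -  yes  vers=3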

The LDOM now has a mountpoint named /bkpool containing everything on the host's /bkpool disk. The host and the LDOM can both read and write the disk. No rebooting anywhere required. Easy! :-)
