Operating Systems > Solaris: Sharing a physical disk with an LDOM. Post 303042254 by Michele31416 on Thursday 19th of December 2019, 05:17:31 PM
OK, I'm glad I asked then. So I have to mount the /bkpool disk in the LDOM as an NFS share? Can you give me a pointer on how to do that? Is this what Oracle calls "virtual disk multipathing"? There's an example of that further down in the link in the OP, but I'm not quite sure how to do it. Also, do I first need to undo the add-vdsdev and add-vdisk commands I gave earlier? I don't want to mess up my disk.

UPDATE

Well, as usual, the Oracle documentation was overly complex and ambiguous. I figured it out, thanks to the suggestion above:

On the host, assuming the IP of the LDOM is 192.168.0.78, do:
Code:
root@hemlock:/# share -F nfs -o rw,root=192.168.0.78 /bkpool/
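If the share should survive a reboot of the host, it can also be recorded in /etc/dfs/dfstab rather than run as a one-off command. This is a sketch only; the path and options simply mirror the share command above:

```shell
# /etc/dfs/dfstab on the host -- entries here are re-exported
# automatically when the NFS server starts (or right away via "shareall").
share -F nfs -o rw,root=192.168.0.78 /bkpool
```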

Then, in the LDOM (where the IP of the host hemlock is 192.168.0.183), do:
Code:
# cd /
# mkdir bkpool
# mount -F nfs -o vers=3 192.168.0.183:/bkpool /bkpool

The LDOM now has a mountpoint named /bkpool containing everything on the host's /bkpool disk. The host and the LDOM can both read and write the disk. No rebooting anywhere required. Easy! :-)
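To make the mount persistent across reboots of the LDOM as well, an entry can be added to the guest's /etc/vfstab. A sketch, assuming the same host IP and NFS v3 option used above:

```shell
# /etc/vfstab entry in the LDOM -- mounts /bkpool from the host at boot.
# device to mount        device to fsck  mount point  FS type  fsck pass  mount at boot  options
192.168.0.183:/bkpool    -               /bkpool      nfs      -          yes            vers=3
```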

Last edited by Michele31416; 12-19-2019 at 09:13 PM.
numa_sched_launch(5)                File Formats Manual                numa_sched_launch(5)

NAME
       numa_sched_launch - change process default launch policy

VALUES
       Allowed values
              0, 1, 2
       Default
              1
       Recommended values
              1, unless the application requires explicitly different behavior.

DESCRIPTION
       The numa_sched_launch dynamic tunable controls the default launch policy for
       newly created processes. The process launch policy controls the initial
       placement of the child process at creation time. The scheduler can migrate
       threads from one locality domain (LDOM) to another to distribute workload
       for better throughput and responsiveness. The default launch policy applies
       only to processes that have no explicit launch policy, processor binding, or
       LDOM binding applied to them (see mpctl(2) for details). There are three
       possible values of this tunable:

       0      Explicitly disables any change in the default launch policy for
              processes, irrespective of the system configuration. A newly created
              process is placed using the legacy default launch policy.

       1      The default and recommended value. HP-UX will autosense the right
              policy setting based on the system configuration. This policy directs
              HP-UX to optimize the launch policy for multi-process applications
              that share data; such applications can get better performance when
              they are packed together in the same LDOM. The policy causes child
              processes created using fork(2) to be placed in the same locality
              domain as the parent process. Note that a different default launch
              policy may be used in the future with new system configurations for
              improved application performance when this tunable is enabled.
              Processes created using vfork(2) are treated as if they are a new
              application and continue to be launched using the legacy default
              launch policy.

       2      Explicitly enables the new default launch policy for processes. A
              process created using fork(2) is placed in the same locality domain
              as its parent process, irrespective of the system configuration.

       Who Is Expected to Change This Tunable?
              System administrators who prefer to explicitly control the default
              launch policy for applications even when LORA (Locality Optimized
              Resource Alignment) mode is enabled (see numa_policy(5) for details).

       Restrictions on Changing
              Changes to this tunable take effect immediately. However, changes do
              not affect processes that are already created; such processes must be
              stopped and restarted to be launched with the modified setting.

       When Should the Value of This Tunable Be Changed to 0?
              To preserve the legacy process default launch policy even when the
              system is configured in LORA mode.

       When Should the Value of This Tunable Be Changed to 1?
              To improve the performance of multi-process applications.

       When Should the Value of This Tunable Be Changed to 2?
              When a multi-process application is likely to see improved
              performance even if the system is not configured for LORA mode.

       What Are the Side Effects of Changing the Value?
              The distribution of CPU utilization across the system will change.
              This can result in a change in performance, which is highly dependent
              on the workload and the partition configuration.

       What Other Tunable Values Should Be Changed at the Same Time?
              None.

WARNINGS
       All HP-UX kernel tunable parameters are release specific. This parameter may
       be removed or have its meaning changed in future releases of HP-UX.

       Installation of optional kernel software, from HP or other vendors, may
       cause changes to tunable parameter values. After installation, some tunable
       parameters may no longer be at the default or recommended values. For
       information about the effects of installation on tunable values, consult the
       documentation for the kernel software being installed. For information about
       optional kernel software that was factory installed on your system, see ...

AUTHOR
       numa_sched_launch was developed by HP.

SEE ALSO
       fork(2), mpctl(2), vfork(2), numa_policy(5).

                            Tunable Kernel Parameters           numa_sched_launch(5)
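On a live HP-UX 11i system, the tunable described above would typically be inspected and changed with the kctune administration command. A hypothetical session (the values shown are those defined in the man page; this is a sketch, not output from a real system):

```shell
# Show the current setting of the tunable:
kctune numa_sched_launch

# Explicitly enable the new launch policy even without LORA mode.
# Takes effect immediately, but does not affect already-running
# processes -- they must be restarted to pick up the new policy.
kctune numa_sched_launch=2
```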