Operating Systems > Solaris
Sharing a physical disk with an LDOM
Post 303042253 by hicksd8 on Thursday 19th of December 2019 04:16:14 PM
One thing for sure is that only one of the nodes (Solaris 11 Global or Solaris 10 LDOM) can have control of the volume. In any situation, having two operating systems writing to a volume simultaneously is a recipe for instant filesystem corruption; one operating system must control file opening, locking, etc. Even in a cluster scenario using dual-tailed storage, a major function of the cluster suite is to control which node has exclusive control of the volume and to effect disciplined failover when necessary.

Therefore, as with any two nodes, one option is to mount the volume on one node, configure an NFS share on that node, and mount that share from the second node using an NFS client. The first node then controls ALL activity on the volume.
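
A minimal sketch of that approach, assuming the Solaris 11 global zone owns a ZFS dataset (the pool, dataset, and host names here are illustrative, not from the original post):

  # On the first node (Solaris 11 global zone), which owns the volume:
  zfs create -o mountpoint=/export/shared tank/shared
  zfs set share.nfs=on tank/shared     # legacy alternative: share -F nfs /export/shared

  # On the second node (the Solaris 10 LDOM), mount it as an NFS client:
  mkdir -p /mnt/shared
  mount -F nfs s11-global:/export/shared /mnt/shared

With this arrangement the first node performs all block I/O, locking, and filesystem bookkeeping; the second node only ever speaks NFS, so two kernels never write the same on-disk structures.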
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

physical volume and physical disk.

Hello, I need explanations about physical disks and physical volumes. What is the difference between these two things? In fact, I am trying to understand what the AIX lspv command does. Thank you in advance. (2 Replies)
Discussion started by: VeroL

2. HP-UX

determine the physical size of the hard disk

Hi, is there a command in HP-UX 11 to determine the physical size of the hard disk (not the bdf command)? I have searched the other threads here but can't find an answer. Thank you guys. (4 Replies)
Discussion started by: hoffies
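
For reference, the usual HP-UX way to get the size of the disk mechanism itself (as opposed to filesystem usage from bdf) is diskinfo(1M) against the raw device file; the device name below is illustrative:

  # Reports vendor, product ID, and size (in KB) of the physical disk
  diskinfo /dev/rdsk/c0t6d0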

3. Solaris

List all resources on physical host LDOM server

Hello, I have a SUN T5240 running Solaris 10 with Logical Domain Manager (v 1.0.3). You can use the "ldm" command to display current resources on the box. Is there a way to display all the "physical resources" on the box (i.e., used and unused)? For example, "ldm ls" will tell me what the... (5 Replies)
Discussion started by: stephanpitts

4. Solaris

Sharing a local disk between two Solaris machines

Hi, I recently added a disk on a Solaris 9 machine and I wanted to make it accessible to another machine, using the same name. Here is what I did: on the machine holding the internal disk, in vfstab I added the line /dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /SHARED2 ufs 2 yes ... (2 Replies)
Discussion started by: zionassedo

5. Solaris

Installing Solaris OS on LDOM SAN Disk

I have viewed a few previous posts regarding this, but none of them quite described or worked with my issue. I am out of local disk space on my LDOM manager but still have plenty of SAN, vCPU, and memory available, so I am trying to install a new LDOM OS on SAN. I have exposed the SAN to the... (0 Replies)
Discussion started by: MobileGSP
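
A hedged sketch of the usual pattern for installing a guest OS onto a SAN-backed vdisk, run from the primary/control domain (the LUN path, volume, service, and domain names are all illustrative):

  # Export the SAN LUN through the virtual disk service and give it to the guest:
  ldm add-vdsdev /dev/dsk/c2t0d0s2 ldg1-os@primary-vds0
  ldm add-vdisk osdisk ldg1-os@primary-vds0 ldg1

  # Export an install ISO the same way, then boot the guest from it:
  ldm add-vdsdev options=ro /export/isos/sol-10-u11.iso ldg1-iso@primary-vds0
  ldm add-vdisk iso ldg1-iso@primary-vds0 ldg1
  ldm bind ldg1
  ldm start ldg1     # then, from the guest OBP, e.g.: boot iso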

6. Red Hat

Sharing a SAN disk with multiple servers

Hi, I had a requirement to share a SAN disk between two RHEL servers. I am planning to discover the same disk on two RHEL nodes and mount it. Is that a feasible solution, and what kind of issues might we encounter mounting the same disk on two OSes in parallel? (2 Replies)
Discussion started by: nanduri
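
As the answer above explains, simply mounting the same ext4/XFS filesystem on both RHEL nodes at once will corrupt it; concurrent access needs a cluster-aware filesystem. A hedged sketch using GFS2 (the cluster name, lock table, and device are illustrative, and a Pacemaker/DLM cluster must already be running):

  # Two journals (-j 2) for two nodes; lock_dlm coordinates access cluster-wide
  mkfs.gfs2 -p lock_dlm -t mycluster:shared_fs -j 2 /dev/mapper/mpatha
  mount -t gfs2 /dev/mapper/mpatha /shared     # run the mount on each node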

7. Solaris

Disk expansion on LDOM Guest

Hi, there is an LDOM guest where I need to expand the /u02 file system. It resides on a Solaris 11 hypervisor (primary domain). The storage has been expanded on the vdisk presented to the hypervisor. I need the steps to expand /u02 on the LDOM guest. (2 Replies)
Discussion started by: vidya_sagar2003
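
One common sequence for this, sketched under the assumption that /u02 is a ZFS filesystem on its own pool (the pool and device names are illustrative):

  # The backing vdisk has already been grown on the hypervisor.
  # In the guest, let the pool grow into the new space:
  zpool set autoexpand=on u02pool
  zpool online -e u02pool c0d2     # -e expands the vdev to the device's new size
  zpool list u02pool               # verify the extra capacity
  df -h /u02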

8. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do: on the primary, export 2 LUNs (add-vdsdev); on the primary, assign these disks to the ldom in question (add-vdisk); on the ldom, create a mirrored zpool from these two disks. On one server (which is older) we have: on the primary, create a mirrored zpool from the two LUNs.... (4 Replies)
Discussion started by: psychocandy
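
The two-LUN pattern described there looks roughly like this (the device paths and the volume, service, and domain names are illustrative):

  # On the primary domain: export both LUNs and hand them to the guest
  ldm add-vdsdev /dev/dsk/c2t0d0s2 ldg1-d0@primary-vds0
  ldm add-vdsdev /dev/dsk/c2t1d0s2 ldg1-d1@primary-vds0
  ldm add-vdisk data0 ldg1-d0@primary-vds0 ldg1
  ldm add-vdisk data1 ldg1-d1@primary-vds0 ldg1

  # In the guest: mirror across the two vdisks so either LUN can fail
  zpool create datapool mirror c0d1 c0d2

Mirroring inside the guest, rather than in the primary, keeps the redundancy visible to the domain that actually uses the data.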

9. Solaris

Disk alignment inside of an LDOM

Hi! Quick background for the question... I have a Solaris 11.4 control/primary zone with some LDOMs on top of it. I have some raw iSCSI LUNs presented to the control/primary zone from a NetApp, which I then pass up to the LDOMs via the VDS/vdisk. So basically the LDOM VMs see the disk as... (1 Reply)
Discussion started by: rtmg
cmhaltnode(1m)                                                                                          cmhaltnode(1m)

NAME
       cmhaltnode - halt a node in a high availability cluster

SYNOPSIS
       cmhaltnode [-f] [-v] [-t] [node_name...]

DESCRIPTION
       cmhaltnode causes a node to halt its cluster daemon and remove itself from the existing cluster. To halt the
       cluster on a node, a user must either be superuser (UID=0) or have an access policy of FULL_ADMIN allowed in
       the cluster configuration file. See access policy in cmquerycl.

       When cmhaltnode is run on a node, the cluster daemon is halted and, optionally, all packages that were running
       on that node are moved to other nodes if possible. If node_name is not specified, the cluster daemon running
       on the local node will be halted and removed from the existing cluster.

       If you issue this command while a cluster is still in the process of forming, the command will fail with the
       message "Unable to connect to daemon." If this happens, wait for the cluster to form successfully, then issue
       the command again.

   Options
       cmhaltnode supports the following options:

       -f           Force the node to halt even if there are packages or group members running on it. The group
                    members on the node will be terminated. The halt scripts for all packages running on the node
                    will be run; based on priority or dependency relationships, this may affect packages on other
                    nodes. In other words, packages on other nodes may either start or halt based on this package
                    halting. If the package configuration and current cluster membership permit, and if the package
                    halt script succeeds, the packages will be started on other nodes. Without this option, if
                    packages are running on the given node, the command will fail. If a package fails to halt, the
                    node halt will also fail.

       -v           Verbose output will be displayed.

       -t           Test only. Provide an assessment of the package placement without affecting the current state of
                    the nodes or packages. This option validates the node's eligibility with respect to the package
                    dependencies as well as external dependencies such as EMS resources, package subnets, and storage
                    before predicting any package placement decisions. If there is a package in maintenance mode
                    running on the nodes being halted, the package will always be halted and will not fail over to
                    another node; the report will not display an assessment for that package.

       node_name... The name of the node(s) to halt.

RETURN VALUE
       cmhaltnode returns the following values:

       0    Successful completion.
       1    Command failed.

EXAMPLES
       Halt the cluster daemon on two other nodes:

       cmhaltnode node2 node3

AUTHOR
       cmhaltnode was developed by HP.

SEE ALSO
       cmquerycl(1m), cmhaltcl(1m), cmruncl(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).

Requires Optional Serviceguard Software                                                                 cmhaltnode(1m)