Operating Systems > Solaris: Sharing a physical disk with an LDOM
Post 303042253 by hicksd8 on Thursday 19th of December 2019, 04:16:14 PM
One thing is for sure: only one of the nodes (Solaris 11 global or Solaris 10 LDOM) can have control of the volume. In any situation, having two operating systems writing to a volume simultaneously is a recipe for instant filesystem corruption; one operating system must control file opening, locking, and so on. Even in a cluster scenario using dual-tailed storage, a major function of the cluster suite is to control which node has exclusive control of the volume and to effect disciplined failover when necessary.

Therefore, as with any two nodes, one option is to mount the volume on one node, configure an NFS share on that node, and mount it from the second node as an NFS client. The first node then controls ALL activity on the volume.
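Something along these lines should do it (a rough sketch only; the dataset, share path, and the hostname "s11-global" are placeholders for whatever your setup actually uses):

    # On the Solaris 11 global domain that owns the volume:
    zfs set sharenfs=on rpool/export/shared
    svcadm enable -r network/nfs/server

    # On the Solaris 10 LDOM (NFS client):
    mkdir -p /export/shared
    mount -F nfs s11-global:/export/shared /export/shared

    # Optional /etc/vfstab entry on the LDOM for a persistent mount:
    # s11-global:/export/shared  -  /export/shared  nfs  -  yes  rw,bg

That keeps all file opening and locking on the node that owns the volume; the LDOM just sees an ordinary NFS mount.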
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

physical volume and physical disk.

Hello, I need explanations about physical disks and physical volumes. What is the difference between these 2 things? In fact, I am trying to understand what the AIX lspv command does. Thank you in advance. (2 Replies)
Discussion started by: VeroL
2 Replies

2. HP-UX

determine the physical size of the hard disk

Hi, is there a command in HP-UX 11 to determine the physical size of the hard disk (not the bdf command)? I have searched the other threads here but can't find an answer. Thank you, guys. (4 Replies)
Discussion started by: hoffies
4 Replies

3. Solaris

List all resources on physical host LDOM server

Hello, I have a SUN T5240 running Solaris 10 with Logical Domain Manager (v 1.0.3). You can use the "ldm" command to display current resources on the box. Is there a way to display all the "physical resources" on the box (i.e., used and unused)? For example, "ldm ls" will tell me what the... (5 Replies)
Discussion started by: stephanpitts
5 Replies

4. Solaris

Sharing a local disk between two Solaris machines

Hi, I recently added a disk on a Solaris 9 system and I wanted to make it accessible to another machine, using the same name. Here is what I did: on the machine holding the internal disk, I added the following line to vfstab: /dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /SHARED2 ufs 2 yes ... (2 Replies)
Discussion started by: zionassedo
2 Replies

5. Solaris

Installing Solaris OS on LDOM SAN Disk

I have viewed a few previous posts regarding this, but none of them quite described or worked with my issue. I am out of local disk space on my LDOM Manager but still have plenty of SAN, vCPU and memory available, so I am trying to install a new LDOM OS on SAN. I have exposed the SAN to the... (0 Replies)
Discussion started by: MobileGSP
0 Replies

6. Red Hat

Sharing a SAN disk with multiple servers

Hi, I have a requirement to share a SAN disk between two RHEL servers. I am planning to discover the same disk on two RHEL nodes and mount it. Is that a feasible solution, and what kind of issues might we encounter mounting the same disk on two OSes in parallel? (2 Replies)
Discussion started by: nanduri
2 Replies

7. Solaris

Disk expansion on LDOM Guest

Hi, there is an LDOM guest on which I need to expand the /u02 file system. It resides on a Solaris 11 hypervisor (primary domain). The storage has been expanded on the vdisk presented to the hypervisor. I need the steps to expand /u02 on the LDOM guest. (2 Replies)
Discussion started by: vidya_sagar2003
2 Replies

8. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do: On primary, export 2 LUNs (add-vdsdev). On primary, assign these disks to the ldom in question (add-vdisk). On ldom, create a mirrored zpool from these two disks. On one server (which is older) we have: On primary, create a mirrored zpool from the two LUNs.... (4 Replies)
Discussion started by: psychocandy
4 Replies

9. Solaris

Disk alignment inside of an LDOM

Hi! Quick background for the question... I have a Solaris 11.4 control/primary zone with some LDOMs on top of it. I have some raw iSCSI LUNs presented to the control zone/primary zone from a NetApp, which I then pass up to the LDOMs via the VDS/vdisk. So basically the LDOM VMs see the disk as... (1 Reply)
Discussion started by: rtmg
1 Replies
svadm(1M)						  System Administration Commands						 svadm(1M)

NAME
     svadm - command line interface to control Availability Suite Storage Volume operations

SYNOPSIS
     svadm -h
     svadm -v
     svadm [-C tag]
     svadm [-C tag] -i
     svadm [-C tag] -e {-f config_file | volume}
     svadm [-C tag] -d {-f config_file | volume}
     svadm [-C tag] -r {-f config_file | volume}

DESCRIPTION
     The svadm command controls the Storage Volume (SV) driver by providing facilities to enable and disable the SV driver for
     specified volumes, and to dynamically reconfigure the system.

OPTIONS
     If you specify no arguments to an svadm command, the utility displays the list of volumes currently under SV control. svadm
     supports the following options:

     -C tag            On a clustered node, limits operations to only those volumes belonging to the cluster resource group, or disk
                       group name, specified by tag. This option is illegal on a system that is not clustered. The special tag,
                       local, can be used to limit operations to only those volumes that cannot switch over to other nodes in the
                       cluster.

     -d                Disables the SV devices specified on the command line or in the configuration file. If -C tag is specified
                       with this option, then the volume should be in this cluster disk group.

     -e                Enables the SV devices specified on the command line or in the configuration file. Details of the volume are
                       saved in the current configuration. See dscfg(1M). If -C tag is specified with this option, then the volume
                       should be in this cluster disk group.

     -f config_file    Specifies a configuration file that contains a list of volumes. The command reads this volume list and then
                       performs the operation. The format of the config_file is a simple list of volume pathnames, one per line.
                       Blank lines and lines starting with the comment character (#) are ignored.

     -h                Displays the svadm usage summary.

     -i                Displays extended status for the volumes currently under SV control.

     -r                When a config_file is specified, reconfigure the running system to match the configuration specified in the
                       config_file. When the -C option is specified, compare the cluster tag for each volume and change it to
                       cluster_tag. If a volume is specified with this option, it is valid only to reconfigure the cluster tag
                       associated with the volume. The -e or -d options should be used to enable or disable single volumes.

     -v                Displays the SV version number.

USAGE
     When an SV volume is enabled, normal system call access to the device (see intro(2)) is redirected into the StorEdge
     architecture software. This allows standard applications to use StorageTek features such as Sun StorageTek Point-in-Time Copy
     and Remote Mirror Software.

     The svadm command generates an entry in the Availability Suite log file, /var/adm/ds.log (see ds.log(4)), when performing
     enable (-e) and disable (-d) operations.
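EXAMPLES
     The invocations below are an illustrative sketch based on the options described above; the volume path /dev/rdsk/c1t1d0s4 is a
     placeholder for a real volume on your system.

     Place a single volume under SV control:

         example# svadm -e /dev/rdsk/c1t1d0s4

     List the volumes currently under SV control, then show extended status:

         example# svadm
         example# svadm -i

     Remove the volume from SV control:

         example# svadm -d /dev/rdsk/c1t1d0s4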
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWspsvr, SUNWspsvu         |
     +-----------------------------+-----------------------------+
     |Interface Stability          |Evolving                     |
     +-----------------------------+-----------------------------+

SEE ALSO
     dscfg(1M), ds.log(4), attributes(5), sv(7D)

SunOS 5.11                                                  2 Oct 2007                                                      svadm(1M)