Operating Systems > Solaris: Sharing a physical disk with an LDOM
Post 303042253 by hicksd8, Thursday 19th of December 2019, 04:16:14 PM
One thing is for sure: only one of the nodes (Solaris 11 Global or Solaris 10 LDOM) can have control of the volume at any time. In any situation, two operating systems writing to a volume simultaneously is a recipe for instant filesystem corruption; one operating system must control file opening, locking, and so on. Even in a cluster scenario using dual-tailed storage, a major function of the cluster suite is to control which node has exclusive control of the volume and to effect disciplined failover when necessary.

Therefore, as with any two nodes, one option is to mount the volume on one node, configure an NFS share on that node, and mount it from the second node as an NFS client. The first node then controls ALL activity on the volume.
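As a rough sketch of that option (the hostname primary, the pool/filesystem names, and the mount points are assumptions for illustration, not details from this thread), the Solaris 11 global zone could publish the volume and the Solaris 10 LDOM could mount it like this:

```shell
# On the node that owns the disk (Solaris 11 global zone, hypothetical names):
# put a ZFS filesystem on the volume and publish it over NFS.
zfs create -o mountpoint=/export/shared rpool/shared
zfs set share.nfs=on rpool/shared    # Solaris 11 property; Solaris 10 uses sharenfs / share(1M)

# On the second node (the Solaris 10 LDOM), mount it as an NFS client:
mkdir -p /mnt/shared
mount -F nfs primary:/export/shared /mnt/shared
```

Because all file opens and locks then go through the first node's kernel, the corruption risk described above is avoided.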
 

ii(7D)                              Devices                              ii(7D)

NAME
     ii - Instant Image control device

DESCRIPTION
     The ii device is a control interface for Instant Image devices and
     controls the Instant Image module through the ioctl(2) interface.
     Instant Image is a point-in-time volume copy facility for the Solaris
     operating environment that is administered through the iiadm(1M)
     command. With Instant Image, you can create an independent point-in-time
     copy of a volume or a master volume-dependent point-in-time view. You
     can also independently access the master and shadow volumes for read
     and write operations. Instant Image also lets you update the shadow
     volume from the master volume or restore the master volume from the
     shadow. (Restore operations to volumes can be full or incremental.)
     Instant Image supports fast volume resynchronization, letting you
     create a new point-in-time volume copy by updating the specified volume
     with only changed data.

     To create a shadow volume you need:

     1. A master volume to be shadowed.

     2. A shadow volume where the copy will reside. This volume must be
        equal to or larger than the master volume.

     3. An administrative bitmap volume or file for tracking differences
        between the shadow and master volumes. The administrative bitmap
        volume or file must be at least 24KBytes in size and requires
        8KBytes for each GByte (or part thereof) of master volume size, plus
        an additional 8KBytes of overhead. For example, to shadow a 3GByte
        master volume, the administration volume must be 8KBytes +
        (3 x 8KBytes) = 32KBytes in size.

     The Instant Image module uses services provided by the SDBC and SD_GEN
     modules. The SV module is required to present a conventional block
     device interface to the storage product interface of the Instant Image,
     SDBC and SD_GEN modules.

     When a shadow operation is suspended or resumed, the administration
     volumes may be stored in permanent SDBC storage or loaded and saved to
     and from kernel memory. The ii_bitmap variable in the
     /kernel/drv/ii.conf configuration file determines the administration
     volume storage type. A value of 0 indicates kernel memory, while a
     value of 1 indicates permanent SDBC storage. If the system is part of a
     storage products cluster, use the 1 value (permanent storage);
     otherwise use kernel memory (0 value).

FILES
     /kernel/drv/ii          32-bit ELF kernel module (x86).

     /kernel/drv/ii.conf     Configuration file.

ATTRIBUTES
     See attributes(5) for a description of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Architecture                 |x86                          |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWiu                       |
     +-----------------------------+-----------------------------+
     |Interface Stability          |Committed                    |
     +-----------------------------+-----------------------------+

SEE ALSO
     iiadm(1M), ioctl(2), attributes(5), sv(7D)

SunOS 5.11                         8 Jun 2007                            ii(7D)
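The bitmap sizing rule above (8KBytes per GByte or part thereof, plus 8KBytes of overhead, with a 24KByte minimum) can be sketched as shell arithmetic; the bitmap_kb function is a hypothetical helper for illustration, not part of iiadm(1M), and it assumes a whole number of GBytes as input:

```shell
#!/bin/sh
# Required Instant Image bitmap size in KBytes for a master volume.
# Usage: bitmap_kb <master_size_in_whole_GBytes>
bitmap_kb() {
    gb=$1
    size=$(( 8 + gb * 8 ))            # 8KB overhead + 8KB per GByte (or part thereof)
    [ "$size" -lt 24 ] && size=24     # enforce the 24KByte minimum
    echo "$size"
}

bitmap_kb 3    # 3GByte master -> 32, matching the man page example
bitmap_kb 1    # 1GByte master -> 16 raw, raised to the 24KByte floor -> 24
```

For a master volume whose size is not a whole number of GBytes, round the size up first, since the rule charges 8KBytes for any part of a GByte.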
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.