12-19-2019
One thing is certain: only one of the nodes (the Solaris 11 global zone or the Solaris 10 LDOM) can have control of the volume at any time. Having two operating systems write to a volume simultaneously is a recipe for instant filesystem corruption; a single operating system must control file opening, locking, and so on. Even in a cluster scenario with dual-tailed storage, a major function of the cluster suite is to decide which node has exclusive control of the volume and to effect a disciplined failover when necessary.
Therefore, as with any two nodes, one option is to mount the volume on one node, configure an NFS share on that node, and mount it from the second node using an NFS client. The first node then controls ALL activity on the volume.
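A minimal sketch of that setup, assuming a hypothetical hostname (s11host) and ZFS dataset (tank/shared); the actual pool, dataset, and mount-point names will differ on your systems, and on older Solaris releases the share property is spelled sharenfs rather than share.nfs:

```shell
# On the Solaris 11 global zone (the node that owns the volume):
# publish the dataset over NFS
zfs set share.nfs=on tank/shared

# verify the share is visible
share

# On the Solaris 10 LDOM (the client node):
# create a mount point and mount the share over NFS
mkdir -p /mnt/shared
mount -F nfs s11host:/tank/shared /mnt/shared

# to make the mount persistent, add a line like this to /etc/vfstab:
# s11host:/tank/shared  -  /mnt/shared  nfs  -  yes  rw
```

With this arrangement only the global zone touches the volume directly; the LDOM sees it purely as an NFS client, so all locking and file-level coordination stays under one operating system.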