Hi
I want to install VCS 5 on Solaris 10.
The product states it needs 3 NICs. How do I install it if I have only 2 cards (this is just for a demo)?
Thank you for your help. (3 Replies)
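The usual workaround with only two NICs is one dedicated private heartbeat link plus a low-priority heartbeat on the public interface. A minimal sketch of /etc/llttab under that assumption; the node name node1, cluster ID 100, and interfaces qfe0 (private) and hme0 (public) are hypothetical placeholders:

```
set-node node1
set-cluster 100
link qfe0 /dev/qfe:0 - ether - -
link-lowpri hme0 /dev/hme:0 - ether - -
```

A link-lowpri link carries heartbeats but no cluster status traffic unless the private link fails, which is why putting it on the public network is acceptable for a demo setup.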
I'm using clustered zones on my machine. I'm only at the test phase of my design, and ultimately the Oracle zones will be using VxVM.
When the testing phase is complete, VxVM will be used in the containers. It is necessary for VxVM to run in the global zone for the containers to use it (is... (5 Replies)
Yesterday my customer told me to expect a VCS upgrade to happen in the future. He also plans to stop using HDS and move to EMC.
I am thinking about how to migrate to a Sun Cluster setup instead.
My plan is as follows: leave the existing VCS intact as a fallback plan.
Then install and build Sun Cluster on... (5 Replies)
Need help getting all disk devices back on node 2 the same as node 1.
Recently Veritas and/or Sun Cluster got wrecked on my 2-node Sun cluster after installing the latest patch cluster. I backed out the patches, and then node 2 could see only half of the devices and Veritas drives (though format... (0 Replies)
Hello experts -
I am planning to install a Sun Cluster 4.0 zone cluster failover. A few basic questions:
(1) Where should I install the cluster s/w binaries? (the global zone, or the container zone where I am planning to install the zone failover)
(2) Or should I perform the installation on... (0 Replies)
Hi Experts,
I want to extend a Veritas file system that runs on a Veritas cluster and is mounted on the node2 system.
#hastatus -sum
-- System        State        Frozen
A  node1         running      0
A  node2         running      0
-- Group State
-- Group    System    Probed ... (1 Reply)
Discussion started by: Skmanojkum
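Extending a VxFS file system under VCS is usually a single vxresize run from the node where the disk group is imported. A hedged sketch; the disk group oradg, volume oravol, and the +2g growth are hypothetical names to substitute with your own:

```shell
#!/bin/sh
# Hypothetical names: substitute your own disk group, volume, and size.
DG=oradg
VOL=oravol
GROW=+2g    # grow by 2 GB; vxresize also accepts absolute target sizes

# 1. Confirm the service group is ONLINE on this node first:
#      hastatus -sum
# 2. vxresize grows the volume and the mounted VxFS file system together,
#    so no separate fsadm step is needed. Build the command for review:
CMD="/usr/sbin/vxresize -g $DG $VOL $GROW"
echo "$CMD"
# 3. Run it (uncomment) on the node where the disk group is imported:
# $CMD
```

If the cluster node is frozen (Frozen = 1 in hastatus output), unfreeze it before making changes.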
LEARN ABOUT OPENSOLARIS
did
did(7) Sun Cluster Device and Network Interfaces did(7)
NAME
did - user configurable disk id driver
DESCRIPTION
Note -
Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software
still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more infor-
mation about the object-oriented command set, see the Intro(1CL) man page.
Disk ID (DID) is a user configurable pseudo device driver that provides access to underlying disk, tape, and CDROM devices. When the
device supports unique device ids, multiple paths to a device are determined according to the device id of the device. Even if multiple
paths are available with the same device id, only one DID name is given to the actual device.
In a clustered environment, a particular physical device will have the same DID name regardless of its connectivity to more than one host
or controller. This, however, is only true of devices that support a global unique device identifier such as physical disks.
DID maintains parallel directories for each type of device that it manages under /dev/did. The devices in these directories behave the same
as their non-DID counterparts. This includes maintaining slices for disk and CDROM devices as well as names for different tape device
behaviors. Both raw and block device access is also supported for disks by means of /dev/did/rdsk and /dev/did/dsk.
At any point in time, I/O is only supported down one path to the device. No multipathing support is currently available through DID.
Before a DID device can be used, it must first be initialized by means of the scdidadm(1M) command.
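The initialization and inspection steps above can be sketched as follows. The scdidadm invocations require a real cluster node, so they appear as comments; the listing below is a hypothetical sample, made up here only to illustrate the mapping format:

```shell
#!/bin/sh
# On a cluster node you would run (commented out; requires Sun Cluster):
#   scdidadm -r    # reconfigure: discover devices and assign DID instances
#   scdidadm -L    # list all DID instances and their paths, cluster-wide

# Hypothetical `scdidadm -L` output: one line per host path to a device.
sample='1        node1:/dev/rdsk/c0t0d0   /dev/did/rdsk/d1
2        node1:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
2        node2:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2'

# A shared disk appears once per host path but keeps a single DID name,
# so collapsing on (instance, DID name) leaves one line per device:
echo "$sample" | awk '{print $1, $3}' | sort -u
```

Note how the shared disk reachable from both node1 and node2 keeps the single name d2, exactly as the DESCRIPTION above states.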
IOCTLS
The DID driver maintains an admin node as well as nodes for each DID device minor.
No user ioctls are supported by the admin node.
The DKIOCINFO ioctl is supported when called against the DID device nodes such as /dev/did/rdsk/d0s2.
All other ioctls are passed directly to the driver below.
FILES
/dev/did/dsk/dnsm block disk or CDROM device, where n is the device number and m is the slice number
/dev/did/rdsk/dnsm raw disk or CDROM device, where n is the device number and m is the slice number
/dev/did/rmt/n tape device, where n is the device number
/dev/did/admin administrative device
/kernel/drv/did driver module
/kernel/drv/did.conf driver configuration file
/etc/did.conf scdidadm configuration file for non-clustered systems
Cluster Configuration Repository (CCR) scdidadm(1M) maintains configuration in the CCR for clustered systems
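The dNsM naming convention in the FILES list can be illustrated with a small sketch; the instance and slice numbers here are arbitrary examples, not from any real configuration:

```shell
#!/bin/sh
# DID names are /dev/did/{dsk,rdsk}/dNsM: N = DID instance, M = slice.
n=5    # hypothetical DID instance number
m=2    # slice 2, conventionally the whole-disk slice on Solaris
echo "/dev/did/dsk/d${n}s${m}"    # block device node
echo "/dev/did/rdsk/d${n}s${m}"   # matching raw (character) device node
```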
SEE ALSO
devfsadm(1M), Intro(1CL), cldevice(1CL), scdidadm(1M)
NOTES
DID creates names for devices in groups, in order to decrease the overhead during device hot-plug. For disks, device names are created in
/dev/did/dsk and /dev/did/rdsk in groups of 100 disks at a time. For tapes, device names are created in /dev/did/rmt in groups of 10
tapes at a time. If more devices are added to the cluster than are handled by the current names, another group will be created.
Sun Cluster 3.2 24 April 2001 did(7)