It has occurred to me that, since the filesystems have been ported to new devices, those devices probably do not yet have device nodes configured (e.g. /dev/rdsk/<whatever>). Therefore, try running a configuration scan for them:
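A sketch of that scan on Solaris (flags per devfsadm(1M); on releases older than Solaris 7 the drvconfig/disks pair is the rough equivalent):

```shell
bash-3.00# devfsadm -v       # scan for attached devices and create the /dev links
bash-3.00# devfsadm -C       # additionally remove stale links for departed devices
bash-3.00# ls /dev/rdsk      # confirm the expected device nodes now exist
```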
Running Solaris 9 with SVM. I'm not that familiar with it, but metastat output gives a "needs maintenance" message on 2 of the mirrors. There are no errors in /var/adm/messages. What do I need to do to fix this error? Thanks. (14 Replies)
hi all,
can someone pls pass on your suggestion?
First, I am testing a script which checks for the pattern 'Needs Maintenance' in metastat output and prints some messages on the screen. So I need to simulate an error on a mirrored disk so that metastat gives the message 'Needs Maintenance'.... (3 Replies)
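As a hedged sketch of the detection half (the "Needs maintenance" string and the `dN:` header layout are taken from typical metastat output; verify against your release), a small awk function can map the state line back to the metadevice that owns it. For simulating the fault itself, deliberately corrupting one submirror's underlying slice on a scratch system is the usual lab approach, but only ever on a disposable box:

```shell
# Hedged sketch: print which metadevices metastat flags for maintenance.
# Assumes the usual metastat layout: a "dN: ..." header line followed by
# indented "State: Needs maintenance" lines for its components.
check_maint() {
    awk '/^d[0-9]+:/ { dev = $1; sub(/:$/, "", dev) }
         /Needs [Mm]aintenance/ { print dev }'
}

# normally: metastat | check_maint
```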
Hi people,
I have a problem when executing the command metastat...
d60: Soft Partition
Device: d10
State: Errored
Size: 12582912 blocks (6.0 GB)
Can someone help me?
Thank you very much (4 Replies)
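A hedged sequence for chasing that state down (d10 and d60 are from the post above; the slice passed to metareplace is a placeholder for whichever component metastat actually reports as errored):

```shell
bash-3.00# metastat d10                   # find which component of d10 is errored
bash-3.00# metareplace -e d10 c1t1d0s0    # re-enable the errored component (slice is illustrative)
bash-3.00# metastat d60                   # the soft partition should clear once d10 is healthy
```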
Hi All,
Sorry to post a problem for my first post but I'm in a bit of a pickle at the minute!
I have an Ultra45 connected to a Storedge 3100 series, 2 internal, 2 external disks with a db application running on the external disks.
Now everything is working fine and we've had no downtime or... (4 Replies)
Can anyone suggest a method to bring the Apache server online from maintenance mode? I tried the following, but couldn't get the service online.
bash-3.00# svcs -a | grep apache
legacy_run 9:51:55 lrc:/etc/rc3_d/S50apache
offline 9:51:22... (3 Replies)
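One caveat from the output above: the legacy_run entry (lrc:/etc/rc3_d/S50apache) is a legacy rc script, which svcadm cannot manage; only an SMF-managed instance can be cleared. A hedged sequence for the SMF instance (the FMRI below is the common one on Solaris 10, but check `svcs -a | grep -i http` for yours):

```shell
bash-3.00# svcs -x svc:/network/http:apache2       # shows why it is offline/maintenance, and the log file
bash-3.00# svcadm clear svc:/network/http:apache2  # clear the maintenance state
bash-3.00# svcadm enable svc:/network/http:apache2
bash-3.00# svcs svc:/network/http:apache2          # verify it reached "online"
```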
Dear,
I require a script such that:
If metastat | grep Needs produces some output, then this command is to be executed for each matching line:
opcmsg object=metastat a=OS msg_grp=OpC severity=critical msg_text="Need maintenance for the system $line"
With regards,
Mjoshi (3 Replies)
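A minimal sketch of such a wrapper (the opcmsg arguments are copied verbatim from the request and should be verified against your OVO agent; OPCMSG_CMD is a hypothetical hook added here only so the pipeline can be exercised without an agent installed):

```shell
#!/bin/sh
# Hedged sketch: raise one OpC message per metastat line flagging maintenance.
# OPCMSG_CMD defaults to the real opcmsg; override it (e.g. with echo) to test.
OPCMSG_CMD=${OPCMSG_CMD:-opcmsg}

report_maint() {
    # reads metastat output on stdin; emits one opcmsg per matching line
    grep 'Needs' | while read line; do
        "$OPCMSG_CMD" object=metastat a=OS msg_grp=OpC severity=critical \
            msg_text="Need maintenance for the system $line"
    done
}

# normally: metastat | report_maint
```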
Hi
I need to export a directory named /sybase from my Solaris machine via NFS. The svcs command shows the NFS server's state as disabled. Please let me know how to export the directory.
Once the directory is exported from the solaris machine it has to be mounted locally in an aix machine. Can some one... (2 Replies)
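A hedged sequence for the export (share options per share_nfs(1M) and dfstab(4); the Solaris hostname on the AIX side is a placeholder):

```shell
bash-3.00# echo "share -F nfs -o rw /sybase" >> /etc/dfs/dfstab   # make the export persistent
bash-3.00# svcadm enable network/nfs/server                       # the service svcs showed as disabled
bash-3.00# shareall
bash-3.00# share                                                  # verify /sybase is listed
# then on the AIX client (hostname is a placeholder):
aix# mount solarishost:/sybase /mnt/sybase
```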
Hi,
I'm new to Solaris. I have an issue with the ssh service. When I restart the service, the command exits with status 0
$svcadm restart svc:/network/ssh:default
$echo $?
0
$
However, the service goes into maintenance mode after restart. I'm able to connect even though the service is in... (3 Replies)
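Note that `svcadm restart` returning 0 only means SMF accepted the request, not that the service came up cleanly; the failure surfaces afterwards as the maintenance state. A hedged sequence for diagnosing it (the log path follows the usual /var/svc/log naming):

```shell
$ svcs -x ssh                               # prints the reason and points at the service log
$ tail /var/svc/log/network-ssh:default.log # usually shows the start method's error
$ svcadm clear svc:/network/ssh:default     # retry once the cause is fixed
```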
LEARN ABOUT OPENSOLARIS
did
did(7)                  Sun Cluster Device and Network Interfaces                  did(7)
NAME
did - user configurable disk id driver
DESCRIPTION
Note -
Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software
still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.
Disk ID (DID) is a user configurable pseudo device driver that provides access to underlying disk, tape, and CDROM devices. When the
device supports unique device ids, multiple paths to a device are determined according to the device id of the device. Even if multiple
paths are available with the same device id, only one DID name is given to the actual device.
In a clustered environment, a particular physical device will have the same DID name regardless of its connectivity to more than one host
or controller. This, however, is only true of devices that support a global unique device identifier such as physical disks.
DID maintains parallel directories for each type of device that it manages under /dev/did. The devices in these directories behave the same
as their non-DID counterparts. This includes maintaining slices for disk and CDROM devices as well as names for different tape device
behaviors. Both raw and block device access is also supported for disks by means of /dev/did/rdsk and /dev/did/rdsk.
At any point in time, I/O is only supported down one path to the device. No multipathing support is currently available through DID.
Before a DID device can be used, it must first be initialized by means of the scdidadm(1M) command.
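That initialization is typically a reconfiguration pass (flags per scdidadm(1M)):

```shell
# scdidadm -r        # discover devices and assign DID instance numbers
# scdidadm -L        # list the resulting DID mappings for all cluster nodes
```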
IOCTLS
The DID driver maintains an admin node as well as nodes for each DID device minor.
No user ioctls are supported by the admin node.
The DKIOCINFO ioctl is supported when called against the DID device nodes such as /dev/did/rdsk/d0s2.
All other ioctls are passed directly to the driver below.
FILES
/dev/did/dsk/dnsm block disk or CDROM device, where n is the device number and m is the slice number
/dev/did/rdsk/dnsm raw disk or CDROM device, where n is the device number and m is the slice number
/dev/did/rmt/n tape device, where n is the device number
/dev/did/admin administrative device
/kernel/drv/did driver module
/kernel/drv/did.conf driver configuration file
/etc/did.conf scdidadm configuration file for non-clustered systems
Cluster Configuration Repository (CCR) scdidadm(1M) maintains configuration in the CCR for clustered systems
SEE ALSO
devfsadm(1M), Intro(1CL), cldevice(1CL), scdidadm(1M)
NOTES
DID creates names for devices in groups, in order to decrease the overhead during device hot-plug. For disks, device names are created in
/dev/did/dsk and /dev/did/rdsk in groups of 100 disks at a time. For tapes, device names are created in /dev/did/rmt in groups of 10
tapes at a time. If more devices are added to the cluster than are handled by the current names, another group will be created.
Sun Cluster 3.2 24 April 2001 did(7)