Solaris: Help with faulty Disk on Sun OS
Post 302614739 by Yeaboem, Wednesday 28 March 2012, 07:29:27 PM
So, what's your question? It appears that every slice of the failed disk (c0t0d0) is mirrored, so you can follow the normal detach-replace-attach process to fix this... but your configuration does require some care, because c0t0d0 is not mirrored by a single disk. You will likely have to hand-construct the partition table so the slice sizes match those on the other two disks involved in the various mirrors (c2t3d0 and c0t2d0).
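
For reference, here is a minimal sketch of that sequence under Solaris Volume Manager / DiskSuite. The mirror name d10 is hypothetical — run metastat first to find your real metadevice names, and repeat the metareplace for every slice of the failed disk:

Code:
# metastat | grep -i maint
# prtvtoc /dev/rdsk/c2t3d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
# metareplace -e d10 c0t0d0s0
# metastat d10

metareplace -e re-enables and resyncs the component in place, which avoids a full metadetach/metattach cycle when the replacement keeps the same cXtYdZ name. The prtvtoc | fmthard step only seeds the new label from c2t3d0; since just some of c0t0d0's slices mirror that disk, expect to adjust the table by hand in format(1M) before you re-enable anything.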

Also, since you are replacing a disk which is part of your boot mirror, you will need to ensure that you put the bootblocks onto the replacement... see the man page for installboot(1M) for details, but I suspect the command will be something like:

Code:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

...and, just in case some prior administrator didn't put the bootblocks on c2t3d0, you might want to proactively apply them there FIRST, before you eject c0t0d0 from the chassis.
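
That would be the same command aimed at the surviving half of the root mirror — assuming its root submirror sits on slice 0, as it does on c0t0d0:

Code:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c2t3d0s0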

Reliable replacements for those old 18 GB and 36 GB SCSI drives are becoming harder to find... good luck!
 

scdpm(1M)                    System Administration Commands                    scdpm(1M)

NAME
    scdpm - manage disk path monitoring daemon

SYNOPSIS
    scdpm [-a] {node | all}
    scdpm -f filename
    scdpm -m {[node | all][:/dev/did/rdsk/]dN | [:/dev/rdsk/]cNtXdY | all}
    scdpm -n {node | all}
    scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}
    scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

DESCRIPTION
    Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes
    an object-oriented command set. Although Sun Cluster software still supports the
    original command set, Sun Cluster procedural documentation uses only the
    object-oriented command set. For more information about the object-oriented
    command set, see the Intro(1CL) man page.

    The scdpm command manages the disk path monitoring daemon in a cluster. You use
    this command to monitor and unmonitor disk paths. You can also use this command
    to display the status of disk paths or nodes. All of the accessible disk paths in
    the cluster or on a specific node are printed on the standard output. You must
    run this command on a cluster node that is online and in cluster mode.

    You can specify either a global disk name or a UNIX path name when you monitor a
    new disk path. Additionally, you can force the daemon to reread the entire disk
    configuration.

    You can use this command only in the global zone.

OPTIONS
    The following options are supported:

    -a
        Enables the automatic rebooting of a node when all monitored disk paths fail,
        provided that the following conditions are met:

            o All monitored disk paths on the node fail.
            o At least one of the disks is accessible from a different node in the
              cluster.

        You can use this option only in the global zone.

        Rebooting the node restarts all resource and device groups that are mastered
        on that node on another node.

        If all monitored disk paths on a node remain inaccessible after the node
        automatically reboots, the node does not automatically reboot again. However,
        if any monitored disk paths become available after the node reboots but then
        all monitored disk paths again fail, the node automatically reboots again.

        You need solaris.cluster.device.admin role-based access control (RBAC)
        authorization to use this option. See rbac(5).

    -F
        If you specify the -F option with the -p option, scdpm also prints the faulty
        disk paths in the cluster.

    -f filename
        Reads a list of disk paths to monitor or unmonitor in filename.

        You can use this option only in the global zone.

        The following example shows the contents of filename.

            u schost-1:/dev/did/rdsk/d5
            m schost-2:all

        Each line in the file must specify whether to monitor or unmonitor the disk
        path, the node name, and the disk path name. You specify the m option for
        monitor and the u option for unmonitor. You must insert a space between the
        command and the node name. You must also insert a colon (:) between the node
        name and the disk path name.

        You need solaris.cluster.device.admin RBAC authorization to use this option.
        See rbac(5).

    -m
        Monitors the new disk path that is specified by node:diskpath.

        You can use this option only in the global zone.

        You need solaris.cluster.device.admin RBAC authorization to use this option.
        See rbac(5).

    -n
        Disables the automatic rebooting of a node when all monitored disk paths
        fail. If all monitored disk paths on the node fail, the node is not rebooted.

        You can use this option only in the global zone.

        You need solaris.cluster.device.admin RBAC authorization to use this option.
        See rbac(5).

    -p
        Prints the current status of a node or a specified disk path from all the
        nodes that are attached to the storage.

        You can use this option only in the global zone.

        If you also specify the -F option, scdpm prints the faulty disk paths in the
        cluster.

        Valid status values for a disk path are Ok, Fail, Unmonitored, or Unknown.
        The valid status value for a node is Reboot_on_disk_failure. See the
        description of the -a and the -n options for more information about the
        Reboot_on_disk_failure status.

        You need solaris.cluster.device.read RBAC authorization to use this option.
        See rbac(5).

    -u
        Unmonitors a disk path. The daemon on each node stops monitoring the
        specified path.

        You can use this option only in the global zone.

        You need solaris.cluster.device.admin RBAC authorization to use this option.
        See rbac(5).

EXAMPLES
    Example 1: Monitoring All Disk Paths in the Cluster Infrastructure

    The following command forces the daemon to monitor all disk paths in the cluster
    infrastructure.

        # scdpm -m all

    Example 2: Monitoring a New Disk Path

    The following command monitors a new disk path. All nodes monitor
    /dev/did/dsk/d3 where this path is valid.

        # scdpm -m /dev/did/dsk/d3

    Example 3: Monitoring New Disk Paths on a Single Node

    The following command monitors new paths on a single node. The daemon on the
    schost-2 node monitors paths to the /dev/did/dsk/d4 and /dev/did/dsk/d5 disks.

        # scdpm -m schost-2:d4 -m schost-2:d5

    Example 4: Printing All Disk Paths and Their Status

    The following command prints all disk paths in the cluster and their status.

        # scdpm -p
        schost-1:reboot_on_disk_failure    enabled
        schost-2:reboot_on_disk_failure    disabled
        schost-1:/dev/did/dsk/d4           Ok
        schost-1:/dev/did/dsk/d3           Ok
        schost-2:/dev/did/dsk/d4           Fail
        schost-2:/dev/did/dsk/d3           Ok
        schost-2:/dev/did/dsk/d5           Unmonitored
        schost-2:/dev/did/dsk/d6           Ok

    Example 5: Printing All Failed Disk Paths

    The following command prints all of the failed disk paths on the schost-2 node.

        # scdpm -p -F all
        schost-2:/dev/did/dsk/d4           Fail

    Example 6: Printing the Status of All Disk Paths From a Single Node

    The following command prints the disk path and the status of all disks that are
    monitored on the schost-2 node.

        # scdpm -p schost-2:all
        schost-2:reboot_on_disk_failure    disabled
        schost-2:/dev/did/dsk/d4           Fail
        schost-2:/dev/did/dsk/d3           Ok

EXIT STATUS
    The following exit values are returned:

    0    The command completed successfully.
    1    The command failed completely.
    2    The command failed partially.

    Note - The disk path is represented by a node name and a disk name. The node
    name must be the host name or all. The disk name must be the global disk name, a
    UNIX path name, or all. The disk name can be either the full global path name or
    the disk name: /dev/did/dsk/d3 or d3. The disk name can also be the full UNIX
    path name: /dev/rdsk/c0t0d0s0.

    Disk path status changes are logged with the syslogd LOG_INFO facility level.
    All failures are logged with the LOG_ERR facility level.

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    | Availability                | SUNWsczu                    |
    +-----------------------------+-----------------------------+
    | Stability                   | Evolving                    |
    +-----------------------------+-----------------------------+

SEE ALSO
    Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

    Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                      22 Jun 2006                      scdpm(1M)