Disk replacement with svm
Posted by BG_JrAdmin, Friday 18 May 2007

I don't even know what RAID level this is, but from the metastat output below it looks like a mirror of two concatenations of soft partitions (RAID 1 over concats, not RAID 5).

I have a failed disk (t12) within this mirror. What is the best way to replace it? Two things concern me: isn't there a command to prepare the disk for a hot swap? And what should I do with the metadevices and metadbs: delete them and rebuild them after the disk is replaced? I thought there was an easier way to do it without a reconfiguration reboot. (One possible sequence is sketched after the format output below.)


su25e1n: / # metastat -p

d100 -m d103 d113 1
d103 3 1 d102 \
       1 d104 \
       1 d106
d102 -p c0t11d0s0 -o 1 -b 2097152
d104 -p c0t11d0s0 -o 2097154 -b 2097152
d106 -p c0t11d0s0 -o 4194307 -b 2097152
d113 3 1 d112 \
       1 d114 \
       1 d116
d112 -p c0t12d0s0 -o 1 -b 2097152
d114 -p c0t12d0s0 -o 2097154 -b 2097152
d116 -p c0t12d0s0 -o 4194307 -b 2097152
su25e1n: / # echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,4000/scsi@3/sd@0,0
1. c0t8d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,4000/scsi@3/sd@8,0
2. c0t9d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,4000/scsi@3/sd@9,0
3. c0t10d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,4000/scsi@3/sd@a,0
4. c0t11d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,4000/scsi@3/sd@b,0
5. c0t12d0 <drive not available: formatting>
/pci@1f,4000/scsi@3/sd@c,0
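For what it's worth, d100 is a two-way mirror whose submirrors (d103 on c0t11d0, d113 on c0t12d0) are each a concatenation of three 1 GB soft partitions, so losing t12 takes out the whole d113 side. A common replacement sequence is sketched below. Treat it as a sketch only: it assumes the enclosure supports hot swap through cfgadm, that no state database replicas live on c0t12d0 (check metadb -i first; if any do, metadb -d them before pulling the disk and metadb -a them back afterwards), and the attachment point c0::dsk/c0t12d0 is a guess, so confirm the real Ap_Id with cfgadm -al. Done this way, no reconfiguration reboot should be needed.

su25e1n: / # metadb -i                               (confirm no replicas on t12)
su25e1n: / # metadetach -f d100 d113                 (drop the dead submirror)
su25e1n: / # metaclear -r d113                       (also clears d112, d114, d116)
su25e1n: / # cfgadm -c unconfigure c0::dsk/c0t12d0   (prepare the disk for hot swap)
   ... physically replace the drive ...
su25e1n: / # cfgadm -c configure c0::dsk/c0t12d0
su25e1n: / # prtvtoc /dev/rdsk/c0t11d0s2 | fmthard -s - /dev/rdsk/c0t12d0s2
su25e1n: / # metainit d112 -p c0t12d0s0 2097152b
su25e1n: / # metainit d114 -p c0t12d0s0 2097152b
su25e1n: / # metainit d116 -p c0t12d0s0 2097152b
su25e1n: / # metainit d113 3 1 d112 1 d114 1 d116
su25e1n: / # metattach d100 d113                     (resync starts here)
su25e1n: / # metastat d100                           (watch the resync)

The 2097152b sizes (1 GB each) are read straight off the metastat -p output above; on a freshly labeled disk, metainit -p will allocate the extents sequentially much as they were laid out before.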
 

10 More Discussions You Might Find Interesting

1. Solaris

Disk Replacement SVM

Hello, Can someone advise the proper procedure for replacing a mirrored disk in SVM. I have checked the docs and various websites but the procedure seems to vary. This is what I would do... 1. Remove the db replicas from the bad disk. 2. Detach it from the mirror 3. Clear it with... (4 Replies)
Discussion started by: Actuator

2. Solaris

Removing Disk from SVM

Hi All, I have to remove a disk from SVM. Kindly guide me or suggest a link where I can find the steps to remove SVM from Solaris 10. Also, I have one metaset which requires deletion. Thanks in anticipation! (10 Replies)
Discussion started by: kumarmani

3. Solaris

Disk space missing under SVM

Hi Gurus, I've got an issue here: (1) Hardware: Sun Netra T1, (2) OS: Solaris 10, (3) SVM: metastat shows /var having 12 GB; df shows /var having 4 GB. Real space for /var is about 4 GB, since I can't move a big file to it. How is 8 GB of space missing? Does /var/run (swap) need to be accounted for? Can I... (5 Replies)
Discussion started by: aixlover

4. Solaris

Root Disk mirroring in SVM

Dear All, Please help me to configure root mirroring using SVM in Solaris 9. Thanks and Regards, Lakkireddy BR (3 Replies)
Discussion started by: lbreddy

5. Solaris

Replacing a hard disk (SVM) with a soft partition?

The following is the summary: 1) Four disks in the server (c1t0d0, c1t1d0, c1t2d0, c1t3d0). c1t2d0 is the disk to be replaced. c1t0d0 and c1t2d0 are mirrors; c1t1d0 and c1t3d0 are mirrors. The metadb to be deleted is on c1t2d0s7. a) Mirror d35 has 2 submirrors, d38 and d39. d38 is a stripe... (0 Replies)
Discussion started by: aji1729

6. Solaris

Root disk mirroring in SVM

I tried doing root disk mirroring on my local host. I added a new IDE disk to the system and copied the prtvtoc from the root disk to the newly added disk. Then, when I tried to add database replicas on both disks, the replica was added for the boot disk, but for the newly added disk it gave the error, which... (6 Replies)
Discussion started by: Laxxi

7. Solaris

Replacing a Disk in a ODS/SVM Mirror

Hi All. Based on the below I would like to verify two things: (1) the underlying mirror is for '/mnt' and it only contains 1 submirror, with one slice on one disk, and hence data loss on the mount point (the mount point, '/mnt', is backed up); (2) the procedure for renewal. # df -kh /mnt... (2 Replies)
Discussion started by: stevie_velvet

8. Solaris

Replacing a failed disk using SVM

Hi, please can you help me with replacing or removing a faulty disk drive on a Sun Netra X4250 server with 4 internal drives only. The format command shows me the following: format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0t0d0 <drive type unknown> ... (9 Replies)
Discussion started by: fretagi

9. Filesystems, Disks and Memory

DISK ARRAY PROTECTION SUSPENDED message displayed following disk replacement

Hello, On 4/20/2018, we performed a disk replacement on our IBM 8202 P7 server. After the disk was rebuilt, the SAS Disk Array sissas0 showed a status of degraded. However, the pdisks in the array all show a status of active. We did see a message in errpt. DISK ARRAY PROTECTION SUSPENDED. ... (1 Reply)
Discussion started by: terrya

10. AIX

DISK ARRAY PROTECTION SUSPENDED message following disk replacement

Hello, On 4/20/2018, we performed a disk replacement on our IBM 8202 P7 server. After the disk was rebuilt, the SAS Disk Array sissas0 showed a status of degraded. However, the pdisks in the array all show a status of active. We did see a message in errpt. DISK ARRAY PROTECTION SUSPENDED. ... (3 Replies)
Discussion started by: terrya
scdpm(1M)						  System Administration Commands						 scdpm(1M)

NAME
    scdpm - manage disk path monitoring daemon

SYNOPSIS
    scdpm [-a] {node | all}
    scdpm -f filename
    scdpm -m {[node | all][:/dev/did/rdsk/]dN | [:/dev/rdsk/]cNtXdY | all}
    scdpm -n {node | all}
    scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}
    scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

DESCRIPTION
    Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an
    object-oriented command set. Although Sun Cluster software still supports the original
    command set, Sun Cluster procedural documentation uses only the object-oriented
    command set. For more information about the object-oriented command set, see the
    Intro(1CL) man page.

    The scdpm command manages the disk path monitoring daemon in a cluster. You use this
    command to monitor and unmonitor disk paths. You can also use this command to display
    the status of disk paths or nodes. All of the accessible disk paths in the cluster or
    on a specific node are printed on the standard output. You must run this command on a
    cluster node that is online and in cluster mode.

    You can specify either a global disk name or a UNIX path name when you monitor a new
    disk path. Additionally, you can force the daemon to reread the entire disk
    configuration.

    You can use this command only in the global zone.

OPTIONS
    The following options are supported:

    -a
        Enables the automatic rebooting of a node when all monitored disk paths fail,
        provided that the following conditions are met:
          o  All monitored disk paths on the node fail.
          o  At least one of the disks is accessible from a different node in the cluster.
        You can use this option only in the global zone.
        Rebooting the node restarts all resource and device groups that are mastered on
        that node on another node.
        If all monitored disk paths on a node remain inaccessible after the node
        automatically reboots, the node does not automatically reboot again. However, if
        any monitored disk paths become available after the node reboots but then all
        monitored disk paths again fail, the node automatically reboots again.
        You need solaris.cluster.device.admin role-based access control (RBAC)
        authorization to use this option. See rbac(5).

    -F
        If you specify the -F option with the -p option, scdpm also prints the faulty
        disk paths in the cluster. The -p option prints the current status of a node or a
        specified disk path from all the nodes that are attached to the storage.

    -f filename
        Reads a list of disk paths to monitor or unmonitor in filename. You can use this
        option only in the global zone. The following example shows the contents of
        filename:

            u schost-1:/dev/did/rdsk/d5
            m schost-2:all

        Each line in the file must specify whether to monitor or unmonitor the disk path,
        the node name, and the disk path name. You specify the m option for monitor and
        the u option for unmonitor. You must insert a space between the command and the
        node name. You must also insert a colon (:) between the node name and the disk
        path name.
        You need solaris.cluster.device.admin RBAC authorization to use this option. See
        rbac(5).

    -m
        Monitors the new disk path that is specified by node:diskpath. You can use this
        option only in the global zone.
        You need solaris.cluster.device.admin RBAC authorization to use this option. See
        rbac(5).

    -n
        Disables the automatic rebooting of a node when all monitored disk paths fail.
        You can use this option only in the global zone.
        If all monitored disk paths on the node fail, the node is not rebooted.
        You need solaris.cluster.device.admin RBAC authorization to use this option. See
        rbac(5).

    -p
        Prints the current status of a node or a specified disk path from all the nodes
        that are attached to the storage. You can use this option only in the global
        zone. If you also specify the -F option, scdpm prints the faulty disk paths in
        the cluster.
        Valid status values for a disk path are Ok, Fail, Unmonitored, or Unknown. The
        valid status value for a node is Reboot_on_disk_failure. See the description of
        the -a and the -n options for more information about the Reboot_on_disk_failure
        status.
        You need solaris.cluster.device.read RBAC authorization to use this option. See
        rbac(5).

    -u
        Unmonitors a disk path. The daemon on each node stops monitoring the specified
        path. You can use this option only in the global zone.
        You need solaris.cluster.device.admin RBAC authorization to use this option. See
        rbac(5).
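    (None of the examples below exercise -f, so here is a quick, hedged illustration of
    driving scdpm from a file. The path /var/tmp/dpm.list is made up for this sketch; the
    two entries are the ones shown in the -f description above.)

        # cat /var/tmp/dpm.list
        u schost-1:/dev/did/rdsk/d5
        m schost-2:all
        # scdpm -f /var/tmp/dpm.list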
EXAMPLES
    Example 1: Monitoring All Disk Paths in the Cluster Infrastructure
    The following command forces the daemon to monitor all disk paths in the cluster
    infrastructure.

        # scdpm -m all

    Example 2: Monitoring a New Disk Path
    The following command monitors a new disk path. All nodes monitor /dev/did/dsk/d3
    where this path is valid.

        # scdpm -m /dev/did/dsk/d3

    Example 3: Monitoring New Disk Paths on a Single Node
    The following command monitors new paths on a single node. The daemon on the schost-2
    node monitors paths to the /dev/did/dsk/d4 and /dev/did/dsk/d5 disks.

        # scdpm -m schost-2:d4 -m schost-2:d5

    Example 4: Printing All Disk Paths and Their Status
    The following command prints all disk paths in the cluster and their status.

        # scdpm -p
        schost-1:reboot_on_disk_failure    enabled
        schost-2:reboot_on_disk_failure    disabled
        schost-1:/dev/did/dsk/d4           Ok
        schost-1:/dev/did/dsk/d3           Ok
        schost-2:/dev/did/dsk/d4           Fail
        schost-2:/dev/did/dsk/d3           Ok
        schost-2:/dev/did/dsk/d5           Unmonitored
        schost-2:/dev/did/dsk/d6           Ok

    Example 5: Printing All Failed Disk Paths
    The following command prints all of the failed disk paths on the schost-2 node.

        # scdpm -p -F all
        schost-2:/dev/did/dsk/d4           Fail

    Example 6: Printing the Status of All Disk Paths From a Single Node
    The following command prints the disk path and the status of all disks that are
    monitored on the schost-2 node.

        # scdpm -p schost-2:all
        schost-2:reboot_on_disk_failure    disabled
        schost-2:/dev/did/dsk/d4           Fail
        schost-2:/dev/did/dsk/d3           Ok

EXIT STATUS
    The following exit values are returned:

        0    The command completed successfully.
        1    The command failed completely.
        2    The command failed partially.

    Note - The disk path is represented by a node name and a disk name. The node name
    must be the host name or all. The disk name must be the global disk name, a UNIX path
    name, or all. The disk name can be either the full global path name or the disk name:
    /dev/did/dsk/d3 or d3. The disk name can also be the full UNIX path name:
    /dev/rdsk/c0t0d0s0.

    Disk path status changes are logged with the syslogd LOG_INFO facility level. All
    failures are logged with the LOG_ERR facility level.

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    +-----------------+-----------------+
    | ATTRIBUTE TYPE  | ATTRIBUTE VALUE |
    +-----------------+-----------------+
    | Availability    | SUNWsczu        |
    +-----------------+-----------------+
    | Stability       | Evolving        |
    +-----------------+-----------------+

SEE ALSO
    Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

    Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                        22 Jun 2006                        scdpm(1M)