Posted by xavix on Wednesday, 10 June 2009, 06:58 AM
SLES11 - RAID6 all disks marked as Spare

Hello,

After replacing the motherboard of my server, all disks belonging to a RAID6 array are now marked as spare.

Is there any way to mark those disks as active again and restore the RAID6 array?

Code:
$ cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] 
md1 : inactive sda1[1](S) sdk1[11](S) sdj1[10](S) sdi1[9](S) sdh1[8](S) sdg1[7](S) sdf1[6](S) sde1[5](S) sdd1[4](S) sdc1[3](S) sdb1[2](S)
      10744359296 blocks
       
md0 : active raid1 sdm1[0] sdn1[1]
      2096384 blocks [2/2] [UU]
      
md2 : active raid1 sdm3[0] sdn3[1]
      104864192 blocks [2/2] [UU]
      
unused devices: <none>
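
The `(S)` flag after each member and the `inactive` state mean the kernel has recognised the disks but has not started the array. Before attempting any repair, it is worth confirming that the event counters on all members agree; a quick check, assuming the member names shown above (adjust the glob to your actual devices; note that mdstat lists only eleven members, so verify whether sdl1 is present):

Code:
$ for d in /dev/sd[a-l]1; do printf '%s: ' "$d"; mdadm -E "$d" | grep Events; done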

All disks show the same info:
Code:
$ mdadm -E /dev/sdb1 
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : d293389b:7b11f66f:33b8bc9f:5417f8db
  Creation Time : Tue Dec  9 20:52:05 2008
     Raid Level : raid6
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 9767599360 (9315.11 GiB 10002.02 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 1

    Update Time : Wed Jun  3 09:34:06 2009
          State : active
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
       Checksum : 4c480fd3 - correct
         Events : 505329

     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       81        5      active sync   /dev/sdf1
   6     6       8       97        6      active sync   /dev/sdg1
   7     7       8      113        7      active sync   /dev/sdh1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8      145        9      active sync   /dev/sdj1
  10    10       8      161       10      active sync   /dev/sdk1
  11    11       8      177       11      active sync   /dev/sdl1

The UUID reported by all disks is the same as the one in mdadm.conf.
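
Since every superblock still reports `State : active` with identical event counts and a valid checksum, the metadata itself looks intact. A sketch of the usual recovery path in this situation is to stop the inactive array and force-assemble it from the same members (device names assumed from the output above; double-check them first, since a motherboard swap can reorder drives):

Code:
# release the members held by the inactive array
$ mdadm --stop /dev/md1
# --force lets mdadm clear the stale spare state and restart the array
$ mdadm --assemble --force /dev/md1 /dev/sd[a-l]1
# watch the array come back
$ cat /proc/mdstat

If forced assembly still refuses, re-creating the array with --assume-clean, the exact original device order, RAID level, and 128K chunk size is sometimes used as a last resort, but that overwrites the superblocks and risks data loss, so it should only be attempted with parameters confirmed from mdadm -E.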
 
