Reconstructing RAID
Posted by tonyaldr on 09-03-2012

I am trying to reconstruct a failed 4-disk RAID5 Western Digital ShareSpace device, using 3 of the 4 disks connected via USB to an Ubuntu 12.04 machine. The re-assembly appears to succeed:

Code:
mdadm --assemble --force /dev/md2 /dev/sde4 /dev/sdf4 /dev/sdg4
mdadm: /dev/md2 has been started with 3 drives (out of 4).
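
To confirm the degraded array is actually running before going any further, a quick check (standard Linux md status interfaces; this output was not part of my original session):

Code:
cat /proc/mdstat
For a four-member array running on three disks I would expect something like [4/3] [UU_U] on the md2 line.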
But when I then try to mount it, the mount fails. I am logged in as root, and when I troubleshoot with mdadm I get odd results, such as:

Code:
mdadm --examine /dev/md2
mdadm: No md superblock detected on /dev/md2.
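
As far as I understand the mdadm man page, --examine reads the md superblock stored on a member device, while --detail is the call that reports on an assembled array, so "No md superblock detected" on /dev/md2 itself may simply be expected. A small sketch of the two calls, reusing the device names from the assemble command above:

Code:
# examine the per-member superblocks on the component partitions
mdadm --examine /dev/sde4 /dev/sdf4 /dev/sdg4

# report on the assembled array device
mdadm --detail /dev/md2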
Also, the system can't seem to find any LVM volume group:

Code:
vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
  No volume groups found
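
vgscan finding nothing could simply mean no LVM physical volume is visible to the host yet. A minimal way to double-check against the assembled array (standard LVM2 tools; this is a sketch, not output from my session):

Code:
# look for a physical volume signature on the assembled array
pvscan
pvs /dev/md2

# if a PV and its volume group show up, activate it and list the logical volumes
vgchange -ay
lvs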

I read in some other posts that the WD system uses LVM2. Could that be the issue? Here is the output from mdadm --detail:

Code:
mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Mon Oct 19 10:26:15 2009
     Raid Level : raid5
     Array Size : 5854981248 (5583.75 GiB 5995.50 GB)
  Used Dev Size : 1951660416 (1861.25 GiB 1998.50 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Sep  2 15:22:50 2012
          State : clean, degraded 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 4c4952ae:1477d756:234bdad8:bdaa1368
         Events : 0.9246753

    Number   Major   Minor   RaidDevice State
       0       8       84        0      active sync   /dev/sdf4
       1       8       68        1      active sync   /dev/sde4
       2       0        0        2      removed
       3       8      100        3      active sync   /dev/sdg4
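
For what it's worth, the sizes above are self-consistent: the Array Size of 5854981248 KiB is exactly 3 x the Used Dev Size of 1951660416 KiB, which is what a 4-device RAID5 (capacity of (n-1) members, the fourth holding parity) should report while running with device 2 removed.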

Here's the result of the mount attempt:

Code:
mount -t auto dev/md2 /mnt/raid
mount: special device dev/md2 does not exist
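
One thing I notice re-reading this: the device path in that mount command has no leading slash, which on its own would produce the "does not exist" error, since mount looks for dev/md2 relative to the current directory. The corrected call (still assuming the filesystem lives directly on the md device rather than inside LVM) would be:

Code:
mount -t auto /dev/md2 /mnt/raid

If an LVM2 volume group really is layered on /dev/md2, this would still fail, and the thing to mount would instead be the logical volume device that appears once the volume group is activated.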

Appreciate any assistance! Thanx!
