How to fix mistake on raid: mdadm create instead of assemble?

Hi guys,

I'm new to RAID, although I've had a server running RAID5 for a while. It was delivered preinstalled that way and I never really looked into how to monitor and maintain it. This quick introduction is just so you understand why I'm such an idiot asking such a silly question.

Now what happened?

I have a server with 4 disks and RAID5 configured: /dev/md10 is made of sda10, sdb10, sdc10 and sdd10.
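(For reference, the membership can normally be checked with the commands below; I am only showing the commands here, not any output from before the crash.)
Code:
# list all active md arrays and their member devices
cat /proc/mdstat

# detailed view of one array (here my /dev/md10)
mdadm --detail /dev/md10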

Unfortunately, /dev/sdd died, the server crashed, etc. After the restart, md10 did not rebuild. I understood sdd was dead and did not try to force a rebuild or even touch the existing system.

The first thing I did was ddrescue the remaining partitions, sd[abc]10. ddrescue did not hit any read errors, so I assume the remaining partitions are perfectly safe.
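Roughly, this is what I did (the image and map file names below are just illustrative); the /dev/loop[012] devices used later are these rescued images attached with losetup:
Code:
# copy each surviving member partition to an image file, keeping a ddrescue map
ddrescue /dev/sda10 sda10.img sda10.map
ddrescue /dev/sdb10 sdb10.img sdb10.map
ddrescue /dev/sdc10 sdc10.img sdc10.map

# attach the rescued images as loop devices for inspection
losetup /dev/loop0 sda10.img
losetup /dev/loop1 sdb10.img
losetup /dev/loop2 sdc10.img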

Then I examined the rescued copies:
Code:
# mdadm --examine /dev/loop[012]

/dev/loop0:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 9d37bc89:711887ae:a4d2adc2:26fd5302
  Creation Time : Wed Jan 25 09:08:11 2012
     Raid Level : raid5
  Used Dev Size : 1926247296 (1837.01 GiB 1972.48 GB)
     Array Size : 5778741888 (5511.04 GiB 5917.43 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 10

    Update Time : Mon Sep  5 23:29:23 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 9d0ce26d - correct
         Events : 81589

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       10        0      active sync

   0     0       8       10        0      active sync
   1     1       8       26        1      active sync
   2     2       8       42        2      active sync
   3     3       0        0        3      faulty removed
/dev/loop1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 9d37bc89:711887ae:a4d2adc2:26fd5302
  Creation Time : Wed Jan 25 09:08:11 2012
     Raid Level : raid5
  Used Dev Size : 1926247296 (1837.01 GiB 1972.48 GB)
     Array Size : 5778741888 (5511.04 GiB 5917.43 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 10

    Update Time : Mon Sep  5 23:36:23 2016
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 9d0ce487 - correct
         Events : 81626

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       26        1      active sync

   0     0       0        0        0      removed
   1     1       8       26        1      active sync
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
/dev/loop2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 9d37bc89:711887ae:a4d2adc2:26fd5302
  Creation Time : Wed Jan 25 09:08:11 2012
     Raid Level : raid5
  Used Dev Size : 1926247296 (1837.01 GiB 1972.48 GB)
     Array Size : 5778741888 (5511.04 GiB 5917.43 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 10

    Update Time : Mon Sep  5 23:29:23 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 9d0ce291 - correct
         Events : 81589

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       42        2      active sync

   0     0       8       10        0      active sync
   1     1       8       26        1      active sync
   2     2       8       42        2      active sync
   3     3       0        0        3      faulty removed

Here comes my mistake: I ran the --create command instead of --assemble:
Code:
# mdadm --create --verbose /dev/md1 --raid-devices=4 --level=raid5 --run --readonly /dev/loop0 /dev/loop1 /dev/loop2 missing

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/loop0 appears to contain an ext2fs file system
       size=5778741888K  mtime=Sat Sep  3 11:00:22 2016
mdadm: /dev/loop0 appears to be part of a raid array:
       level=raid5 devices=4 ctime=Wed Jan 25 09:08:11 2012
mdadm: /dev/loop1 appears to be part of a raid array:
       level=raid5 devices=4 ctime=Wed Jan 25 09:08:11 2012
mdadm: /dev/loop2 appears to be part of a raid array:
       level=raid5 devices=4 ctime=Wed Jan 25 09:08:11 2012
mdadm: size set to 1926115840K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: creation continuing despite oddities due to --run
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

After that, mounting failed:
Code:
# mount /dev/md1 /raid/
mount: /dev/md1 is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Here is the --examine output for the new array, for comparison with the initial one:
Code:
# mdadm --examine /dev/loop[012]

/dev/loop0:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : aa56f42f:bb95fbde:11ce620e:878b2b1c
           Name : tucana.caoba.fr:1  (local to host tucana.caoba.fr)
  Creation Time : Mon Sep 19 23:17:04 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3852232703 (1836.89 GiB 1972.34 GB)
     Array Size : 5778347520 (5510.66 GiB 5917.03 GB)
  Used Dev Size : 3852231680 (1836.89 GiB 1972.34 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=1023 sectors
          State : clean
    Device UUID : b4622e59:f0735f5a:825086d1:57f89efb

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Sep 19 23:17:04 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 3ec8dda7 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/loop1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : aa56f42f:bb95fbde:11ce620e:878b2b1c
           Name : tucana.caoba.fr:1  (local to host tucana.caoba.fr)
  Creation Time : Mon Sep 19 23:17:04 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3852232703 (1836.89 GiB 1972.34 GB)
     Array Size : 5778347520 (5510.66 GiB 5917.03 GB)
  Used Dev Size : 3852231680 (1836.89 GiB 1972.34 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=1023 sectors
          State : clean
    Device UUID : 9d42153b:4173aeea:51f41ebc:3789f98a

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Sep 19 23:17:04 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 2af1f191 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/loop2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : aa56f42f:bb95fbde:11ce620e:878b2b1c
           Name : tucana.caoba.fr:1  (local to host tucana.caoba.fr)
  Creation Time : Mon Sep 19 23:17:04 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3852232703 (1836.89 GiB 1972.34 GB)
     Array Size : 5778347520 (5510.66 GiB 5917.03 GB)
  Used Dev Size : 3852231680 (1836.89 GiB 1972.34 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=1023 sectors
          State : clean
    Device UUID : ada52b0e:f2c4a680:ece59800:6425a9b2

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Sep 19 23:17:04 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 2a341a - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)

With the help of the initial mdadm --examine output, is it possible to re-create my RAID in a way that lets me read the data out of it?
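Just to make the question concrete, this is the kind of parameter-matched re-create I have in mind, taken from the original superblocks above (0.90 metadata, 64K chunk, left-symmetric layout, loop0/loop1/loop2 in slots 0-2, slot 3 missing). I have not run it yet, and I would only run it on fresh copies of the images, since the new 1.2 superblock and internal bitmap have already been written a few KB into the start of each device:
Code:
# on fresh ddrescue copies only -- re-create with the ORIGINAL geometry
# read from the old 0.90 superblocks, without triggering a resync
# (the 0.90 superblock itself still gets written at the end of each device)
mdadm --create /dev/md1 --verbose \
      --metadata=0.90 --level=5 --raid-devices=4 \
      --chunk=64 --layout=left-symmetric \
      --assume-clean --readonly \
      /dev/loop0 /dev/loop1 /dev/loop2 missing

# then check the filesystem read-only before mounting
fsck -n /dev/md1
mount -o ro /dev/md1 /raid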

Regards