Revive RAID 0 Array From Buffalo Duo NAS

Tags
advanced, array, nas, raid, revive

# 1  
Old 11-24-2018
Revive RAID 0 Array From Buffalo Duo NAS

Thank you in advance.

I had a Buffalo DUO crap out on me that was set up as RAID 0. I don't believe it was the drives but rather the controller in the DUO unit. I bought another external HDD enclosure, was able to fire up the two old DUO drives in it, and I think I reassembled the RAID successfully:

Code:
sudo mdadm --examine /dev/sdd6
/dev/sdd6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8577ffd0:8892c451:2dd41f56:0e001d01
           Name : UNINSPECT-EME0C:2
  Creation Time : Sat Jan 18 08:17:34 2014
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 1923497952 (917.20 GiB 984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 874c45ae:5b772a48:396f6e41:79f42c62

    Update Time : Sat Jan 18 08:17:34 2014
       Checksum : eeaa9c97 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
daman$ sudo mdadm --examine /dev/sde6
/dev/sde6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8577ffd0:8892c451:2dd41f56:0e001d01
           Name : UNINSPECT-EME0C:2
  Creation Time : Sat Jan 18 08:17:34 2014
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 1923497952 (917.20 GiB 984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : bcde0889:4934f6b6:e1af9882:9b7ad11e

    Update Time : Sat Jan 18 08:17:34 2014
       Checksum : 3608b286 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
daman$ sudo mdadm --create /dev/md123 --assume-clean --level=0 --verbose --chunk=64 --raid-devices=2 --metadata=0.90 /dev/sdd6 /dev/sde6
mdadm: /dev/sdd6 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Sat Jan 18 08:17:34 2014
mdadm: /dev/sde6 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Sat Jan 18 08:17:34 2014
Continue creating array? y
mdadm: array /dev/md123 started.

Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5BD4E39E-AC17-4070-9569-94B2D6F52367

Device        Start        End    Sectors   Size Type
/dev/sdd1      2048    2002943    2000896   977M Microsoft basic data
/dev/sdd2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sdd3  12003328   12005375       2048     1M Microsoft basic data
/dev/sdd4  12005376   12007423       2048     1M Microsoft basic data
/dev/sdd5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sdd6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 82EA65C5-432A-4966-8110-EBF425364748

Device        Start        End    Sectors   Size Type
/dev/sde1      2048    2002943    2000896   977M Microsoft basic data
/dev/sde2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sde3  12003328   12005375       2048     1M Microsoft basic data
/dev/sde4  12005376   12007423       2048     1M Microsoft basic data
/dev/sde5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sde6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/md123: 1.8 TiB, 1969663770624 bytes, 3846999552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes

But when I try to mount the array using:
Code:
$ sudo mount /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

sudo mount -t ntfs /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a

cat /proc/mdstat
Personalities : [raid0] 
md123 : active raid0 sde6[1] sdd6[0]
      1923499776 blocks 64k chunks
      
md124 : inactive sdc5[0](S) sdb5[1](S)
      2000872 blocks super 1.2
       
md125 : inactive sdc6[0](S) sdb6[1](S)
      1923497952 blocks super 1.2
       
md126 : inactive sdc2[0](S) sdb2[1](S)
      9998336 blocks super 1.2
       
md127 : inactive sdc1[0](S) sdb1[1](S)
      2000768 blocks
       
unused devices: <none>


It simply won't mount. What am I doing wrong?

------ Post updated at 10:41 PM ------

Code:
[27866.641697]  sdd: sdd1 sdd2 sdd3 sdd4 sdd5 sdd6
[27866.641998]  sde: sde1 sde2 sde3 sde4 sde5 sde6
[27866.642985] sd 4:0:0:1: [sde] Attached SCSI disk
[27866.643433] sd 4:0:0:0: [sdd] Attached SCSI disk
[28455.514768] md: array md125 already has disks!
[28479.074982] md: array md125 already has disks!
[30110.845958] md123: detected capacity change from 0 to 1969663770624
[30190.553774] XFS (md123): Invalid superblock magic number
[30430.769434] EXT4-fs (md123): VFS: Can't find ext4 filesystem


Moderator's Comments:
Mod Comment Changed PHP BB code tags to CODE tags.

Last edited by metallica1973; 11-24-2018 at 11:25 PM..
# 2  
Old 11-25-2018
Can you show the output of lsblk -f?

Regards
Peasant.
# 3  
Old 11-25-2018
What operating system are you trying to mount this on? I notice that the system complains that an NTFS signature is missing, but you don't specify that it's a Windows filesystem on your mount command line. Most Linux/Unix OSes don't look for a Windows filesystem unless you tell them to.
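For example, something along these lines (read-only while diagnosing; the ntfs-3g driver being installed is an assumption on my part):

```shell
# Explicitly declare the filesystem type, and mount read-only
# so nothing is written while you are still diagnosing.
sudo mount -t ntfs-3g -o ro /dev/md123 /mnt/caca
```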
# 4  
Old 11-25-2018
Gentlemen,

Thanks for the responses. ** Moderator - sorry about the PHP tags **

Code:
lsblk -f

NAME      FSTYPE            LABEL              UUID                                 MOUNTPOINT
sda                                                                                 
├─sda1    ext4                                 1eb8913a-d1e9-4b33-8780-0652bdbfe1fb /
├─sda2                                                                              
├─sda5    swap                                 7460f125-5abb-47f0-947d-d07453d094ca [SWAP]
└─sda6    ext4                                 02a8d909-6a3d-49e9-9879-37b24c6f10f2 /home
sdd                                                                                 
├─sdd1    linux_raid_member                    156bff7d-9ec1-1723-eddf-b9e77f24a349 
├─sdd2    linux_raid_member UNINSPECT-EME0C:1  3d6e5a18-8af9-0d86-cd86-4b8a18142b59 
├─sdd3                                                                              
├─sdd4                                                                              
├─sdd5    linux_raid_member UNINSPECT-EME0C:10 af16b765-c95a-10de-a3f3-c8f0eb93b21a 
└─sdd6    linux_raid_member                    fa3576f5-0b51-66a9-c247-9c29a02d54af 
  └─md123                                                                           
sde                                                                                 
├─sde1    linux_raid_member                    156bff7d-9ec1-1723-eddf-b9e77f24a349 
├─sde2    linux_raid_member UNINSPECT-EME0C:1  3d6e5a18-8af9-0d86-cd86-4b8a18142b59 
├─sde3                                                                              
├─sde4                                                                              
├─sde5    linux_raid_member UNINSPECT-EME0C:10 af16b765-c95a-10de-a3f3-c8f0eb93b21a 
└─sde6    linux_raid_member                    fa3576f5-0b51-66a9-c247-9c29a02d54af 
  └─md123 

sudo fdisk -l
Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x87266eff

Device     Boot    Start       End   Sectors  Size Id Type
/dev/sda1  *        2048  58593279  58591232   28G 83 Linux
/dev/sda2       58595326 234440703 175845378 83.9G  5 Extended
/dev/sda5       58595328  66605055   8009728  3.8G 82 Linux swap / Solaris
/dev/sda6       66607104 234440703 167833600   80G 83 Linux


Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5BD4E39E-AC17-4070-9569-94B2D6F52367

Device        Start        End    Sectors   Size Type
/dev/sdd1      2048    2002943    2000896   977M Microsoft basic data
/dev/sdd2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sdd3  12003328   12005375       2048     1M Microsoft basic data
/dev/sdd4  12005376   12007423       2048     1M Microsoft basic data
/dev/sdd5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sdd6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 82EA65C5-432A-4966-8110-EBF425364748

Device        Start        End    Sectors   Size Type
/dev/sde1      2048    2002943    2000896   977M Microsoft basic data
/dev/sde2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sde3  12003328   12005375       2048     1M Microsoft basic data
/dev/sde4  12005376   12007423       2048     1M Microsoft basic data
/dev/sde5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sde6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/md123: 1.8 TiB, 1969663770624 bytes, 3846999552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes

sudo file -s /dev/md123
/dev/md123: data

sudo mdadm --detail /dev/md123
/dev/md123:
           Version : 0.90
     Creation Time : Sat Nov 24 21:28:30 2018
        Raid Level : raid0
        Array Size : 1923499776 (1834.39 GiB 1969.66 GB)
      Raid Devices : 2
     Total Devices : 2
   Preferred Minor : 123
       Persistence : Superblock is persistent

       Update Time : Sat Nov 24 21:28:30 2018
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 64K

Consistency Policy : none

              UUID : fa3576f5:0b5166a9:c2479c29:a02d54af (local to host Hijo-De-Dios)
            Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       54        0      active sync   /dev/sdd6
       1       8       70        1      active sync   /dev/sde6

Additionally, I tried to mount the stuff using:

Code:
sudo mount -t ntfs /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

sudo mount -t ntfs-3g /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?


The OS that I am using to mount this on is Kali Linux 2018, which is essentially Debian.

Last edited by metallica1973; 11-25-2018 at 05:01 PM..
# 5  
Old 11-25-2018
Hmmmm.......if it was Debian then I'd be typing.............

Code:
$ sudo mount -t ntfs-3g /dev/md123 /mnt/caca

to mount a NTFS filesystem.
# 6  
Old 11-25-2018
Sorry about the typo. I did do that; I changed the mount point to what it was, caca. So:

Code:
sudo mount -t ntfs-3g /dev/md123 /mnt/caca

# 7  
Old 11-26-2018
I'm not familiar with Buffalo NAS specifically, but there are two different architectures of RAID controller.

One stores the array configuration information on the disks themselves, including the type of array (RAID 0, 1, 3, 5, 6 or whatever) and each disk's member number within that array. When this type of controller fails you can replace it with a new one which, at boot, will read the disk labels and run with the previous RAID array(s).

The second, and most common, type stores the array configuration in its own NVRAM, and if the controller fails you have to find a way of resetting the correct configuration into NVRAM. Often this is done by using some new disks to configure the same array and then swapping the disks back to the originals. The manufacturer would usually be consulted for advice.
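Linux md arrays, like the ones in this thread, behave like the first kind: the configuration lives in superblocks on the member partitions and can be read straight back, for example:

```shell
# Print every md superblock mdadm can find and the arrays they
# describe; no controller NVRAM is involved.
sudo mdadm --examine --scan --verbose
```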

It seems to me that the NTFS filesystem is no longer laid out as it was, so your recovery technique was flawed. Perhaps the disks are in the wrong order, or the array was re-created with parameters that don't match the originals. Kali doesn't like what it's seeing. It might be quicker to initialise the array and restore from backup.
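On that point: the --examine output earlier in the thread reports 512K chunks and version-1.2 metadata, but the array was re-created with --chunk=64 and --metadata=0.90, so both the stripe layout and the data offset would differ from the original. If the old 1.2 superblocks on sdd6/sde6 survived the re-create (an assumption), a less destructive sketch would be to assemble from them rather than create anew; /dev/md200 here is just a placeholder name:

```shell
# Stop the array that was created with mismatched parameters.
sudo mdadm --stop /dev/md123

# Assemble from the superblocks already on the partitions instead
# of writing new ones; mdadm reads the level, chunk size, data
# offset and member order from the on-disk 1.2 metadata.
sudo mdadm --assemble --run /dev/md200 /dev/sdd6 /dev/sde6
```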
