Revive RAID 0 Array From Buffalo Duo NAS


 
# 1  
Old 11-24-2018
Revive RAID 0 Array From Buffalo Duo NAS

Thank you in advance.

I had a Buffalo DUO crap out on me that was set up as RAID 0. I don't believe it was the drives but rather the controller in the DUO unit. I bought another external HDD enclosure, was able to fire up the two older DUO drives in it, and I think I reassembled the RAID successfully:

Code:
sudo mdadm --examine /dev/sdd6
/dev/sdd6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8577ffd0:8892c451:2dd41f56:0e001d01
           Name : UNINSPECT-EME0C:2
  Creation Time : Sat Jan 18 08:17:34 2014
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 1923497952 (917.20 GiB 984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 874c45ae:5b772a48:396f6e41:79f42c62

    Update Time : Sat Jan 18 08:17:34 2014
       Checksum : eeaa9c97 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
daman$ sudo mdadm --examine /dev/sde6
/dev/sde6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8577ffd0:8892c451:2dd41f56:0e001d01
           Name : UNINSPECT-EME0C:2
  Creation Time : Sat Jan 18 08:17:34 2014
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 1923497952 (917.20 GiB 984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : bcde0889:4934f6b6:e1af9882:9b7ad11e

    Update Time : Sat Jan 18 08:17:34 2014
       Checksum : 3608b286 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
daman$ sudo mdadm --create /dev/md123 --assume-clean --level=0 --verbose --chunk=64 --raid-devices=2 --metadata=0.90 /dev/sdd6 /dev/sde6
mdadm: /dev/sdd6 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Sat Jan 18 08:17:34 2014
mdadm: /dev/sde6 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Sat Jan 18 08:17:34 2014
Continue creating array? y
mdadm: array /dev/md123 started.

Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5BD4E39E-AC17-4070-9569-94B2D6F52367

Device        Start        End    Sectors   Size Type
/dev/sdd1      2048    2002943    2000896   977M Microsoft basic data
/dev/sdd2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sdd3  12003328   12005375       2048     1M Microsoft basic data
/dev/sdd4  12005376   12007423       2048     1M Microsoft basic data
/dev/sdd5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sdd6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 82EA65C5-432A-4966-8110-EBF425364748

Device        Start        End    Sectors   Size Type
/dev/sde1      2048    2002943    2000896   977M Microsoft basic data
/dev/sde2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sde3  12003328   12005375       2048     1M Microsoft basic data
/dev/sde4  12005376   12007423       2048     1M Microsoft basic data
/dev/sde5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sde6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/md123: 1.8 TiB, 1969663770624 bytes, 3846999552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes

But when I try to mount the array using:
Code:
$ sudo mount /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

sudo mount -t ntfs /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a

cat /proc/mdstat
Personalities : [raid0] 
md123 : active raid0 sde6[1] sdd6[0]
      1923499776 blocks 64k chunks
      
md124 : inactive sdc5[0](S) sdb5[1](S)
      2000872 blocks super 1.2
       
md125 : inactive sdc6[0](S) sdb6[1](S)
      1923497952 blocks super 1.2
       
md126 : inactive sdc2[0](S) sdb2[1](S)
      9998336 blocks super 1.2
       
md127 : inactive sdc1[0](S) sdb1[1](S)
      2000768 blocks
       
unused devices: <none>


It simply won't mount. What am I doing wrong?

------ Post updated at 10:41 PM ------

Code:
[27866.641697]  sdd: sdd1 sdd2 sdd3 sdd4 sdd5 sdd6
[27866.641998]  sde: sde1 sde2 sde3 sde4 sde5 sde6
[27866.642985] sd 4:0:0:1: [sde] Attached SCSI disk
[27866.643433] sd 4:0:0:0: [sdd] Attached SCSI disk
[28455.514768] md: array md125 already has disks!
[28479.074982] md: array md125 already has disks!
[30110.845958] md123: detected capacity change from 0 to 1969663770624
[30190.553774] XFS (md123): Invalid superblock magic number
[30430.769434] EXT4-fs (md123): VFS: Can't find ext4 filesystem

Moderator's Comments:
Changed PHP BB code tags to CODE tags.

Last edited by metallica1973; 11-24-2018 at 11:25 PM..
# 2  
Old 11-25-2018
Can you show the output of lsblk -f?

Regards
Peasant.
# 3  
Old 11-25-2018
What operating system are you trying to mount this on? I notice that the system complains that an NTFS signature is missing, but you don't specify that it's a Windows filesystem on your 'mount' command line. Most Linux/Unix OSes don't look for a Windows filesystem unless you tell them to.
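
If you're not sure what is actually on the md device, it may be quicker to let the tools identify the filesystem than to guess at mount time. A quick, read-only sketch (using the /dev/md123 name from your post):

Code:
# Low-level probe for any known filesystem signature on the assembled array.
sudo blkid -p /dev/md123
# List whatever signatures wipefs can see (with no options it only prints, never erases).
sudo wipefs /dev/md123
# Let file(1) read the first blocks and make a guess.
sudo file -s /dev/md123

If none of these report a filesystem, the problem is more likely the array geometry than the mount command.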
# 4  
Old 11-25-2018
Gentlemen,

Thanks for the responses. ** Moderator - sorry about the php tags **

Code:
lsblk -f

NAME      FSTYPE            LABEL              UUID                                 MOUNTPOINT
sda                                                                                 
├─sda1    ext4                                 1eb8913a-d1e9-4b33-8780-0652bdbfe1fb /
├─sda2                                                                              
├─sda5    swap                                 7460f125-5abb-47f0-947d-d07453d094ca [SWAP]
└─sda6    ext4                                 02a8d909-6a3d-49e9-9879-37b24c6f10f2 /home
sdd                                                                                 
├─sdd1    linux_raid_member                    156bff7d-9ec1-1723-eddf-b9e77f24a349 
├─sdd2    linux_raid_member UNINSPECT-EME0C:1  3d6e5a18-8af9-0d86-cd86-4b8a18142b59 
├─sdd3                                                                              
├─sdd4                                                                              
├─sdd5    linux_raid_member UNINSPECT-EME0C:10 af16b765-c95a-10de-a3f3-c8f0eb93b21a 
└─sdd6    linux_raid_member                    fa3576f5-0b51-66a9-c247-9c29a02d54af 
  └─md123                                                                           
sde                                                                                 
├─sde1    linux_raid_member                    156bff7d-9ec1-1723-eddf-b9e77f24a349 
├─sde2    linux_raid_member UNINSPECT-EME0C:1  3d6e5a18-8af9-0d86-cd86-4b8a18142b59 
├─sde3                                                                              
├─sde4                                                                              
├─sde5    linux_raid_member UNINSPECT-EME0C:10 af16b765-c95a-10de-a3f3-c8f0eb93b21a 
└─sde6    linux_raid_member                    fa3576f5-0b51-66a9-c247-9c29a02d54af 
  └─md123 

sudo fdisk -l
Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x87266eff

Device     Boot    Start       End   Sectors  Size Id Type
/dev/sda1  *        2048  58593279  58591232   28G 83 Linux
/dev/sda2       58595326 234440703 175845378 83.9G  5 Extended
/dev/sda5       58595328  66605055   8009728  3.8G 82 Linux swap / Solaris
/dev/sda6       66607104 234440703 167833600   80G 83 Linux


Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5BD4E39E-AC17-4070-9569-94B2D6F52367

Device        Start        End    Sectors   Size Type
/dev/sdd1      2048    2002943    2000896   977M Microsoft basic data
/dev/sdd2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sdd3  12003328   12005375       2048     1M Microsoft basic data
/dev/sdd4  12005376   12007423       2048     1M Microsoft basic data
/dev/sdd5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sdd6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 82EA65C5-432A-4966-8110-EBF425364748

Device        Start        End    Sectors   Size Type
/dev/sde1      2048    2002943    2000896   977M Microsoft basic data
/dev/sde2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sde3  12003328   12005375       2048     1M Microsoft basic data
/dev/sde4  12005376   12007423       2048     1M Microsoft basic data
/dev/sde5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sde6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/md123: 1.8 TiB, 1969663770624 bytes, 3846999552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes

sudo file -s /dev/md123
/dev/md123: data

sudo mdadm --detail /dev/md123
/dev/md123:
           Version : 0.90
     Creation Time : Sat Nov 24 21:28:30 2018
        Raid Level : raid0
        Array Size : 1923499776 (1834.39 GiB 1969.66 GB)
      Raid Devices : 2
     Total Devices : 2
   Preferred Minor : 123
       Persistence : Superblock is persistent

       Update Time : Sat Nov 24 21:28:30 2018
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 64K

Consistency Policy : none

              UUID : fa3576f5:0b5166a9:c2479c29:a02d54af (local to host Hijo-De-Dios)
            Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       54        0      active sync   /dev/sdd6
       1       8       70        1      active sync   /dev/sde6

Additionally, I tried to mount the array using:

Code:
sudo mount -t ntfs /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

sudo mount -t ntfs-3g /dev/md123 /mnt/caca/
NTFS signature is missing.
Failed to mount '/dev/md123': Invalid argument
The device '/dev/md123' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?


The OS that I am trying to mount this on is Kali Linux 2018, which is essentially Debian.

Last edited by metallica1973; 11-25-2018 at 05:01 PM..
# 5  
Old 11-25-2018
Hmmmm.......if it was Debian then I'd be typing.............

Code:
$ sudo mount -t ntfs-3g /dev/md123 /mnt/caca

to mount an NTFS filesystem.
# 6  
Old 11-25-2018
Sorry about the typo. I did do that; I changed it to what it was, caca. So:

Code:
sudo mount -t ntfs-3g /dev/md123 /mnt/caca

# 7  
Old 11-26-2018
I'm not familiar with Buffalo NAS specifically, but there are two different architectures of RAID controller.

One stores the array configuration information on the disks, including the type of array (RAID 0, 1, 3, 5, 6 or whatever) and each disk's member number within that array. When this type of controller fails you can replace it with a new one which, at boot, will read the disk labels and run with the previous RAID array(s).
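
The --examine output in post #1 shows Linux md superblocks on the Buffalo members, so those boxes effectively work the first way: the configuration lives on the disks themselves. On a machine where those superblocks are still untouched, md can normally rebuild the set from the on-disk metadata alone. A generic sketch (the /dev/md/buffalo name is just an example):

Code:
# Show which arrays the on-disk superblocks describe.
sudo mdadm --examine --scan
# Assemble an array straight from its member partitions,
# using only the metadata already written on the disks.
sudo mdadm --assemble /dev/md/buffalo /dev/sdd6 /dev/sde6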

The second, and most common, type stores the array configuration in its own NVRAM, and if the controller fails you have to find a way of resetting the correct configuration into NVRAM. Often this is done by using some new disks to configure the same array and then swapping the disks back to the originals. The manufacturer would usually be referred to for advice.

It seems to me that the filesystem is not as it was, so your recovery technique was flawed. Perhaps the disks are in the wrong order or something like that. Kali doesn't like what it's seeing. It might be quicker to initialise the array and restore from backup.
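
For what it's worth, the --examine output in post #1 shows the original set used metadata 1.2, a 512K chunk and a 1 MiB data offset, whereas the array was re-created with --metadata=0.90 and --chunk=64, so the data no longer starts where the filesystem expects it. One thing worth trying, purely as a sketch (ideally against dd images of the disks rather than the disks themselves, and assuming your mdadm is new enough to accept --data-offset), is to re-create the array with the original geometry and then probe it:

Code:
# Stop the array that was built with the wrong geometry.
sudo mdadm --stop /dev/md123

# Re-create it to match the original superblocks: metadata 1.2, 512K chunk,
# 1 MiB data offset (2048 sectors), with sdd6 as device 0 and sde6 as device 1,
# exactly as the earlier --examine output reported.
sudo mdadm --create /dev/md123 --assume-clean --level=0 --raid-devices=2 \
    --metadata=1.2 --chunk=512 --data-offset=1M /dev/sdd6 /dev/sde6

# Check that the new superblock really matches the old layout.
sudo mdadm --examine /dev/sdd6

# Then see whether a recognisable filesystem appears; Buffalo firmware
# typically uses XFS or ext3 on the data partition rather than NTFS.
sudo blkid -p /dev/md123
sudo file -s /dev/md123
sudo mount -o ro /dev/md123 /mnt/caca

If that still shows no filesystem, swapping the order of /dev/sdd6 and /dev/sde6 in the --create line is the next variable to try, as suggested above.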