Revive RAID 0 Array From Buffalo Duo NAS

# 8  
I changed machines to a Raspberry Pi. Could it be that when I assembled the array, I did it as RAID 0 when it needed to be RAID 1? The reason I ask is that lsblk shows this:
Code:
sudo lsblk 
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0 931.5G  0 disk  
├─sda1        8:1    0   977M  0 part  
│ └─md126     9:126  0   977M  0 raid1 
├─sda2        8:2    0   4.8G  0 part  
│ └─md125     9:125  0   4.8G  0 raid1 
├─sda3        8:3    0     1M  0 part  
├─sda4        8:4    0     1M  0 part  
├─sda5        8:5    0   977M  0 part  
│ └─md124     9:124  0   977M  0 raid1 
└─sda6        8:6    0 917.2G  0 part  
  └─md127     9:127  0   1.8T  0 raid0 
sdb           8:16   0 931.5G  0 disk  
├─sdb1        8:17   0   977M  0 part  
│ └─md126     9:126  0   977M  0 raid1 
├─sdb2        8:18   0   4.8G  0 part  
│ └─md125     9:125  0   4.8G  0 raid1 
├─sdb3        8:19   0     1M  0 part  
├─sdb4        8:20   0     1M  0 part  
├─sdb5        8:21   0   977M  0 part  
│ └─md124     9:124  0   977M  0 raid1 
└─sdb6        8:22   0 917.2G  0 part  
  └─md127     9:127  0   1.8T  0 raid0 
mmcblk0     179:0    0   1.9G  0 disk  
├─mmcblk0p1 179:1    0  43.2M  0 part  /boot
└─mmcblk0p2 179:2    0   1.8G  0 part  /

Should the command instead have been:
Code:
sudo mdadm --create /dev/md127 --assume-clean --level=1 --verbose --chunk=64 --raid-devices=2 --metadata=0.90 /dev/sda6 /dev/sdb6

If I reassemble it as RAID 1, will it destroy the data?
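Before recreating anything, a non-destructive check would be to stop the current array and re-assemble it read-only, so mdadm rewrites no metadata while you look around (a sketch only; device names taken from the lsblk output above):

```shell
# Inspect the existing superblocks; --examine only reads.
sudo mdadm --examine /dev/sda6 /dev/sdb6

# Stop the current array and re-assemble it read-only (-o), so
# neither the superblocks nor the data are modified:
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --readonly /dev/md127 /dev/sda6 /dev/sdb6

# Attempt a read-only mount; a plain ro mount can still replay an
# ext3 journal, so noload is added to avoid even that write path:
sudo mount -o ro,noload /dev/md127 /mnt
```

If the read-only assembly mounts, the data is recoverable and nothing has been risked finding that out.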
# 9  
Those do rather look like a RAID 1 setup (identical data partitioning).

It depends on whether initialising an array on the Buffalo also creates an empty (i.e. wiped) array.

As I said, if you have a couple of identical spare drives, I'd be inclined to configure the array with them first, and then swap to the originals.

Have you got a Buffalo dealer/distributor that you can talk to? This is the kind of question they get asked all the time, i.e. recovery.
# 10  
Hi,

Hicksd8 is partially right. From what I see there are three RAID 1 partitions (md124, md125 and md126), and there is also a RAID 0 stripe (md127, comprising sda6 and sdb6). Are you sure these disks are being addressed in the right order?

Regards

Gull04
# 11  
@gull04... I've never come across a RAID controller that can RAID 0 a couple of partitions on different disks and, at the same time, mirror (RAID 1) other partitions on the same disks. I assume the RAID 0 we're seeing here is a result of the previous attempt at recreating a RAID 0 array.
# 12  
Hi,

Yes, I have the drift here now. I suspect that you should have specified --level=1 in the mdadm command. I am pretty sure that destroying and recreating the mirror will leave the data intact, but I'd dd the disk to an image first (Clonezilla or something similar) if you have the tin.
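The dd-to-an-image step could look like the sketch below, demonstrated here on a 4 MiB scratch file standing in for the real member disk (on a drive with read errors, GNU ddrescue would be the safer tool):

```shell
#!/bin/sh
# Take a full image of a member disk before running any destructive
# mdadm command, then verify the copy byte-for-byte.  "member_disk"
# is a scratch file standing in for /dev/sda.
dd if=/dev/urandom of=member_disk bs=1M count=4 2>/dev/null
dd if=member_disk of=member_disk.img bs=1M conv=noerror,sync 2>/dev/null
cmp member_disk member_disk.img && echo "image verified"   # -> image verified
```

On the real hardware the input would be the whole device, e.g. `dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync`, with the image written to a third disk that has at least the full drive's capacity free.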

@Hicksd8, at the bottom end, yes, hardware RAID commonly supports arrays only at whole-device level. When it comes to software it's a whole different ball game; take a look at LVM, VxFS, SVM or, in this case, mdadm. You can slice and dice any way you want.

Regards

Gull04
# 13  
Code:
mdadm --create /dev/md127 --assume-clean --level=1 --verbose --chunk=64 --raid-devices=2 --metadata=1.2 /dev/sda6 /dev/sdb6

NAME          SIZE FSTYPE            TYPE  MOUNTPOINT
sda         931.5G                   disk  
├─sda1        977M linux_raid_member part  
│ └─md127     977M ext3              raid1 
├─sda2        4.8G linux_raid_member part  
│ └─md1       4.8G ext3              raid1 
├─sda3          1M                   part  
├─sda4          1M                   part  
├─sda5        977M linux_raid_member part  
│ └─md10      977M swap              raid1 
└─sda6      917.2G linux_raid_member part  
  └─md126   917.1G                   raid1 
sdb         931.5G                   disk  
├─sdb1        977M linux_raid_member part  
│ └─md127     977M ext3              raid1 
├─sdb2        4.8G linux_raid_member part  
│ └─md1       4.8G ext3              raid1 
├─sdb3          1M                   part  
├─sdb4          1M                   part  
├─sdb5        977M linux_raid_member part  
│ └─md10      977M swap              raid1 
└─sdb6      917.2G linux_raid_member part  
  └─md126   917.1G                   raid1 
mmcblk0       1.9G                   disk  
├─mmcblk0p1  43.2M vfat              part  /boot
└─mmcblk0p2   1.8G ext4              part  /

sudo cat /proc/mdstat
Personalities : [raid1] 
md126 : active (auto-read-only) raid1 sda6[0] sdb6[1]
      961618880 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      4999156 blocks super 1.2 [2/2] [UU]
      
md10 : active (auto-read-only) raid1 sdb5[1] sda5[0]
      1000436 blocks super 1.2 [2/2] [UU]
      
md127 : active (auto-read-only) raid1 sdb1[1] sda1[0]
      1000384 blocks [2/2] [UU]
      
unused devices: <none>

daman@alpha:~ $ sudo mount /dev/md126 /media/caca/
mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
daman@alpha:~ $ sudo mount -t ext4 /dev/md126 /media/caca/
mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
daman@alpha:~ $ sudo mount -t ext3 /dev/md126 /media/caca/
mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
daman@alpha:~ $ sudo mount -t ntfs /dev/md126 /media/caca/
mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

daman@alpha:~ $ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 5BD4E39E-AC17-4070-9569-94B2D6F52367
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 16018829 sectors (7.6 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2002943   977.0 MiB   0700  primary
   2         2002944        12003327   4.8 GiB     0700  primary
   3        12003328        12005375   1024.0 KiB  0700  primary
   4        12005376        12007423   1024.0 KiB  0700  primary
   5        12007424        14008319   977.0 MiB   0700  primary
   6        14008320      1937508319   917.2 GiB   0700  primary
daman@alpha:~ $ sudo gdisk -l /dev/sdb
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 82EA65C5-432A-4966-8110-EBF425364748
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 16018829 sectors (7.6 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2002943   977.0 MiB   0700  primary
   2         2002944        12003327   4.8 GiB     0700  primary
   3        12003328        12005375   1024.0 KiB  0700  primary
   4        12005376        12007423   1024.0 KiB  0700  primary
   5        12007424        14008319   977.0 MiB   0700  primary
   6        14008320      1937508319   917.2 GiB   0700  primary

sudo mdadm --examine /dev/sd[ab]6
/dev/sda6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 482ed47b:00163f12:651d5389:7a570e47
           Name : alpha:127  (local to host alpha)
  Creation Time : Wed Nov 28 12:19:01 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1923237856 (917.07 GiB 984.70 GB)
     Array Size : 961618880 (917.07 GiB 984.70 GB)
  Used Dev Size : 1923237760 (917.07 GiB 984.70 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=96 sectors
          State : clean
    Device UUID : fca88779:0cb1197c:fbc3353a:d1ad1b83

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 28 12:19:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : f2831a9d - correct
         Events : 1


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 482ed47b:00163f12:651d5389:7a570e47
           Name : alpha:127  (local to host alpha)
  Creation Time : Wed Nov 28 12:19:01 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1923237856 (917.07 GiB 984.70 GB)
     Array Size : 961618880 (917.07 GiB 984.70 GB)
  Used Dev Size : 1923237760 (917.07 GiB 984.70 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=96 sectors
          State : clean
    Device UUID : f2c5ca18:584daa0d:ced70de7:ccd6bcc7

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 28 12:19:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 14d010af - correct
         Events : 1


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5BD4E39E-AC17-4070-9569-94B2D6F52367

Device        Start        End    Sectors   Size Type
/dev/sda1      2048    2002943    2000896   977M Microsoft basic data
/dev/sda2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sda3  12003328   12005375       2048     1M Microsoft basic data
/dev/sda4  12005376   12007423       2048     1M Microsoft basic data
/dev/sda5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sda6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 82EA65C5-432A-4966-8110-EBF425364748

Device        Start        End    Sectors   Size Type
/dev/sdb1      2048    2002943    2000896   977M Microsoft basic data
/dev/sdb2   2002944   12003327   10000384   4.8G Microsoft basic data
/dev/sdb3  12003328   12005375       2048     1M Microsoft basic data
/dev/sdb4  12005376   12007423       2048     1M Microsoft basic data
/dev/sdb5  12007424   14008319    2000896   977M Microsoft basic data
/dev/sdb6  14008320 1937508319 1923500000 917.2G Microsoft basic data


Disk /dev/md127: 977 MiB, 1024393216 bytes, 2000768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md10: 977 MiB, 1024446464 bytes, 2000872 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 4.8 GiB, 5119135744 bytes, 9998312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md126: 917.1 GiB, 984697733120 bytes, 1923237760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

I can mount md1, md10 and md127 fine, which tells me the data is intact, but there is something special about md126 (/dev/sda6, /dev/sdb6) that stops it from mounting.
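One hypothesis (an assumption, not confirmed anywhere in this thread): the Buffalo firmware created the original array with 0.90 metadata, whose data starts at sector 0 of the member partition, while the recreated 1.2 array starts its data 262144 sectors in (per the --examine output above), so the old ext3 superblock no longer lines up with the start of /dev/md126. A quick way to test that is to look for the ext magic bytes (0x53 0xEF at byte 1080 from the filesystem start) on the raw partition. A minimal sketch, demonstrated on a scratch file rather than the real devices:

```shell
#!/bin/sh
# ext2/3/4 superblocks carry the magic bytes 0x53 0xEF at byte offset
# 1080 from the start of the filesystem.  probe_ext checks whether a
# candidate byte offset into a device (or image file) looks like one.
probe_ext() {
    # $1 = device or image file, $2 = candidate filesystem start (bytes)
    magic=$(dd if="$1" bs=1 skip=$(($2 + 1080)) count=2 2>/dev/null \
            | od -An -tx1 | tr -d ' \n')
    if [ "$magic" = "53ef" ]; then
        echo "possible ext superblock at offset $2"
    fi
}

# Demo: plant the magic at filesystem offset 4096 in a zeroed image
# (4096 + 1080 = 5176; \123\357 is octal for 0x53 0xEF).
dd if=/dev/zero of=scratch.img bs=4096 count=4 2>/dev/null
printf '\123\357' | dd of=scratch.img bs=1 seek=5176 conv=notrunc 2>/dev/null
probe_ext scratch.img 0       # no match
probe_ext scratch.img 4096    # -> possible ext superblock at offset 4096
```

On the real disks one would run `probe_ext /dev/sda6 0`: if the magic shows up at offset 0 of the raw partition, the filesystem predates the 1.2 superblock's data offset, and recreating the array with --metadata=0.90 (as in the command first proposed above) may line the data up again.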