Ubuntu desktop RAID1 boot problem.

Tags
ubuntu

 
# 1  
Old 06-26-2016
Ubuntu desktop RAID1 boot problem.

Hello,
I recently installed Ubuntu desktop with the aim of creating a RAID1 configuration using software RAID (mdadm).

I have the following configuration as fdisk -l reports:

Code:
Disk /dev/sda: 223,6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1191B7C1-FB4B-4151-B71D-CE0BE03AB12B

Device        Start       End   Sectors   Size Type
/dev/sda1      2048    206847    204800   100M EFI System
/dev/sda2    206848  67315711  67108864    32G Linux RAID
/dev/sda3  67315712 468862094 401546383 191,5G Linux RAID


Disk /dev/sdb: 223,6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9BF52357-7ABC-4838-A448-B8259F278040

Device        Start       End   Sectors   Size Type
/dev/sdb1      2048    206847    204800   100M EFI System
/dev/sdb2    206848  67315711  67108864    32G Linux RAID
/dev/sdb3  67315712 468862094 401546383 191,5G Linux RAID

Disk /dev/md1: 191,4 GiB, 205457522688 bytes, 401284224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 44C8ED5A-7423-4D08-B57D-59E0A38DC5CE

Device     Start       End   Sectors   Size Type
/dev/md1p1  2048 401284190 401282143 191,4G Linux filesystem


Disk /dev/md0: 32 GiB, 34326183936 bytes, 67043328 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BAA6BE5F-36F4-4B70-B4B6-2F28FAC7F72D

Device     Start      End  Sectors Size Type
/dev/md0p1  2048 67043294 67041247  32G Linux swap


cat /proc/mdstat gives me:
Code:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb2[2] sda2[0]
      33521664 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sdb3[2] sda3[0]
      200642112 blocks super 1.2 [2/2] [UU]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

When I remove the data cable of either SSD, the boot drops to BusyBox.
BusyBox displays the following alert:
Code:
ALERT! UUID=74230d1d-b85c-49bf-995b-d7a00160e41b does not exist. Dropping to a shell!

cat /proc/cmdline gives me:
Code:
BOOT_IMAGE=/boot/vmlinuz-4.4.0-24-generic.efi.signed root=UUID=74230d1d-b85c-49bf-995b-d7a00160e41b ro

Typing exit won't help; it just complains again that the UUID does not exist.
So what I want is to still be able to boot into Ubuntu if one of the drives fails.
Any help is very much appreciated; I am quite a novice with Linux operating systems.
blkid gives me:
Code:
blkid
/dev/sda1: UUID="1C68-1C4B" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="f3aa88a3-9c78-4ef7-9f1e-935785598337"
/dev/sda2: UUID="6eced8d1-5d91-5025-1a74-edb1ad409d83" UUID_SUB="868d7324-5c57-446c-d938-e836debec024" LABEL="ubuntu:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="da5176bb-8cfc-42a3-87a8-3660168cd955"

/dev/sda3: UUID="bf6c978d-e3b6-e814-d3d2-a131bf325998" UUID_SUB="3c7e4f7d-3b9d-0b24-2226-7b18c40bbd55" LABEL="ubuntu:1" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="ec46a009-690e-433b-8129-67a22dc82e54"
/dev/sdb1: UUID="1C68-1C4B" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="2c07476e-c57a-40ff-b599-129c8b149479"
/dev/sdb2: UUID="6eced8d1-5d91-5025-1a74-edb1ad409d83" UUID_SUB="bf9b9116-eec5-edea-2f27-80b80d9c9c90" LABEL="ubuntu:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="660e4f54-95e0-4255-becc-780fe029fdf2"
/dev/sdb3: UUID="bf6c978d-e3b6-e814-d3d2-a131bf325998" UUID_SUB="144cdbb9-92d3-f77e-60d7-ee5be414f1fe" LABEL="ubuntu:1" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="3dfc5388-d79b-4e0d-a4e8-2e666923a47a"
/dev/sdc1: LABEL="Media" UUID="902EECD22EECB280" TYPE="ntfs" PARTUUID="41138b36-01"
/dev/md1: PTUUID="44c8ed5a-7423-4d08-b57d-59e0a38dc5ce" PTTYPE="gpt"
/dev/md1p1: UUID="74230d1d-b85c-49bf-995b-d7a00160e41b" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="70826ab5-b977-4942-8686-0abf7f70fcf9"
/dev/md0: PTUUID="baa6be5f-36f4-4b70-b4b6-2f28fac7f72d" PTTYPE="gpt"
/dev/md0p1: UUID="8b831ade-c61c-46fd-b4fc-6fba596ec9f6" TYPE="swap" PARTLABEL="Linux swap" PARTUUID="668f70b9-0c11-41b5-bedb-86f214398982"
/dev/mapper/cryptswap1: UUID="455953ff-6980-4ca7-a8c1-aa27a4de6e9a" TYPE="swap"

List of mounts (/etc/fstab):
Code:
 # /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/md1p1 during installation
UUID=74230d1d-b85c-49bf-995b-d7a00160e41b /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=1C68-1C4B  /boot/efi       vfat    umask=0077      0       1
# swap was on /dev/md0p1 during installation
#UUID=8b831ade-c61c-46fd-b4fc-6fba596ec9f6 none            swap    sw              0       0
/dev/mapper/cryptswap1 none swap sw 0 0
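
For reference, the root= UUID from /proc/cmdline above is the ext4 filesystem UUID on /dev/md1p1, so it only exists once md1 is assembled. A minimal way to confirm the mapping (the expected device name is an assumption based on the blkid listing above) is:
Code:
blkid -U 74230d1d-b85c-49bf-995b-d7a00160e41b     # should print /dev/md1p1 while md1 is assembled
findfs UUID=74230d1d-b85c-49bf-995b-d7a00160e41b  # same lookup; errors out when md1 is not assembled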


# 2  
Old 06-27-2016
The best way to check is the output of the following (you can paste the output here as well):
Code:
blkid # list all UUIDs on the system; the RAID member partitions (sda2/sdb2, sda3/sdb3) should report the same UUID per array, which I believe should be used in the configuration.
cat /etc/fstab # list current configuration, where and what is being used as reference for mounting.
mdadm --examine <devices used in array(s)> # check the output to compare

I suspect there is some misconfiguration with the UUIDs that is preventing a normal boot.

How did you install/configure your system? Was the mdadm RAID configured during or after the install?

It's a bit confusing, since everything has its own UUID (the md devices with their filesystems, and the member devices used in the arrays), but well...
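
For example, a minimal comparison along these lines (device names taken from your fdisk output, adjust as needed) would show whether the member superblocks agree:
Code:
# Each mirror member should report the same "Array UUID" for its array
sudo mdadm --examine /dev/sda2 /dev/sdb2 | grep -E '^/dev|Array UUID'
sudo mdadm --examine /dev/sda3 /dev/sdb3 | grep -E '^/dev|Array UUID'
# And the assembled arrays themselves
sudo mdadm --detail /dev/md0 /dev/md1 | grep -E '^/dev|UUID|State'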
# 3  
Old 07-05-2016
I have added the blkid output and the mounts (cat /etc/fstab) above.
The RAID was configured in degraded mode during the install. When the installation completed I selected "Continue testing" and completed the RAID as follows:
Code:
 sudo mdadm --add /dev/md0 /dev/sdb2
sudo mdadm --add /dev/md1 /dev/sdb3
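
(It is worth letting the rebuild finish before testing with a cable pulled; a quick way to check, assuming the same array names, is:)
Code:
cat /proc/mdstat   # both arrays should show [2/2] [UU] once the sync is done
sudo mdadm --detail /dev/md0 /dev/md1 | grep -E 'State|Active Devices|Rebuild'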

# 4  
Old 07-09-2016
blkid after the drop to BusyBox.

I just checked blkid after being dropped to the BusyBox shell when I removed one of the RAID SSDs. It did not show the RAID arrays md0, md0p1 and md1, md1p1 at all. Does this mean that mdadm is not functioning as it should?
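
A few things worth checking from the (initramfs) shell in that state (mdadm is included in the initramfs; the exact output may differ):
Code:
cat /proc/mdstat               # are md0/md1 listed at all, and are they inactive?
mdadm --examine --scan         # are the member superblocks on the remaining disk detected?
mdadm --assemble --scan --run  # try to assemble; --run allows starting a degraded mirror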
# 5  
Old 07-14-2016
Finally got the RAID1 working.

Hi, just to let you know, in case someone stumbles on the same problem: I was able to fix it with the following measures.

First, from the BusyBox (initramfs) shell I was able to boot while degraded by entering these commands:
Code:
 
(initramfs): mdadm --run /dev/md0
(initramfs): mdadm --run /dev/md1
(initramfs): exit

And to automate booting when degraded, I patched the existing /usr/share/initramfs-tools/scripts/local-top/mdadm script by applying the following (run from that directory):

Code:
patch --verbose --ignore-whitespace <<'EndOfPatch'
--- mdadm
+++ mdadm
@@ -76,7 +76,15 @@
   if $MDADM --assemble --scan --run --auto=yes${extra_args:+ $extra_args}; then
     verbose && log_success_msg "assembled all arrays."
   else
-    log_failure_msg "failed to assemble all arrays."
+    log_warning_msg "failed to assemble all arrays...attempting individual starts"
+    for dev in $(cat /proc/mdstat | grep md | cut -d ' ' -f 1); do
+      log_begin_msg "attempting mdadm --run $dev"
+      if $MDADM --run $dev; then
+        verbose && log_success_msg "started $dev"
+      else
+        log_failure_msg "failed to start $dev"
+      fi
+    done
   fi
   verbose && log_end_msg

EndOfPatch

Then update the initramfs with the command:
Code:
update-initramfs -u
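
To double-check that the rebuilt initramfs actually picked up the patched script, something like this should work (lsinitramfs ships with initramfs-tools; the initrd path is an assumption for the currently running kernel):
Code:
lsinitramfs /boot/initrd.img-$(uname -r) | grep 'scripts/local-top/mdadm'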
# 6  
Old 08-12-2016
Was the mdadm RAID configured after the install?
