04-11-2013
Help needed! RAID 5 failure on a Debian system
Hello!
I have a 4-disk RAID 5 server running OpenMediaVault (Debian). The other day the array disappeared from OMV, which was reporting 3 drives failed. Panic stations. However, using mdadm I can get info from 3 of the drives which suggests they are functioning OK (output below). The remaining 4th drive doesn't return anything via mdadm --examine. Any ideas how I can rebuild the array without destroying the data? From what I have read, since the three apparently working drives all show the same event count (103), it is fairly likely the data on them is intact - but how do I rebuild?
Thanks my lovelies!
Jon
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : dc344271:82f55bd0:fcfd0e16:a2a60bc8
Name : TTVServer:TTV2 (local to host TTVServer)
Creation Time : Mon Jan 7 11:03:39 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : active
Device UUID : c792c6b2:78fd4e78:e4f008ea:826e25e8
Update Time : Sat Apr 6 13:17:10 2013
Checksum : 30386dbe - correct
Events : 103
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : dc344271:82f55bd0:fcfd0e16:a2a60bc8
Name : TTVServer:TTV2 (local to host TTVServer)
Creation Time : Mon Jan 7 11:03:39 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : active
Device UUID : c0426118:614cb315:15c9a0ee:2ad88e26
Update Time : Sat Apr 6 13:17:10 2013
Checksum : 7638ae70 - correct
Events : 103
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdi:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : dc344271:82f55bd0:fcfd0e16:a2a60bc8
Name : TTVServer:TTV2 (local to host TTVServer)
Creation Time : Mon Jan 7 11:03:39 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : active
Device UUID : 8a96d5fe:594418b6:c63dafd0:c459e498
Update Time : Sat Apr 6 13:17:10 2013
Checksum : 5175f080 - correct
Events : 103
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing)
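For reference, the usual recovery sequence for this situation (same Array UUID and matching Events counters on the survivors) can be sketched as below. Device names come from the output above; the md device name (/dev/md127) is a guess, and every command that touches the disks is deliberately left commented out - this is a sketch, not a tested procedure:

```shell
# Hypothetical recovery sketch. Nothing uncommented here writes to the disks.
#
# 1. Stop any half-assembled array first (md device name is an assumption):
#      mdadm --stop /dev/md127
#
# 2. Confirm the surviving members agree on the event counter. On the live
#    box you would pipe `mdadm --examine /dev/sdX` through this filter:
events_of() { awk '/^ *Events/ {print $3}'; }
#    From the outputs pasted above, each member reports the same value:
printf 'Events : 103\n' | events_of     # prints 103
#
# 3. With matching event counts, a forced assemble from the three good
#    members (leaving the dead 4th out) is the usual next step; the array
#    starts degraded without rewriting the data area:
#      mdadm --assemble --force /dev/md127 /dev/sdf /dev/sdg /dev/sdi
#
# 4. Only after the data is confirmed readable, add a replacement disk and
#    let the rebuild run:
#      mdadm --manage /dev/md127 --add /dev/sdX
```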
MDADM.CONF(5) File Formats Manual MDADM.CONF(5)
NAME
mdadm.conf - configuration for management of Software Raid with mdadm
SYNOPSIS
/etc/mdadm.conf
DESCRIPTION
mdadm is a tool for creating, managing, and monitoring RAID devices using the md driver in Linux.
Some common tasks, such as assembling all arrays, can be simplified by describing the devices and arrays in this configuration file.
SYNTAX
The file should be seen as a collection of words separated by white space (space, tab, or newline). Any word that begins with a hash sign
(#) starts a comment; that word together with the remainder of the line is ignored.
Any line that starts with white space (space or tab) is treated as though it were a continuation of the previous line.
Empty lines are ignored, but otherwise each (non continuation) line must start with a keyword as listed below. The keywords are case
insensitive and can be abbreviated to 3 characters.
The keywords are:
DEVICE A device line lists the devices (whole devices or partitions) that might contain a component of an MD array. When looking for the
components of an array, mdadm will scan these devices (or any devices listed on the command line).
The device line may contain a number of different devices (separated by spaces) and each device name can contain wild cards as
defined by glob(7).
Also, there may be several device lines present in the file.
For example:
DEVICE /dev/hda* /dev/hdc*
DEV /dev/sd*
DEVICE /dev/discs/disc*/disc
ARRAY The ARRAY lines identify actual arrays. The second word on the line should be the name of the device where the array is normally
assembled, such as /dev/md1. Subsequent words identify the array, or identify the array as a member of a group. If multiple identi-
ties are given, then a component device must match ALL identities to be considered a match. Each identity word has a tag, and
equals sign, and some value. The tags are:
uuid= The value should be a 128 bit uuid in hexadecimal, with punctuation interspersed if desired. This must match the uuid stored in
the superblock.
super-minor=
The value is an integer which indicates the minor number that was stored in the superblock when the array was created. When an
array is created as /dev/mdX, then the minor number X is stored.
devices=
The value is a comma separated list of device names. Precisely these devices will be used to assemble the array. Note that the
devices listed there must also be listed on a DEVICE line.
level= The value is a raid level. This is not normally used to identify an array, but is supported so that the output of
mdadm --examine --scan
can be used directly in the configuration file.
num-devices=
The value is the number of devices in a complete active array. As with level= this is mainly for compatibility with the output
of
mdadm --examine --scan.
spare-group=
The value is a textual name for a group of arrays. All arrays with the same spare-group name are considered to be part of the
same group. The significance of a group of arrays is that mdadm will, when monitoring the arrays, move a spare drive from one
array in a group to another array in that group if the first array had a failed or missing drive but no spare.
MAILADDR
The mailaddr line gives an E-mail address that alerts should be sent to when mdadm is running in --monitor mode (and was given the --scan
option). There should only be one MAILADDR line and it should have only one address.
PROGRAM
The program line gives the name of a program to be run when mdadm --monitor detects potentially interesting events on any of the
arrays that it is monitoring. This program gets run with two or three arguments, they being the Event, the md device, and possibly
the related component device.
There should only be one program line and it should give only one program.
EXAMPLE
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/hda1 /dev/hdb1
# /dev/md0 is known by its UUID.
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
# /dev/md1 contains all devices with a minor number of
# 1 in the superblock.
ARRAY /dev/md1 super-minor=1
# /dev/md2 is made from precisely these two devices
ARRAY /dev/md2 devices=/dev/hda1,/dev/hda2
# /dev/md4 and /dev/md5 are a spare-group and spares
# can be moved between them
ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df
spare-group=group1
ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977
spare-group=group1
MAILADDR root@mydomain.tld
PROGRAM /usr/sbin/handle-mdadm-events
SEE ALSO
mdadm(8), md(4).
MDADM.CONF(5)
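As the DESCRIPTION above notes, the output of mdadm --examine --scan can be dropped straight into this file. A minimal regeneration sketch follows; the scratch path is illustrative, the ARRAY line is echoed rather than scanned (so it can run anywhere), and the UUID shown is the one from the thread above:

```shell
# Sketch: rebuild mdadm.conf entries from the array superblocks.
conf=/tmp/mdadm.conf.new          # write to a scratch file first, review, then install
{
    echo 'DEVICE /dev/sd*'
    # On the live system this line would instead be:  mdadm --examine --scan
    # which emits one ARRAY line per detected array, in this form:
    echo 'ARRAY /dev/md/TTV2 metadata=1.2 UUID=dc344271:82f55bd0:fcfd0e16:a2a60bc8 name=TTVServer:TTV2'
} > "$conf"
# After installing as /etc/mdadm.conf, regenerate the initramfs so the
# array assembles at boot:  update-initramfs -u
```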