Help needed! RAID 5 failure on a Debian system
Posted by jonlisty on 04-21-2013, 11:29 PM
OK, after some more reading, I tried this:

Quote:
mdadm --create /dev/md8 --verbose --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc missing
and got this:

Quote:
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sda: Device or resource busy
mdadm: failed container membership check
mdadm: device /dev/sda not suitable for any style of array

aaaghhh!!!
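From more reading, it sounds like the "Device or resource busy" error means the kernel still has the disks claimed by the old (inactive) md8 array, so they can't be handed to a new array. If that's right, md8 would need to be stopped before anything else can touch the disks, something like:

Quote:
sudo mdadm --stop /dev/md8

(I haven't run this yet -- just noting it here. And from what I've read, re-running --create would rewrite the superblocks anyway, so it's probably the wrong tool for recovery.)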

---------- Post updated at 10:17 PM ---------- Previous update was at 10:13 PM ----------

also...

Quote:
$ sudo cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md8 : inactive sda[0] sdc[2] sdb[1]
8790796680 blocks super 1.2
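So md8 is inactive but still holding sda, sdb and sdc, which I guess explains the "busy" error from --create above. Before doing anything destructive it's probably worth dumping what's still in the superblocks on each disk -- as far as I know --examine only reads:

Quote:
sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc

That should show whether the event counts and the array UUID on the three surviving members still agree.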
---------- Post updated at 10:29 PM ---------- Previous update was at 10:17 PM ----------

also:

Quote:
$ sudo mdadm --detail /dev/md8
/dev/md8:
Version : 1.2
Creation Time : Mon Jan 7 11:03:39 2013
Raid Level : raid5
Used Dev Size : -1
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Sat Apr 6 13:17:10 2013
State : active, degraded, Not Started
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : TTVServer:TTV2 (local to host TTVServer)
UUID : dc344271:82f55bd0:fcfd0e16:a2a60bc8
Events : 103

Number   Major   Minor   RaidDevice   State
   0       8       0         0        active sync   /dev/sda
   1       8      16         1        active sync   /dev/sdb
   2       8      32         2        active sync   /dev/sdc
   3       0       0         3        removed
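
Actually, that output looks like good news: the original metadata (creation time of Jan 7, the TTVServer:TTV2 name, event count 103) is all still on the disks, so my failed --create never wrote anything. The array is just degraded with one member removed and "Not Started". If I'm reading the docs right, the less destructive route is to start it as-is rather than re-create it:

Quote:
sudo mdadm --run /dev/md8

or, if that refuses, stop it and force-assemble from the three surviving members:

Quote:
sudo mdadm --stop /dev/md8
sudo mdadm --assemble --force --run /dev/md8 /dev/sda /dev/sdb /dev/sdc

No guarantees -- this is just what the man page suggests for a degraded array that won't start -- but it seems far safer than another --create. If it does come up, I'll run a read-only fsck (fsck -n) before mounting anything.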
 
