
Software RAID doubt

# 8  
Old 08-04-2009
What is the output of the command? How did you build this array? What is the output of cat /proc/mdstat?
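While waiting for that output: the health of an array can also be read programmatically from /proc/mdstat. Below is a minimal Python sketch; the sample text and the helper name `degraded_arrays` are made up for illustration, not taken from any real system.

```python
# Sketch: parsing a /proc/mdstat snapshot to spot degraded arrays.
# The SAMPLE text below is hypothetical, not output from a real machine.
import re

SAMPLE = """\
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1](F)
      104320 blocks [2/1] [U_]
unused devices: <none>
"""

def degraded_arrays(mdstat_text):
    """Return md arrays whose status line shows a missing member.

    In /proc/mdstat, "[n/m]" gives configured vs. active devices, and a
    "[U_]"-style string marks each slot: U = up, _ = down/missing.
    """
    arrays = {}
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"(md\d+) :", line)
        if m:
            current = m.group(1)
        elif current and "blocks" in line:
            status = re.search(r"\[(?:U|_)+\]", line)
            if status and "_" in status.group(0):
                arrays[current] = status.group(0)
            current = None
    return arrays

print(degraded_arrays(SAMPLE))  # {'md0': '[U_]'}
```

A healthy two-disk mirror would show `[2/2] [UU]` and the helper would return an empty dict.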


9 More Discussions You Might Find Interesting

1. Solaris

Software RAID on top of Hardware RAID

Server Model: T5120 with 146G x4 disks. OS: Solaris 10 - installed on c1t0d0. Plan to use software raid (veritas volume mgr) on c1t2d0 disk. After format and label the disk, still not able to detect using vxdiskadm. Question: Should I remove the hardware raid on c1t2d0 first? My... (4 Replies)
Discussion started by: KhawHL
4 Replies

2. Red Hat

Software RAID configuration

We have configured software-based RAID5 with LVM on our RHEL5 servers. Please let us know if it's good practice to configure software RAID on live environment servers. What are the disadvantages of software RAID compared to hardware RAID? (4 Replies)
Discussion started by: mitchnelson
4 Replies

3. UNIX for Dummies Questions & Answers

RAID software vs hardware RAID

Hi, can someone tell me what the differences between software and hardware RAID are? Thanks for the help. (2 Replies)
Discussion started by: presul
2 Replies

4. Filesystems, Disks and Memory

Software RAID

Hello, my company has inherited a CentOS-based machine that has 7 hard drives and a software-based RAID system. Supposedly one of the drives has failed and I need to replace it. How can I go about telling which hard drive needs replacing? I have looked in the logs and there clearly is a... (5 Replies)
Discussion started by: mojoman
5 Replies

5. Solaris

doubt on RAID

What is the maximum size of a RAID-5 volume that can be created using five 20 GB disks? Please explain in detail. Thanks. (1 Reply)
Discussion started by: rogerben
1 Replies

6. Linux

Software RAID on Linux

Hey, I have worked with Linux for some time, but have not gotten into the specifics of hard drive tuning or software RAID. This is about to change. I have a Dell PowerEdge T105 at home and I am purchasing the following: 1GBx4 DDR2 ECC PC6400 RAM Rosewill RSV-5 E-Sata 5 bay disk enclosure... (6 Replies)
Discussion started by: mark54g
6 Replies

7. HP-UX

Software RAID (0+1)

Hi! A couple of months ago a disk failed in our JBOD cabinett and I have finally got a new disk to replace it with. It was a RAID 0 so we have to create and configure the whole thing again. First we thought of RAID 1+0 but it seems you can't do this with LVM. If you read my last thread, you can... (0 Replies)
Discussion started by: hoff
0 Replies

8. UNIX for Advanced & Expert Users

Software RAID ...

Hi all, I was trying out software RAID in RHEL 4 without problems. Then I wanted to simulate a failure of disk 1 (which holds the bootloader), so I unplugged my first disk. My problem is that the second disk cannot boot: it is stuck in GRUB and the computer hangs. Sorry for my poor grasp of RAID concepts; I am using RAID 1.... (0 Replies)
Discussion started by: blesets
0 Replies

9. SuSE

RAID software besides Veritas

Hello Linux people, I am looking for RAID software or a solution besides Veritas. Veritas has some great software but it is way too costly. Does anyone know of good RAID software that is NOT Veritas? I need the functions but not the cost. (7 Replies)
Discussion started by: xtmeisel
7 Replies
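The RAID-5 sizing question in discussion 5 above (five 20 GB disks) comes down to simple arithmetic: RAID-5 spends one disk's worth of capacity on parity, so usable space is (N - 1) × smallest disk. A minimal Python sketch of this model, which deliberately ignores superblock/metadata overhead:

```python
# Sketch: usable capacity of common RAID levels under a simplified model
# (ignores superblock/metadata overhead; sizes in GB).

def usable_gb(level, disks_gb):
    smallest, n = min(disks_gb), len(disks_gb)
    if level == "raid0":
        return smallest * n          # striping: no redundancy
    if level == "raid1":
        return smallest              # mirroring: one disk's worth
    if level == "raid5":
        return smallest * (n - 1)    # one disk's worth goes to parity
    raise ValueError("unknown level")

# Five 20 GB disks in RAID-5: (5 - 1) * 20 = 80 GB usable.
print(usable_gb("raid5", [20, 20, 20, 20, 20]))  # 80
```

So the answer to that thread is 80 GB of usable space from the 100 GB of raw disk.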
MDADM(8)						      System Manager's Manual							  MDADM(8)

NAME
       mdadm - manage MD devices, aka Linux Software RAID

SYNOPSIS
       mdadm [mode] <raiddevice> [options] <component-devices>

DESCRIPTION
       RAID devices are virtual devices created from two or more real block
       devices.  This allows multiple devices (typically disk drives or
       partitions thereof) to be combined into a single device to hold (for
       example) a single filesystem.  Some RAID levels include redundancy and
       so can survive some degree of device failure.

       Linux Software RAID devices are implemented through the md (Multiple
       Devices) device driver.  Currently, Linux supports LINEAR md devices,
       RAID0 (striping), RAID1 (mirroring), RAID4, and RAID5.  Recent kernels
       (2002) also support a mode known as MULTIPATH.  mdadm only provides
       limited support for MULTIPATH as yet.

       mdadm is a program that can be used to create, manage, and monitor MD
       devices.  As such it provides a similar set of functionality to the
       raidtools packages.  The key differences between mdadm and raidtools
       are:

       o   mdadm is a single program and not a collection of programs.

       o   mdadm can perform (almost) all of its functions without having a
           configuration file.  mdadm also helps with management of the
           configuration file.

       o   mdadm can provide information about your arrays (through Query,
           Detail, and Examine) that raidtools cannot.

MODES
       mdadm has 6 major modes of operation:

       Assemble
              Assemble the parts of a previously created array into an active
              array.  Components can be explicitly given or can be searched
              for.  mdadm checks that the components do form a bona fide
              array, and can, on request, fiddle superblock information so as
              to assemble a faulty array.

       Build  Build a legacy array without per-device superblocks.

       Create Create a new array with per-device superblocks.

       Manage This is for doing things to specific components of an array,
              such as adding new spares and removing faulty devices.

       Misc   This mode allows operations on independent devices, such as
              examining MD superblocks, erasing old superblocks, and stopping
              active arrays.

       Follow or Monitor
              Monitor one or more md devices and act on any state changes.

OPTIONS
       Available options are:

       -A, --assemble
              Assemble a pre-existing array.

       -B, --build
              Build a legacy array without superblocks.

       -C, --create
              Create a new array.

       -Q, --query
              Examine a device to see (1) if it is an md device and (2) if it
              is a component of an md array.  Information about what is
              discovered is presented.

       -D, --detail
              Print details of one or more md devices.

       -E, --examine
              Print the content of the md superblock on the device(s).

       -F, --follow, --monitor
              Select Monitor mode.

       -h, --help
              Display a help message or, after one of the above options, a
              mode-specific help message.

       -V, --version
              Print version information for mdadm.

       -v, --verbose
              Be more verbose about what is happening.

       -b, --brief
              Be less verbose.  This is used with --detail and --examine.

       -f, --force
              Be more forceful about certain operations.  See the various
              modes for the exact meaning of this option in different
              contexts.

       -c, --config=
              Specify the config file.  Default is /etc/mdadm.conf.

       -s, --scan
              Scan the config file or /proc/mdstat for missing information.
              In general, this option gives mdadm permission to get any
              missing information, like component devices, array devices,
              array identities, and alert destination, from the configuration
              file /etc/mdadm.conf.  One exception is MISC mode when using
              --detail or --stop, in which case --scan says to get a list of
              array devices from /proc/mdstat.

   For create or build:
       -c, --chunk=
              Specify chunk size in kibibytes.  The default is 64.

       --rounding=
              Specify the rounding factor for a linear array (== chunk size).

       -l, --level=
              Set RAID level.  Options are: linear, raid0, 0, stripe, raid1,
              1, mirror, raid4, 4, raid5, 5, multipath, mp.  Obviously some
              of these are synonymous.  Only the first four are valid when
              Building.

       -p, --parity=
              Set the raid5 parity algorithm.  Options are: left-asymmetric,
              left-symmetric, right-asymmetric, right-symmetric, la, ra, ls,
              rs.  The default is left-symmetric.

       --layout=
              Same as --parity.

       -n, --raid-devices=
              Number of active devices in the array.
       -x, --spare-devices=
              Number of spare (eXtra) devices in the initial array.  Spares
              can be added and removed later.

       -z, --size=
              Amount (in kibibytes) of space to use from each drive in
              RAID1/4/5.  This must be a multiple of the chunk size, and must
              leave about 128Kb of space at the end of the drive for the RAID
              superblock.  If this is not specified (as it normally is not)
              the smallest drive (or partition) sets the size, though if
              there is a variance among the drives of greater than 1%, a
              warning is issued.

   For assemble:
       -u, --uuid=
              UUID of the array to assemble.  Devices which don't have this
              UUID are excluded.

       -m, --super-minor=
              Minor number of the device that the array was created for.
              Devices which don't have this minor number are excluded.  If
              you create an array as /dev/md1, then all superblocks will
              contain the minor number 1, even if the array is later
              assembled as /dev/md2.

       -f, --force
              Assemble the array even if some superblocks appear out-of-date.

       -R, --run
              Attempt to start the array even if fewer drives were given than
              are needed for a full array.  Normally if not all drives are
              found and --scan is not used, then the array will be assembled
              but not started.  With --run an attempt will be made to start
              it anyway.

   For Manage mode:
       -a, --add
              Hot-add the listed devices.

       -r, --remove
              Remove the listed devices.  They must not be active, i.e. they
              should be failed or spare devices.

       -f, --fail
              Mark the listed devices as faulty.

       --set-faulty
              Same as --fail.

   For Misc mode:
       -R, --run
              Start a partially built array.

       -S, --stop
              Deactivate the array, releasing all resources.

       -o, --readonly
              Mark the array as readonly.

       -w, --readwrite
              Mark the array as readwrite.

       --zero-superblock
              If the device contains a valid md superblock, the block is
              over-written with zeros.  With --force the block where the
              superblock would be is over-written even if it doesn't appear
              to be valid.

   For Monitor mode:
       -m, --mail
              Give a mail address to send alerts to.

       -p, --program, --alert
              Give a program to be run whenever an event is detected.
       -d, --delay
              Give a delay in seconds.  mdadm polls the md arrays and then
              waits this many seconds before polling again.  The default is
              60 seconds.

ASSEMBLE MODE
       Usage: mdadm --assemble device options...
       Usage: mdadm --assemble --scan options...

       This usage assembles one or more raid arrays from pre-existing
       components.  For each array, mdadm needs to know the md device, the
       identity of the array, and a number of component-devices.  These can
       be found in a number of ways.

       The md device is either given before --scan or is found from the
       config file.  In the latter case, multiple md devices can be started
       with a single mdadm command.

       The identity can be given with the --uuid option, with the
       --super-minor option, can be found in the config file, or will be
       taken from the superblock on the first component-device listed on the
       command line.

       Devices can be given on the --assemble command line or from the
       config file.  Only devices which have an md superblock which contains
       the right identity will be considered for any device.

       The config file is only used if explicitly named with --config or
       requested with --scan.  In the latter case, /etc/mdadm.conf is used.
       If --scan is not given, then the config file will only be used to
       find the identity of md arrays.

       Normally the array will be started after it is assembled.  However if
       --scan is not given and insufficient drives were listed to start a
       complete (non-degraded) array, then the array is not started (to
       guard against usage errors).  To insist that the array be started in
       this case (as may work for RAID1 or RAID5), give the --run flag.

BUILD MODE
       Usage: mdadm --build device --chunk=X --level=Y --raid-devices=Z
       devices

       This usage is similar to --create.  The difference is that it creates
       a legacy array without a superblock.  With these arrays there is no
       difference between initially creating the array and subsequently
       assembling the array, except that hopefully there is useful data
       there in the second case.

       The level may only be 0, raid0, or linear.  All devices must be
       listed and the array will be started once complete.

CREATE MODE
       Usage: mdadm --create device --chunk=X --level=Y --raid-devices=Z
       devices

       This usage will initialise a new md array, associate some devices
       with it, and activate the array.

       As devices are added, they are checked to see if they contain raid
       superblocks or filesystems.  They are also checked to see if the
       variance in device size exceeds 1%.  If any discrepancy is found, the
       array will not automatically be run, though the presence of --run can
       override this caution.

       To create a "degraded" array in which some devices are missing,
       simply give the word missing in place of a device name.  This will
       cause mdadm to leave the corresponding slot in the array empty.  For
       a RAID4 or RAID5 array at most one slot can be missing.  For a RAID1
       array, only one real device needs to be given; all of the others can
       be missing.

       The General Management options that are valid with --create are:

       --run  Insist on running the array even if some devices look like
              they might be in use.

       --readonly
              Start the array readonly - not supported yet.

MANAGE MODE
       Usage: mdadm device options... devices...

       This usage will allow individual devices in an array to be failed,
       removed, or added.  It is possible to perform multiple operations
       with one command.  For example:

              mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1

       will first mark /dev/hda1 as faulty in /dev/md0, then remove it from
       the array, and finally add it back in as a spare.  However only one
       md array can be affected by a single command.

MISC MODE
       Usage: mdadm options ... devices ...

       MISC mode includes a number of distinct operations that operate on
       distinct devices.  The operations are:

       --query
              The device is examined to see if it is (1) an active md array,
              or (2) a component of an md array.  The information discovered
              is reported.

       --detail
              The device should be an active md device.  mdadm will display
              a detailed description of the array.  --brief or --scan will
              cause the output to be less detailed and the format to be
              suitable for inclusion in /etc/mdadm.conf.

       --examine
              The device should be a component of an md array.  mdadm will
              read the md superblock of the device and display the contents.
              If --brief or --scan is given, then multiple devices that are
              components of the one array are grouped together and reported
              in a single entry suitable for inclusion in /etc/mdadm.conf.
              Having --scan without listing any devices will cause all
              devices listed in the config file to be examined.

       --stop The listed devices should be active md arrays; they will be
              deactivated, if they are not currently in use.

       --run  This will fully activate a partially assembled md array.

       --readonly
              This will mark an active array as read-only, providing that it
              is not currently being used.

       --readwrite
              This will change a readonly array back to being read/write.

       --scan For all operations except --examine, --scan will cause the
              operation to be applied to all arrays listed in /proc/mdstat.
              For --examine, --scan causes all devices listed in the config
              file to be examined.

MONITOR MODE
       Usage: mdadm --monitor options... devices...

       This usage causes mdadm to periodically poll a number of md arrays
       and to report on any events noticed.  mdadm will never exit once it
       decides that there are arrays to be checked, so it should normally be
       run in the background.

       As well as reporting events, mdadm may move a spare drive from one
       array to another if they are in the same spare-group and if the
       destination array has a failed drive but no spares.

       If any devices are listed on the command line, mdadm will only
       monitor those devices.  Otherwise all arrays listed in the
       configuration file will be monitored.  Further, if --scan is given,
       then any other md devices that appear in /proc/mdstat will also be
       monitored.

       The result of monitoring the arrays is the generation of events.
       These events are passed to a separate program (if specified) and may
       be mailed to a given E-mail address.

       When passing an event to a program, the program is run once for each
       event and is given 2 or 3 command-line arguments.  The first is the
       name of the event (see below).  The second is the name of the md
       device which is affected, and the third is the name of a related
       device if relevant, such as a component device that has failed.

       If --scan is given, then a program or an E-mail address must be
       specified on the command line or in the config file.  If neither is
       available, then mdadm will not monitor anything.  Without --scan
       mdadm will continue monitoring as long as something was found to
       monitor.  If no program or email is given, then each event is
       reported to stdout.

       The different events are:

       DeviceDisappeared
              An md array which previously was configured appears to no
              longer be configured.

       RebuildStarted
              An md array started reconstruction.

       RebuildNN
              Where NN is 20, 40, 60, or 80, this indicates that the rebuild
              has passed that percentage of the total.

       Fail   An active component device of an array has been marked as
              faulty.
       FailSpare
              A spare component device which was being rebuilt to replace a
              faulty device has failed.

       SpareActive
              A spare component device which was being rebuilt to replace a
              faulty device has been successfully rebuilt and has been made
              active.

       NewArray
              A new md array has been detected in the /proc/mdstat file.

       MoveSpare
              A spare drive has been moved from one array in a spare-group
              to another to allow a failed drive to be replaced.

       Only Fail and FailSpare cause Email to be sent.  All events cause the
       program to be run.  The program is run with two or three arguments:
       the event name, the array device, and possibly a second device.

       Each event has an associated array device (e.g. /dev/md1) and
       possibly a second device.  For Fail, FailSpare, and SpareActive the
       second device is the relevant component device.  For MoveSpare the
       second device is the array that the spare was moved from.

       For mdadm to move spares from one array to another, the different
       arrays need to be labelled with the same spare-group in the
       configuration file.  The spare-group name can be any string.  It is
       only necessary that different spare groups use different names.

       When mdadm detects that an array which is in a spare group has fewer
       active devices than necessary for the complete array, and has no
       spare devices, it will look for another array in the same spare group
       that has a full complement of working drives and a spare.  It will
       then attempt to remove the spare from the second array and add it to
       the first.  If the removal succeeds but the adding fails, then it is
       added back to the original array.

EXAMPLES
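Before the command examples, the per-device size rule described under --size can be made concrete numerically: the smallest drive sets the size, about 128 KiB is reserved for the superblock, and the result must be a chunk multiple. A minimal Python sketch of that rule (an illustrative model only, not mdadm's actual internal arithmetic):

```python
# Sketch of the --size rule from OPTIONS above: the smallest member sets
# the per-device size, ~128 KiB is reserved for the RAID superblock, and
# the result is rounded down to a multiple of the chunk size.
# Illustrative model only -- not mdadm's exact internal arithmetic.

SUPERBLOCK_KIB = 128

def per_device_kib(device_sizes_kib, chunk_kib=64):
    usable = min(device_sizes_kib) - SUPERBLOCK_KIB
    return (usable // chunk_kib) * chunk_kib  # round down to chunk multiple

# Three members, smallest 1,000,000 KiB, default 64 KiB chunk:
print(per_device_kib([1_000_000, 1_200_000, 1_050_000]))  # 999872
```

Multiplying the per-device figure by the level factor (n for RAID0, 1 for RAID1, n-1 for RAID5) then gives the array's usable size.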
       To find out if a device is a raid array or part of one:
              mdadm -Q /dev/name-of-device

       To assemble and start all arrays listed in the standard config file:
              mdadm -As

       To shut down all arrays (that are not still in use):
              mdadm --stop --scan

       To monitor all arrays if (and only if) an email address or program
       was given in the config file, but poll every 2 minutes:
              mdadm -Fs --delay=120

       To create /dev/md0 as a RAID1 array with /dev/hda1 and /dev/hdc1:
              mdadm -C /dev/md0 -l1 -n2 /dev/hd[ac]1

       To create a prototype config file that describes currently active
       arrays that are known to be made from partitions of IDE or SCSI
       drives:
              echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
              mdadm --detail --scan >> mdadm.conf
       This file should be reviewed before being used as it may contain
       unwanted detail.

       To find out what raid arrays could be assembled from existing IDE and
       SCSI whole drives (not partitions):
              echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf
              mdadm -Es -c mdadm.conf >> mdadm.conf
       This file is very likely to contain unwanted detail, particularly the
       devices= entries.

       To get help about Create mode:
              mdadm --create --help

       To get help about the format of the config file:
              mdadm --config --help

       To get general help:
              mdadm --help

FILES
       /proc/mdstat
              If you're using the /proc filesystem, /proc/mdstat lists all
              active md devices with information about them.  mdadm uses
              this to find arrays when --scan is given in Misc mode, and to
              monitor array reconstruction in Monitor mode.

       /etc/mdadm.conf
              The config file lists which devices may be scanned to see if
              they contain an MD superblock, and gives identifying
              information (e.g. UUID) about known MD arrays.  See
              mdadm.conf(5) for more details.

NOTE
       mdadm was previously known as mdctl.

SEE ALSO
       For information on the various levels of RAID, check out: <>

       For new releases of the RAID driver check out: <>

       mdadm.conf(5), md(4), raidtab(5), raid0run(8), raidstop(8), mkraid(8)

							      MDADM(8)
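The Monitor-mode spare-group behaviour described above (a spare is borrowed from a healthy array in the same spare-group when another member of the group degrades with no spare of its own) can be sketched as a toy Python model. The data layout and function name are invented for illustration; this is not mdadm's implementation.

```python
# Toy model of mdadm Monitor mode's MoveSpare rule: if an array in a
# spare-group is short of active devices and has no spare, borrow a
# spare from another array in the same group that is healthy and has one.

def move_spare(arrays):
    """arrays: name -> dict(group, needed, active, spares); mutated in place.

    Returns (from_array, to_array) if a spare was moved, else None."""
    for name, a in arrays.items():
        if a["active"] < a["needed"] and a["spares"] == 0:
            for other, b in arrays.items():
                if (other != name and b["group"] == a["group"]
                        and b["active"] >= b["needed"] and b["spares"] > 0):
                    b["spares"] -= 1   # remove spare from healthy array
                    a["spares"] += 1   # add it to the degraded array
                    return (other, name)
    return None

arrays = {
    "md0": {"group": "g1", "needed": 2, "active": 1, "spares": 0},  # degraded
    "md1": {"group": "g1", "needed": 2, "active": 2, "spares": 1},  # has spare
}
print(move_spare(arrays))  # ('md1', 'md0')
```

Arrays in different spare-groups are never touched, which mirrors the man page's note that spare-group names only need to differ between groups.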
