missing raid array after reboot
Posted by sriniv666 in Operating Systems / Linux / Red Hat on Wednesday, 8 June 2011, 04:49 AM (Post 302528830)

Dear all,

I have configured RAID 0 on a Red Hat machine (VMware guest) by following these steps:

Code:
#mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdc1 
#mkfs.ext3 /dev/md0 
#mdadm --detail --scan --config=mdadm.conf >/etc/mdadm.conf

Then I mounted the /dev/md0 device and also added an entry in /etc/fstab.

Now the issue: when the system is rebooted, the RAID array configuration is missing and there is no /dev/md0 device.
What is the solution for this? Any help is appreciated!
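For context, this is the general shape of an /etc/mdadm.conf that lets the array be assembled at boot. This is a hedged sketch, not the poster's actual file: the device names are taken from the commands above, and the UUID is a placeholder that would normally come from `mdadm --detail --scan`.

```text
# /etc/mdadm.conf -- minimal sketch (hypothetical; adjust devices to your system)
# The ARRAY line is normally generated with:  mdadm --detail --scan >> /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=<uuid-reported-by-mdadm>
```

Note that without a DEVICE line (or with an ARRAY entry written to the wrong place), `mdadm --assemble --scan` at boot has nothing to work from.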

MDADM(8)						      System Manager's Manual							  MDADM(8)

NAME
       mdadm - manage MD devices, aka Linux Software RAID

SYNOPSIS
       mdadm [mode] <raiddevice> [options] <component-devices>

DESCRIPTION
       RAID devices are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions thereof) to be combined into a single device to hold (for example) a single filesystem. Some RAID levels include redundancy and so can survive some degree of device failure.

       Linux Software RAID devices are implemented through the md (Multiple Devices) device driver. Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4 and RAID5. Recent kernels (2002) also support a mode known as MULTIPATH. mdadm only provides limited support for MULTIPATH as yet.

       mdadm is a program that can be used to create, manage, and monitor MD devices. As such it provides a similar set of functionality to the raidtools packages. The key differences between mdadm and raidtools are:

       o   mdadm is a single program and not a collection of programs.
       o   mdadm can perform (almost) all of its functions without having a configuration file. mdadm also helps with management of the configuration file.
       o   mdadm can provide information about your arrays (through Query, Detail, and Examine) that raidtools cannot.

MODES
       mdadm has 6 major modes of operation:

       Assemble
              Assemble the parts of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array.

       Build
              Build a legacy array without per-device superblocks.

       Create
              Create a new array with per-device superblocks.

       Manage
              This is for doing things to specific components of an array such as adding new spares and removing faulty devices.

       Misc
              This mode allows operations on independent devices such as examining MD superblocks, erasing old superblocks and stopping active arrays.

       Follow or Monitor
              Monitor one or more md devices and act on any state changes.

OPTIONS
       Available options are:

       -A, --assemble
              Assemble a pre-existing array.

       -B, --build
              Build a legacy array without superblocks.

       -C, --create
              Create a new array.

       -Q, --query
              Examine a device to see (1) if it is an md device and (2) if it is a component of an md array. Information about what is discovered is presented.

       -D, --detail
              Print detail of one or more md devices.

       -E, --examine
              Print content of the md superblock on device(s).

       -F, --follow, --monitor
              Select Monitor mode.

       -h, --help
              Display a help message or, after one of the above options, a mode-specific help message.

       -V, --version
              Print version information for mdadm.

       -v, --verbose
              Be more verbose about what is happening.

       -b, --brief
              Be less verbose. This is used with --detail and --examine.

       -f, --force
              Be more forceful about certain operations. See the various modes for the exact meaning of this option in different contexts.

       -c, --config=
              Specify the config file. Default is /etc/mdadm.conf.

       -s, --scan
              Scan the config file or /proc/mdstat for missing information. In general, this option gives mdadm permission to get any missing information, like component devices, array devices, array identities, and alert destination, from the configuration file /etc/mdadm.conf. One exception is MISC mode when using --detail or --stop, in which case --scan says to get a list of array devices from /proc/mdstat.

       For create or build:

       -c, --chunk=
              Specify chunk size in kibibytes. The default is 64.

       --rounding=
              Specify the rounding factor for a linear array (== chunk size).

       -l, --level=
              Set RAID level. Options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, multipath, mp. Obviously some of these are synonymous. Only the first four are valid when Building.

       -p, --parity=
              Set the RAID5 parity algorithm. Options are: left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs. The default is left-symmetric.

       --layout=
              Same as --parity.

       -n, --raid-devices=
              Number of active devices in the array.
       -x, --spare-devices=
              Number of spare (eXtra) devices in the initial array. Spares can be added and removed later.

       -z, --size=
              Amount (in kibibytes) of space to use from each drive in RAID1/4/5. This must be a multiple of the chunk size, and must leave about 128Kb of space at the end of the drive for the RAID superblock. If this is not specified (as it normally is not) the smallest drive (or partition) sets the size, though if there is a variance among the drives of greater than 1%, a warning is issued.

       For assemble:

       -u, --uuid=
              uuid of the array to assemble. Devices which don't have this uuid are excluded.

       -m, --super-minor=
              Minor number of the device that the array was created for. Devices which don't have this minor number are excluded. If you create an array as /dev/md1, then all superblocks will contain the minor number 1, even if the array is later assembled as /dev/md2.

       -f, --force
              Assemble the array even if some superblocks appear out-of-date.

       -R, --run
              Attempt to start the array even if fewer drives were given than are needed for a full array. Normally if not all drives are found and --scan is not used, then the array will be assembled but not started. With --run an attempt will be made to start it anyway.

       For Manage mode:

       -a, --add
              Hot-add the listed devices.

       -r, --remove
              Remove the listed devices. They must not be active, i.e. they should be failed or spare devices.

       -f, --fail
              Mark the listed devices as faulty.

       --set-faulty
              Same as --fail.

       For Misc mode:

       -R, --run
              Start a partially built array.

       -S, --stop
              Deactivate the array, releasing all resources.

       -o, --readonly
              Mark the array as readonly.

       -w, --readwrite
              Mark the array as readwrite.

       --zero-superblock
              If the device contains a valid md superblock, the block is over-written with zeros. With --force the block where the superblock would be is over-written even if it doesn't appear to be valid.

       For Monitor mode:

       -m, --mail
              Give a mail address to send alerts to.

       -p, --program, --alert
              Give a program to be run whenever an event is detected.
       -d, --delay
              Give a delay in seconds. mdadm polls the md arrays and then waits this many seconds before polling again. The default is 60 seconds.

ASSEMBLE MODE
       Usage: mdadm --assemble device options...
       Usage: mdadm --assemble --scan options...

       This usage assembles one or more RAID arrays from pre-existing components. For each array, mdadm needs to know the md device, the identity of the array, and a number of component-devices. These can be found in a number of ways.

       The md device is either given before --scan or is found from the config file. In the latter case, multiple md devices can be started with a single mdadm command.

       The identity can be given with the --uuid option, with the --super-minor option, can be found in the config file, or will be taken from the superblock on the first component-device listed on the command line.

       Devices can be given on the --assemble command line or from the config file. Only devices which have an md superblock which contains the right identity will be considered for any device.

       The config file is only used if explicitly named with --config or requested with --scan. In the latter case, /etc/mdadm.conf is used.

       If --scan is not given, then the config file will only be used to find the identity of md arrays.

       Normally the array will be started after it is assembled. However if --scan is not given and insufficient drives were listed to start a complete (non-degraded) array, then the array is not started (to guard against usage errors). To insist that the array be started in this case (as may work for RAID1 or RAID5), give the --run flag.

BUILD MODE
       Usage: mdadm --build device --chunk=X --level=Y --raid-devices=Z devices

       This usage is similar to --create. The difference is that it creates a legacy array without a superblock. With these arrays there is no difference between initially creating the array and subsequently assembling the array, except that hopefully there is useful data there in the second case.

       The level may only be 0, raid0, or linear. All devices must be listed and the array will be started once complete.

CREATE MODE
       Usage: mdadm --create device --chunk=X --level=Y --raid-devices=Z devices

       This usage will initialise a new md array, associate some devices with it, and activate the array.

       As devices are added, they are checked to see if they contain RAID superblocks or filesystems. They are also checked to see if the variance in device size exceeds 1%. If any discrepancy is found, the array will not automatically be run, though the presence of --run can override this caution.

       To create a "degraded" array in which some devices are missing, simply give the word missing in place of a device name. This will cause mdadm to leave the corresponding slot in the array empty. For a RAID4 or RAID5 array at most one slot can be missing. For a RAID1 array, only one real device needs to be given; all of the others can be missing.

       The General Management options that are valid with --create are:

       --run
              Insist on running the array even if some devices look like they might be in use.

       --readonly
              Start the array readonly - not supported yet.

MANAGE MODE
       Usage: mdadm device options... devices...

       This usage will allow individual devices in an array to be failed, removed or added. It is possible to perform multiple operations with one command. For example:

              mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1

       will first mark /dev/hda1 as faulty in /dev/md0, then remove it from the array, and finally add it back in as a spare. However only one md array can be affected by a single command.

MISC MODE
       Usage: mdadm options ... devices ...

       MISC mode includes a number of distinct operations that operate on distinct devices. The operations are:

       --query
              The device is examined to see if it is (1) an active md array, or (2) a component of an md array. The information discovered is reported.

       --detail
              The device should be an active md device. mdadm will display a detailed description of the array. --brief or --scan will cause the output to be less detailed and the format to be suitable for inclusion in /etc/mdadm.conf.

       --examine
              The device should be a component of an md array. mdadm will read the md superblock of the device and display the contents. If --brief or --scan is given, then multiple devices that are components of the one array are grouped together and reported in a single entry suitable for inclusion in /etc/mdadm.conf. Giving --scan without listing any devices will cause all devices listed in the config file to be examined.

       --stop
              The devices should be active md arrays; they will be deactivated if they are not currently in use.

       --run
              This will fully activate a partially assembled md array.

       --readonly
              This will mark an active array as read-only, providing it is not currently being used.

       --readwrite
              This will change a readonly array back to being read/write.

       --scan
              For all operations except --examine, --scan will cause the operation to be applied to all arrays listed in /proc/mdstat. For --examine, --scan causes all devices listed in the config file to be examined.

MONITOR MODE
       Usage: mdadm --monitor options... devices...

       This usage causes mdadm to periodically poll a number of md arrays and to report on any events noticed. mdadm will never exit once it decides that there are arrays to be checked, so it should normally be run in the background.

       As well as reporting events, mdadm may move a spare drive from one array to another if they are in the same spare-group and if the destination array has a failed drive but no spares.

       If any devices are listed on the command line, mdadm will only monitor those devices. Otherwise all arrays listed in the configuration file will be monitored. Further, if --scan is given, then any other md devices that appear in /proc/mdstat will also be monitored.

       The result of monitoring the arrays is the generation of events. These events are passed to a separate program (if specified) and may be mailed to a given E-mail address.

       When passing events to a program, the program is run once for each event and is given 2 or 3 command-line arguments. The first is the name of the event (see below). The second is the name of the md device which is affected, and the third is the name of a related device if relevant, such as a component device that has failed.

       If --scan is given, then a program or an E-mail address must be specified on the command line or in the config file. If neither is available, then mdadm will not monitor anything. Without --scan, mdadm will continue monitoring as long as something was found to monitor. If no program or email is given, then each event is reported to stdout.

       The different events are:

       DeviceDisappeared
              An md array which previously was configured appears to no longer be configured.

       RebuildStarted
              An md array started reconstruction.

       RebuildNN
              Where NN is 20, 40, 60, or 80, this indicates that the rebuild has passed that percentage of the total.

       Fail
              An active component device of an array has been marked as faulty.
       FailSpare
              A spare component device which was being rebuilt to replace a faulty device has failed.

       SpareActive
              A spare component device which was being rebuilt to replace a faulty device has been successfully rebuilt and has been made active.

       NewArray
              A new md array has been detected in the /proc/mdstat file.

       MoveSpare
              A spare drive has been moved from one array in a spare-group to another to allow a failed drive to be replaced.

       Only Fail and FailSpare cause Email to be sent. All events cause the program to be run. The program is run with two or three arguments: the event name, the array device and possibly a second device. Each event has an associated array device (e.g. /dev/md1) and possibly a second device. For Fail, FailSpare, and SpareActive the second device is the relevant component device. For MoveSpare the second device is the array that the spare was moved from.

       For mdadm to move spares from one array to another, the different arrays need to be labelled with the same spare-group in the configuration file. The spare-group name can be any string. It is only necessary that different spare groups use different names.

       When mdadm detects that an array which is in a spare group has fewer active devices than necessary for the complete array, and has no spare devices, it will look for another array in the same spare group that has a full complement of working drives and a spare. It will then attempt to remove the spare from the second array and add it to the first. If the removal succeeds but the adding fails, then it is added back to the original array.

EXAMPLES
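As an illustration of the Monitor-mode `--program` interface described above, here is a hedged sketch of an alert handler. It is written as a shell function for self-containment; in practice it would be a standalone executable whose path is passed via `--program`. The function name `md_alert` and the log format are assumptions, not anything mdadm itself defines; only the argument order (event name, md device, optional related device) comes from the text above.

```shell
#!/bin/sh
# Hypothetical alert handler sketch for: mdadm --monitor --program=<this script>
# mdadm invokes the program with 2 or 3 arguments:
#   $1 = event name (e.g. Fail, FailSpare, SpareActive)
#   $2 = affected md device (e.g. /dev/md0)
#   $3 = related component device, when relevant
md_alert() {
    event=$1
    array=$2
    component=${3:-none}    # third argument is optional
    printf 'md event: %s on %s (component: %s)\n' "$event" "$array" "$component"
}

# Example invocation, mimicking what mdadm would do for a failed component:
md_alert Fail /dev/md0 /dev/sdb1
# prints: md event: Fail on /dev/md0 (component: /dev/sdb1)
```

A real handler would typically append to a log file or page an operator instead of printing to stdout.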
       To find out if a device is a RAID array or part of one:
              mdadm -Q /dev/name-of-device

       To assemble and start all arrays listed in the standard config file:
              mdadm -As

       To shut down all arrays (that are not still in use):
              mdadm --stop --scan

       To monitor all arrays if (and only if) an email address or program was given in the config file, but poll every 2 minutes:
              mdadm -Fs --delay=120

       To create /dev/md0 as a RAID1 array with /dev/hda1 and /dev/hdc1:
              mdadm -C /dev/md0 -l1 -n2 /dev/hd[ac]1

       To create a prototype config file that describes currently active arrays that are known to be made from partitions of IDE or SCSI drives:
              echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
              mdadm --detail --scan >> mdadm.conf
       This file should be reviewed before being used as it may contain unwanted detail.

       To find out what RAID arrays could be assembled from existing IDE and SCSI whole drives (not partitions):
              echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf
              mdadm -Es -c mdadm.conf >> mdadm.conf
       This file is very likely to contain unwanted detail, particularly the devices= entries.

       To get help about Create mode:
              mdadm --create --help

       To get help about the format of the config file:
              mdadm --config --help

       To get general help:
              mdadm --help

FILES
       /proc/mdstat
              If you're using the /proc filesystem, /proc/mdstat lists all active md devices with information about them. mdadm uses this to find arrays when --scan is given in Misc mode, and to monitor array reconstruction in Monitor mode.

       /etc/mdadm.conf
              The config file lists which devices may be scanned to see if they contain an MD superblock, and gives identifying information (e.g. UUID) about known MD arrays. See mdadm.conf(5) for more details.

NOTE
       mdadm was previously known as mdctl.

SEE ALSO
       For information on the various levels of RAID, check out:
              http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/

       For new releases of the RAID driver, check out:
              ftp://ftp.kernel.org/pub/linux/kernel/people/mingo/raid-patches
              http://www.cse.unsw.edu.au/~neilb/patches/linux-stable/

       mdadm.conf(5), md(4), raidtab(5), raid0run(8), raidstop(8), mkraid(8)

MDADM(8)
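The FILES section above mentions /proc/mdstat as mdadm's source of truth for active arrays. A small sketch of pulling array name, state, and level out of it with awk; a sample file is embedded here so the snippet runs anywhere, but on a live system you would read /proc/mdstat itself (the sample's block counts and chunk size are made up for illustration):

```shell
# Embed a sample /proc/mdstat so the snippet is self-contained.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid0] [raid1]
md0 : active raid0 sdc1[1] sdb1[0]
      1048448 blocks 64k chunks

unused devices: <none>
EOF

# Array lines start with "md"; field 1 is the name, 3 the state, 4 the level.
awk '/^md/ { print $1, $3, $4 }' /tmp/mdstat.sample
# prints: md0 active raid0
```

On a machine where an array silently failed to assemble at boot (as in the original question), this line would simply produce no output, which is itself a useful check.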
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.