Help needed! Raid 5 failure on a Debian System
Posted by jonlisty, Sunday 21 April 2013, 11:29 PM
OK, after some more reading, I tried this:

Quote:
mdadm --create /dev/md8 --verbose --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc missing
and got this:

Quote:
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sda: Device or resource busy
mdadm: failed container membership check
mdadm: device /dev/sda not suitable for any style of array

aaaghhh!!!
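For anyone following along: the "Device or resource busy" error almost certainly means the kernel is still holding /dev/sda, /dev/sdb and /dev/sdc as members of the half-assembled md8 array shown in the /proc/mdstat output below, so mdadm --create cannot open them exclusively. It is also worth remembering that --create writes brand-new superblocks (new UUID, default chunk size and layout), so it is normally the last resort on an array you are trying to recover. A non-destructive first step is simply to read the existing metadata off each member; a rough sketch, assuming the device names really are the ones above:

Code:
# Read the existing RAID superblocks without changing anything.
# The Events counts and Device Roles should agree across all three members.
sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc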

---------- Post updated at 10:17 PM ---------- Previous update was at 10:13 PM ----------

also...

Quote:
$ sudo cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md8 : inactive sda[0] sdc[2] sdb[1]
8790796680 blocks super 1.2
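That inactive md8, still claiming sda, sdb and sdc, is exactly what made the --create attempt fail with "busy". Before trying anything else, the stale assembly has to be stopped so the member disks are released; something along these lines should do it, assuming nothing from md8 is mounted:

Code:
# Stop the inactive array so its member disks are no longer held open.
sudo mdadm --stop /dev/md8
# md8 should now have disappeared from the output.
cat /proc/mdstat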
---------- Post updated at 10:29 PM ---------- Previous update was at 10:17 PM ----------

also:

Quote:
$ sudo mdadm --detail /dev/md8
/dev/md8:
Version : 1.2
Creation Time : Mon Jan 7 11:03:39 2013
Raid Level : raid5
Used Dev Size : -1
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Sat Apr 6 13:17:10 2013
State : active, degraded, Not Started
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : TTVServer:TTV2 (local to host TTVServer)
UUID : dc344271:82f55bd0:fcfd0e16:a2a60bc8
Events : 103

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 0 0 3 removed
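For the record, this --detail output is actually encouraging: it shows a 4-disk RAID 5 with three members in sync (events consistent at 103) and only slot 3 removed, i.e. a degraded array that should still be able to run. In that situation the usual recovery path is not --create but a forced assemble of the surviving members, followed by a read-only mount to verify the data before adding a replacement disk. A rough sketch, assuming the array device is /dev/md8 and /mnt is a spare mountpoint (both assumptions, adjust as needed):

Code:
# Stop any stale, inactive instance of the array first.
sudo mdadm --stop /dev/md8

# Re-assemble from the three surviving members; --run starts it even
# though it is degraded, --force tolerates small event-count mismatches.
sudo mdadm --assemble --force --run /dev/md8 /dev/sda /dev/sdb /dev/sdc

# Check that the array came up active (degraded).
cat /proc/mdstat

# Mount read-only and verify the data before doing anything else
# (this assumes the filesystem sits directly on /dev/md8).
sudo mount -o ro /dev/md8 /mnt

Only once the data is confirmed (and ideally backed up) would one add a replacement disk, say a hypothetical /dev/sdd, with mdadm --add /dev/md8 /dev/sdd and let the rebuild run.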
 
