Help needed! Raid 5 failure on a Debian System
Post 302797099 by jonlisty — Sunday, 21 April 2013, 11:29 PM
OK, after some more reading, I tried this:

Quote:
mdadm --create /dev/md8 --verbose --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc missing
and got this:

Quote:
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sda: Device or resource busy
mdadm: failed container membership check
mdadm: device /dev/sda not suitable for any style of array

aaaghhh!!!
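(Thinking out loud: "Device or resource busy" presumably means something else already has the disks open. My guess is the old array is still half-assembled and holding sda/sdb/sdc, which is why --create can't grab them. A quick way to check, assuming nothing is mounted on those disks:)

Quote:
# show the md superblock already written on one of the member disks
sudo mdadm --examine /dev/sda
# see which md devices currently claim which disks
cat /proc/mdstat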

---------- Post updated at 10:17 PM ---------- Previous update was at 10:13 PM ----------

also...

Quote:
$ sudo cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md8 : inactive sda[0] sdc[2] sdb[1]
8790796680 blocks super 1.2
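So md8 is sitting there inactive but still holding all three disks, which would explain the "busy" errors above. From the mdadm man page it sounds like the safer route is to stop the inactive array and re-assemble from the existing superblocks, rather than re-creating (--create writes new metadata, which I'd rather not risk). Something like this, assuming the superblocks on sda/sdb/sdc are still intact:

Quote:
# release the disks held by the inactive array
sudo mdadm --stop /dev/md8
# re-assemble from the existing superblocks, forcing it despite the missing 4th disk
sudo mdadm --assemble --force /dev/md8 /dev/sda /dev/sdb /dev/sdc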
---------- Post updated at 10:29 PM ---------- Previous update was at 10:17 PM ----------

also:

Quote:
$ sudo mdadm --detail /dev/md8
/dev/md8:
Version : 1.2
Creation Time : Mon Jan 7 11:03:39 2013
Raid Level : raid5
Used Dev Size : -1
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Sat Apr 6 13:17:10 2013
State : active, degraded, Not Started
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : TTVServer:TTV2 (local to host TTVServer)
UUID : dc344271:82f55bd0:fcfd0e16:a2a60bc8
Events : 103

    Number   Major   Minor   RaidDevice   State
       0       8        0        0        active sync   /dev/sda
       1       8       16        1        active sync   /dev/sdb
       2       8       32        2        active sync   /dev/sdc
       3       0        0        3        removed
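The detail output says three of the four devices are still in sync and the array is just "Not Started", so maybe it only needs to be started degraded. If the re-assemble works, I think the next step would be something like this (checking the filesystem read-only before mounting anything; the fsck call assumes an ext filesystem on md8):

Quote:
# try to start the array degraded with the three in-sync members
sudo mdadm --run /dev/md8
# dry-run filesystem check before mounting (-n = report only, change nothing)
sudo fsck -n /dev/md8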
 
