mdadm container! How does it work


 
# 1  
Old 10-29-2011

Hi everyone,

I am not sure I understand how mdadm --create /dev/md0 --level=container works.
A device called /dev/md0 appears in /proc/mdstat, but I am not sure how to use that device.

I have two blank drives, each with a single 500GB partition. I would like to set up mirroring, but not in the way where you have to create a partition of the exact size you want the mirror to be. I am not sure whether a container does that. I have looked all over the net and was unable to find any info.

If someone can give me more info on it, it would be greatly appreciated!
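
For what it's worth, the usual workflow (per the mdmon(8) page quoted below) is two create steps: one for the container that holds the disks and the external metadata, and one for each member array inside it. A rough sketch, assuming IMSM metadata and two whole disks /dev/sda and /dev/sdb (all device names and the size are examples only):

    # Step 1: create the container across both disks; it holds the disks
    # and the metadata but is not itself a usable block device
    mdadm --create /dev/md/imsm0 --level=container --metadata=imsm \
          --raid-devices=2 /dev/sda /dev/sdb

    # Step 2: create a RAID1 member array inside the container; the format
    # carves out the space internally, so no pre-sized partition is needed
    # (--size is optional; recent mdadm accepts a G suffix)
    mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 --size=200G /dev/md/imsm0

Because the container carves the disks internally, a second volume at a different level or size could be created from the remaining space on the same two disks.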
MDMON(8)                    System Manager's Manual                   MDMON(8)

NAME
       mdmon - monitor MD external metadata arrays

SYNOPSIS
       mdmon CONTAINER [NEWROOT]

OVERVIEW
       The 2.6.27 kernel brings the ability to support external metadata
       arrays. External metadata implies that user space handles all updates
       to the metadata. The kernel's responsibility is to notify user space
       when a "metadata event" occurs, like disk failures and clean-to-dirty
       transitions. The kernel, in important cases, waits for user space to
       take action on these notifications.

DESCRIPTION
       Metadata updates:

       To service metadata update requests a daemon, mdmon, is introduced.
       Mdmon is tasked with polling the sysfs namespace looking for changes
       in array_state, sync_action, and per disk state attributes. When a
       change is detected it calls a per metadata type handler to make
       modifications to the metadata. The following actions are taken:

       array_state - inactive
              Clear the dirty bit for the volume and let the array be
              stopped.

       array_state - write pending
              Set the dirty bit for the array and then set array_state to
              active. Writes are blocked until userspace writes active.

       array_state - active-idle
              The safe mode timer has expired, so set the array state to
              clean to block writes to the array.

       array_state - clean
              Clear the dirty bit for the volume.

       array_state - read-only
              This is the initial state that all arrays start at. mdmon
              takes one of three actions:

              1/ Transition the array to read-auto, keeping the dirty bit
              clear, if the metadata handler determines that the array does
              not need resyncing or other modification.

              2/ Transition the array to active if the metadata handler
              determines a resync or some other manipulation is necessary.

              3/ Leave the array read-only if the volume is marked to not
              be monitored; for example, the metadata version has been set
              to "external:-dev/md127" instead of "external:/dev/md127".

       sync_action - resync-to-idle
              Notify the metadata handler that a resync may have completed.
              If a resync process is idled before it completes, this event
              allows the metadata handler to checkpoint resync.

       sync_action - recover-to-idle
              A spare may have completed rebuilding, so tell the metadata
              handler about the state of each disk. This is the metadata
              handler's opportunity to clear any "out-of-sync" bits and
              clear the volume's degraded status. If a recovery process is
              idled before it completes, this event allows the metadata
              handler to checkpoint recovery.

       <disk>/state - faulty
              A disk failure kicks off a series of events. First, notify
              the metadata handler that a disk has failed, and then notify
              the kernel that it can unblock writes that were dependent on
              this disk. After unblocking the kernel, this disk is set to
              be removed+ from the member array. Finally the disk is marked
              failed in all other member arrays in the container.

              + Note: this behavior differs slightly from native MD arrays,
              where removal is reserved for an mdadm --remove event. In the
              external metadata case the container holds the final
              reference on a block device, and an
              "mdadm --remove <container> <victim>" call is still required.
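
       Since mdmon simply polls sysfs, the attributes listed above can also
       be inspected by hand. A minimal sketch, assuming a member array at
       md127 with a component disk sda (device names are examples):

              cat /sys/block/md127/md/array_state    # e.g. clean, active, write-pending
              cat /sys/block/md127/md/sync_action    # e.g. idle, resync, recover
              cat /sys/block/md127/md/dev-sda/state  # per-disk state, e.g. in_sync, faulty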
       Containers:

       External metadata formats, like DDF, differ from the native MD
       metadata formats in that they define a set of disks and a series of
       sub-arrays within those disks. MD metadata in comparison defines a
       1:1 relationship between a set of block devices and a raid array.
       For example, to create 2 arrays at different raid levels on a single
       set of disks, MD metadata requires the disks be partitioned and then
       each array can be created with a subset of those partitions. The
       supported external formats perform this disk carving internally.

       Container devices simply hold references to all member disks and
       allow tools like mdmon to determine which active arrays belong to
       which container. Some array management commands, like disk removal
       and disk add, are now only valid at the container level. Attempts to
       perform these actions on member arrays are blocked with error
       messages like:

       "mdadm: Cannot remove disks from a 'member' array, perform this
       operation on the parent container"

       Containers are identified in /proc/mdstat with a metadata version
       string "external:<metadata name>". Member devices are identified by
       "external:/<container device>/<member index>", or
       "external:-<container device>/<member index>" if the array is to
       remain readonly.
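
       As a sketch of the container-level rule above (array, container and
       disk names are examples, not part of this manual):

              # Blocked with the error quoted above:
              mdadm /dev/md/vol0 --remove /dev/sdb

              # Removal goes through the parent container instead:
              mdadm /dev/md/imsm0 --remove /dev/sdb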
OPTIONS
       CONTAINER
              The container device to monitor. It can be a full path like
              /dev/md/container, a simple md device name like md127, or
              /proc/mdstat, which tells mdmon to scan for containers and
              launch an mdmon instance for each one found.

       [NEWROOT]
              In order to support an external metadata raid array as the
              rootfs, mdmon needs to be started in the initramfs
              environment. Once the initramfs environment mounts the final
              rootfs, mdmon needs to be restarted in the new namespace.
              When NEWROOT is specified, mdmon will terminate any mdmon
              instances that are running in the current namespace,
              chroot(2) to NEWROOT, and continue monitoring the container.

       Note that mdmon is automatically started by mdadm when needed and so
       does not need to be considered when working with RAID arrays. The
       only time it is run other than by mdadm is when the boot scripts
       need to restart it after mounting the new root filesystem.
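
       Example invocations of the forms documented above (the container
       name is an example; /sysroot stands in for wherever the initramfs
       mounts the final root):

              mdmon /dev/md/imsm0            # monitor a single container
              mdmon /proc/mdstat             # scan, one instance per container found
              mdmon /dev/md/imsm0 /sysroot   # restart in the new namespace after root switch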
SEE ALSO
       mdadm(8), md(4).

v3.0.3                                                                MDMON(8)