How to find which raid is configured (without restart) - Post 302449347 by drl, Monday 30 August 2010, 09:51 AM
Hi.

Much data is conveniently placed in /proc for you:
Code:
% cat /proc/mdstat 
Personalities : [raid1] 
md4 : active raid1 sda8[0] sdb8[1]
      55649024 blocks [2/2] [UU]
...
      
md2 : active raid1 sda6[0] sdb6[1]
      7936 blocks [2/2] [UU]
...
      
unused devices: <none>
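
If the mdadm tool is installed, it can report more detail on any of the md devices listed above. A minimal sketch, typically run as root, assuming the md4 array from the output above (device names will differ on other systems):
Code:
# Detailed status of one array (level, member disks, sync state):
/sbin/mdadm --detail /dev/md4

# Summarise all arrays that mdadm can find on the attached disks:
/sbin/mdadm --examine --scan

Both commands only inspect state; neither changes the arrays.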

This is on a system:
Code:
OS, ker|rel, machine: Linux, 2.6.26-2-amd64, x86_64
Distribution        : Debian GNU/Linux 5.0 
/sbin/mdadm mdadm - v2.6.7.2 - 14th November 2008
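
For completeness, a summary like the one above can be collected with standard commands; a rough sketch, not necessarily the exact commands used to produce this listing:
Code:
# Kernel name, release and machine type:
uname -s -r -m

# Distribution (lsb_release if available, otherwise Debian's release file):
lsb_release -d 2>/dev/null || cat /etc/debian_version

# mdadm version:
/sbin/mdadm --version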

Best wishes ... cheers, drl
 

9 More Discussions You Might Find Interesting

1. Slackware

LDAP not getting configured!!!

Hi, I am trying to learn LDAP, but it is not getting configured. The error message it shows is: LDAP configure error: BDB/HDB: Berkeley DB version incompatible. The BDB version I have installed is bdb4.2.52 and the LDAP version is openldap-2.3.12. My machine is running Red Hat Linux 9. Why... (1 Reply)
Discussion started by: mridula
1 Replies

2. UNIX for Dummies Questions & Answers

RAID software vs hardware RAID

Hi, can someone tell me what the differences between software and hardware RAID are? Thanks for any help. (2 Replies)
Discussion started by: presul
2 Replies

3. Red Hat

How to Find what HBA is configured on Linux?

Hi, I am working in an environment with many Red Hat physical and virtual machines, mostly Red Hat 4. These servers have LUNs attached. The external storage can be EMC, NetApp or Par3. My question is: when the storage administrator informs me that a new LUN has been presented to a... (4 Replies)
Discussion started by: Tirmazi
4 Replies

4. AIX

SCSI PCI - X RAID Controller card RAID 5 AIX Disks disappeared

Hello, I have a SCSI PCI-X RAID controller card on which I had created a disk array of 3 disks. When I typed lspv, I used to see 3 physical disks (two local disks and one RAID 5 disk). Suddenly the RAID 5 disk array disappeared, so the hardware engineer thought the problem was with the SCSI... (0 Replies)
Discussion started by: filosophizer
0 Replies

5. Solaris

Software RAID on top of Hardware RAID

Server model: T5120 with 4 x 146 GB disks. OS: Solaris 10, installed on c1t0d0. I plan to use software RAID (Veritas Volume Manager) on the c1t2d0 disk. After formatting and labelling the disk, it is still not detected by vxdiskadm. Question: should I remove the hardware RAID on c1t2d0 first? My... (4 Replies)
Discussion started by: KhawHL
4 Replies

6. HP-UX

Script to find what netprinters are configured with what model

Following this thread : https://www.unix.com/hp-ux/189023-solved-way-tell-printer-used-configured-print-queue.html This is rwuerth's nice contribution! I had a more complicated script written a long time ago to find out this information, but after realizing due to VBE's post (thank you VBE)... (0 Replies)
Discussion started by: rwuerth
0 Replies

7. Red Hat

RAID Configuration for IBM Serveraid-7k SCSI RAID Controller

Hello, I want to delete a RAID configuration on an old server. Since I have not had the chance to work with this specific RAID controller before, can you please help me perform the configuration? I downloaded the IBM ServeRAID Support CD, but I wasn't able to configure the video card, so I... (0 Replies)
Discussion started by: @dagio
0 Replies

8. IP Networking

IP not configured is being used to login

Hi, I have a Solaris server with the IP 192.168.0.85, but anybody can log in using 172.19.0.85, and the ifconfig command does not show the 172.19.0.85 address. # ifconfig -a lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask... (6 Replies)
Discussion started by: fretagi
6 Replies

9. Linux

Find a process ID,kill it and restart agent

#!/bin/bash #This shell finds the pid of the hawkagent and kills and restarts to put the rulebase into effect output=`ps aux|grep hawkagent` #The set -- below helps to parse the above ps output into words and $2 gives the 2nd word which is pid set -- $output pid=$2 #Checks if pid of hawkagent... (12 Replies)
Discussion started by: samrat dutta
12 Replies
MDMON(8)						      System Manager's Manual							  MDMON(8)

NAME
       mdmon - monitor MD external metadata arrays

SYNOPSIS
       mdmon CONTAINER [NEWROOT]

OVERVIEW
       The 2.6.27 kernel brings the ability to support external metadata arrays.  External metadata implies that user space handles all updates to the metadata.  The kernel's responsibility is to notify user space when a "metadata event" occurs, like disk failures and clean-to-dirty transitions.  The kernel, in important cases, waits for user space to take action on these notifications.
DESCRIPTION
       Metadata updates:

       To service metadata update requests a daemon, mdmon, is introduced.  Mdmon is tasked with polling the sysfs namespace looking for changes in array_state, sync_action, and per disk state attributes.  When a change is detected it calls a per metadata type handler to make modifications to the metadata.  The following actions are taken:

       array_state - inactive
              Clear the dirty bit for the volume and let the array be stopped.

       array_state - write pending
              Set the dirty bit for the array and then set array_state to active.  Writes are blocked until userspace writes active.

       array_state - active-idle
              The safe mode timer has expired so set array state to clean to block writes to the array.

       array_state - clean
              Clear the dirty bit for the volume.

       array_state - read-only
              This is the initial state that all arrays start at.  mdmon takes one of the three actions:

              1/ Transition the array to read-auto keeping the dirty bit clear if the metadata handler determines that the array does not need resyncing or other modification.

              2/ Transition the array to active if the metadata handler determines a resync or some other manipulation is necessary.

              3/ Leave the array read-only if the volume is marked to not be monitored; for example, the metadata version has been set to "external:-dev/md127" instead of "external:/dev/md127".

       sync_action - resync-to-idle
              Notify the metadata handler that a resync may have completed.  If a resync process is idled before it completes this event allows the metadata handler to checkpoint resync.

       sync_action - recover-to-idle
              A spare may have completed rebuilding so tell the metadata handler about the state of each disk.  This is the metadata handler's opportunity to clear any "out-of-sync" bits and clear the volume's degraded status.  If a recovery process is idled before it completes this event allows the metadata handler to checkpoint recovery.

       <disk>/state - faulty
              A disk failure kicks off a series of events.  First, notify the metadata handler that a disk has failed, and then notify the kernel that it can unblock writes that were dependent on this disk.  After unblocking the kernel this disk is set to be removed+ from the member array.  Finally the disk is marked failed in all other member arrays in the container.

       + Note: This behavior differs slightly from native MD arrays where removal is reserved for a mdadm --remove event.  In the external metadata case the container holds the final reference on a block device and a mdadm --remove <container> <victim> call is still required.
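
The sysfs attributes that mdmon polls can also be inspected by hand when checking what state an array is in. A minimal sketch, assuming a member array named md127 (the device name is only an example, not something fixed by mdmon):
Code:
# Current array state (clean, active, write-pending, read-auto, ...):
cat /sys/block/md127/md/array_state

# Current sync action (idle, resync, recover, ...):
cat /sys/block/md127/md/sync_action

# Metadata version string; "external:..." marks an externally managed array:
cat /sys/block/md127/md/metadata_version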
       Containers:

       External metadata formats, like DDF, differ from the native MD metadata formats in that they define a set of disks and a series of sub-arrays within those disks.  MD metadata in comparison defines a 1:1 relationship between a set of block devices and a raid array.  For example, to create 2 arrays at different raid levels on a single set of disks, MD metadata requires the disks be partitioned and then each array can be created with a subset of those partitions.  The supported external formats perform this disk carving internally.

       Container devices simply hold references to all member disks and allow tools like mdmon to determine which active arrays belong to which container.  Some array management commands like disk removal and disk add are now only valid at the container level.  Attempts to perform these actions on member arrays are blocked with error messages like:

       "mdadm: Cannot remove disks from a 'member' array, perform this operation on the parent container"

       Containers are identified in /proc/mdstat with a metadata version string "external:<metadata name>".  Member devices are identified by "external:/<container device>/<member index>", or "external:-<container device>/<member index>" if the array is to remain readonly.
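
Accordingly, disk-level operations on external metadata arrays are issued against the container rather than a member array. A hypothetical sketch, with container and disk names made up purely for illustration:
Code:
# Rejected, because md126 is a member array belonging to a container:
mdadm --remove /dev/md126 /dev/sdb

# The same removal performed on the parent container is accepted:
mdadm --remove /dev/md/imsm0 /dev/sdb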
OPTIONS
       CONTAINER
              The container device to monitor.  It can be a full path like /dev/md/container, a simple md device name like md127, or /proc/mdstat which tells mdmon to scan for containers and launch an mdmon instance for each one found.

       [NEWROOT]
              In order to support an external metadata raid array as the rootfs, mdmon needs to be started in the initramfs environment.  Once the initramfs environment mounts the final rootfs, mdmon needs to be restarted in the new namespace.  When NEWROOT is specified, mdmon will terminate any mdmon instances that are running in the current namespace, chroot(2) to NEWROOT, and continue monitoring the container.

       Note that mdmon is automatically started by mdadm when needed and so does not need to be considered when working with RAID arrays.  The only time it is run other than by mdadm is when the boot scripts need to restart it after mounting the new root filesystem.
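
Putting the OPTIONS together, typical invocations look roughly like the sketch below. Normally mdadm starts mdmon itself, so these are mostly relevant for boot scripts; the container name and root path are examples:
Code:
# Monitor one specific container:
mdmon md127

# Scan /proc/mdstat and start an mdmon instance for every container found:
mdmon /proc/mdstat

# From a boot script, after the real root is mounted: restart monitoring
# of the container inside the new root filesystem:
mdmon md127 /newroot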
SEE ALSO
       mdadm(8), md(4).

v3.0.3                                                          MDMON(8)