/proc/mdstat and cciss not available: how to know if your disks are RAID

 
# 1  
Old 11-17-2013

I'm running cat /proc/mdstat and dmraid -r, and looking for a cciss device, to find out whether my server uses software or hardware RAID, but all of them turn up nothing.

What other way is there to tell that your disks are in a RAID, that they are in sync or out of sync, or that a disk has failed, aside from looking at them physically?


My server is an old IBM HS21 blade running SUSE Linux 10.3, and the other one runs SUSE Linux 11.5.
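A few generic checks usually still work when /proc/mdstat is absent and there is no cciss device. This is only a sketch; the device name /dev/md0 and the driver names in the grep pattern are examples to adjust for your hardware:

    # Hardware RAID: look for a RAID controller on the PCI bus
    lspci | grep -i raid

    # Kernel messages often name the controller and its logical drives
    dmesg | grep -iE 'raid|megaraid|aacraid|mpt|serveraid'

    # Software RAID (md): sysfs lists md devices even when /proc/mdstat is missing
    ls -d /sys/block/md* 2>/dev/null
    cat /sys/block/md*/md/array_state 2>/dev/null
    cat /sys/block/md*/md/degraded 2>/dev/null     # 0 means no missing members

    # mdadm can scan disks for RAID metadata and report member/sync state
    mdadm --examine --scan
    mdadm --detail /dev/md0                        # example device name

If lspci does show a hardware RAID controller, the sync/failed state of its members is reported by the controller's own management tool rather than by the md layer.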

MDMON(8)						      System Manager's Manual							  MDMON(8)

NAME
mdmon - monitor MD external metadata arrays

SYNOPSIS
mdmon CONTAINER [NEWROOT]

OVERVIEW
The 2.6.27 kernel brings the ability to support external metadata arrays. External metadata implies that user space handles all updates to the metadata. The kernel's responsibility is to notify user space when a "metadata event" occurs, like disk failures and clean-to-dirty transitions. The kernel, in important cases, waits for user space to take action on these notifications.
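A quick way to see whether a given array actually uses external metadata (and therefore involves mdmon) is to read its metadata version from sysfs; md127 below is only an example device name:

    # native arrays report a version such as 1.2;
    # externally managed arrays report a string starting with "external:"
    cat /sys/block/md127/md/metadata_version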
DESCRIPTION
Metadata updates:

To service metadata update requests a daemon, mdmon, is introduced. Mdmon is tasked with polling the sysfs namespace looking for changes in array_state, sync_action, and per disk state attributes. When a change is detected it calls a per metadata type handler to make modifications to the metadata. The following actions are taken:

array_state - inactive
    Clear the dirty bit for the volume and let the array be stopped

array_state - write pending
    Set the dirty bit for the array and then set array_state to active. Writes are blocked until userspace writes active.

array_state - active-idle
    The safe mode timer has expired so set array state to clean to block writes to the array

array_state - clean
    Clear the dirty bit for the volume

array_state - read-only
    This is the initial state that all arrays start at. mdmon takes one of the three actions:

    1/ Transition the array to read-auto keeping the dirty bit clear if the metadata handler determines that the array does not need resyncing or other modification

    2/ Transition the array to active if the metadata handler determines a resync or some other manipulation is necessary

    3/ Leave the array read-only if the volume is marked to not be monitored; for example, the metadata version has been set to "external:-dev/md127" instead of "external:/dev/md127"

sync_action - resync-to-idle
    Notify the metadata handler that a resync may have completed. If a resync process is idled before it completes this event allows the metadata handler to checkpoint resync.

sync_action - recover-to-idle
    A spare may have completed rebuilding so tell the metadata handler about the state of each disk. This is the metadata handler's opportunity to clear any "out-of-sync" bits and clear the volume's degraded status. If a recovery process is idled before it completes this event allows the metadata handler to checkpoint recovery.

<disk>/state - faulty
    A disk failure kicks off a series of events. First, notify the metadata handler that a disk has failed, and then notify the kernel that it can unblock writes that were dependent on this disk. After unblocking the kernel this disk is set to be removed+ from the member array. Finally the disk is marked failed in all other member arrays in the container.

    + Note: This behavior differs slightly from native MD arrays where removal is reserved for a mdadm --remove event. In the external metadata case the container holds the final reference on a block device and a mdadm --remove <container> <victim> call is still required.
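The attributes polled above can also be inspected by hand from sysfs; md127 and dev-sda below are example names only:

    # current array state (e.g. clean, active, write-pending, readonly)
    cat /sys/block/md127/md/array_state

    # current sync action (idle, resync, recover, check, repair)
    cat /sys/block/md127/md/sync_action

    # per-disk state of one member (for example in_sync, faulty, spare)
    cat /sys/block/md127/md/dev-sda/state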
Containers:

External metadata formats, like DDF, differ from the native MD metadata formats in that they define a set of disks and a series of sub-arrays within those disks. MD metadata in comparison defines a 1:1 relationship between a set of block devices and a raid array. For example, to create 2 arrays at different raid levels on a single set of disks, MD metadata requires the disks be partitioned and then each array can be created with a subset of those partitions. The supported external formats perform this disk carving internally.

Container devices simply hold references to all member disks and allow tools like mdmon to determine which active arrays belong to which container. Some array management commands like disk removal and disk add are now only valid at the container level. Attempts to perform these actions on member arrays are blocked with error messages like:

    "mdadm: Cannot remove disks from a 'member' array, perform this operation on the parent container"

Containers are identified in /proc/mdstat with a metadata version string "external:<metadata name>". Member devices are identified by "external:/<container device>/<member index>", or "external:-<container device>/<member index>" if the array is to remain readonly.
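For instance, the version strings described above can be picked out on a running system like this (only a sketch; output will vary by setup):

    # containers show the bare metadata name (e.g. external:ddf or external:imsm),
    # members reference their container device
    grep 'external:' /proc/mdstat

    # list the arrays and containers mdadm can see
    mdadm --detail --scan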
OPTIONS
CONTAINER
    The container device to monitor. It can be a full path like /dev/md/container, a simple md device name like md127, or /proc/mdstat which tells mdmon to scan for containers and launch an mdmon instance for each one found.

[NEWROOT]
    In order to support an external metadata raid array as the rootfs, mdmon needs to be started in the initramfs environment. Once the initramfs environment mounts the final rootfs, mdmon needs to be restarted in the new namespace. When NEWROOT is specified mdmon will terminate any mdmon instances that are running in the current namespace, chroot(2) to NEWROOT, and continue monitoring the container.

Note that mdmon is automatically started by mdadm when needed and so does not need to be considered when working with RAID arrays. The only times it is run other than by mdadm is when the boot scripts need to restart it after mounting the new root filesystem.
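A minimal sketch of typical invocations matching the synopsis above; the container names and the NEWROOT path are illustrative:

    # monitor a single container, by full path or by simple name
    mdmon /dev/md/imsm0
    mdmon md127

    # scan /proc/mdstat for containers and launch an instance for each
    mdmon /proc/mdstat

    # from the initramfs: continue monitoring inside the freshly mounted root
    mdmon md127 /root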
SEE ALSO
    mdadm(8), md(4).

v3.0.3                                                                MDMON(8)