Operating Systems > Linux: RAID0 recovery from external HD
Post 302385020 by pludi on Thursday, 7 January 2010, 01:18 AM
For this to work you'd need to know the parameters the hardware controller used, or at least the stripe size. If you can get that, it might be possible to use dd to create a complete image of each disk and work from those images to start reconstructing your data.
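A rough sketch of what that could look like on Linux: image the member disks with dd, then try reassembling the stripe in software with losetup and mdadm --build, guessing at the chunk size and disk order until a filesystem appears. Everything below is an assumption for illustration only: a two-disk stripe, a 64K chunk, and placeholder device names and paths.

    # Image both members first (sdb/sdc and the target path are placeholders).
    # GNU ddrescue is a gentler alternative if the disks have read errors.
    dd if=/dev/sdb of=/mnt/space/disk0.img bs=1M conv=noerror,sync
    dd if=/dev/sdc of=/mnt/space/disk1.img bs=1M conv=noerror,sync

    # Map the images to loop devices and build a superblock-less RAID0 on top.
    loop0=$(losetup -f --show /mnt/space/disk0.img)
    loop1=$(losetup -f --show /mnt/space/disk1.img)
    mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 "$loop0" "$loop1"

    # If the chunk size and disk order are right, a filesystem should show up.
    fsck -n /dev/md0
    mount -o ro /dev/md0 /mnt/recovered

If nothing mounts, stop the array with mdadm --stop /dev/md0, swap the loop device order or try a different --chunk value, and repeat. Since you only ever touch the images, the original disks stay untouched.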
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Password recovery

We recently terminated a developer at my place of employment who created scripts on a Windows server (that I do not have access to) that invoke FTP sessions on my UnixWare 7.1.1 servers. I need to know the password that is being used. Does anyone know of a good password crack? (8 Replies)
Discussion started by: rm -r *

2. Solaris

Solaris RAID0 doubt...

Friends, suppose I type the metastat command and it shows: d100: Concat/Stripe Size: 369495 blocks (180 MB) Stripe 0: (interlace: 32 blocks) Device Start Block Dbase Reloc c1d0s0 16065 Yes Yes c1d0s1 0 No Yes... (4 Replies)
Discussion started by: saagar

3. Solaris

Solaris recovery

Something happened to our Solaris 10 (SPARC) box and it is not coming up now. These are some of the console messages: I assume it is not able to find very basic system libraries, so I need to tell it somehow to find them under /lib:/usr/lib. I booted it from the CD but now I... (4 Replies)
Discussion started by: rajwinder

4. UNIX for Dummies Questions & Answers

Why is RAID0 faster?

I have read anecdotes about people installing RAID0 (RAID - Wikipedia, the free encyclopedia) on some of their machines because it gives a performance boost. Because bandwidth on the motherboard is limited, can someone explain exactly why it should be faster? (7 Replies)
Discussion started by: figaro

5. UNIX for Advanced & Expert Users

live upgrade with raid0 soft partitions

Hi, I have this mirrored system with soft partitions. I have difficulty determining the lucreate cmd in this env. #metastat -p d0 -m d10 d20 1 d10 1 1 c1t2d0s0 d20 1 1 c1t3d0s0 d1 -m d11 d21 1 d11 1 1 c1t2d0s5 d21 1 1 c1t3d0s5 d100 -p d1 -o 58720384 -b 8388608 d200 -p d1 -o... (1 Reply)
Discussion started by: chaandana

6. Hardware

HP9000 Server - Stuck on RAID0

Hey all, I've got an old HP9000 L1000 server with HP-UX installed. The drives that the OS is running on are in RAID0. I am concerned for the reliability of the server. The four hard drives in the front of the server are LVD 18.2 drives. I know with RAID0, if one drive fails, they all fail. ... (2 Replies)
Discussion started by: mroselli

7. Solaris

Cloning RAID0 drives, Solaris 10u11

Hello all, this is my first time posting here. Where I work we have multiple servers (x3-2's) running Solaris 10u11 with 2 drives configured as RAID0, 300GB per. There are 4-6 open slots for drives to clone to. Past attempts to clone/backup these drives have failed. One of the machines is... (1 Reply)
Discussion started by: eprlsguy

8. Gentoo

Data recovery of formatted external HDD

I accidentally formatted an ext3 external hard disk. I'm using the EaseUS tool on a Windows system to recover the data... will this work? If yes, which file system should the other external hard disk be formatted with? Is there any other option? Please help me out. (1 Reply)
Discussion started by: rajeshz

9. Solaris

Solaris 11 recovery

Hi, I need to recover the Solaris 11 OS, and it is backed up via NetBackup 7.6 file-level backup only. Does anyone know the steps to recover it? Thanks. (3 Replies)
Discussion started by: freshmeat

10. UNIX for Dummies Questions & Answers

Raid0 array stresses only 1 disk out of 3

Hi there, I've set up a RAID0 array of 3 identical disks using: mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1. I'm using dstat to monitor the disk activity: dstat --epoch -D sdb,sdc,sdd --disk-util 30. The results show that the stress is not... (8 Replies)
Discussion started by: chebarbudo
MDMON(8)                      System Manager's Manual                      MDMON(8)

NAME
       mdmon - monitor MD external metadata arrays

SYNOPSIS
       mdmon CONTAINER [NEWROOT]

OVERVIEW
       The 2.6.27 kernel brings the ability to support external metadata arrays.
       External metadata implies that user space handles all updates to the
       metadata.  The kernel's responsibility is to notify user space when a
       "metadata event" occurs, like disk failures and clean-to-dirty
       transitions.  The kernel, in important cases, waits for user space to
       take action on these notifications.
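       (The attributes mdmon polls can be read straight out of sysfs, so you
       can watch the same state the daemon reacts to; md127 and sda below are
       just example names.)

       # Per-array and per-disk state attributes, as described in this page.
       cat /sys/block/md127/md/array_state     # inactive / clean / active / ...
       cat /sys/block/md127/md/sync_action     # idle / resync / recover / ...
       cat /sys/block/md127/md/dev-sda/state   # in_sync / faulty / spare / ...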
DESCRIPTION
       Metadata updates:
       To service metadata update requests a daemon, mdmon, is introduced.
       Mdmon is tasked with polling the sysfs namespace looking for changes in
       array_state, sync_action, and per disk state attributes.  When a change
       is detected it calls a per metadata type handler to make modifications
       to the metadata.  The following actions are taken:

       array_state - inactive
              Clear the dirty bit for the volume and let the array be stopped.

       array_state - write pending
              Set the dirty bit for the array and then set array_state to
              active.  Writes are blocked until userspace writes active.

       array_state - active-idle
              The safe mode timer has expired, so set the array state to clean
              to block writes to the array.

       array_state - clean
              Clear the dirty bit for the volume.

       array_state - read-only
              This is the initial state that all arrays start at.  mdmon takes
              one of three actions:

              1/ Transition the array to read-auto, keeping the dirty bit
              clear, if the metadata handler determines that the array does
              not need resyncing or other modification.

              2/ Transition the array to active if the metadata handler
              determines a resync or some other manipulation is necessary.

              3/ Leave the array read-only if the volume is marked to not be
              monitored; for example, the metadata version has been set to
              "external:-dev/md127" instead of "external:/dev/md127".

       sync_action - resync-to-idle
              Notify the metadata handler that a resync may have completed.
              If a resync process is idled before it completes, this event
              allows the metadata handler to checkpoint resync.

       sync_action - recover-to-idle
              A spare may have completed rebuilding, so tell the metadata
              handler about the state of each disk.  This is the metadata
              handler's opportunity to clear any "out-of-sync" bits and clear
              the volume's degraded status.  If a recovery process is idled
              before it completes, this event allows the metadata handler to
              checkpoint recovery.

       <disk>/state - faulty
              A disk failure kicks off a series of events.  First, notify the
              metadata handler that a disk has failed, and then notify the
              kernel that it can unblock writes that were dependent on this
              disk.  After unblocking the kernel, this disk is set to be
              removed from the member array.  Finally, the disk is marked
              failed in all other member arrays in the container.

              Note: this behavior differs slightly from native MD arrays,
              where removal is reserved for a mdadm --remove event.  In the
              external metadata case the container holds the final reference
              on a block device and a mdadm --remove <container> <victim>
              call is still required.

       Containers:
       External metadata formats, like DDF, differ from the native MD metadata
       formats in that they define a set of disks and a series of sub-arrays
       within those disks.  MD metadata in comparison defines a 1:1
       relationship between a set of block devices and a raid array.  For
       example, to create 2 arrays at different raid levels on a single set of
       disks, MD metadata requires the disks be partitioned and then each
       array can be created with a subset of those partitions.  The supported
       external formats perform this disk carving internally.

       Container devices simply hold references to all member disks and allow
       tools like mdmon to determine which active arrays belong to which
       container.  Some array management commands, like disk removal and disk
       add, are now only valid at the container level.
       Attempts to perform these actions on member arrays are blocked with
       error messages like:

              "mdadm: Cannot remove disks from a 'member' array, perform this
              operation on the parent container"

       Containers are identified in /proc/mdstat with a metadata version
       string "external:<metadata name>".  Member devices are identified by
       "external:/<container device>/<member index>", or
       "external:-<container device>/<member index>" if the array is to
       remain readonly.
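       (As an illustration of the container/member split; the device names
       are made up and DDF is just one of the supported external formats.
       The disks are grouped into a container first, the data-carrying array
       is created inside it, and disk removal goes through the container.)

       # Create a 4-disk DDF container, then a 2-disk RAID1 member inside it.
       mdadm --create /dev/md/ddf0 --metadata=ddf --raid-devices=4 /dev/sd[b-e]
       mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/ddf0

       # Removing a disk from the member array is rejected with the error
       # quoted above; the removal has to be done on the container instead.
       mdadm --remove /dev/md/vol0 /dev/sdb
       mdadm --remove /dev/md/ddf0 /dev/sdb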
OPTIONS
       CONTAINER
              The container device to monitor.  It can be a full path like
              /dev/md/container, a simple md device name like md127, or
              /proc/mdstat, which tells mdmon to scan for containers and
              launch an mdmon instance for each one found.

       [NEWROOT]
              In order to support an external metadata raid array as the
              rootfs, mdmon needs to be started in the initramfs environment.
              Once the initramfs environment mounts the final rootfs, mdmon
              needs to be restarted in the new namespace.  When NEWROOT is
              specified, mdmon will terminate any mdmon instances that are
              running in the current namespace, chroot(2) to NEWROOT, and
              continue monitoring the container.

       Note that mdmon is automatically started by mdadm when needed and so
       does not need to be considered when working with RAID arrays.  The
       only time it is run other than by mdadm is when the boot scripts need
       to restart it after mounting the new root filesystem.
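       (Putting the synopsis together, typical invocations look like the
       following; md127 and /mnt/newroot are placeholder names, and in normal
       operation mdadm starts the daemon for you.)

       mdmon md127                 # monitor a single container
       mdmon /proc/mdstat          # scan for containers, one mdmon per container
       mdmon md127 /mnt/newroot    # boot-script hand-off: chroot to the new root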
SEE ALSO
       mdadm(8), md(4).

v3.0.3                                                                    MDMON(8)