mdadm unable to fail a resyncing drive?
Post 302547075 by drl on Saturday 13th of August 2011, 05:47:19 PM
Hi.
Quote:
Originally Posted by otheus
Though I always found md to be stable, the Linux world seems to have fixated on the much more flexible (and thoroughly documented) Logical Volume Management (LVM) tools. Should you recover, consider a rebuild with LVM. There are more steps and there is a learning curve involved, but these are outweighed by the ability to get support.
I recall reading some time ago that LVM on top of MD is a good solution. In fact, that is what I did the last time I installed Linux on a standalone machine (not a VM).

One reference provides some of the background: RAID versus LVM - Stack Overflow

Some other sources, mainly for the procedure of installing LVM on top of MD: Setup Software Raid 1 with LVM on Linux and https://wiki.archlinux.org/index.php...re_RAID_or_LVM . A minimal sketch of the layering follows.
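In outline, the layering those guides describe looks about like this. The disk and volume names (/dev/sdb, /dev/sdc, vg0, lv_home) are mine, not theirs -- adjust to taste:

    # Build a RAID1 array from two member disks (names are examples)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Layer LVM on top of the md device
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 20G -n lv_home vg0

    # Then make a filesystem and mount as usual
    mkfs.ext4 /dev/vg0/lv_home
    mount /dev/vg0/lv_home /home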

Some performance numbers are available at https://raid.wiki.kernel.org/index.php/Linux_Raid

I don't recall seeing advice to shift over to RAID via LVM as opposed to MD. One reason (for me) is that MD can do RAID10, whereas (so far) LVM does only RAID0 and RAID1. Also, grub has not traditionally understood LVM, so the boot partition cannot be on LVM.
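For instance, a four-disk RAID10 is a one-liner with mdadm (device names are again just examples):

    # Four-member RAID10 with the default layout (example partition names)
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[bcde]1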

However, I have used rsync to back up LVM volumes, taking a snapshot first so that rsync copies from a frozen view -- a very nice feature, since you don't need to take the machine down to run a backup.
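Roughly, the snapshot dance looks like this (names and the 1G copy-on-write size are only illustrative; the snapshot needs room just for the blocks that change while the copy runs):

    # Snapshot the live volume
    lvcreate --size 1G --snapshot --name lv_home_snap /dev/vg0/lv_home

    # Mount it read-only and copy from the frozen view
    mount -o ro /dev/vg0/lv_home_snap /mnt/snap
    rsync -a /mnt/snap/ /backup/home/

    # Tear the snapshot down when done
    umount /mnt/snap
    lvremove -f /dev/vg0/lv_home_snap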

cheers, drl
 

MDADM.CONF(5)                  File Formats Manual                  MDADM.CONF(5)

NAME
       mdadm.conf - configuration for management of Software RAID with mdadm

SYNOPSIS
       /etc/mdadm.conf

DESCRIPTION
       mdadm is a tool for creating, managing, and monitoring RAID devices
       using the md driver in Linux. Some common tasks, such as assembling all
       arrays, can be simplified by describing the devices and arrays in this
       configuration file.

SYNTAX
       The file should be seen as a collection of words separated by white
       space (space, tab, or newline). Any word that begins with a hash sign
       (#) starts a comment, and that word together with the remainder of the
       line is ignored. Any line that starts with white space (space or tab)
       is treated as though it were a continuation of the previous line.

       Empty lines are ignored, but otherwise each (non-continuation) line
       must start with a keyword as listed below. The keywords are case
       insensitive and can be abbreviated to 3 characters. The keywords are:

       DEVICE A device line lists the devices (whole devices or partitions)
              that might contain a component of an MD array. When looking for
              the components of an array, mdadm will scan these devices (or
              any devices listed on the command line). The device line may
              contain a number of different devices (separated by spaces), and
              each device name can contain wild cards as defined by glob(7).
              Also, there may be several device lines present in the file.
              For example:

                  DEVICE /dev/hda* /dev/hdc*
                  DEV    /dev/sd*
                  DEVICE /dev/discs/disc*/disc

       ARRAY  The ARRAY lines identify actual arrays. The second word on the
              line should be the name of the device where the array is
              normally assembled, such as /dev/md1. Subsequent words identify
              the array, or identify the array as a member of a group. If
              multiple identities are given, then a component device must
              match ALL identities to be considered a match. Each identity
              word has a tag, an equals sign, and some value. The tags are:

              uuid=  The value should be a 128 bit uuid in hexadecimal, with
                     punctuation interspersed if desired. This must match the
                     uuid stored in the superblock.

              super-minor=
                     The value is an integer which indicates the minor number
                     that was stored in the superblock when the array was
                     created. When an array is created as /dev/mdX, then the
                     minor number X is stored.

              devices=
                     The value is a comma separated list of device names.
                     Precisely these devices will be used to assemble the
                     array. Note that the devices listed there must also be
                     listed on a DEVICE line.

              level= The value is a RAID level. This is not normally used to
                     identify an array, but is supported so that the output of
                     mdadm --examine --scan can be used directly in the
                     configuration file (see the example after this list).

              num-devices=
                     The value is the number of devices in a complete active
                     array. As with level=, this is mainly for compatibility
                     with the output of mdadm --examine --scan.

              spare-group=
                     The value is a textual name for a group of arrays. All
                     arrays with the same spare-group name are considered to
                     be part of the same group. The significance of a group of
                     arrays is that mdadm will, when monitoring the arrays,
                     move a spare drive from one array in a group to another
                     array in that group if the first array had a failed or
                     missing drive but no spare.

       MAILADDR
              The mailaddr line gives an E-mail address that alerts should be
              sent to when mdadm is running in --monitor mode (and was given
              the --scan option). There should only be one MAILADDR line and
              it should have only one address.

       PROGRAM
              The program line gives the name of a program to be run when
              mdadm --monitor detects potentially interesting events on any of
              the arrays that it is monitoring. This program gets run with two
              or three arguments: the event, the md device, and possibly the
              related component device. There should only be one PROGRAM line
              and it should give only one program.
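       As the level= and num-devices= notes above suggest, ARRAY lines are
       usually generated rather than typed by hand. A minimal illustration
       (this assumes the conventional /etc/mdadm.conf path; back the file up
       first):

           # append identity lines for all arrays found on this system
           mdadm --examine --scan >> /etc/mdadm.conf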
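       Tying MAILADDR and PROGRAM together, a sketch of the monitor invocation
       they configure (the handler path is the one from the example below; the
       mail address is illustrative):

           # watch all configured arrays, mailing and running a handler on events
           mdadm --monitor --scan --daemonise \
                 --mail root@mydomain.tld \
                 --program /usr/sbin/handle-mdadm-events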
EXAMPLE
       DEVICE /dev/sd[bcdjkl]1
       DEVICE /dev/hda1 /dev/hdb1

       # /dev/md0 is known by its UUID.
       ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371

       # /dev/md1 contains all devices with a minor number of
       # 1 in the superblock.
       ARRAY /dev/md1 super-minor=1

       # /dev/md2 is made from precisely these two devices
       ARRAY /dev/md2 devices=/dev/hda1,/dev/hda2

       # /dev/md4 and /dev/md5 are a spare-group and spares
       # can be moved between them
       ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df
                  spare-group=group1
       ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977
                  spare-group=group1

       MAILADDR root@mydomain.tld
       PROGRAM /usr/sbin/handle-mdadm-events
SEE ALSO
       mdadm(8), md(4).

                                                                   MDADM.CONF(5)