is mdadm --incremental --rebuild --run --scan destructive?
Hello Unix Community:
My task is to figure out how to add a 20G volume to an existing EBS array (RAID0) at AWS.
I haven't been told whether growing the existing volumes is an option or whether adding another, larger volume to the existing array is the way to go. The client's existing data store is growing fast and he needs more space.
The boss said "Add"... but I am free to conceptualize some solutions he may not have thought of.
I have RTFM'd today for about three hours and came up with two possibilities: add or grow.
After a cursory review (and some in-depth reading) I am still uncertain, so I have to ask:
is "mdadm --incremental --rebuild --run --scan" destructive to data, or will it simply incorporate the new volume into the designated mdX?
The man page's "possibly start the array" implies that the array is not started, but what happens if it is already started?
I further wonder whether I could just create a new volume, edit /etc/mdadm/mdadm.conf,
and then run --scan and/or --rebuild, and whether that would be destructive.
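For context, a hedged sketch of both options, assuming a two-member array at /dev/md0 and the new EBS volume attached as /dev/xvdh (both placeholder names); RAID0 reshape support depends on the mdadm and kernel versions, so test against a snapshot first:

```shell
# Read-only inspection of the current array.
mdadm --detail /dev/md0

# "Add" option: grow the RAID0 by one member. Recent mdadm/kernel
# combinations reshape RAID0 via a temporary RAID4 takeover; data is
# preserved, but snapshot the volumes first anyway.
mdadm --grow /dev/md0 --add /dev/xvdh --raid-devices=3
cat /proc/mdstat          # watch the reshape progress

# Then grow the filesystem into the new space (ext4 shown).
resize2fs /dev/md0
```

By contrast, --incremental is an assembly mode: it starts arrays from devices whose superblocks already mark them as members and does not rewrite member data, but it also will not join a brand-new blank volume to an existing array.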
Hi,
I have a three-disk RAID 5 with 500GB disks.
This is close to being full, and whilst I could just add another disk and rebuild to gain another 500GB, I would prefer to replace the disks with 1TB ones. So I have some questions.
Can I replace these disks one by one with bigger disks? I...
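For what it's worth, the replace-one-at-a-time approach is commonly sketched like this (assuming the array is /dev/md0 and sdX1/sdY1 are placeholder partitions; the array runs degraded and unprotected during each rebuild, so keep backups):

```shell
# Repeat once per member disk:
mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1
# ...swap in the 1TB disk, partition it at least as large, then:
mdadm /dev/md0 --add /dev/sdY1
cat /proc/mdstat              # wait for the resync to complete

# Once every member is 1TB, claim the extra space:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0            # grow the filesystem (ext4 shown)
```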
Hi All
I have a RAID 5 array consisting of 4 drives that had a partial failure in one of the drives.
Rebooting shows the faulty drive rebuilding in the background, and mdadm /dev/ARRAYID shows three drives in sync, with the fourth drive as a rebuilding spare.
However, the array won't come...
Hello, I have 4 drives (500G each) in a RAID 10. There was a power failure, and this is the result:
cat /proc/mdstat
Personalities :
md126 : inactive sdb sdc sdd sde
1953536528 blocks super external:-md127/0
md127 : inactive sdd(S) sde(S) sdb(S) sdc(S)
9028 blocks super...
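A common first step for inactive arrays like these (a sketch, not specific advice for this container layout): stop the stale md devices and let mdadm reassemble from the on-disk superblocks. --assemble is non-destructive; --create, by contrast, writes new superblocks and should not be used here.

```shell
# Stop the half-assembled devices left over from the power failure.
mdadm --stop /dev/md126
mdadm --stop /dev/md127

# Reassemble from superblocks; this reads metadata, it rewrites nothing.
mdadm --assemble --scan --verbose
cat /proc/mdstat
```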
Hi everyone,
I am not sure I understand how mdadm --create /dev/md0 --level=container works.
A device called /dev/md0 appears in /proc/mdstat, but I am not sure how to use that device.
I have 2 blank drives with one 500GB partition on each. I would like to set up mirroring, but not in the...
Hi, I'm trying to hack a web server as part of an assignment and have gotten it to exec commands, but I cannot pass arguments to those commands because the program splits up space-separated words and only execs the first one. Is there anything I can pass to cause any sort of damage in one word? Btw, the webserver runs...
I had this RHEL 5 installation with /dev/sda1 and /dev/sda2 running.
I created two more partitions, /dev/sdj1 and /dev/sdj2, the same size as the /dev/sda partitions.
I am trying to use mdadm to create a RAID 1.
I cannot even do it in "rescue" mode; I wonder if it can be done.
It kept...
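One approach often suggested for this situation (sketch only; mdadm --create overwrites whatever is on the partitions you list, so the in-use /dev/sda partitions must not be handed to it): build the mirror degraded, with the keyword missing holding the second slot, then migrate the data and attach the original partition afterwards.

```shell
# Degraded RAID1 on the new partition only; "missing" reserves slot 2.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdj1

# ...make a filesystem, copy the data across, switch boot over...

# Finally add the original partition; mdadm resyncs it from /dev/sdj1.
mdadm /dev/md0 --add /dev/sda1
```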
I am trying to format a Seagate 2GB SCSI drive using the HP-UX 9.0 support disc and I receive a message that says: DESTRUCTIVE MODE REQUIRED TO EXECUTE THIS COMMAND (SCD2WARN 106). I have entered this command several times on other SCSI drives and never got this message. Has anyone ever seen this...
Hello,
I have a system with 6 SATA3 Seagate st3000dm01 disks running on stable Debian with software RAID (mdadm). I have md0 for root, md1 for swap, and md2 for the files. I now want to add one more disk (sdh4) to md2, but I got these errors:
The new disk is connected to a 4-port SATA...
Hi guys,
I'm new to RAID, although I've had a server running RAID 5 for a while. It was delivered preinstalled like this and I never really wondered how to monitor and maintain it. This quick introduction is just to explain why I'm asking such a basic question.
Now what...
MDADM.CONF(5)                    File Formats Manual                    MDADM.CONF(5)
NAME
mdadm.conf - configuration for management of Software Raid with mdadm
SYNOPSIS
/etc/mdadm.conf
DESCRIPTION
mdadm is a tool for creating, managing, and monitoring RAID devices using the md driver in Linux.
Some common tasks, such as assembling all arrays, can be simplified by describing the devices and arrays in this configuration file.
SYNTAX
The file should be seen as a collection of words separated by white space (space, tab, or newline). Any word that begins with a hash sign
(#) starts a comment, and that word together with the remainder of the line is ignored.
Any line that starts with white space (space or tab) is treated as though it were a continuation of the previous line.
Empty lines are ignored, but otherwise each (non continuation) line must start with a keyword as listed below. The keywords are case
insensitive and can be abbreviated to 3 characters.
The keywords are:
DEVICE A device line lists the devices (whole devices or partitions) that might contain a component of an MD array. When looking for the
components of an array, mdadm will scan these devices (or any devices listed on the command line).
The device line may contain a number of different devices (separated by spaces) and each device name can contain wild cards as
defined by glob(7).
Also, there may be several device lines present in the file.
For example:
DEVICE /dev/hda* /dev/hdc*
DEV /dev/sd*
DEVICE /dev/discs/disc*/disc
ARRAY The ARRAY lines identify actual arrays. The second word on the line should be the name of the device where the array is normally
assembled, such as /dev/md1. Subsequent words identify the array, or identify the array as a member of a group. If multiple
identities are given, then a component device must match ALL identities to be considered a match. Each identity word has a tag, an
equals sign, and some value. The tags are:
uuid= The value should be a 128 bit uuid in hexadecimal, with punctuation interspersed if desired. This must match the uuid stored in
the superblock.
super-minor=
The value is an integer which indicates the minor number that was stored in the superblock when the array was created. When an
array is created as /dev/mdX, then the minor number X is stored.
devices=
The value is a comma separated list of device names. Precisely these devices will be used to assemble the array. Note that the
devices listed there must also be listed on a DEVICE line.
level= The value is a raid level. This is not normally used to identify an array, but is supported so that the output of
mdadm --examine --scan
can be used directly in the configuration file.
num-devices=
The value is the number of devices in a complete active array. As with level= this is mainly for compatibility with the output
of
mdadm --examine --scan.
spare-group=
The value is a textual name for a group of arrays. All arrays with the same spare-group name are considered to be part of the
same group. The significance of a group of arrays is that mdadm will, when monitoring the arrays, move a spare drive from one
array in a group to another array in that group if the first array had a failed or missing drive but no spare.
MAILADDR
The mailaddr line gives an E-mail address that alerts should be sent to when mdadm is running in --monitor mode (and was given the --scan
option). There should only be one MAILADDR line and it should have only one address.
PROGRAM
The program line gives the name of a program to be run when mdadm --monitor detects potentially interesting events on any of the
arrays that it is monitoring. This program gets run with two or three arguments, they being the Event, the md device, and possibly
the related component device.
There should only be one program line and it should give only one program.
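As an illustration of that calling convention, a minimal handler might look like this (the script body and log format are made up; only the two-or-three-argument interface comes from the text above):

```shell
#!/bin/sh
# Hypothetical mdadm --monitor event handler, invoked as:
#   handler EVENT MD_DEVICE [COMPONENT_DEVICE]
handle_event() {
    event=$1            # e.g. Fail, DegradedArray, RebuildFinished
    md_dev=$2           # e.g. /dev/md0
    component=${3:-}    # related component device, if any
    echo "mdadm event: ${event} on ${md_dev}${component:+ (component ${component})}"
}

handle_event "$@"
```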
EXAMPLE
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/hda1 /dev/hdb1
# /dev/md0 is known by its UUID.
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
# /dev/md1 contains all devices with a minor number of
# 1 in the superblock.
ARRAY /dev/md1 super-minor=1
# /dev/md2 is made from precisely these two devices
ARRAY /dev/md2 devices=/dev/hda1,/dev/hda2
# /dev/md4 and /dev/md5 are a spare-group and spares
# can be moved between them
ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df
spare-group=group1
ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977
spare-group=group1
MAILADDR root@mydomain.tld
PROGRAM /usr/sbin/handle-mdadm-events
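Since level= and num-devices= exist mainly so that scan output can be pasted in unchanged, the file is often populated like this (sketch; some distributions use /etc/mdadm/mdadm.conf instead of /etc/mdadm.conf):

```shell
# Read-only: prints one ARRAY line per array found in device superblocks.
mdadm --examine --scan

# Append those lines to the configuration file.
mdadm --examine --scan >> /etc/mdadm.conf
```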
SEE ALSO
mdadm(8), md(4).
MDADM.CONF(5)