GRAID(8) BSD System Manager's Manual GRAID(8)
NAME
graid -- control utility for software RAID devices
SYNOPSIS
graid label [-f] [-o fmtopt] [-S size] [-s strip] format label level prov ...
graid add [-f] [-S size] [-s strip] name label level
graid delete [-f] name [label | num]
graid insert name prov ...
graid remove name prov ...
graid fail name prov ...
graid stop [-fv] name ...
graid list
graid status
graid load
graid unload
DESCRIPTION
The graid utility is used to manage software RAID configurations supported by the GEOM RAID class. The GEOM RAID class uses on-disk metadata to
provide access to software-RAID volumes defined by different RAID BIOSes. Depending on the RAID BIOS type and its metadata format, different
subsets of configurations and features are supported. To allow booting from a RAID volume, the metadata format should match the RAID BIOS type
and its capabilities. To guarantee that these match, it is recommended to create volumes via the RAID BIOS interface, though experienced
users are free to do so using this utility.
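For example, on a system whose RAID BIOS already defines an array, the class can be loaded and the detected volumes inspected as follows (a minimal sketch; the exact output and provider names such as raid/r0 depend on the system):
      # graid load          # load the GEOM RAID class and taste on-disk metadata
      # graid status        # show detected volumes and their components
      # graid list          # show detailed array, volume, and disk information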
The first argument to graid indicates the action to be performed (an example command sequence is shown after this list):
label Create an array with a single volume. The format argument specifies the on-disk metadata format to use for this array, such as
"Intel". The label argument specifies the label of the created volume. The level argument specifies the RAID level of the created
volume, such as "RAID0", "RAID1", etc. The subsequent list enumerates providers to use as array components. The special name
"NONE" can be used to reserve space for absent disks. The order of components can be important, depending on the specific RAID level
and metadata format.
Additional options include:
-f Force creation of the specified configuration if it is officially unsupported but technically possible.
-o fmtopt
Specifies metadata format options.
-S size Use size bytes on each component for this volume. Should be used if several volumes per array are planned, or if smaller
components are going to be inserted later. Defaults to the size of the smallest component.
-s strip Specifies the strip size in bytes. Defaults to 131072.
add Create another volume on the existing array. The name argument is the name of the existing array, as reported by the label command. The
rest of the arguments are the same as for the label command.
delete Delete volume(s) from the existing array. When the last volume is deleted, the array is also deleted and its metadata erased. The
name argument is the name of the existing array. The optional label or num argument specifies the volume to delete.
Additional options include:
-f Delete volume(s) even if they are still open.
insert Insert the specified provider(s) into the specified array in place of the first missing or failed components. If there are no such
components, mark the disk(s) as spares.
remove Remove the specified provider(s) from the specified array and erase their metadata. If spare disks are present, the removed disk(s)
will be replaced by spares.
fail Mark the given disk(s) as failed, removing them from active use unless absolutely necessary due to exhausted redundancy. If spare
disks are present, the failed disk(s) will be replaced with one of them.
stop Stop the given array. The metadata will not be erased.
Additional options include:
-f Stop the given array even if some of its volumes are open.
list See geom(8).
status See geom(8).
load See geom(8).
unload See geom(8).
Additional options include:
-v Be more verbose.
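The following sequence sketches typical use of these commands; the disk names (ada1, ada2, ada3), the volume label "data", and the resulting provider name raid/r0 are illustrative assumptions only:
      # graid label Intel data RAID1 ada1 ada2   # create a RAID1 volume from two disks
      # graid status                             # the new volume appears, e.g. as raid/r0
      # graid remove raid/r0 ada2                # detach a disk and erase its metadata
      # graid insert raid/r0 ada3                # rebuild onto a replacement disk
      # graid stop raid/r0                       # stop the array; metadata is kept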
SUPPORTED METADATA FORMATS
The GEOM RAID class follows a modular design, allowing different metadata formats to be used. Support is currently implemented for the
following formats:
DDF The format defined by the SNIA Common RAID Disk Data Format v2.0 specification. Used by some Adaptec RAID BIOSes and some hardware
RAID controllers. Because of the format's high flexibility, different implementations support different sets of features and have different
on-disk metadata layouts. To provide compatibility, the GEOM RAID class mimics the capabilities of the first detected DDF array.
Accordingly, it may support a different number of disks per volume, volumes per array, partitions per disk, etc. The following
configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks), RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+
disks), RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks), RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1
disk), CONCAT (2+ disks).
The format supports two options, "BE" and "LE", which select the big-endian byte order defined by the specification (the default) or the
little-endian byte order used by some Adaptec controllers.
Intel The format used by the Intel RAID BIOS. Supports up to two volumes per array (see the sketch after this list). Supports configurations: RAID0 (2+ disks), RAID1 (2
disks), RAID5 (3+ disks), RAID10 (4 disks). Configurations not supported by the Intel RAID BIOS, but enforceable at your own risk: RAID1
(3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).
JMicron
The format used by the JMicron RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID10 (4 disks), CONCAT (2+ disks). Configurations not supported by the JMicron RAID BIOS, but enforceable at your own risk: RAID1 (3+
disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).
NVIDIA
The format used by the NVIDIA MediaShield RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1 (2
disks), RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by the NVIDIA MediaShield
RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks).
Promise
The format used by Promise and AMD/ATI RAID BIOSes. Supports multiple volumes per array. Each disk can be split between up to
two arbitrary volumes. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1
disk), CONCAT (2+ disks). Configurations not supported by these RAID BIOSes, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+
disks).
SiI The format used by the SiliconImage RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1 (2
disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by the SiliconImage RAID
BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+ disks).
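As an illustration of the per-format limits above, the Intel format allows two volumes on one array, and the DDF byte order can be chosen with the -o option; the commands below are a sketch only (disk names, labels, sizes, and the array name reported by graid list are assumptions):
      # graid label -S 500000000000 Intel fast RAID0 ada1 ada2   # first volume, limited to ~500 GB per component
      # graid add Intel-12345678 slow RAID1                      # second volume in the remaining space
      # graid label -o LE DDF vol0 RAID5 ada3 ada4 ada5          # DDF array with little-endian metadata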
SUPPORTED RAID LEVELS
The GEOM RAID class follows a modular design, allowing different RAID levels to be used. Full support for the following RAID levels is currently
implemented: RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT. The following RAID levels are supported as read-only, for volumes in optimal
state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF.
RAID LEVEL MIGRATION
The GEOM RAID class has no support for RAID level migration, which some metadata formats allow. If you have started a migration using the BIOS or in
some other way, make sure to complete it there. Do not run the GEOM RAID class on migrating volumes; data corruption may result!
2TiB BARRIERS
The NVIDIA metadata format does not support volumes above 2 TiB.
SYSCTL VARIABLES
The following sysctl(8) variables can be used to control the behavior of the RAID GEOM class; examples of setting them follow the list.
kern.geom.raid.aggressive_spare: 0
Treat any disk without metadata that is connected to a controller of the vendor matching the volume metadata format as a spare. Use with great
care to avoid losing data by connecting an unrelated disk!
kern.geom.raid.clean_time: 5
Mark a volume as clean when it has been idle for the specified number of seconds.
kern.geom.raid.debug: 0
Debug level of the RAID GEOM class.
kern.geom.raid.enable: 1
Enable on-disk metadata taste.
kern.geom.raid.idle_threshold: 1000000
Time in microseconds to consider a volume idle for rebuild purposes.
kern.geom.raid.name_format: 0
Provider name format: 0 -- raid/r{num}, 1 -- raid/{label}.
kern.geom.raid.read_err_thresh: 10
Number of read errors equated to disk failure. Write errors are always considered disk failures.
kern.geom.raid.start_timeout: 30
Time to wait for missing array components on startup.
kern.geom.raid.X.enable: 1
Enable taste for specific metadata or transformation module.
kern.geom.raid.legacy_aliases: 0
Enable GEOM RAID emulation of legacy /dev/ar%d devices. This should aid upgrading systems from legacy to modern releases.
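These variables can be set at runtime with sysctl(8) or made persistent in sysctl.conf(5), and the class itself can be loaded at boot via loader.conf(5). The values below are examples only:
      # sysctl kern.geom.raid.name_format=1        # name providers raid/{label} instead of raid/r{num}
      # sysctl kern.geom.raid.read_err_thresh=5    # treat a disk as failed after 5 read errors
and in /boot/loader.conf:
      geom_raid_load="YES"                         # load the GEOM RAID class at boot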
EXIT STATUS
Exit status is 0 on success, and non-zero if the command fails.
SEE ALSO
geom(4), geom(8), gvinum(8)
HISTORY
The graid utility appeared in FreeBSD 9.0.
AUTHORS
Alexander Motin <mav@FreeBSD.org>
M. Warner Losh <imp@FreeBSD.org>
BSD                                 April 4, 2013                                 BSD