RAID5 + STRIPED LUNs


 
# 2, 02-27-2013
I assume that by striped LUNs you mean software RAID.
Software RAID is "poor man's RAID".

I assume that you have a hardware RAID5 controller.

There is little point in using both at the same time. Software RAID consumes CPU cycles, which can hurt on a system that is already loaded with applications.

Originally there was RAID3. This striped the data over a number of drives and also had a dedicated parity drive. That meant every file write also hit the dedicated parity drive, which created a bottleneck. So RAID5 was created.

RAID5 is striped data with rotating parity. The parity function is rotated across all the drives, which eliminates the bottleneck. I/O is spread across a number of actuators (drives), so the more drives in the RAID5 set, the greater the available I/O bandwidth. Layering software RAID on top of this is pointless: the hardware RAID5 controller already offloads all the I/O processing (the parity calculation) from the main CPU of the box.
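
To make the parity mechanics concrete, here is a minimal sketch in plain shell (the one-byte values are hypothetical; a real array works on whole strips): the parity block is just the XOR of the data blocks, and any one lost block can be rebuilt by XORing the parity with the surviving data.

    # hypothetical one-byte "blocks" from three data drives
    d1=0xA5; d2=0x3C; d3=0x0F

    # the parity block is the XOR of the data blocks
    parity=$(( d1 ^ d2 ^ d3 ))

    # if the drive holding d2 fails, its data is rebuilt from parity + survivors
    rebuilt_d2=$(( parity ^ d1 ^ d3 ))

    printf 'parity=%#x  rebuilt d2=%#x (original was %#x)\n' "$parity" "$rebuilt_d2" "$d2"

That same reconstruction has to happen on every read of a failed member, which is why a degraded RAID5 set is noticeably slower until the rebuild completes.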

Dunno whether that answers your question(s) or not. Post back any further questions if not.

So RAID5 is good for general random I/O (mixed and unpredictable read/write).
In a situation where the I/O is predominantly read-only (e.g., a large Oracle database serving mainly read queries), RAID3 will be a bit faster because there's no need to read the parity while the drives are healthy.
 
GRAID(8)						    BSD System Manager's Manual 						  GRAID(8)

NAME
     graid -- control utility for software RAID devices

SYNOPSIS
     graid label [-f] [-o fmtopt] [-S size] [-s strip] format label level prov ...
     graid add [-f] [-S size] [-s strip] name label level
     graid delete [-f] name [label | num]
     graid insert name prov ...
     graid remove name prov ...
     graid fail name prov ...
     graid stop [-fv] name ...
     graid list
     graid status
     graid load
     graid unload

DESCRIPTION
     The graid utility is used to manage software RAID configurations, supported by the GEOM RAID class. The GEOM RAID class uses on-disk
     metadata to provide access to software-RAID volumes defined by different RAID BIOSes. Depending on RAID BIOS type and its metadata
     format, different subsets of configurations and features are supported. To allow booting from a RAID volume, the metadata format should
     match the RAID BIOS type and its capabilities. To guarantee that these match, it is recommended to create volumes via the RAID BIOS
     interface, while experienced users are free to do it using this utility.

     The first argument to graid indicates an action to be performed:

     label    Create an array with a single volume. The format argument specifies the on-disk metadata format to use for this array, such as
              "Intel". The label argument specifies the label of the created volume. The level argument specifies the RAID level of the
              created volume, such as: "RAID0", "RAID1", etc. The subsequent list enumerates providers to use as array components. The
              special name "NONE" can be used to reserve space for absent disks. The order of components can be important, depending on the
              specific RAID level and metadata format.

              Additional options include:

              -f         Enforce specified configuration creation if it is officially unsupported, but technically can be created.
              -o fmtopt  Specifies metadata format options.
              -S size    Use size bytes on each component for this volume. Should be used if several volumes per array are planned, or if
                         smaller components are going to be inserted later. Defaults to the size of the smallest component.
              -s strip   Specifies strip size in bytes. Defaults to 131072.

     add      Create another volume on the existing array. The name argument is the name of the existing array, reported by the label
              command. The rest of the arguments are the same as for the label command.

     delete   Delete volume(s) from the existing array. When the last volume is deleted, the array is also deleted and its metadata erased.
              The name argument is the name of the existing array. Optional label or num arguments allow specifying the volume for deletion.

              Additional options include:

              -f         Delete volume(s) even if still open.

     insert   Insert the specified provider(s) into the specified array in place of the first missing or failed components. If there are no
              such components, mark the disk(s) as spare.

     remove   Remove the specified provider(s) from the specified array and erase metadata. If there are spare disks present, the removed
              disk(s) will be replaced by spares.

     fail     Mark the given disk(s) as failed, removing them from active use unless absolutely necessary due to exhausted redundancy. If
              there are spare disks present, failed disk(s) will be replaced with one of them.

     stop     Stop the given array. The metadata will not be erased.

              Additional options include:

              -f         Stop the given array even if some of its volumes are opened.

     list     See geom(8).

     status   See geom(8).

     load     See geom(8).

     unload   See geom(8).

     Additional options include:

     -v       Be more verbose.
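
     As a quick illustration (not part of the original manual page), creating a single three-disk RAID5 volume with Intel metadata could
     look like the following; the provider names ada1, ada2, ada3 and the label "data" are placeholders:

           # load the GEOM RAID class if it is not already present in the kernel
           graid load

           # create a single-volume RAID5 array labelled "data" using Intel metadata
           # (-s sets the strip size in bytes; 131072 is the documented default)
           graid label -s 131072 Intel data RAID5 ada1 ada2 ada3

           # inspect the resulting volume and the state of its components
           graid status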
SUPPORTED METADATA FORMATS
     The GEOM RAID class follows a modular design, allowing different metadata formats to be used. Support is currently implemented for the
     following formats:

     DDF      The format defined by the SNIA Common RAID Disk Data Format v2.0 specification. Used by some Adaptec RAID BIOSes and some
              hardware RAID controllers. Because of high format flexibility, different implementations support different sets of features
              and have different on-disk metadata layouts. To provide compatibility, the GEOM RAID class mimics the capabilities of the
              first detected DDF array. Respecting that, it may support a different number of disks per volume, volumes per array,
              partitions per disk, etc. The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks), RAID1E (3+ disks),
              RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks), RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6
              (4+ disks), RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).

              The format supports two options, "BE" and "LE", meaning the big-endian byte order defined by the specification (default) and
              the little-endian byte order used by some Adaptec controllers.

     Intel    The format used by Intel RAID BIOS. Supports up to two volumes per array. Supports configurations: RAID0 (2+ disks), RAID1
              (2 disks), RAID5 (3+ disks), RAID10 (4 disks). Configurations not supported by Intel RAID BIOS, but enforceable at your own
              risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).

     JMicron  The format used by JMicron RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1
              (2 disks), RAID10 (4 disks), CONCAT (2+ disks). Configurations not supported by JMicron RAID BIOS, but enforceable at your own
              risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).

     NVIDIA   The format used by NVIDIA MediaShield RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks),
              RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by
              NVIDIA MediaShield RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks).

     Promise  The format used by Promise and AMD/ATI RAID BIOSes. Supports multiple volumes per array. Each disk can be split to be used by
              up to two arbitrary volumes. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks),
              SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by RAID BIOSes, but enforceable at your own risk: RAID1
              (3+ disks), RAID10 (6+ disks).

     SiI      The format used by SiliconImage RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1
              (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by
              SiliconImage RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+ disks).
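
     For instance, the Intel entry above lists RAID1 on three or more disks as unsupported by the RAID BIOS but enforceable with -f; a
     hypothetical invocation forcing such a volume (placeholder providers and label, at your own risk as noted above) would be:

           graid label -f Intel mirror3 RAID1 ada1 ada2 ada3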
SUPPORTED RAID LEVELS
     The GEOM RAID class follows a modular design, allowing different RAID levels to be used. Full support for the following RAID levels is
     currently implemented: RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT. The following RAID levels are supported as read-only for volumes
     in optimal state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF.

RAID LEVEL MIGRATION
     The GEOM RAID class has no support for RAID level migration, allowed by some metadata formats. If you started migration using BIOS or
     in some other way, make sure to complete it there. Do not run GEOM RAID class on migrating volumes under pain of possible data
     corruption!

2TiB BARRIERS
     NVIDIA metadata format does not support volumes above 2TiB.

SYSCTL VARIABLES
     The following sysctl(8) variables can be used to control the behavior of the RAID GEOM class.

     kern.geom.raid.aggressive_spare: 0
             Use any disks without metadata connected to controllers of the vendor matching the volume metadata format as spare. Use with
             much care to not lose data if connecting an unrelated disk!

     kern.geom.raid.clean_time: 5
             Mark volume as clean when idle for the specified number of seconds.

     kern.geom.raid.debug: 0
             Debug level of the RAID GEOM class.

     kern.geom.raid.enable: 1
             Enable on-disk metadata taste.

     kern.geom.raid.idle_threshold: 1000000
             Time in microseconds to consider a volume idle for rebuild purposes.

     kern.geom.raid.name_format: 0
             Providers name format: 0 -- raid/r{num}, 1 -- raid/{label}.

     kern.geom.raid.read_err_thresh: 10
             Number of read errors equated to disk failure. Write errors are always considered as disk failures.

     kern.geom.raid.start_timeout: 30
             Time to wait for missing array components on startup.

     kern.geom.raid.X.enable: 1
             Enable taste for specific metadata or transformation module.

     kern.geom.raid.legacy_aliases: 0
             Enable geom raid emulation of legacy /dev/ar%d devices. This should aid the upgrade of systems from legacy to modern releases.
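
     These tunables can be inspected and adjusted at runtime with sysctl(8); the values below are purely illustrative:

           # list the current GEOM RAID tunables
           sysctl kern.geom.raid

           # name provider nodes after the volume label instead of raid/r{num}
           sysctl kern.geom.raid.name_format=1

           # tolerate more read errors before a component is treated as failed
           sysctl kern.geom.raid.read_err_thresh=20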
EXIT STATUS
     Exit status is 0 on success, and non-zero if the command fails.

SEE ALSO
     geom(4), geom(8), gvinum(8)

HISTORY
     The graid utility appeared in FreeBSD 9.0.

AUTHORS
     Alexander Motin <mav@FreeBSD.org>
     M. Warner Losh <imp@FreeBSD.org>

BSD                                                         April 4, 2013                                                          BSD