02-27-2013
I assume that by striped LUNs you mean software RAID.
Software RAID is "poor man's RAID".
I assume that you also have a hardware RAID5 controller.
There is little point in using both at the same time: software RAID consumes CPU cycles, which can hurt on a system already loaded with applications.
Originally there was RAID3. This striped the data over a number of drives and also used a dedicated parity drive, which meant that every write involved the parity drive, creating a bottleneck. So RAID5 was created.
RAID5 is striped data with rotating parity: the parity function rotates between all the drives, eliminating the bottleneck. I/O is spread across a number of actuators (drives), so the more drives in the RAID5 set, the greater the I/O bandwidth available. Layering software RAID on top of this is pointless. The hardware RAID5 controller offloads all the I/O processing (parity calculation) from the main CPU of the box.
Dunno whether that answers your question(s) or not? Post back any further questions if not.
So RAID5 is good for general random I/O (mixed and unpredictable reads/writes).
In a situation where I/O is predominantly read-only (e.g. a large Oracle database serving mainly read enquiries), RAID3 will be a bit faster because there is no need to touch the parity drive while the data drives are healthy.
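The parity mechanics described above can be sketched in a few lines of Python. This is a toy model only: real controllers differ in stripe width, chunk size, and rotation direction, and the "left-symmetric" rotation used here is just one common layout.

```python
from functools import reduce

def raid5_parity_drive(stripe_index: int, num_drives: int) -> int:
    """Which drive holds parity for a given stripe (left-symmetric rotation).
    In RAID3 this would always return the same drive -- the bottleneck."""
    return (num_drives - 1 - stripe_index) % num_drives

def xor_parity(chunks: list[bytes]) -> bytes:
    """Parity is the bytewise XOR of the data chunks. Any one lost chunk
    can be rebuilt by XOR-ing the parity with the surviving chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# One stripe spread over 3 data drives of a 4-drive RAID5 set.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Simulate losing drive 1: rebuild its chunk from parity + survivors.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Note that the rebuild step is the same XOR operation as the parity calculation, which is why a degraded array can keep serving reads, just more slowly.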
MFI(4) BSD Kernel Interfaces Manual MFI(4)
NAME
mfi -- LSI Logic & Dell MegaRAID SAS RAID controller
SYNOPSIS
mfi* at pci? dev ? function ?
DESCRIPTION
The mfi driver provides support for the MegaRAID SAS family of RAID controllers, including:
- Dell PERC 5/e, PERC 5/i, PERC 6/e, PERC 6/i
- Intel RAID Controller SRCSAS18E, SRCSAS144E
- LSI Logic MegaRAID SAS 8208ELP, MegaRAID SAS 8208XLP, MegaRAID SAS 8300XLP, MegaRAID SAS 8308ELP, MegaRAID SAS 8344ELP, MegaRAID SAS 8408E, MegaRAID SAS 8480E, MegaRAID SAS 8708ELP, MegaRAID SAS 8888ELP, MegaRAID SAS 8880EM2, MegaRAID SAS 9260-8i
- IBM ServeRAID M1015, ServeRAID M5014
These controllers support RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50 and RAID 60 using either SAS or SATA II drives.
Although the controllers are actual RAID controllers, the driver makes them look just like SCSI controllers. All RAID configuration is done
through the controllers' BIOSes.
mfi supports monitoring of the logical disks in the controller through the bioctl(8) and envstat(8) commands.
EVENTS
The mfi driver is able to send events to powerd(8) if a logical drive in the controller is not online. The state-changed event will be sent
to the /etc/powerd/scripts/sensor_drive script when such a condition occurs.
SEE ALSO
intro(4), pci(4), scsi(4), sd(4), bioctl(8), envstat(8), powerd(8)
HISTORY
The mfi driver first appeared in NetBSD 4.0.
BSD                              March 22, 2012                              BSD