They are undoubtedly ZFS, as Solaris 11 supports nothing else for its system disks anyway.
Not sure about the T4-1 RAID controller performance, but there is a common misconception that H/W RAID must be faster than S/W RAID. Real-life tests routinely seem to demonstrate the opposite, although with your planned RAID-0, both should be quite fast.
In any case, I would strongly recommend using ZFS for your disks; that would be at least an order of magnitude simpler to set up and maintain than hardware RAID. As DukeNuke2 already stated, a single command line can be enough:
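For example, a hedged sketch (the device names c0t1d0 and c0t2d0 are placeholders; check `format` or `cfgadm` output for your actual disks):

```shell
# Create a striped (RAID-0-like) ZFS pool from two whole disks.
# No redundancy: losing either disk loses the pool.
# "datapool" and the device names below are assumptions, not your real layout.
zpool create datapool c0t1d0 c0t2d0
```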
Note that you won't have redundancy, so no data self-healing is possible in such a configuration.
I've scoured the net and haven't found much. I found one tutorial at Princeton and a few things at Sun's site, but they aren't at my level; they seem to be written for someone who is already very comfortable with the subject.
Does anyone know of a good tutorial written similarly... (1 Reply)
Hey All,
Thanks for the help you gave me yesterday. I need help again: I have an RS/6000 server with AIX 4.3.3 and external storage (a multipack) with 12 hard disk drives, which will be connected to the RS/6000 SCSI controller. I don't have any RAID card. Now I have to... (0 Replies)
Hi to all,
I am new to shell scripting, and this is very urgent.
When I execute the command metastat (RAID configuration info), it displays information like:
#metastat
d1: submirror
status: okay
pass: 1
d2: submirror
status: okay
d3: submirror
status: error
If the status is okay, no problem. Once I... (0 Replies)
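For the question above, a hedged sketch of how one might flag SVM submirrors whose state is not healthy. Real use would pipe `metastat` itself; the here-document below is assumed sample output, and the exact wording of real metastat state lines may differ:

```shell
# Print every submirror whose "State:" line is anything other than "Okay".
# The heredoc stands in for a live `metastat` run.
problems=$(awk '
    /Submirror/ { sub(/:.*/, ""); dev = $1 }          # remember device name
    /State:/ {
        state = $0; sub(/.*State: */, "", state)      # text after "State: "
        if (state != "Okay") print dev ": " state
    }
' <<'EOF'
d1: Submirror of d0
    State: Okay
d3: Submirror of d0
    State: Needs maintenance
EOF
)
echo "$problems"
```

Running it against the sample prints `d3: Needs maintenance`, which could then drive an alert or exit code in a monitoring script.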
Dear Friends,
I need to configure a T5240 with the internal SAS RAID HBA (SG-XPCIESAS-R-INT-Z). The T5240 uses 8 hard disks. From the RAID card's documents I have found that I need to create a JumpStart server to include three packages, SUNWaac, StorMan and SUNWgccruntime, if I'm using Solaris 10 5/08...
... (5 Replies)
We have configured software-based RAID5 with LVM on our RHEL5 servers. Please let us know whether it is advisable to run software RAID on live-environment servers. What are the disadvantages of software RAID compared with hardware RAID? (4 Replies)
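For reference, a hedged sketch of the setup the question describes, software RAID5 under LVM with mdadm, as on RHEL5. All device, volume-group and volume names are placeholders, and these commands destroy data on the named disks:

```shell
# Build a 3-disk RAID5 array, then layer LVM on top of it.
# /dev/sdb, /dev/sdc, /dev/sdd, datavg and datalv are assumed names.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
pvcreate /dev/md0            # make the array an LVM physical volume
vgcreate datavg /dev/md0     # volume group on the array
lvcreate -L 10G -n datalv datavg   # carve out a 10 GB logical volume
```

The usual trade-off: software RAID costs some CPU for parity and depends on the OS being up, but avoids vendor lock-in to a specific controller.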
Hello,
I want to delete the RAID configuration an old server has.
Since I haven't had the chance to work with this specific RAID controller before, can you please help me perform the configuration?
I downloaded the IBM ServeRAID Support CD but I wasn't able to configure the video card, so I... (0 Replies)
Discussion started by: @dagio
LEARN ABOUT NETBSD
mfi
MFI(4) BSD Kernel Interfaces Manual MFI(4)
NAME
mfi -- LSI Logic & Dell MegaRAID SAS RAID controller
SYNOPSIS
mfi* at pci? dev ? function ?
DESCRIPTION
The mfi driver provides support for the MegaRAID SAS family of RAID controllers, including:
- Dell PERC 5/e, PERC 5/i, PERC 6/e, PERC 6/i
- Intel RAID Controller SRCSAS18E, SRCSAS144E
- LSI Logic MegaRAID SAS 8208ELP, MegaRAID SAS 8208XLP, MegaRAID SAS 8300XLP, MegaRAID SAS 8308ELP, MegaRAID SAS 8344ELP, MegaRAID SAS 8408E, MegaRAID SAS 8480E, MegaRAID SAS 8708ELP, MegaRAID SAS 8888ELP, MegaRAID SAS 8880EM2, MegaRAID SAS 9260-8i
- IBM ServeRAID M1015, ServeRAID M5014
These controllers support RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50 and RAID 60 using either SAS or SATA II drives.
Although the controllers are actual RAID controllers, the driver makes them look just like SCSI controllers. All RAID configuration is done
through the controllers' BIOSes.
mfi supports monitoring of the logical disks in the controller through the bioctl(8) and envstat(8) commands.
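A hedged example of the envstat(8) side of that monitoring; `mfi0` is an assumed attachment name (check dmesg output for the one on your system):

```shell
# Show only the sensors exposed by the mfi0 controller,
# e.g. the online/offline state of its logical drives.
envstat -d mfi0
```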
EVENTS
The mfi driver is able to send events to powerd(8) if a logical drive in the controller is not online. The state-changed event will be sent
to the /etc/powerd/scripts/sensor_drive script when such a condition occurs.
SEE ALSO
intro(4), pci(4), scsi(4), sd(4), bioctl(8), envstat(8), powerd(8)
HISTORY
The mfi driver first appeared in NetBSD 4.0.
BSD March 22, 2012 BSD