I have a DL365 with Solaris x86 installed. Currently my RAID is set as follows:
What I'd like to do is have the 3rd, 72GB drive (1i:1:3) as a standalone disk, not actually in an array or a part of "array A". At this point I'm not sure how to get this disk to be seen by Sol10. I'm not well versed with this utility either, so it could be my lack of knowledge. The docs/manuals aren't really showing me much other than actually placing the disks into an array. I've tried devfsadm and reboot -- -r without any luck.
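On HP Smart Array controllers the OS generally cannot see a raw physical drive at all; even a standalone disk usually has to be presented as its own single-disk logical volume (e.g. a one-drive RAID 0) in the controller's configuration utility before Solaris can find it. Once that is done, a rescan along these lines should surface it (a sketch; device and target names will differ on your box):

```shell
devfsadm -Cv   # rebuild /dev links and prune stale entries
cfgadm -al     # check that the new target shows up as configured
format         # the new logical drive should now be listed for labeling
```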
I enquired before about adding another disk to an ultra 10...
and was told it was possible...
Well - since then I got my hands on another ultra 10 - and have taken the disk out of that one...
I should be able to connect it up, but I am wondering: should this be mounted in any particular... (2 Replies)
Dear all ,
I am new to Sun Cluster and I am having a problem adding a disk to a disk group.
My platform consists of a clustered E6900 Sun server and an EMC DX1000 storage array.
The disk group that I am trying to add the disk to is a shared group;
when I run vxdisk list I get in status "online... (2 Replies)
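For a shared disk group, the usual VxVM sequence is to initialize the new device and then add it to the group from the node that has the group imported. A rough sketch, with hypothetical names ("shareddg" for the group, "emc_d01" for the new EMC device):

```shell
vxdisk scandisks                        # have VxVM rescan for new devices
vxdisksetup -i emc_d01                  # initialize the disk for VxVM use
vxdg -g shareddg adddisk sd01=emc_d01   # add it to the shared disk group
vxdisk list                             # status should change from "online invalid"
```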
Hi all,
I tried to add a new disk into Veritas control, but the OS/Veritas does not recognize the disk. How can I check the disk status, or whether the disk was added into Veritas? Please help me out with a step-by-step procedure. I would be really thankful to all.
regards
Krishna (4 Replies)
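A disk usually has to be visible to the OS before Veritas can claim it, so it helps to check each layer in turn. A step-by-step sketch for Solaris (device names are examples):

```shell
devfsadm -C      # build device nodes for the new LUN at the OS level
format           # confirm the OS itself sees the disk
vxdctl enable    # tell the VxVM configuration daemon to rescan
vxdisk list      # the disk should now appear, typically "online invalid"
```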
Recently added a disk to a CLARiiON array, bound it as a RAID 5, and now I have no clue how to see it in DG/UX. I want to add it to a current filesystem, and now I'm up the creek without a paddle. (0 Replies)
Hi,
On a P5 I would like to add a hard drive; currently two hard disks already exist, and I would like to add one more.
Two slots are empty, and I would like to know how to find out, using a command, whether an adapter is attached to those two free slots. (7 Replies)
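On AIX, slot and adapter inventory can be checked from the command line before (and after) seating the drive; a sketch:

```shell
lsslot -c pci       # list PCI slots and whether anything occupies them
lsdev -Cc adapter   # configured adapters (look for the SCSI/SAS ones)
lsdev -Cc disk      # disks the system currently knows about
cfgmgr              # after inserting the drive, discover new devices
```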
I am having trouble with LVM and one of my physical volumes.
Using Ubuntu Desktop 14.04
I was trying to set up LVM across two disks (not containing the OS or Home).
First I created the initial Physical Volume, the Volume Group, and the Logical volume, and everything seemed fine. The... (2 Replies)
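For reference, the three-layer setup described above normally looks like this (/dev/sdb and /dev/sdc are placeholder devices; double-check yours with lsblk first, since pvcreate is destructive):

```shell
sudo pvcreate /dev/sdb /dev/sdc             # mark both disks as physical volumes
sudo vgcreate datavg /dev/sdb /dev/sdc      # group them into one volume group
sudo lvcreate -n datalv -l 100%FREE datavg  # one LV spanning both PVs
sudo mkfs.ext4 /dev/datavg/datalv           # filesystem on the logical volume
sudo pvs && sudo vgs && sudo lvs            # verify each layer
```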
Hi all, I entered the AIX environment 4 months ago, with prior experience in Linux.
What I am facing is that I am unable to do any sort of R&D with AIX, like
installation on my own, creating VGs, managing networks, the VIOS, storage, LPARs.
So we have a setup here where almost everything is in a live production environment
with... (4 Replies)
LEARN ABOUT FREEBSD
MFI(4)                 BSD Kernel Interfaces Manual                 MFI(4)
NAME
mfi -- LSI MegaRAID SAS driver
SYNOPSIS
To compile this driver into the kernel, place the following lines in your kernel configuration file:
device pci
device mfi
Alternatively, to load the driver as a module at boot time, place the following line in loader.conf(5):
mfi_load="YES"
DESCRIPTION
This driver is for LSI's next generation PCI Express SAS RAID controllers. Access to RAID arrays (logical disks) from this driver is provided via /dev/mfid? device nodes. A simple management interface is also provided on a per-controller basis via the /dev/mfi? device node.
The mfi name is derived from the phrase "MegaRAID Firmware Interface", which is substantially different from the old "MegaRAID" interface and thus requires a new driver. Older SCSI and SATA MegaRAID cards are supported by amr(4) and will not work with this driver.
Two sysctls are provided to tune the mfi driver's behavior when a request is made to remove a mounted volume. By default the driver will
disallow any requests to remove a mounted volume. If the sysctl dev.mfi.%d.delete_busy_volumes is set to 1, then the driver will allow
mounted volumes to be removed.
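As a concrete illustration, with controller instance 0 standing in for %d:

```shell
sysctl dev.mfi.0.delete_busy_volumes=1   # use with care; data on the volume is at risk
```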
A tunable is provided to adjust the mfi driver's behavior when attaching to a card. By default the driver will attach to all known cards with high probe priority. If the tunable hw.mfi.mrsas_enable is set to 1, then the driver will reduce its probe priority to allow mrsas to attach to the card instead of mfi.
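Since this is a boot-time tunable, it would be set in loader.conf(5) in the same way as the mfi_load example above:

```
hw.mfi.mrsas_enable="1"
```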
HARDWARE
The mfi driver supports the following hardware:
o LSI MegaRAID SAS 1078
o LSI MegaRAID SAS 8408E
o LSI MegaRAID SAS 8480E
o LSI MegaRAID SAS 9240
o LSI MegaRAID SAS 9260
o Dell PERC5
o Dell PERC6
o IBM ServeRAID M1015 SAS/SATA
o IBM ServeRAID M1115 SAS/SATA
o IBM ServeRAID M5015 SAS/SATA
o IBM ServeRAID M5110 SAS/SATA
o IBM ServeRAID-MR10i
o Intel RAID Controller SRCSAS18E
o Intel RAID Controller SROMBSAS18E
FILES
/dev/mfid? array/logical disk interface
/dev/mfi? management interface
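The management node is what mfiutil(8) talks to; a few read-only queries that exercise it (the first controller is assumed):

```shell
mfiutil show adapter   # controller model and firmware version
mfiutil show volumes   # logical disks backing the /dev/mfid? nodes
mfiutil show drives    # physical drives behind the controller
```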
DIAGNOSTICS
mfid%d: Unable to delete busy device
An attempt was made to remove a mounted volume.
SEE ALSO
amr(4), pci(4), mfiutil(8)
HISTORY
The mfi driver first appeared in FreeBSD 6.1.
AUTHORS
The mfi driver and this manual page were written by Scott Long <scottl@FreeBSD.org>.
BUGS
The driver does not support big-endian architectures at this time.
BSD July 15, 2013 BSD