03-04-2012
Juri_al:
Can you tell me how to check, using smitty, whether a SCSI/SAS adapter is connected to the backplane?
Commands such as lsdev, lscfg, lsattr and lsslot let you check which devices are configured, but a SAS cable, or even a backplane split, is not recognized as a device. You cannot verify that without checking the documentation, or in some cases without someone physically inspecting the machine.
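As a rough sketch of what you can see from the OS side (the adapter name sissas0 below is only an example; your device names will differ):

  lsdev -Cc adapter | grep -i sas   # SAS adapters known to the ODM
  lscfg -vpl sissas0                # VPD and physical location code of one adapter
  lsdev -p sissas0                  # devices configured under that adapter
  lsattr -El sissas0                # adapter attributes
  lsslot -c pci                     # PCI slots and the adapters plugged into them

These commands show what AIX has configured, not the physical cabling, so the location codes reported by lscfg still have to be matched against the system's service documentation to confirm which backplane (or backplane half) the adapter is actually wired to.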
MFI(4) BSD Kernel Interfaces Manual MFI(4)
NAME
mfi -- LSI MegaRAID SAS driver
SYNOPSIS
To compile this driver into the kernel, place the following lines in your kernel configuration file:
device pci
device mfi
Alternatively, to load the driver as a module at boot time, place the following line in loader.conf(5):
mfi_load="YES"
DESCRIPTION
This driver is for LSI's next generation PCI Express SAS RAID controllers. Access to RAID arrays (logical disks) from this driver is provided via /dev/mfid? device nodes. A simple management interface is also provided on a per-controller basis via the /dev/mfi? device node.
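For example, once the driver has attached, one way to inspect the logical disks and the physical drives behind them is with mfiutil(8) (a sketch; the subcommands below operate on controller unit 0 by default):

  mfiutil show volumes   # logical disks exposed as /dev/mfid? nodes
  mfiutil show drives    # physical drives attached to the controller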
The mfi name is derived from the phrase "MegaRAID Firmware Interface", which is substantially different from the old "MegaRAID" interface and thus requires a new driver. Older SCSI and SATA MegaRAID cards are supported by amr(4) and will not work with this driver.
Two sysctls are provided to tune the mfi driver's behavior when a request is made to remove a mounted volume. By default the driver will
disallow any requests to remove a mounted volume. If the sysctl dev.mfi.%d.delete_busy_volumes is set to 1, then the driver will allow
mounted volumes to be removed.
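For example, to allow a mounted volume on the first controller (mfi0) to be removed, the sysctl can be set as follows:

  sysctl dev.mfi.0.delete_busy_volumes=1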
HARDWARE
The mfi driver supports the following hardware:
o LSI MegaRAID SAS 1078
o LSI MegaRAID SAS 8408E
o LSI MegaRAID SAS 8480E
o LSI MegaRAID SAS 9260
o Dell PERC5
o Dell PERC6
o IBM ServeRAID M5015 SAS/SATA
o IBM ServeRAID-MR10i
o Intel RAID Controller SROMBSAS18E
FILES
/dev/mfid? array/logical disk interface
/dev/mfi? management interface
DIAGNOSTICS
mfid%d: Unable to delete busy device
An attempt was made to remove a mounted volume.
SEE ALSO
amr(4), pci(4), mfiutil(8)
HISTORY
The mfi driver first appeared in FreeBSD 6.1.
AUTHORS
The mfi driver and this manual page were written by Scott Long <scottl@FreeBSD.org>.
BUGS
The driver does not support big-endian architectures at this time.
BSD                               May 12, 2010                               BSD