Operating Systems > Solaris: One StorageTek 6140 vs Two (2) 2540's? Post 302197107 by lowbyte on Tuesday 20th of May 2008 08:07:16 AM
Hey,

the 2540 will be faster than the 3510, and the 6140 is faster than the 2540.
Why?
The 2540 uses SAS or SATA drives at 3 Gbit/s; only its host connections run at 4 Gbit/s. The 6140 uses FC disks at 4 Gbit/s, so on the 6140 the full 4 Gbit/s path reaches all the way to the disks. The 6140's controller cache is also larger.
Here are some ballpark numbers from the SUN/LSI test center:
2540: ca. 100k IOPS, ca. 600 MB/s
6140: ca. 200k IOPS, ca. 1000 MB/s
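
As a rough sanity check on those link rates (back-of-the-envelope, not Sun's numbers): FC and SAS/SATA of that generation use 8b/10b encoding, so only 8 of every 10 bits on the wire are payload. That caps a 3 Gbit/s lane at about 300 MB/s and a 4 Gbit/s link at about 400 MB/s:

    # Per-link payload ceiling, assuming 8b/10b encoding (8 data bits
    # per 10 line bits); nominal line rates in Mbit/s.
    $ echo "3 Gbit/s SAS/SATA: $(( 3000 * 8 / 10 / 8 )) MB/s per lane"
    3 Gbit/s SAS/SATA: 300 MB/s per lane
    $ echo "4 Gbit/s FC:       $(( 4000 * 8 / 10 / 8 )) MB/s per link"
    4 Gbit/s FC:       400 MB/s per link

So the array-level figures above imply several links working in parallel; no single port gets you 600 or 1000 MB/s at this generation.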

CU
lowbyte
 

8 More Discussions You Might Find Interesting

1. Solaris

Sun Sparc T2000 + StorageTek 2540 - Help, I'm lost

I have a Sun SPARC T2000 (Solaris 10 05-08) and have installed a PCI-X 4Gb single-port HBA card in it. I have one StorageTek 2540 array that I would like to connect to the T2000. For the moment it would be a single-path connection, but I've ordered a 2nd HBA, so eventually it would be... (4 Replies)
Discussion started by: soliberus

2. Filesystems, Disks and Memory

Configure large volume on Sun StorageTek 2540 array

Hi, we have 12x1TB SATA disks in our array and I need to create a 10TB volume. I defined a new storage profile on the array, and when I tried to add a volume I ran into a ~2TB limit for new volumes. I couldn't find how to raise that limit in my storage profile. Is there a way to configure one large... (3 Replies)
Discussion started by: Sapfeer

3. Solaris

Sun StorageTek 2540 - shutdown matrix

Hi, I have a simple question: how do I correctly shut down the array? I've been looking in the Common Array Manager and I do not see this option... Thanks for any help (3 Replies)
Discussion started by: bieszczaders

4. Solaris

Storagetek 2540

Hi peeps, was wondering if anyone can help me; I've got a couple of StorageTek 2540's that I need to configure. Trouble is, I think they were bought second-hand, as all that came with them was cables. Does anyone know how to configure them (i.e. create RAID sets, map them to LUNs and present... (1 Reply)
Discussion started by: callmebob

5. Hardware

Storagetek 2540

Hello all! I am a beginner at systems and networking, and after some research on the internet I didn't find any relevant information, so I am posting here in case someone has experience with this kind of hardware, or has documentation about how to connect my storage device to my server. I... (2 Replies)
Discussion started by: acorradi

6. Hardware

StorageTek 2540 battery failed

Hi all, my Sun StorageTek 2540 has redundant batteries, but one battery has failed. # /opt/SUNWstkcam/bin/sscs list -d MyStorage1 fru Name FRU Alarm State Status Revision Unique Id -------------------------- ----------- --------... (2 Replies)
Discussion started by: buyantugs

7. Hardware

Storagetek 2540

Hi guys and gals, wonder if you could help me; I've got a problem with a controller on a StorageTek 2540: it's dead when fully powered up, but if you reboot it is OK for a couple of seconds (and you can ping it). Then, once the array is fully up, it goes faulty and is un-pingable. Has anyone... (4 Replies)
Discussion started by: callmebob

8. Solaris

StorageTek 2540 SAN array

Bought a Sun StorageTek 2540 SAN array a few years ago from a company that was going out of business. When we first set it up, we were able to get all the software (Common Array Manager) and firmware directly from Sun. We just upgraded the drives, but the array is too large for the firmware. Now... (6 Replies)
Discussion started by: edison303
MFI(4)                    BSD Kernel Interfaces Manual                    MFI(4)

NAME
     mfi -- LSI MegaRAID SAS driver

SYNOPSIS
     To compile this driver into the kernel, place the following lines in your
     kernel configuration file:

           device pci
           device mfi

     Alternatively, to load the driver as a module at boot time, place the
     following line in loader.conf(5):

           mfi_load="YES"

DESCRIPTION
     This driver is for LSI's next generation PCI Express SAS RAID
     controllers. Access to RAID arrays (logical disks) from this driver is
     provided via /dev/mfid? device nodes. A simple management interface is
     also provided on a per-controller basis via the /dev/mfi? device node.

     The mfi name is derived from the phrase "MegaRAID Firmware Interface",
     which is substantially different than the old "MegaRAID" interface and
     thus requires a new driver. Older SCSI and SATA MegaRAID cards are
     supported by amr(4) and will not work with this driver.

     Two sysctls are provided to tune the mfi driver's behavior when a request
     is made to remove a mounted volume. By default the driver will disallow
     any requests to remove a mounted volume. If the sysctl
     dev.mfi.%d.delete_busy_volumes is set to 1, then the driver will allow
     mounted volumes to be removed.

HARDWARE
     The mfi driver supports the following hardware:

     o   LSI MegaRAID SAS 1078
     o   LSI MegaRAID SAS 8408E
     o   LSI MegaRAID SAS 8480E
     o   LSI MegaRAID SAS 9260
     o   Dell PERC5
     o   Dell PERC6
     o   IBM ServeRAID M5015 SAS/SATA
     o   IBM ServeRAID-MR10i
     o   Intel RAID Controller SROMBSAS18E

FILES
     /dev/mfid?   array/logical disk interface
     /dev/mfi?    management interface

DIAGNOSTICS
     mfid%d: Unable to delete busy device
             An attempt was made to remove a mounted volume.

SEE ALSO
     amr(4), pci(4), mfiutil(8)

HISTORY
     The mfi driver first appeared in FreeBSD 6.1.

AUTHORS
     The mfi driver and this manual page were written by Scott Long
     <scottl@FreeBSD.org>.

BUGS
     The driver does not support big-endian architectures at this time.

BSD                              May 12, 2010                              BSD
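
For what it's worth, checking and flipping the delete_busy_volumes tunable described in the DESCRIPTION above would look roughly like this from a FreeBSD shell (a sketch; controller instance 0 is assumed):

    # Show the current setting for controller mfi0 (default 0:
    # refuse to remove mounted volumes).
    sysctl dev.mfi.0.delete_busy_volumes

    # Allow mounted volumes on mfi0 to be removed.
    sysctl dev.mfi.0.delete_busy_volumes=1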