Full Discussion: HW Raid poor io performance
UNIX for Advanced & Expert Users · Post 302461302 by roli8200 · Sunday 10th of October 2010, 03:53:43 AM
HW Raid poor io performance

Hello all

We just built a storage cluster for our new XenServer farm, using 3ware 9650SE RAID controllers with 8 x 1 TB WD SATA disks in a RAID 5 array with a 256 KB stripe size.

While running first performance tests on the local storage server using dd (which simulates read/write access to the disk in much the same way the iSCSI target will later), we see very strange performance values.

Using the default dd block size (the hardware-reported 512 bytes) directly on the device (/dev/sdb) gives around 44 MB/s write performance.

Using dd with a 1 MB block size (bs=1M) gives around 587 MB/s write performance.
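For anyone who wants to reproduce the comparison, here is a minimal sketch. It writes to a scratch file instead of the raw /dev/sdb so it is safe to run anywhere; on the real array you would target the device and add oflag=direct to bypass the page cache:

```shell
# Compare write throughput at a 512-byte vs a 1 MB block size.
# Each command writes 10 MB; dd prints the throughput on stderr.
dd if=/dev/zero of=/tmp/ddtest bs=512 count=20480 conv=fsync
dd if=/dev/zero of=/tmp/ddtest bs=1M count=10 conv=fsync
rm -f /tmp/ddtest
```

The gap between the two runs shows how much per-request overhead dominates at small block sizes.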

Partition alignment also makes a huge difference: between 28 MB/s and 250 MB/s (at the 512-byte block size).
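The usual fix for that is to start the partition on a stripe boundary. A small sketch of the arithmetic, assuming 512-byte sectors and the 256 KB stripe size mentioned above:

```shell
# Round a partition start sector up to the next stripe boundary.
# 256 KB stripe / 512-byte sectors = 512 sectors per stripe.
STRIPE_SECTORS=$((256 * 1024 / 512))
START=63   # the classic misaligned DOS partitioning default
ALIGNED=$(( (START + STRIPE_SECTORS - 1) / STRIPE_SECTORS * STRIPE_SECTORS ))
echo "$ALIGNED"   # first stripe-aligned sector at or after 63
```

You can then create the partition at that start sector (e.g. with fdisk in sector units via `fdisk -u`).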

The values are the same across different Linux distros: CentOS, Fedora 13, Ubuntu, SLES.

I know it must have something to do with the stripe size and with scheduler settings such as queue_depth and nr_requests, but I can't see the relation between all these settings.
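For reference, these are the block-layer knobs in question, as a sketch assuming the array shows up as /dev/sdb; the values are starting points to experiment with, not tuned recommendations:

```shell
# Block-layer tunables for the array (run as root).
cat /sys/block/sdb/queue/scheduler              # show available/current I/O scheduler
echo deadline > /sys/block/sdb/queue/scheduler  # CFQ often underperforms on HW RAID
echo 512 > /sys/block/sdb/queue/nr_requests     # allow more requests to queue in the block layer
echo 4096 > /sys/block/sdb/queue/read_ahead_kb  # larger readahead for sequential I/O
cat /sys/block/sdb/device/queue_depth           # controller-side queue depth
```

The rough relation: nr_requests bounds how many requests the kernel can queue and merge into large, stripe-sized I/Os before they reach the controller, while queue_depth bounds how many the controller itself keeps in flight.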

Is there an expert who can give me a little help getting this done? It would be very much appreciated, especially since we have been working on this issue for more than two weeks, have read all the available documentation on these topics, and the people from 3ware couldn't help us yet.

Thanks in advance.

Roland Kaeser
 

mega_sas(7D)                          Devices                          mega_sas(7D)

NAME
       mega_sas - SCSI HBA driver for LSI MegaRAID SAS controller

DESCRIPTION
       The mega_sas MegaRAID controller host bus adapter driver is a
       SCSA-compliant nexus driver that supports the Dell PERC 5/E, 5/i, 6/E
       and 6/i RAID controllers, the IBM ServeRAID-MR10k SAS/SATA controller
       and the LSI MegaRAID SAS/SATA 8308ELP, 8344ELP, 84016E, 8408ELP,
       8480ELP, 8704ELP, 8704EM2, 8708ELP, 8708EM2, 8880EM2 and 8888ELP
       series of controllers.

       Supported RAID features include RAID levels 0, 1, 5, and 6, RAID
       spans 10, 50 and 60, online capacity expansion (OCE), online RAID
       level migration (RLM), auto resume after loss of system power during
       array rebuild or reconstruction, and configurable stripe size up to
       1MB. Additional supported RAID features include check consistency for
       background data integrity, patrol read for media scanning and
       repairing, 64 logical drive support, up to 64TB LUN support,
       automatic rebuild, and global and dedicated hot spare support.

CONFIGURATION
       The mega_sas.conf file contains no user configurable parameters.
       Please configure your hardware through the related BIOS utility or
       the MegaCli configuration utility. If you want to install to a drive
       attached to a mega_sas HBA, you should create the virtual drive first
       from the BIOS before running the Solaris install. You can obtain the
       MegaCli utility from the LSI website.

       The mega_sas device can support up to 64 virtual disks. Note that the
       BIOS numbers the virtual disks as 1 through 64; in the Solaris
       operating environment, however, virtual disks are numbered from 0 to
       63. Also note that SAS and SATA drives cannot be configured into the
       same virtual disk.

KNOWN PROBLEMS AND LIMITATIONS
       The mega_sas driver does not support the LSI MegaRAID SAS 8204ELP,
       8204XLP, 8208ELP, and 8208XLP controllers.

FILES
       /kernel/drv/mega_sas            32-bit ELF kernel module (x86).

       /kernel/drv/amd64/mega_sas      64-bit kernel module (x86).

       /kernel/drv/mega_sas.conf       Driver configuration file (contains
                                       no user-configurable options).

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Architecture                 |x86-based systems            |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWmegasas                  |
       +-----------------------------+-----------------------------+
       |Interface stability          |Uncommitted                  |
       +-----------------------------+-----------------------------+

SEE ALSO
       prtconf(1M), attributes(5), sata(7D), scsi_hba_attach_setup(9F),
       scsi_sync_pkt(9F), scsi_transport(9F), scsi_inquiry(9S),
       scsi_device(9S), scsi_pkt(9S)

       Small Computer System Interface-2 (SCSI-2)

SunOS 5.11                        14 Aug 2008                     mega_sas(7D)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.