Post 302461302 by roli8200 in UNIX for Advanced & Expert Users, Sunday 10 October 2010, 03:53 AM
HW RAID poor I/O performance

Hello all

We just built a storage cluster for our new XenServer farm, using 3ware 9650SE RAID controllers with 8 x 1 TB WD SATA disks in a RAID 5 with a 256 KB stripe size.

While running the first performance tests on the local storage server with dd (which simulates the read/write access to the disk roughly the same way the iSCSI target will do it later), we see very strange performance values.

Using dd with its default block size (the hardware-reported 512 bytes) directly on the device (/dev/sdb) gives around 44 MB/s write throughput.

Using dd with a 1 MB block size (bs=1M) gives around 587 MB/s write throughput.
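For anyone who wants to reproduce the comparison, this is roughly what the two tests look like (a sketch only: /dev/sdb is the RAID device from above and gets overwritten; oflag=direct is optional and bypasses the page cache so the numbers reflect the controller rather than RAM, and the original runs may not have used it):

Code:
# Write test with dd's default 512-byte block size (destroys data on /dev/sdb).
dd if=/dev/zero of=/dev/sdb bs=512 count=2000000 oflag=direct

# Same test with a 1 MiB block size; on this setup it is roughly an order of
# magnitude faster.
dd if=/dev/zero of=/dev/sdb bs=1M count=4096 oflag=direct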

Partition alignment also makes a huge difference: between 28 MB/s and 250 MB/s (at a 512-byte block size).
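For context, a sketch of how one can check and change the partition start (device name and numbers are illustrative only); with a 256 KB stripe, any start sector that is a multiple of 512 keeps writes stripe-aligned:

Code:
# Show where the existing partition starts, in 512-byte sectors.
parted /dev/sdb unit s print

# Recreate the partition starting at sector 512 (= 256 KiB, one full stripe unit).
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 512s 100%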

The values are the same across different Linux distros: CentOS, Fedora 13, Ubuntu, SLES.

I know it must have something to do with the stripe size and the block layer settings such as queue_depth and nr_requests, but I can't see the relation between all these settings.
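For completeness, these are the knobs I mean; a minimal sketch of how to read and change them on the RAID device (the values are examples only, not settings I know to be correct, which is exactly my question):

Code:
# Current scheduler, queue sizes and read-ahead for the 3ware unit (/dev/sdb).
cat /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/nr_requests
cat /sys/block/sdb/device/queue_depth
blockdev --getra /dev/sdb

# Example values: queue more requests and read ahead one full stripe width
# (7 data disks x 256 KB = 1792 KB = 3584 sectors) so I/O can be coalesced.
echo 512 > /sys/block/sdb/queue/nr_requests
blockdev --setra 3584 /dev/sdb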

Is there an expert who can give me a little help getting this sorted out? It would be much appreciated, especially since we have been working on this issue for more than two weeks, have read all the available documentation on these topics, and the people from 3ware couldn't help us yet.

Thanks in advance.

Roland Kaeser
 

9 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Samba on E3500 Poor Performance!!!

Hi you all, I have a BIG performance problem on an Sun E3500, the scenario is described below: I have several users (30) accessing via samba to the E3500 using an application built on Visual Foxpro from their Windows PC , the problem is that the first guy that logs in demands 30% of the E3500... (2 Replies)
Discussion started by: alex blanco
2 Replies

2. Filesystems, Disks and Memory

Poor read performance on sun storedge a1000

Hello, i have a a1000 connected to an e6500. There's a raid 10 (12 disks) on the a1000. If i do a dd if=/dev/zero of=/mnt/1 bs=1024k count=1000 and then look at iostat it tells me there's a kw/s of 25000. But if i do a dd of=/dev/zero if=/mnt/1 bs=1024k count=1000 then i see only a... (1 Reply)
Discussion started by: mbrenner
1 Replies

3. UNIX for Dummies Questions & Answers

poor performance processing file with awk

Hello, I'm running a script on AIX to process lines in a file. I need to enclose the second column in quotation marks and write each line to a new file. I've come up with the following: #!/bin/ksh filename=$1 exec >> $filename.new cat $filename | while read LINE do echo $LINE | awk... (2 Replies)
Discussion started by: scooter53080
2 Replies

4. Solaris

Poor Disk performance on ZFS

Hello, we have a machine with Solaris Express 11, 2 LSI 9211 8i SAS 2 controllers (multipath to disks), multiport backplane, 16 Seagate Cheetah 15K RPM disks. Each disk has a sequential performance of 220/230 MB/s and in fact if I do a dd if=/dev/zero of=/dev/rdsk/<diskID_1> bs=1024k... (1 Reply)
Discussion started by: golemico
1 Replies

5. Solaris

Poor disk performance however no sign of failure

Hello guys, I have two servers performing the same disk operations. I believe one server is having a disk's impending failure however I have no hard evidence to prove it. This is a pair of Netra 210's with 2 drives in a hardware raid mirror (LSI raid controller). While performing intensive... (4 Replies)
Discussion started by: s ladd
4 Replies

6. AIX

Poor Performance of server

Hi, I am new registered user here in this UNIX forums. I am a new system administrator for AIX 6.1. One of our servers performs poorly every time our application (FINACLE) runs many processes/instances. (see below for topas snapshot) I use NMON or Topas to monitor the server utilization. I... (9 Replies)
Discussion started by: guzzelle
9 Replies

7. Solaris

Poor performance on an M3000

Hi We have an M3000 single physical processor and 8gb of memory running Solaris 10. This system runs two Oracle Databases one on Oracle 9i and One on Oracle 10g. As soon as the Oracle 10g database starts we see an immediate drop in system performance, for example opening an ssh session can... (6 Replies)
Discussion started by: gregsih
6 Replies

8. AIX

ISCSI poor performance 1.5MB/s fresh install AIX7.1

Hi Everyone, I have been struggling for few days with iSCSI and thought I could get some help on the forum... fresh install of AIX7.1 TL4 on Power 710, The rootvg relies on 3 SAS disks in RAID 0, 32GB Memory The lpar Profile is using all of the managed system's resources. I have connected... (11 Replies)
Discussion started by: frenchy59
11 Replies

9. Windows & DOS: Issues & Discussions

Poor Windows 10 Performance of Parallels Desktop 15 on macOS Catalina

Just a quick note for macOS users. I just installed (and removed) Parallels Desktop 15 Edition on my MacPro (2013) with 64GB memory and 12-cores, which is running the latest version of macOS Catalina as of this post. The reason for this install was to test some RIGOL test gear software which... (6 Replies)
Discussion started by: Neo
6 Replies