Full Discussion: Poor Disk performance on ZFS
Solaris: Post 302570397 by golemico, Thursday 3 November 2011, 08:20 AM
Poor Disk performance on ZFS

Hello,
we have a machine running Solaris Express 11 with two LSI 9211-8i SAS-2 controllers (multipathed to the disks), a multiport backplane, and 16 Seagate Cheetah 15K RPM disks.
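For reference, the multipath setup can be confirmed with the standard Solaris multipath administration tool, something like:

mpathadm list lu          # lists multipathed logical units and their path counts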

Each disk has a sequential throughput of 220-230 MB/s, and indeed if I do a

dd if=/dev/zero of=/dev/rdsk/<diskID_1> bs=1024k count=16384

writing 16 GB of sequential zeros runs at that speed (iostat confirms it, with the load round-robined across the two controllers and 100% disk utilization).
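For reference, throughput can be watched while the dd runs with extended device statistics in MB/s at 5-second intervals:

iostat -xnM 5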

If I now create a simple zpool made of a single disk (the same disk as above) with

zpool create tank <diskID_1>

and then issue

dd if=/dev/zero of=/tank/test bs=1024k count=16384

iostat shows a disk throughput of less than 20 MB/s (one tenth of the raw speed!) with disk utilization around 96% (zpool iostat shows similar figures).
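A minimal repro sketch of the above (the device name is a placeholder for <diskID_1>):

zpool create tank c0t5000C500ABCDEF01d0     # placeholder multipathed device name
dd if=/dev/zero of=/tank/test bs=1024k count=16384 &
zpool iostat -v tank 5                      # per-vdev bandwidth every 5 seconds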


I also tried setting checksum=off, but it does not change anything.
The zpool was created with default values (compression=off, dedup=off, recordsize=128k).

Of course if I turn compression and dedup on, everything changes, but this is a "trick": it only works because the file is all zeros.
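For reference, the property changes above are of the form (applied to the pool's root dataset):

zfs set checksum=off tank
zfs set compression=on tank
zfs set dedup=on tank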

I tried changing zfs_vdev_max_pending to 1 and to 64 (the default is 10) and nothing changes.
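For reference, a sketch of how this tunable can be changed, either live with mdb or persistently in /etc/system (values match the ones tried above):

echo zfs_vdev_max_pending/W0t1 | mdb -kw     # live change; 0t1 is decimal 1
set zfs:zfs_vdev_max_pending = 64            # line added to /etc/system, takes effect after reboot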

Does anyone have a clue what is going on?
 
