Filesystems, Disks and Memory: Poor read performance on Sun StorEdge A1000
Posted by TonyFullerMalv on Monday, 30 March 2009
In a RAID 10 made up of 12 disks, six disks form one striped volume, which is mirrored against a second striped volume of six disks. Writes are normally slower than reads because the mirroring process must commit every write to both striped volumes, whereas a read can be satisfied from just one of them. For example, a read-heavy benchmark can be serviced entirely from one six-disk stripe, while every write must complete on both stripes before it is done.

I think writing to /dev/zero is not a good idea; I would try writing to /dev/null instead, since /dev/null is the standard discard device.

It would also be interesting to compare reading from /dev/random with reading from /dev/zero.
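
For illustration, here is roughly what the dd tests being discussed look like. This is only a sketch: the raw device path c1t0d0s2 and the mount point /array are made-up placeholders, so substitute your own.

    # Read test: stream from the raw array device, discarding the data
    # into /dev/null so only read throughput is measured.
    time dd if=/dev/rdsk/c1t0d0s2 of=/dev/null bs=1024k count=1024

    # Write test: write zeros to a scratch file on a filesystem that
    # lives on the array. Do NOT write to the raw device itself -- that
    # would destroy its contents.
    time dd if=/dev/zero of=/array/ddtest.tmp bs=1024k count=1024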
 

raidreconf(8)						      System Manager's Manual						     raidreconf(8)

NAME
       raidreconf - reconfigure RAID arrays

SYNOPSIS
       raidreconf -h {--help}
       raidreconf -V {--version}
       raidreconf -o oldraidtab -n newraidtab -m /dev/md?
       raidreconf -i /dev/sd?? -n newraidtab -m /dev/md?
       raidreconf -n newraidtab -m /dev/md? -e /dev/sd??

WARNING
       You should back up all data BEFORE any attempt is made to reconfigure
       a RAID device. YOU HAVE BEEN WARNED. The author will give you no
       guarantee whatsoever that this program works in any specific way at
       all. It may well destroy all data on any device connected directly,
       indirectly, or not at all, to any system this software is used on.
       Please use this stuff with care, if you decide to use it at all.
       Ok, that said, let's see how to actually use it :-)

DESCRIPTION
       raidreconf will read two raidtab files, an old one and a new one. It
       will then rebuild your old array to match the configuration of the
       new array, while retaining all data possible. It can also be used to
       import a single block device into a RAID array (using more block
       devices), or to export a RAID array to a single block device.

       raidreconf can, of course, only retain your original data if you
       grow the configuration. If you shrink the configuration from, say,
       P bytes to Q bytes, raidreconf will retain the first Q bytes of your
       original data, but everything from Q bytes to the end of the old
       array (at P bytes) will be lost.

       Currently raidreconf can grow and shrink RAID-0 and RAID-5 arrays,
       and import non-RAID devices into a new RAID-0 or RAID-5. The whole
       purpose of raidreconf is to be able to add disks to an existing
       array, or to convert it to a new type (e.g. RAID-0 to RAID-5),
       without losing data. raidreconf will move the existing data around
       on your array to match the layout of the new array.

OPTIONS
       -h {--help}
              Raidreconf will print a short help message and exit.

       -V {--version}
              Raidreconf will print its version information and exit.

       -o {--old} oldraidtab
              Specifies the path name of the old (current) raidtab. NOTE:
              raidreconf performs some tests to ensure that this
              configuration file matches the RAID superblocks stored on the
              disk, but there may be scenarios where the two conflict
              without being detected as such. Be very careful to specify
              this file properly.

       -n {--new} newraidtab
              Specifies the path name of the new raidtab. After raidreconf
              finishes, copy the newraidtab to the oldraidtab location, as
              raidreconf doesn't perform this (potentially dangerous)
              operation itself.

       -m {--mddev} /dev/md?
              Specifies the name of the RAID array to modify.

       -i {--import} /dev/sd??
              Specifies the name of the device to import from.

       -e {--export} /dev/sd??
              Specifies the name of the device to export to.
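EXAMPLES
       An illustrative sketch (not taken from the original manual) of
       growing a RAID-0 array from two disks to three. The device names,
       file paths, and raidtab contents below are hypothetical
       placeholders; see raidtab(5) for the exact file format.

       # /etc/raidtab.old -- current two-disk RAID-0
       raiddev /dev/md0
           raid-level              0
           nr-raid-disks           2
           persistent-superblock   1
           chunk-size              32
           device                  /dev/sdb1
           raid-disk               0
           device                  /dev/sdc1
           raid-disk               1

       # /etc/raidtab.new -- the same array grown to three disks
       raiddev /dev/md0
           raid-level              0
           nr-raid-disks           3
           persistent-superblock   1
           chunk-size              32
           device                  /dev/sdb1
           raid-disk               0
           device                  /dev/sdc1
           raid-disk               1
           device                  /dev/sdd1
           raid-disk               2

       # Back up, stop the array, reconfigure, then install the new
       # raidtab as the current one (raidreconf does not do this itself).
       raidstop /dev/md0
       raidreconf -o /etc/raidtab.old -n /etc/raidtab.new -m /dev/md0
       cp /etc/raidtab.new /etc/raidtab.old
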
BUGS
       Perhaps many. Well, the basic RAID-0 growth, shrink, and import
       algorithms seem to work, but lots of consistency checks and graceful
       error handling are still missing. The RAID-5 algorithms are
       simplistic, with little optimization other than that provided by the
       buffer layer. Conversions between non-RAID, RAID-0, and RAID-5 all
       *seem* to work, but there may be some bugs left yet. If an error
       occurs during reconfiguration, a power failure for example, restore
       from backup (you DID make a backup, right?) and try again. Although
       RAID-4 is not supported, and almost no one uses it, it would be
       almost trivial to add.

REPORTING BUGS
       Since this is highly experimental software, there are a number of
       known bugs already. The author would of course like to know about
       bugs, but at this stage in development you shouldn't waste too much
       of your time trying to hunt them down. They're probably known, and
       maybe already fixed in the author's tree. Report bugs to
       <bugs@oss.connex.com>.

AUTHOR
       raidreconf was written in 1999 by Jakob Oestergaard
       <jakob@ostenfeld.dk>. The RAID-5 routines were written in 2001 by
       Daniel S. Cox <dcox@connex.com>.

SEE ALSO
       mkraid(8), raidtab(5), raidstart(8), raidhotadd(8),
       raidhotremove(8), raidstop(8)

raidreconf(8)