Post 302301483 by mbrenner on Friday 27th of March 2009, 03:02 AM
Poor read performance on Sun StorEdge A1000

Hello,

I have an A1000 connected to an E6500, with a RAID 10 (12 disks) configured on the A1000.
If I run
dd if=/dev/zero of=/mnt/1 bs=1024k count=1000
and then look at iostat, it reports around 25000 kw/s.

But if I run
dd of=/dev/zero if=/mnt/1 bs=1024k count=1000
I see only around 13000 kr/s.

So read performance is lower than write performance?

Can someone tell me what I am doing wrong?
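
For completeness, here is the retest I have in mind, taking the filesystem cache out of the picture (the raw device path is just a placeholder for one of the A1000 LUNs on my system):

# Remount so the test file is no longer cached, then read it back
# (assumes /mnt is listed in /etc/vfstab).
umount /mnt
mount /mnt
dd if=/mnt/1 of=/dev/null bs=1024k count=1000

# Read straight from the raw device, bypassing the filesystem entirely.
dd if=/dev/rdsk/cXtXdXsX of=/dev/null bs=1024k count=1000

# Watch throughput in a second terminal while the tests run.
iostat -xn 5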
Regards
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Veritas/StorEdge A1000

Hi, I am a DBA and very new to filesystems. I think we have Veritas filesystems on my Sun Solaris 5.8 box; how do I confirm this? All my filesystems are mounted like this: /dev/vx/dsk... Now we are also using disk arrays (StorEdge A1000); how do I access them from the system?... (1 Reply)
Discussion started by: knarayan
1 Reply
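
For the Veritas question above, a quick sanity check might look like this on Solaris (a sketch; the disk group and volume names are made-up examples):

# Are the Veritas packages installed at all?
pkginfo | grep -i vrts

# What filesystem type is on a given Veritas volume?
fstyp /dev/vx/dsk/datadg/datavol

# List the disks (including array LUNs) under Veritas Volume Manager control.
vxdisk list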

2. UNIX for Advanced & Expert Users

Samba on E3500 Poor Performance!!!

Hi all, I have a BIG performance problem on a Sun E3500; the scenario is described below. I have about 30 users accessing the E3500 via Samba from their Windows PCs, using an application built on Visual FoxPro. The problem is that the first user who logs in demands 30% of the E3500... (2 Replies)
Discussion started by: alex blanco
2 Replies

3. UNIX for Advanced & Expert Users

StorEdge A1000 Controller Firmware question

Hello everyone. I'm trying to set up two A1000s connected to a single host with a dual-port adapter. The host is a V480. Do I need to have the same firmware version on both controllers for the A1000s? If so, where can I download the latest and greatest firmware? I tried to Google for it and... (8 Replies)
Discussion started by: xnightcrawl
8 Replies
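
For the firmware question above, with the RAID Manager 6 (RM6) software installed the controller levels can be read from the host (a sketch; the controller path is an example of the kind reported by lad):

# List the arrays and controllers RM6 can see.
lad

# Show inquiry data for one controller, including firmware and boot level.
raidutil -c c1t5d0 -i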

4. UNIX for Advanced & Expert Users

Help: Sun Disk partitioning for Sun V240 & StorEdge 3300

Dear Sun gurus, I have a Sun Fire V240 server with a StorEdge 3300 disk array. Below are its disks as they appear in the format command. I prepared its partitions through format, metainit, and metattach (maybe I made some wrong steps, causing the errors below, because I followed some document... (1 Reply)
Discussion started by: shafeeq
1 Reply
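
For the metainit/metattach steps mentioned above, a minimal Solaris Volume Manager mirror looks roughly like this (a sketch; it assumes metadb state database replicas already exist, and the slice names are examples):

# One-way stripes on a disk from each controller path.
metainit d11 1 1 c1t0d0s0
metainit d12 1 1 c2t0d0s0

# Build the mirror from the first submirror, then attach the second;
# metattach starts the resync.
metainit d10 -m d11
metattach d10 d12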

5. Solaris

How can I connect a StorEdge A1000 to an E250 box?

Hello experts, I am using an E250 with Solaris 10 5/08 installed, and I am unable to see the disks. I connected 2 disks of 18 GB each in that storage. When I run the format command it shows 2 disks: one is the operating system disk and the other one is 6 MB. I checked probe-scsi and probe-scsi-all at the ok... (6 Replies)
Discussion started by: younus_syed
6 Replies
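
On the A1000 question above, note that the array presents LUNs through its controller rather than as individual disks, so format will not show the drives directly; the usual sequence is something like this (a sketch, assuming RAID Manager 6 is installed on the host):

ok probe-scsi-all    (at the OBP prompt the A1000 appears as a single controller)
ok boot -r           (reconfiguration boot so Solaris builds the device nodes)

# Back in Solaris: rebuild device links, then list what RM6 sees.
devfsadm
lad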

6. UNIX for Advanced & Expert Users

HW RAID poor I/O performance

Hello all, we just built a storage cluster for our new XenServer farm, using 3ware 9650SE RAID controllers with 8 x 1 TB WD SATA disks in a RAID 5 with a 256 KB stripe size. While making a first performance test on the local storage server using dd (which simulates the read/write access to the disk... (1 Reply)
Discussion started by: roli8200
1 Reply
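
For dd tests like the one above, GNU dd on Linux can bypass the page cache so the numbers reflect the controller rather than RAM (a sketch; the file path is an example):

# Sequential write, uncached.
dd if=/dev/zero of=/data/testfile bs=1M count=4096 oflag=direct

# Sequential read of the same file, also uncached.
dd if=/data/testfile of=/dev/null bs=1M count=4096 iflag=direct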

7. Solaris

Poor Disk performance on ZFS

Hello, we have a machine with Solaris Express 11, two LSI 9211-8i SAS 2 controllers (multipath to disks), a multiport backplane, and 16 Seagate Cheetah 15K RPM disks. Each disk has a sequential performance of 220/230 MB/s, and in fact if I do a dd if=/dev/zero of=/dev/rdsk/<diskID_1> bs=1024k... (1 Reply)
Discussion started by: golemico
1 Reply
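
For the ZFS case above, per-device throughput during a test run shows whether one disk or path is the bottleneck (a sketch; the pool, filesystem, and file names are examples):

# Per-vdev throughput at 5-second intervals while the dd runs.
zpool iostat -v tank 5

# Sequential read through the filesystem rather than the raw device.
dd if=/tank/fs/testfile of=/dev/null bs=1024k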

8. AIX

Poor Performance of server

Hi, I am a newly registered user here in these UNIX forums and a new system administrator for AIX 6.1. One of our servers performs poorly every time our application (FINACLE) runs many processes/instances (see below for a topas snapshot). I use NMON or topas to monitor server utilization. I... (9 Replies)
Discussion started by: guzzelle
9 Replies

9. Solaris

Poor performance on an M3000

Hi, we have an M3000 with a single physical processor and 8 GB of memory, running Solaris 10. This system runs two Oracle databases, one on Oracle 9i and one on Oracle 10g. As soon as the Oracle 10g database starts we see an immediate drop in system performance; for example, opening an SSH session can... (6 Replies)
Discussion started by: gregsih
6 Replies