Raid0 array stresses only 1 disk out of 3
Posted by chebarbudo on Thursday 14th of April 2016 at 10:45:21 AM
Hi guys,

Thank you very much for your contributions.

First of all, the problem no longer occurs. I created the RAID with sdb, sdc and sdd on April 11 at 09:35.
Until 11:32, sdd was very busy; then, until 14:51, sdc was very busy.
Since then (3 days), all three disks have been under the same moderate load (0-20%). The server is used by 5 graphic designers manipulating quite large files (100 MB-2 GB).

I then ran a test whose results leave me quite puzzled: I simultaneously created 10 files of 1 GB each, but nearly all the load went to sda, leaving sdb, sdc and sdd at a moderate load of at most ~20%.

The command:
Code:
# spawn 10 concurrent 1 GB writes onto the array mounted at /galaxy
for i in {1..10}; do
  file=$(mktemp /galaxy/XXXXXXX)               # unique temp file on the array
  echo "$file" >> /galaxy/dd.files             # record the file name for later cleanup
  dd if=/dev/zero of="$file" bs=1G count=1 &   # write 1 GB in the background
  echo $! >> /galaxy/dd.pids                   # record the background dd PID
done
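For completeness, the dd.files and dd.pids lists let me stop the test and clean up afterwards. A minimal sketch, assuming the paths recorded by the loop above:
Code:
# stop any dd still running, then delete the test files
while read -r pid; do
  kill "$pid" 2>/dev/null           # ignore PIDs that have already exited
done < /galaxy/dd.pids
xargs rm -f < /galaxy/dd.files      # file names contain no spaces (mktemp template)
rm -f /galaxy/dd.files /galaxy/dd.pids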

The output of dstat (per-disk utilization in %, one sample every 30 seconds):
Code:
----system---- sda--sdb--sdc--sdd-
     time     |util:util:util:util
14-04 15:56:30|  21:   0:   0:   0
14-04 15:57:00| 100:   0:   0:   0
14-04 15:57:30| 101:   0:   0:   0
14-04 15:58:00| 100:   2:   2:   1
14-04 15:58:30| 101:   3:   4:   2
14-04 15:59:00| 102:   4:   5:   4
14-04 15:59:30|  98:   2:   3:   2
14-04 16:00:00| 100:   4:   4:   2
14-04 16:00:30| 103:  16:  16:  15
14-04 16:01:00|  98:  16:  17:  15
14-04 16:01:30| 101:  15:  15:  15
14-04 16:02:00|  99:   9:   8:   8
14-04 16:02:30| 100:   3:   4:   3
14-04 16:03:00| 100:   2:   4:   3
14-04 16:03:30| 104:   4:   4:   3
14-04 16:04:00|  95:   4:   4:   3
14-04 16:04:30| 100:   3:   4:   2
14-04 16:05:00| 101:   3:   4:   3
14-04 16:05:30|  99:  12:  13:  12
14-04 16:06:00| 102:  20:  22:  18
14-04 16:06:30|  98:  17:  19:  18
14-04 16:07:00| 101:   7:   9:   8
14-04 16:07:30|  99:   4:   5:   3
14-04 16:08:00| 102:   4:   5:   3
14-04 16:08:30|  98:   3:   5:   3
14-04 16:09:00| 100:   5:   7:   5
14-04 16:09:30| 101:   5:   5:   4
14-04 16:10:00| 100:   4:   4:   2
14-04 16:10:30| 100:  17:  18:  16
14-04 16:11:01| 105:  16:  20:  16
14-04 16:11:30|  95:  15:  17:  17
14-04 16:12:00| 100:  12:  11:  10
14-04 16:12:30|  34:  15:  16:  14
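
For the record, the dstat invocation was along these lines (a best guess reconstructed from the column layout, so treat the exact flags as an assumption):
Code:
# -t: timestamp column; --disk-util: per-disk utilization in %
# -D: restrict output to the listed devices; 30: seconds between samples
dstat -t --disk-util -D sda,sdb,sdc,sdd 30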

Is /dev/zero an actual file on sda?
How do you interpret these results?
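
For what it is worth, this is how I would check where /dev/zero and /galaxy actually live (a quick sketch; it assumes /galaxy is the mount point of the md array built from sdb, sdc and sdd):
Code:
ls -l /dev/zero      # a character device (major 1, minor 5), not a file on any disk
df -h /galaxy        # which block device backs the /galaxy mount
cat /proc/mdstat     # which member disks each md array is using
lsblk                # full disk / partition / md topology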

Regards
Santiago
 
