Filesystems, Disks and Memory: iostat output vs TPC output (array layer)
Post by DGPickett on Tuesday 26th of April 2011 02:59:55 PM
Well, it is a complex world, with data safety and speed in opposition. One oddity of ever-growing disk sizes is that one new big disk may be overwhelmed by the level of I/O that used to be handled by 8 disks, so having size attract queries and churn is a negative! Striping allows the bandwidth of many drives to be applied to the combined storage, which supports more buffering with faster buffer fills, if access is sequential often enough. If everything were sequential and failure were no worry, you could stripe it all together for maximum bandwidth, but you might do better with 2 or more virtual volumes so copying, database joins and the like can stay sequential on each virtual device. So there are sometimes ways to force smart parallelism, the ability to join huge sets without seeks. However, RAM and 64-bit virtual memory have made buffering so ample that it may dilute that sort of approach.

RAID has not entirely freed us from failure worry: with all the layers of software, hardware and vendors, it seems RAID errors often never get heard about until you are 2 devices down. Rebuild time is not inconsequential, either. So your approach should go beyond hot spots to maximizing the bandwidth of a manageable number of virtual volumes. Along the way, look at the pathways and how they figure into the redundancy and striping. If one controller handles both sides of a mirror and goes wonky . . . . If striping runs across all controllers and SCSI cables, then any controller or cable bottleneck is diluted. Intelligent use of a simple mirror for high-churn data and RAID-N for low-churn data is nice, too!

Sometimes this discussion can be extended down into the app: DB2 append tables, with inserts but never updates or deletes, are churn-free except at the end. Disk is cheap and 100% history is wise. Churn-free data might even migrate to some hierarchical read-only store like a DVD array. Assuming control of the chaos is someone else's job can be a luxury.

But, yeah, it seems like it is still good, just maybe not sufficient, and an approach that is sufficient might make it unnecessary.
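
To make the hot-spot point concrete, a quick way to see whether load really is spread across the members of a stripe is to total per-device throughput from iostat. This is only a sketch, assuming the Linux sysstat iostat extended output and an illustrative device-name pattern (sd/da/emcpower); adjust both for your platform, since Solaris and HP-UX columns differ:

# Rough sketch: per-device kB/s since boot, busiest first (assumes Linux sysstat iostat).
# The rkB/s and wkB/s columns are located from the header rather than hard-coded,
# and the device-name pattern below is only an example -- change it to match your disks.
iostat -dxk | awk '
    /Device/ { for (i = 1; i <= NF; i++) { if ($i == "rkB/s") r = i; if ($i == "wkB/s") w = i }; next }
    r && $1 ~ /^(sd|da|emcpower)/ { kb[$1] += $r + $w }
    END { for (d in kb) printf "%-12s %12.1f kB/s\n", d, kb[d] | "sort -k2 -rn" }'

If one member of a striped set carries most of the traffic, the striping, or the controller and cable pathing in front of it, is not doing its job.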

CCD(4)                   BSD Kernel Interfaces Manual                   CCD(4)

NAME
     ccd -- Concatenated Disk driver

SYNOPSIS
     device ccd

DESCRIPTION
     The ccd driver provides the capability of combining one or more disks/partitions into one virtual disk. This document assumes that you are familiar with how to generate kernels, how to properly configure disks and devices in a kernel configuration file, and how to partition disks.

     In order to compile in support for the ccd, you must add a line similar to the following to your kernel configuration file:

           device ccd              # concatenated disk devices

     As of the FreeBSD 3.0 release, you do not need to configure your kernel with ccd but may instead use it as a kernel loadable module. Simply running ccdconfig(8) will load the module into the kernel.

     A ccd may be either serially concatenated or interleaved. To serially concatenate the partitions, specify the interleave factor of 0. Note that mirroring may not be used with an interleave factor of 0.

     There is a run-time utility that is used for configuring ccds. See ccdconfig(8) for more information.

   The Interleave Factor
     If a ccd is interleaved correctly, a ``striping'' effect is achieved, which can increase sequential read/write performance. The interleave factor is expressed in units of DEV_BSIZE (usually 512 bytes). For large writes, the optimum interleave factor is typically the size of a track, while for large reads, it is about a quarter of a track. (Note that this changes greatly depending on the number and speed of disks.) For instance, with eight 7,200 RPM drives on two Fast-Wide SCSI buses, this translates to about 128 for writes and 32 for reads.

     A larger interleave tends to work better when the disk is taking a multitasking load by localizing the file I/O from any given process onto a single disk. You lose sequential performance when you do this, but sequential performance is not usually an issue with a multitasking load.

     An interleave factor must be specified when using a mirroring configuration, even when you have only two disks (i.e., the layout winds up being the same no matter what the interleave factor). The interleave factor will determine how I/O is broken up, however, and a value 128 or greater is recommended.

     ccd has an option for a parity disk, but does not currently implement it.

     The best performance is achieved if all component disks have the same geometry and size. Optimum striping cannot occur with different disk types.

     For random-access oriented workloads, such as news servers, a larger interleave factor (e.g., 65,536) is more desirable. Note that there is not much ccd can do to speed up applications that are seek-time limited. Larger interleave factors will at least reduce the chance of having to seek two disk-heads to read one directory or a file.

   Disk Mirroring
     You can configure the ccd to ``mirror'' any even number of disks. See ccdconfig(8) for how to specify the necessary flags. For example, if you have a ccd configuration specifying four disks, the first two disks will be mirrored with the second two disks. A write will be run to both sides of the mirror. A read will be run to either side of the mirror depending on what the driver believes to be most optimal. If the read fails, the driver will automatically attempt to read the same sector from the other side of the mirror. Currently ccd uses a dual seek zone model to optimize reads for a multi-tasking load rather than a sequential load.

     In an event of a disk failure, you can use dd(1) to recover the failed disk. Note that a one-disk ccd is not the same as the original partition. In particular, this means if you have a file system on a two-disk mirrored ccd and one of the disks fail, you cannot mount and use the remaining partition as itself; you have to configure it as a one-disk ccd. You cannot replace a disk in a mirrored ccd partition without first backing up the partition, then replacing the disk, then restoring the partition.
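
     By way of illustration, the two layouts described above might be set up with ccdconfig(8) along these lines; the device names and the interleave of 128 are placeholders only, so check ccdconfig(8) on your release for the exact syntax and flag names:

           # Interleaved (striped) ccd over four disks, interleave of 128 DEV_BSIZE units:
           ccdconfig ccd0 128 0 /dev/da0e /dev/da1e /dev/da2e /dev/da3e

           # The same four disks as a mirrored ccd instead: with CCDF_MIRROR the first
           # pair is mirrored against the second pair, and an interleave is still required:
           ccdconfig ccd0 128 CCDF_MIRROR /dev/da0e /dev/da1e /dev/da2e /dev/da3e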
   Linux Compatibility
     The Linux compatibility mode does not try to read the label that Linux' md(4) driver leaves on the raw devices. You will have to give the order of devices and the interleave factor on your own. When in Linux compatibility mode, ccd will convert the interleave factor from Linux terminology. That means you give the same interleave factor that you gave as chunk size in Linux.

     If you have a Linux md(4) device in ``legacy'' mode, do not use the CCDF_LINUX flag in ccdconfig(8). Use the CCDF_NO_OFFSET flag instead. In that case you have to convert the interleave factor on your own; usually it is Linux' chunk size multiplied by two.

     Using a Linux RAID this way is potentially dangerous and can destroy the data in there. Since FreeBSD does not read the label used by Linux, changes in Linux might invalidate the compatibility layer. However, using this is reasonably safe if you test the compatibility before mounting a RAID read-write for the first time. Just using ccdconfig(8) without mounting does not write anything to the Linux RAID. Then you do a fsck.ext2fs (ports/sysutils/e2fsprogs) on the ccd device using the -n flag. You can mount the file system read-only to check files in there. If all this works, it is unlikely that there is a problem with ccd. Keep in mind that even when the Linux compatibility mode in ccd is working correctly, bugs in FreeBSD's ext2fs implementation would still destroy your data.
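
     Put together, the read-only compatibility test described above might look something like the following; the device names, the chunk size of 64, and the mount point are illustrative only:

           # Assume the Linux md array was built with a chunk size of 64 on these two devices:
           ccdconfig ccd0 64 CCDF_LINUX /dev/da0s1 /dev/da1s1   # pass the Linux chunk size as-is
           fsck.ext2fs -n /dev/ccd0                             # check only; -n writes nothing (ports/sysutils/e2fsprogs)
           mount -r -t ext2fs /dev/ccd0 /mnt                    # read-only mount for a final look at the files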
WARNINGS
     If just one (or more) of the disks in a ccd fails, the entire file system will be lost unless you are mirroring the disks. If one of the disks in a mirror is lost, you should still be able to back up your data. If a write error occurs, however, data read from that sector may be non-deterministic. It may return the data prior to the write or it may return the data that was written. When a write error occurs, you should recover and regenerate the data as soon as possible.

     Changing the interleave or other parameters for a ccd disk usually destroys whatever data previously existed on that disk.

FILES
     /dev/ccd*     ccd device special files

SEE ALSO
     dd(1), ccdconfig(8), config(8), disklabel(8), fsck(8), gvinum(8), mount(8), newfs(8)

HISTORY
     The concatenated disk driver was originally written at the University of Utah.

BSD                              August 9, 1995                              BSD