Full Discussion: A1000 Battery Question
Post 53150 by Perderabo on Wednesday 7th of July 2004, 01:25:58 AM
Cache is a holding area. When the computer writes to a disk, the RAID can acknowledge the write as soon as the data has arrived in cache. This makes disk writes seem much faster. But you probably want that data to actually get written to the disk. (The cache will hold the data for a while in case it is written again; that way multiple writes to the same blocks collapse into a single physical write.) But if you lose power, the actual disks stop spinning, and the unwritten data in cache must be preserved until power is restored. That's where the battery comes in.
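To make the write-behind idea concrete, here is a tiny sketch (plain Python, purely illustrative; this is obviously not the A1000's firmware) of a write-back cache that acknowledges writes immediately and coalesces repeated writes to the same block before flushing:

    # Illustrative write-back cache: acknowledge writes from cache,
    # coalesce repeated writes to the same block, flush to disk later.
    class WriteBackCache:
        def __init__(self, disk):
            self.disk = disk      # dict-like backing store: block -> data
            self.dirty = {}       # unflushed writes: block -> latest data

        def write(self, block, data):
            # A repeated write to the same block just overwrites the cached
            # copy, so several logical writes become one physical write.
            self.dirty[block] = data
            return "ack"          # acknowledged before the disk is touched

        def flush(self):
            # This dirty data is exactly what the battery has to protect if
            # power is lost before the flush completes.
            for block, data in self.dirty.items():
                self.disk[block] = data
            self.dirty.clear()

    disk = {}
    cache = WriteBackCache(disk)
    cache.write(7, b"v1")
    cache.write(7, b"v2")         # coalesced: only b"v2" ever reaches the disk
    cache.flush()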

So far, we've talked about write-behind. But cache is also used for read-ahead. The RAID pays attention to which data the computer is requesting and tries to guess which reads may occur in the future. If a disk goes idle, the controller issues those reads ahead of time so the data is already sitting in cache when the read actually occurs.
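Read-ahead can be sketched the same way (again just illustrative Python, not the controller's actual prefetch heuristic): when the host reads a block, the controller speculatively pulls the next few blocks into cache while the disk is idle, so a later sequential read is served from cache:

    # Illustrative read-ahead: after a host read, prefetch the following
    # blocks so that sequential access is served from cache, not the disk.
    PREFETCH = 4

    class ReadAheadCache:
        def __init__(self, disk):
            self.disk = disk      # dict-like backing store: block -> data
            self.cache = {}

        def read(self, block):
            if block in self.cache:
                return self.cache[block]        # cache hit, no disk I/O
            data = self.disk.get(block)         # cache miss, go to the disk
            # Guess that the host will keep reading sequentially and fetch
            # the next few blocks while the disk would otherwise be idle.
            for b in range(block, block + PREFETCH + 1):
                if b in self.disk:
                    self.cache[b] = self.disk[b]
            return data

    disk = {n: ("block-%d" % n).encode() for n in range(16)}
    cache = ReadAheadCache(disk)
    cache.read(0)                               # miss; prefetches blocks 0-4
    assert cache.read(1) == b"block-1"          # hit, served from cache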

This is a big performance boost, and without the cache, performance drops.

Even if I didn't care about disk I/O performance, I would replace the battery. The code in the RAID's firmware may have bugs, and the vendor probably doesn't test degraded mode as thoroughly as normal mode. Degraded hardware always makes me nervous.
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

veritas/STOREDGE A1000

Hi, I am a DBA and very new to filesystems and such. I think we have Veritas filesystems on my Sun Solaris 5.8 box; how do I confirm this? All my filesystems are mounted like this: /dev/vx/dsk... Now we are also using disk arrays (StorEdge A1000); how do I access them from the system... (1 Reply)
Discussion started by: knarayan

2. UNIX for Advanced & Expert Users

Raid A1000 with E450 and E250.

Hi, I'm facing a problem connecting a RAID A1000 to an E250 and an E450. (Both machines run the Solaris 2.6 OS, patched, with different scsi-initiator-ids: 7 & 3.) BTW, I'm not using this setup to access RAID data from both machines simultaneously. There is Veritas Cluster Server monitoring... (1 Reply)
Discussion started by: shibz

3. UNIX for Advanced & Expert Users

Storedge A1000 Controller Firmware question

Hello everyone. I'm trying to set up two A1000s connected to a single host w/ a dual port adapter. The host is a V480. Do I need to have the same firmware version on both controllers for the A1000s? If so, where can I download the latest and greatest firmware? I tried to google for it and... (8 Replies)
Discussion started by: xnightcrawl

4. Solaris

A1000 Solaris 10 on Blade 1000

I have a SUN Blade 1000 running Solaris 10 and an A1000 array. I know that this combo is not supported by SUN, but will it work? The RAID Manager software is installed and it says that the firmware needs to be upgraded - I think it is at level 2.05 something. I can see all the disks in a... (13 Replies)
Discussion started by: tribbles

5. Solaris

How can I connect a StorEdge A1000 to an E250 box?

Hello Experts, I am using an E250 with Solaris 10 5/08 installed. I am unable to see the disks. I connected 2 disks of 18 GB each in that storage. When I run the format command it shows 2 disks: one is the operating system and the other one is 6 MB. I checked probe-scsi and probe-scsi-all at ok... (6 Replies)
Discussion started by: younus_syed

6. Solaris

A1000 Disk storage array

I am new to the UNIX world. I have a SunBlade 100 and an A1000 disk storage array with 12 hard drives. I used a SCSI card and SCSI cables to connect. When I do the format command, I can see the disk storage as one disk instead of 12 disks, as below. Could anybody explain why? What should I do in order... (1 Reply)
Discussion started by: Dulasi

7. Solaris

A5200 vs A1000

I'm currently planning to improve my skills on Sun disk arrays and deciding whether to go with the A5200 or the A1000. I have found a few good systems at a decent price but still can't seem to decide which one to choose. For the host system, I have 1 Ultra 5 and 1 E220R system. I will most... (5 Replies)
Discussion started by: agfa_109

8. Solaris

How to create LUNs in an A1000

Hi friends, how can we create LUNs in A1000 storage? Please help, it's very urgent. Whenever I connect to the A1000 RAID controller through a laptop with a console cable using HyperTerminal, I issue the serial port parameters mentioned below. Set serial port parameters to: ... (1 Reply)
Discussion started by: tv.praveenkumar

9. Solaris

Managing disk array on A1000

I want to ask what software I need to configure the disk array for a StorEdge A1000 (Sun Enterprise 450, currently installed with SUN Solaris 9). Is it RAID Manager 6.22? And is it compatible with SUN Solaris 9 or 10? Thanks in advance for reading or replying to my post. (2 Replies)
Discussion started by: beginningDBA
CCD(4)                    BSD Kernel Interfaces Manual                    CCD(4)

NAME
     ccd -- Concatenated Disk driver

SYNOPSIS
     device ccd

DESCRIPTION
     The ccd driver provides the capability of combining one or more
     disks/partitions into one virtual disk.

     This document assumes that you are familiar with how to generate kernels,
     how to properly configure disks and devices in a kernel configuration
     file, and how to partition disks.

     In order to compile in support for the ccd, you must add a line similar
     to the following to your kernel configuration file:

           device ccd      # concatenated disk devices

     As of the FreeBSD 3.0 release, you do not need to configure your kernel
     with ccd but may instead use it as a kernel loadable module.  Simply
     running ccdconfig(8) will load the module into the kernel.

     A ccd may be either serially concatenated or interleaved.  To serially
     concatenate the partitions, specify the interleave factor of 0.  Note
     that mirroring may not be used with an interleave factor of 0.

     There is a run-time utility that is used for configuring ccds.  See
     ccdconfig(8) for more information.

   The Interleave Factor
     If a ccd is interleaved correctly, a ``striping'' effect is achieved,
     which can increase sequential read/write performance.  The interleave
     factor is expressed in units of DEV_BSIZE (usually 512 bytes).  For large
     writes, the optimum interleave factor is typically the size of a track,
     while for large reads, it is about a quarter of a track.  (Note that this
     changes greatly depending on the number and speed of disks.)  For
     instance, with eight 7,200 RPM drives on two Fast-Wide SCSI buses, this
     translates to about 128 for writes and 32 for reads.  A larger interleave
     tends to work better when the disk is taking a multitasking load by
     localizing the file I/O from any given process onto a single disk.  You
     lose sequential performance when you do this, but sequential performance
     is not usually an issue with a multitasking load.

     An interleave factor must be specified when using a mirroring
     configuration, even when you have only two disks (i.e., the layout winds
     up being the same no matter what the interleave factor).  The interleave
     factor will determine how I/O is broken up, however, and a value 128 or
     greater is recommended.

     ccd has an option for a parity disk, but does not currently implement it.

     The best performance is achieved if all component disks have the same
     geometry and size.  Optimum striping cannot occur with different disk
     types.

     For random-access oriented workloads, such as news servers, a larger
     interleave factor (e.g., 65,536) is more desirable.  Note that there is
     not much ccd can do to speed up applications that are seek-time limited.
     Larger interleave factors will at least reduce the chance of having to
     seek two disk-heads to read one directory or a file.
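To make the striping arithmetic above concrete, here is a small sketch (plain Python, purely illustrative; this is a generic striping model, not the ccd driver's internal layout code) of how an interleave factor maps a logical block number onto a component disk and an offset on that disk:

    # Illustrative striping layout: logical blocks are dealt out to the
    # component disks in runs of `interleave` blocks, round-robin.
    def stripe(block, interleave, ndisks):
        run = block // interleave              # which run of `interleave` blocks
        within = block % interleave            # position inside that run
        disk = run % ndisks                    # round-robin across the disks
        offset = (run // ndisks) * interleave + within
        return disk, offset                    # (component disk, block on that disk)

    # With interleave 32 and 4 disks, a 128-block sequential transfer touches
    # all four spindles, which is where the sequential throughput gain comes from.
    for b in (0, 32, 64, 96, 127):
        print(b, stripe(b, 32, 4))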
   Disk Mirroring
     You can configure the ccd to ``mirror'' any even number of disks.  See
     ccdconfig(8) for how to specify the necessary flags.  For example, if you
     have a ccd configuration specifying four disks, the first two disks will
     be mirrored with the second two disks.  A write will be run to both sides
     of the mirror.  A read will be run to either side of the mirror depending
     on what the driver believes to be most optimal.  If the read fails, the
     driver will automatically attempt to read the same sector from the other
     side of the mirror.  Currently ccd uses a dual seek zone model to
     optimize reads for a multi-tasking load rather than a sequential load.

     In the event of a disk failure, you can use dd(1) to recover the failed
     disk.  Note that a one-disk ccd is not the same as the original
     partition.  In particular, this means that if you have a file system on a
     two-disk mirrored ccd and one of the disks fails, you cannot mount and
     use the remaining partition as itself; you have to configure it as a
     one-disk ccd.  You cannot replace a disk in a mirrored ccd partition
     without first backing up the partition, then replacing the disk, then
     restoring the partition.

   Linux Compatibility
     The Linux compatibility mode does not try to read the label that Linux's
     md(4) driver leaves on the raw devices.  You will have to give the order
     of devices and the interleave factor on your own.  When in Linux
     compatibility mode, ccd will convert the interleave factor from Linux
     terminology; that means you give the same interleave factor that you gave
     as the chunk size in Linux.

     If you have a Linux md(4) device in ``legacy'' mode, do not use the
     CCDF_LINUX flag in ccdconfig(8).  Use the CCDF_NO_OFFSET flag instead.
     In that case you have to convert the interleave factor on your own;
     usually it is Linux's chunk size multiplied by two.

     Using a Linux RAID this way is potentially dangerous and can destroy the
     data in there.  Since FreeBSD does not read the label used by Linux,
     changes in Linux might invalidate the compatibility layer.

     However, using this is reasonably safe if you test the compatibility
     before mounting a RAID read-write for the first time.  Just using
     ccdconfig(8) without mounting does not write anything to the Linux RAID.
     Then you do a fsck.ext2fs (ports/sysutils/e2fsprogs) on the ccd device
     using the -n flag.  You can mount the file system read-only to check the
     files in there.  If all this works, it is unlikely that there is a
     problem with ccd.  Keep in mind that even when the Linux compatibility
     mode in ccd is working correctly, bugs in FreeBSD's ext2fs implementation
     could still destroy your data.
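As a rough illustration of the conversion rule above, here is a hypothetical helper (plain Python, not part of ccdconfig(8)); it assumes the Linux md chunk size is given in KiB and that ccd's interleave is counted in 512-byte DEV_BSIZE units:

    # Hypothetical helper, not part of ccdconfig(8).  Assumes the Linux md
    # chunk size is in KiB and ccd's interleave is in 512-byte units.
    def ccd_interleave(chunk_kib, legacy_md=False):
        if legacy_md:
            # CCDF_NO_OFFSET: you convert yourself -- usually the chunk size
            # multiplied by two (KiB expressed as 512-byte blocks).
            return chunk_kib * 2
        # CCDF_LINUX: give the same number you gave as the chunk size in
        # Linux; ccd does the conversion internally.
        return chunk_kib

    print(ccd_interleave(64))                   # 64, for use with CCDF_LINUX
    print(ccd_interleave(64, legacy_md=True))   # 128, for use with CCDF_NO_OFFSET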
WARNINGS
     If just one (or more) of the disks in a ccd fails, the entire file system
     will be lost unless you are mirroring the disks.

     If one of the disks in a mirror is lost, you should still be able to back
     up your data.  If a write error occurs, however, data read from that
     sector may be non-deterministic.  It may return the data prior to the
     write or it may return the data that was written.  When a write error
     occurs, you should recover and regenerate the data as soon as possible.

     Changing the interleave or other parameters for a ccd disk usually
     destroys whatever data previously existed on that disk.

FILES
     /dev/ccd*    ccd device special files

SEE ALSO
     dd(1), ccdconfig(8), config(8), disklabel(8), fsck(8), gvinum(8),
     mount(8), newfs(8)

HISTORY
     The concatenated disk driver was originally written at the University of
     Utah.

BSD                               August 9, 1995                            BSD