Full Discussion: IBM SAN TO SAN Mirroring
Post 302717149 by DGPickett on Wednesday 17th of October 2012 03:33:37 PM
Well, I am sure most SANs can be configured for mirroring, which is RAID 1. To get the most bang for your buck, you want the mirrors as far apart as possible, so something between the application host and the storage box needs to know there are two sides: it can then sidestep a dead side, split the read load between the two, and duplicate every write. The farther upstream the mirroring is done, the greater the reliability, but doing it too close to the application can load the app server and its communications, unless the fabric had something like multicast write and anycast read. There may be several layers vying to mirror your storage, so pick wisely.
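As a minimal host-side sketch of that with AIX LVM (the volume group and hdisk names here are hypothetical; assume hdisk2 is the LUN from the first SAN and hdisk3 a same-size LUN from the second):

    # add the second SAN's LUN to the existing volume group
    extendvg datavg hdisk3
    # put a second copy of every LV onto hdisk3, syncing in the background (-S)
    mirrorvg -S datavg hdisk3
    # verify each logical volume now shows 2 copies (PPs = 2 x LPs)
    lsvg -l datavg

Once that is in place, LVM reads from either copy and writes to both, which is the "sidestep the dead side" behavior above done at the host layer.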

Mirroring got overshadowed a bit by parity RAID, but it has always had a read-bandwidth advantage: two devices can service reads independently. Within each side of the mirror there can be as much striping as in any RAID, so that part is no different. On writes there is no parity calculation or extra parity write, just two immediate simultaneous writes. With RAID 5's rotating parity, you write data to disks 1-4 and parity to 5, then data to 5,1,2,3 and parity to 4, and so on; with five spindles, sequential reading runs at roughly 5x single-spindle speed but writing at only 4x, plus the parity computation. A mirrored pair trades space for bandwidth; disk is cheap, and bandwidth is golden. Finally, on some RAID systems a latent defect only gets noticed by staff once a second disk in the same array fails, so RAID 5 is often either also mirrored, or great downtime, data loss, and partial-restore pain is experienced.
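To picture the rotation described above (an illustrative 5-disk RAID 5; D = data block, P = parity block):

    disk:      1  2  3  4  5
    stripe 0:  D  D  D  D  P
    stripe 1:  D  D  D  P  D
    stripe 2:  D  D  P  D  D
    ...

Because the parity rotates, data ends up on all five spindles, so reads can be spread across all 5, while each stripe's write lands data on only 4 of them plus one parity block; a mirrored pair instead writes its two copies in parallel with nothing to compute.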
 

10 More Discussions You Might Find Interesting

1. Solaris

Thoughts/experiences of SAN attaching V880 to EMC SAN

Hi everyone, I wonder if I can canvass any opinions or thoughts (good or bad) on SAN attaching a SUN V880/490 to an EMC CLARiiON SAN? At the moment the 880 is using 12 internal FC-AL disks as a DB server and seems to be doing a pretty good job. It is not I/O, CPU or memory constrained and the... (2 Replies)
Discussion started by: si_linux

2. AIX

Mirroring SSA to SAN

Hi guys, I'd like to share my migration/mirroring of SSA to SAN. No downtime for users, and probably better I/O performance. Here are the steps: 1. After the LUN had been carved on the SAN and the connections had been made on the AIX fiber card 2. “lspv” and look for the new SAN hdisk? at the bottom, say... (1 Reply)
Discussion started by: itik

3. AIX

IBM SAN cache battery with AIX

Hi all, I would like to share this incident that happened the other day. I have a question about this: https://www.unix.com/aix/64921-create-new-vg-san-rename-fs.html And I thought it was related to the above link, but the problem was that the IBM SAN 4300 cache battery was dead and I needed to click... (2 Replies)
Discussion started by: itik

4. Filesystems, Disks and Memory

First steps on IBM SAN DS4500

Hello everyone! I'm new to the IBM SAN DS4500. Can you give me some tips on this, because I don't want to make a mistake. I have some questions: How can I tell how much space is left on the SAN? I can't find it. How can I add more space to a partition? Do you have a tutorial about this? I... (0 Replies)
Discussion started by: lo-lp-kl

5. AIX

Question about IBM SAN DS4500

I have a question about SAN commands. I have almost 15 TB of disk on my SAN, but assigned I have almost 11 TB. Is there a command to find out my real total storage capacity, and another command to find out how much I have used? Thanks again in advance. (0 Replies)
Discussion started by: lo-lp-kl

6. AIX

MPIO RDAC IBM SAN STORAGE DS4700?

Hello, I have AIX 6.1 with TL4, connected to an IBM SAN Storage DS4700. After assigning some disks from the SAN to AIX, I can see the disks in AIX as: hdisk2 Available 05-00-02 MPIO Other DS4K Array Disk; hdisk3 Available 05-00-02 MPIO Other DS4K Array Disk. But it should... (0 Replies)
Discussion started by: filosophizer

7. Solaris

Cannot see the IBM SAN storage

Hi all, I recently changed the server storage from EMC to the IBM SAN. After the configuration, the IBM side could see the server's HBA port and a LUN was successfully assigned to the server. I went to the server and restarted it, then used the "format" command to check, but didn't see any... (1 Reply)
Discussion started by: SmartAntz

8. AIX

IBM SAN storage -- cache battery

Hello, I have an IBM SAN Storage DS4100 and one of the cache batteries for the controller is dead. The performance has suddenly degraded and access to the SAN disks (reading and writing) has become very slow. My query: replacing the battery will take 6 days, so in the meantime what are the ways... (1 Reply)
Discussion started by: filosophizer

9. AIX

IBM SAN STORAGE HOT SPARE DISK

Hello, I have a DS4000 IBM SAN Storage (aka FAStT Storage). One of my disks has failed, and I had a hot spare disk covering all the arrays. As the disk failed, the hot spare disk immediately took over the failed disk (see the JPEG in the attachment). My question: How can I make the hot spare... (1 Reply)
Discussion started by: filosophizer

10. AIX

IBM AIX - SAN Storage DS4300 issue

Hi, this is a follow-up to the post https://www.unix.com/aix/233361-san-disk-appearing-double-aix.html When I connected the pSeries machine's HBA card (dual port) directly to the SAN Storage DS4300, I was able to see the host port adapter WWN numbers, although I was getting this message... (2 Replies)
Discussion started by: filosophizer
CCD(4)                   BSD Kernel Interfaces Manual                   CCD(4)

NAME
     ccd -- Concatenated Disk driver

SYNOPSIS
     device ccd

DESCRIPTION
     The ccd driver provides the capability of combining one or more
     disks/partitions into one virtual disk.

     This document assumes that you are familiar with how to generate
     kernels, how to properly configure disks and devices in a kernel
     configuration file, and how to partition disks.

     In order to compile in support for the ccd, you must add a line similar
     to the following to your kernel configuration file:

           device ccd    # concatenated disk devices

     As of the FreeBSD 3.0 release, you do not need to configure your kernel
     with ccd but may instead use it as a kernel loadable module. Simply
     running ccdconfig(8) will load the module into the kernel.

     A ccd may be either serially concatenated or interleaved. To serially
     concatenate the partitions, specify an interleave factor of 0. Note
     that mirroring may not be used with an interleave factor of 0.

     There is a run-time utility that is used for configuring ccds. See
     ccdconfig(8) for more information.

   The Interleave Factor
     If a ccd is interleaved correctly, a ``striping'' effect is achieved,
     which can increase sequential read/write performance. The interleave
     factor is expressed in units of DEV_BSIZE (usually 512 bytes). For
     large writes, the optimum interleave factor is typically the size of a
     track, while for large reads, it is about a quarter of a track. (Note
     that this changes greatly depending on the number and speed of disks.)
     For instance, with eight 7,200 RPM drives on two Fast-Wide SCSI buses,
     this translates to about 128 for writes and 32 for reads.

     A larger interleave tends to work better when the disk is taking a
     multitasking load, by localizing the file I/O from any given process
     onto a single disk. You lose sequential performance when you do this,
     but sequential performance is not usually an issue with a multitasking
     load.

     An interleave factor must be specified when using a mirroring
     configuration, even when you have only two disks (i.e., the layout
     winds up being the same no matter what the interleave factor). The
     interleave factor will determine how I/O is broken up, however, and a
     value of 128 or greater is recommended.

     ccd has an option for a parity disk, but does not currently implement
     it.

     The best performance is achieved if all component disks have the same
     geometry and size. Optimum striping cannot occur with different disk
     types.

     For random-access oriented workloads, such as news servers, a larger
     interleave factor (e.g., 65,536) is more desirable. Note that there is
     not much ccd can do to speed up applications that are seek-time
     limited. Larger interleave factors will at least reduce the chance of
     having to seek two disk heads to read one directory or a file.

   Disk Mirroring
     You can configure the ccd to ``mirror'' any even number of disks. See
     ccdconfig(8) for how to specify the necessary flags. For example, if
     you have a ccd configuration specifying four disks, the first two disks
     will be mirrored with the second two disks. A write will be run to both
     sides of the mirror. A read will be run to either side of the mirror
     depending on what the driver believes to be most optimal. If the read
     fails, the driver will automatically attempt to read the same sector
     from the other side of the mirror. Currently ccd uses a dual seek zone
     model to optimize reads for a multi-tasking load rather than a
     sequential load.

     In the event of a disk failure, you can use dd(1) to recover the failed
     disk.
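     As a rough sketch (the partition names below are placeholders, not part
     of this manual page), a two-disk mirror with the recommended interleave
     might be configured like this:

           # mirror da0s1e and da1s1e as ccd0 with a 128-sector interleave
           ccdconfig ccd0 128 CCDF_MIRROR /dev/da0s1e /dev/da1s1e
           # label the new virtual disk and create a file system on it
           disklabel -r -w ccd0 auto
           newfs /dev/ccd0c

     Writes then go to both components, and reads are balanced between them
     as described above.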
     Note that a one-disk ccd is not the same as the original partition. In
     particular, this means if you have a file system on a two-disk mirrored
     ccd and one of the disks fails, you cannot mount and use the remaining
     partition as itself; you have to configure it as a one-disk ccd. You
     cannot replace a disk in a mirrored ccd partition without first backing
     up the partition, then replacing the disk, then restoring the
     partition.

   Linux Compatibility
     The Linux compatibility mode does not try to read the label that Linux'
     md(4) driver leaves on the raw devices. You will have to give the order
     of devices and the interleave factor on your own. When in Linux
     compatibility mode, ccd will convert the interleave factor from Linux
     terminology. That means you give the same interleave factor that you
     gave as chunk size in Linux.

     If you have a Linux md(4) device in ``legacy'' mode, do not use the
     CCDF_LINUX flag in ccdconfig(8). Use the CCDF_NO_OFFSET flag instead.
     In that case you have to convert the interleave factor on your own;
     usually it is Linux' chunk size multiplied by two.

     Using a Linux RAID this way is potentially dangerous and can destroy
     the data in there. Since FreeBSD does not read the label used by Linux,
     changes in Linux might invalidate the compatibility layer. However,
     using this is reasonably safe if you test the compatibility before
     mounting a RAID read-write for the first time. Just using ccdconfig(8)
     without mounting does not write anything to the Linux RAID. Then you do
     a fsck.ext2fs (ports/sysutils/e2fsprogs) on the ccd device using the -n
     flag. You can mount the file system read-only to check files in there.
     If all this works, it is unlikely that there is a problem with ccd.
     Keep in mind that even when the Linux compatibility mode in ccd is
     working correctly, bugs in FreeBSD's ext2fs implementation would still
     destroy your data.
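     A hedged sketch of that non-destructive check (the device names and the
     Linux chunk size of 64 are hypothetical; with CCDF_LINUX the chunk size
     is given unchanged):

           # assemble the pair; configuring alone writes nothing to the RAID
           ccdconfig ccd0 64 CCDF_LINUX /dev/da0s1e /dev/da1s1e
           # check only: -n answers ``no'' to every repair prompt
           fsck.ext2fs -n /dev/ccd0
           # if the check is clean, inspect the files read-only
           mount -r -t ext2fs /dev/ccd0 /mnt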
WARNINGS
     If just one (or more) of the disks in a ccd fails, the entire file
     system will be lost unless you are mirroring the disks.

     If one of the disks in a mirror is lost, you should still be able to
     back up your data. If a write error occurs, however, data read from
     that sector may be non-deterministic. It may return the data prior to
     the write or it may return the data that was written. When a write
     error occurs, you should recover and regenerate the data as soon as
     possible.

     Changing the interleave or other parameters for a ccd disk usually
     destroys whatever data previously existed on that disk.

FILES
     /dev/ccd*    ccd device special files

SEE ALSO
     dd(1), ccdconfig(8), config(8), disklabel(8), fsck(8), mount(8),
     newfs(8), vinum(8)

HISTORY
     The concatenated disk driver was originally written at the University
     of Utah.

BSD                             August 9, 1995                             BSD