Full Discussion: IBM SAN TO SAN Mirroring
Post 302717149 by DGPickett in Operating Systems / AIX, Wednesday 17th of October 2012, 03:33:37 PM
Well, I am sure most SANs can be configured for mirroring, which is plain RAID 1. To get the most bang for your buck, you want the mirrors as far apart as possible, so something sitting between the application host and the storage boxes needs to know there are two sides: it can then sidestep the dead side after a failure, split the read load across both, and duplicate the write churn to both. The farther upstream the mirroring is done, the greater the reliability, but doing it too close to the application can load down the app server and its links, unless the fabric offers something like multicast write and anycast read. There may be several layers vying to mirror your storage, so pick wisely.
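
To make that "layer that knows there are two sides" concrete, here is a toy sketch in Python (every name in it is made up for illustration, not any vendor's API): writes are duplicated to both mirrors, reads alternate between them, and a dead side gets sidestepped automatically.

import itertools

class Side:
    # one mirror leg: a name, a liveness flag, and some stored blocks
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]

class MirrorPair:
    def __init__(self, a, b):
        self.sides = [a, b]
        self._rr = itertools.cycle(self.sides)   # round-robin read pointer

    def write(self, lba, data):
        # "duplicating the churn": every write goes to every live side
        # (a real mirror would also mark a skipped side stale for resync)
        for s in self.sides:
            if s.alive:
                s.write(lba, data)

    def read(self, lba):
        # "splitting the query load" and "sidestepping the dead side":
        # alternate reads between sides, skipping any that have failed
        for _ in self.sides:
            s = next(self._rr)
            if s.alive:
                return s.read(lba)
        raise IOError("both sides are dead")

pair = MirrorPair(Side("siteA"), Side("siteB"))
pair.write(0, b"payload")
pair.sides[0].alive = False    # simulate losing one site
print(pair.read(0))            # still served from the surviving side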

Mirroring got overshadowed a bit by RAID, but it has always had a read-bandwidth advantage, with two devices available to handle queries. Within each device there can be as much striping as in any RAID set, so that part is no different. On writes there is no parity calculation and no extra parity-write time, just two immediate simultaneous writes. With RAID 5 and its rotating parity across five disks, you write data to disks 1,2,3,4 and parity to 5, then data to 5,1,2,3 and parity to 4, and so on: reading runs at five spindles' worth of speed, but writing at only four. A mirrored pair trades space for bandwidth, and disk is cheap while bandwidth is golden. Finally, some RAID systems only get their defects noticed by staff once two adjacent devices have failed, so in practice RAID 5 is either also mirrored, or great downtime, data loss, and partial-restore pain is experienced.
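
Here is the rotation from that paragraph worked out in a few lines of Python (the five-disk numbering and the parity walk are illustrative assumptions, one of several layouts a real controller might use):

NDISKS = 5

def parity_disk(stripe):
    # parity starts on the last disk and walks back one disk per
    # stripe, in the style of the left-symmetric layouts
    return (NDISKS - 1 - stripe) % NDISKS

for stripe in range(5):
    p = parity_disk(stripe)
    data = [d for d in range(NDISKS) if d != p]
    print("stripe %d: data on disks %s, parity on disk %d" % (stripe, data, p))

# The parity itself is just the XOR of the data chunks, so a full-stripe
# write costs one extra chunk plus the XOR, not a second copy of the data:
chunks = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55", b"\x00\xff"]
parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*chunks))
print(parity.hex())    # a4a8: XOR of any surviving chunks rebuilds a lost one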
 

10 More Discussions You Might Find Interesting

1. Solaris

Thoughts/experiences of SAN attaching V880 to EMC SAN

Hi everyone, I wonder if I can canvass any opinions or thoughts (good or bad) on SAN-attaching a Sun V880/490 to an EMC CLARiiON SAN? At the moment the 880 is using 12 internal FC-AL disks as a db server and seems to be doing a pretty good job. It is not I/O, CPU or Memory constrained and the... (2 Replies)
Discussion started by: si_linux

2. AIX

mirroring ssa to san

Hi guys, I'd like to share my migration/mirroring of SSA to SAN. No downtime for users, and probably better I/O performance. Here are the steps: 1) After the LUN had been carved on the SAN and the connections had been made on the AIX fiber card. 2) Run "lspv" and look for the new SAN hdisk? at the bottom, say... (1 Reply)
Discussion started by: itik

3. AIX

ibm san cache battery with aix

Hi All, I would like to share this incident that happened the other day. I have a question about this thread, https://www.unix.com/aix/64921-create-new-vg-san-rename-fs.html and I thought it was related to the above link, but the problem was that the IBM SAN 4300 cache battery was dead and I need to click... (2 Replies)
Discussion started by: itik

4. Filesystems, Disks and Memory

First steps on Ibm SAN DS4500

Hello everyone! I'm new to the IBM SAN DS4500. Can you give me some tips on this? I don't want to make a mistake. I have some questions: How can I know how much space I have on the SAN? I can't find it. How can I add more space to a partition? Do you have a tutorial about this? I... (0 Replies)
Discussion started by: lo-lp-kl

5. AIX

Question about IBM San Ds4500

I have a question about SAN commands. I have almost 15Tb of disk on my SAN, but assigned I have almost 11Tb. Is there a command to find out my real total storage capacity, and another command to find out how much I have used? Thanks again in advance (0 Replies)
Discussion started by: lo-lp-kl

6. AIX

MPIO RDAC IBM SAN STORAGE DS4700 ?

Hello, I have AIX 6.1 with TL 4 and it is connected to an IBM SAN STORAGE DS4700. After assigning some disks from the SAN to AIX, I can see the disks in AIX as hdisk2 Available 05-00-02 MPIO Other DS4K Array Disk hdisk3 Available 05-00-02 MPIO Other DS4K Array Disk But it should... (0 Replies)
Discussion started by: filosophizer

7. Solaris

Cannot see the IBM SAN storage

Hi all, I recently changed the server storage from EMC to the IBM SAN. After the configuration, the IBM side was able to see the server's HBA port and successfully assigned a LUN to the server. When I went to the server and restarted it, I used the "format" command to check, but didn't see any... (1 Reply)
Discussion started by: SmartAntz

8. AIX

IBM SAN storage -- cache battery

Hello, I have an IBM SAN STORAGE DS4100 and one of the cache batteries for the controller is dead. Suddenly the performance has degraded and access to the SAN disks (reading and writing) has become very slow. My query: replacing the battery will take 6 days, so in the meantime what are the ways... (1 Reply)
Discussion started by: filosophizer

9. AIX

IBM SAN STORAGE HOT SPARE DISK

Hello, I have a DS4000 IBM SAN Storage (aka FAStT Storage). One of my disks failed, and I had a hot spare disk covering all the arrays. When the disk failed, the hot spare immediately took over from the failed disk (see the JPEG in the attachment). My question: How can I make the hot spare... (1 Reply)
Discussion started by: filosophizer

10. AIX

IBM AIX - SAN Storage DS4300 issue

Hi, This is a follow-up to the post https://www.unix.com/aix/233361-san-disk-appearing-double-aix.html When I connected the pSeries machine's HBA card (dual port) directly to the SAN Storage DS4300, I was able to see the Host Port Adapter WWN numbers, although I was getting this message... (2 Replies)
Discussion started by: filosophizer
raidtab(5)							File Formats Manual							raidtab(5)

NAME
raidtab - configuration file for md (RAID) devices

DESCRIPTION
/etc/raidtab is the default configuration file for the raid tools (raidstart and company). It defines how RAID devices are configured on a system.

FORMAT
/etc/raidtab has multiple sections, one for each md device being configured. Each section begins with the raiddev keyword. The order of items in the file is important: later raiddev entries can use earlier ones (which allows RAID-10, for example), and the parsing code isn't overly bright, so be sure to follow the ordering in this man page for best results.

Here's a sample md configuration file:

    #
    # sample raiddev configuration file
    # 'old' RAID0 array created with mdtools.
    #
    raiddev /dev/md0
            raid-level              0
            nr-raid-disks           2
            persistent-superblock   0
            chunk-size              8

            device                  /dev/hda1
            raid-disk               0
            device                  /dev/hdb1
            raid-disk               1

    raiddev /dev/md1
            raid-level              5
            nr-raid-disks           3
            nr-spare-disks          1
            persistent-superblock   1
            parity-algorithm        left-symmetric

            device                  /dev/sda1
            raid-disk               0
            device                  /dev/sdb1
            raid-disk               1
            device                  /dev/sdc1
            raid-disk               2
            device                  /dev/sdd1
            spare-disk              0

Here is more information on the directives in raid configuration files; the options are listed here in the same order they should appear in the actual configuration file.

raiddev device
    This introduces the configuration section for the stated device.

nr-raid-disks count
    Number of raid devices in the array; there should be count raid-disk entries later in the file. (The current maximum for RAID devices, including spares, is 12 disks; this limit has been extended to 256 disks in experimental patches.)

nr-spare-disks count
    Number of spare devices in the array; there should be count spare-disk entries later in the file. Spare disks may only be used with RAID4 and RAID5, and allow the kernel to automatically rebuild onto a new RAID disk as needed. It is also possible to add/remove spares at runtime via raidhotadd/raidhotremove, but care has to be taken that the /etc/raidtab configuration exactly follows the actual configuration of the array (raidhotadd/raidhotremove does not change the configuration file).

persistent-superblock 0/1
    Newly created RAID arrays should use a persistent superblock. A persistent superblock is a small disk area allocated at the end of each RAID device; it helps the kernel safely detect RAID devices even if disks have been moved between SCSI controllers. It can be used for RAID0/LINEAR arrays too, to protect against accidental disk mixups: the kernel will either correctly reorder disks, or will refuse to start up an array if something has happened to any member disk. (For the 'fail-safe' RAID variants, RAID1/RAID5, spares are activated if any disk fails.) Every member disk/partition/device has a superblock, which carries all information necessary to start up the whole array. (For autodetection to work, all the member RAID partitions should be marked type 0xfd via fdisk.) The superblock is not visible in the final RAID array and cannot be destroyed accidentally through usage of the md device files; all RAID data content is available for filesystem use.

parity-algorithm which
    The parity algorithm to use with RAID5. It must be one of left-asymmetric, right-asymmetric, left-symmetric, or right-symmetric. left-symmetric is the one that offers maximum performance on typical disks with rotating platters.

chunk-size size
    Sets the stripe size to size kilobytes. It has to be a power of 2 and has a compile-time maximum of 4M (MAX_CHUNK_SIZE in the kernel driver). Typical values are anything from 4k to 128k; the best value should be determined by experimenting on a given array, as a lot depends on the SCSI and disk configuration.
device devpath
    Adds the device devpath to the list of devices which comprise the raid system. Note that this command must be followed by one of raid-disk, spare-disk, or parity-disk. Also note that it's possible to define RAID arrays recursively, i.e. to set up a RAID5 array of RAID5 arrays (thus achieving two-disk failure protection, at the price of more disk space spent on RAID5 checksum blocks).

raid-disk index
    The most recently defined device is inserted at position index in the raid array.

spare-disk index
    The most recently defined device is inserted at position index in the spare-disk array.

parity-disk index
    The most recently defined device is moved to the end of the raid array, which forces it to be used for parity.

failed-disk index
    The most recently defined device is inserted at position index in the raid array as a failed device. This allows you to create raid 1/4/5 devices in degraded mode, which is useful for installation. Don't use the smallest device in an array for this, and put it after the raid-disk definitions!
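
The sample file above covers RAID0 and RAID5 but not RAID1; tying back to the mirroring discussion at the top of this page, a mirrored-pair section would look something like the following sketch (the md device and partition paths are placeholders, not taken from the manual):

    raiddev /dev/md2
            raid-level              1
            nr-raid-disks           2
            nr-spare-disks          0
            persistent-superblock   1
            chunk-size              8

            device                  /dev/sde1
            raid-disk               0
            device                  /dev/sdf1
            raid-disk               1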
NOTES

The raidtools are derived from the md-tools and raidtools packages, which were originally written by Marc Zyngier, Miguel de Icaza, Gadi Oxman, Bradley Ward Allen, and Ingo Molnar.

SEE ALSO
raidstart(8), raid0run(8), mkraid(8), raidstop(8)