Operating Systems > Solaris
Thoughts/experiences of SAN attaching V880 to EMC SAN
Post 302108264 by reborg, Sunday 25th of February 2007, 06:24 PM
Irrespective of the storage used, it is usually much easier if you use host-based mirroring, as this allows you to do backups from the detached half of a RAID 1 mirror and to apply the correct logic for making sure the backup is clean, for example quiescing the database while the mirror is being detached.
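
For example, the detach/backup/reattach cycle with Solaris Volume Manager might look roughly like the sketch below. The metadevice names (d10 with submirrors d11 and d12), the mount point and the backup path are all just assumptions for illustration, not something from your setup:

    #!/bin/sh
    # Minimal sketch, not a drop-in script: assumes an SVM mirror d10
    # built from submirrors d11 and d12, mounted on /data. All names
    # here are invented for illustration.

    # 1. Quiesce the application here so the split is consistent
    #    (e.g. put an Oracle database into hot-backup mode).

    # 2. Flush pending filesystem transactions, then detach one
    #    submirror; d10 stays online on the remaining half.
    lockfs -f /data
    metadetach d10 d12

    # 3. Resume normal database operation as soon as the split is done.

    # 4. Back up the detached half from the raw metadevice at leisure.
    ufsdump 0f /backup/data-`date +%Y%m%d`.dump /dev/md/rdsk/d12

    # 5. Reattach; SVM resyncs d12 from the live half in the background.
    metattach d10 d12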

RAID 5 in hardware would protect you from the failure of a disk during the time the mirror is detached. Personally, I find fibre-attached storage much easier to work with than SCSI because there is no need, for example, to configure multiple initiators and things like that; it is also much easier to set up a multipathed configuration.
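
On Solaris 10, for instance, enabling multipathing over the fibre HBAs is largely a one-command job; the sketch below assumes Sun-branded HBAs that MPxIO supports, so treat it as illustrative rather than a recipe for your exact hardware:

    # Enable MPxIO on all supported FC controller ports; stmsboot
    # rewrites the device names in /etc/vfstab and prompts for the
    # reboot needed to apply the change.
    stmsboot -e

    # After the reboot, confirm that each LUN reports multiple paths.
    mpathadm list lu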

You could consider doing a mixture of the two, something like Sun StorEdge 3511 JBODs with SATA disks, and attach them using fibre HBAs. You wouldn't have the hardware RAID 5, but you would have the extra low-cost storage. This would also offer you the possibility of moving onto the SAN at a later stage with much less work.
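
For the host-based mirroring side of that, building a mirror across two of those JBOD disks with Solaris Volume Manager would look something like this (disk targets and metadevice names invented for the example):

    # One-way stripes on a slice of each JBOD disk.
    metainit d11 1 1 c2t0d0s0
    metainit d12 1 1 c3t0d0s0

    # Create the mirror on the first submirror, then attach the second;
    # SVM syncs d12 from d11 in the background.
    metainit d10 -m d11
    metattach d10 d12

    # Put a filesystem on the mirror and mount it.
    newfs /dev/md/rdsk/d10
    mount /dev/md/dsk/d10 /data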

Last edited by reborg; 02-26-2007 at 10:29 PM.
 
