how to mirror raid5
Operating Systems: AIX
Post 302163536 by bakunin on Friday, 1st of February 2008, 09:31:37 AM
Quote:
Originally Posted by itik
Hi,

I have an SSA filesystem that I want to move to a SAN. We don't want any downtime. I heard that you can mirror an existing file system onto the SAN. The file system sits on either RAID 0, RAID 1, or RAID 5.

Anyone know how to do this?

Thanks in advance,
itik
You don't have to resort to hardware (disk-level) means to employ mirroring, as the LVM will do it for you. Yes, you can create/remove mirrors without even unmounting the FS:

- create a normal RAID set, create your normal (unmirrored) LV there, create an FS and mount it. Start using it.....

- create a second RAID set, connect it to the machine somehow.

- run the configuration manager to add the disks to the configuration (-v is "verbose mode", you don't need it):

# cfgmgr -v

- add the disk(s) to the volume group where you have created the logical volumes. If you are unsure you could use smitty instead of the command below:

# extendvg <volumegroupname> <physical volume>

- mirror the LV in question using the "mklvcopy" command ("-s s" selects the superstrict allocation policy, so the partitions of one copy cannot share a physical volume with the partitions of another copy). Again, you can use smitty instead of issuing the command below directly.

# mklvcopy -s s <LV name> <Nr of copies> <physical volume>

Alternatively, instead of cycling through all LVs of a VG, you can use the command "mirrorvg" to create mirrors for all LVs in a volume group at once. In fact, mirrorvg is just a wrapper script around mklvcopy that creates the mirrors for every LV in the VG.
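For instance, a minimal sketch, assuming (as in the example further down) that the VG is "myvg" and the newly added disk is "hdisk15"; the -S option tells mirrorvg to start the synchronization of the new copies in the background:

Code:
# mirrorvg -S myvg hdisk15
# lsvg -l myvg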

WARNING: if you create a mirror for the rootvg you will have to change the bootlist accordingly, create a boot record on the new disk and adjust the "quorum" of the VG. See "man mirrorvg" for details. There are also examples on how to replace bad disks (unmirror, then remirror the VG).
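For reference, mirroring the rootvg onto a second disk (here assumed to be hdisk1, with hdisk0 the original boot disk) typically looks something like the following sketch; check "man mirrorvg", "man bosboot" and "man bootlist" before running it on your own system:

Code:
# mirrorvg rootvg hdisk1
# bosboot -ad /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1

Note that mirrorvg disables the quorum on the VG by default (use its -Q option to keep it), and a quorum change only takes effect at the next varyon, which for the rootvg means the next reboot.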

EXAMPLE: suppose you add an additional RAID device as "hdisk15" to the volume group "myvg" and want to mirror the LV "mylv":

Connect the disks to the system, use the RAID adapter utilities ("diag" utility, "smitty devices") to configure the RAID set itself. Then run cfgmgr:

Code:
# cfgmgr -v
# lspv
hdisk0          000bf05d94f0e1a8                    rootvg          active
hdisk1          000bf05d94f0e27d                    rootvg          active

...

hdisk14         000bf05d981228ff                    myvg            active
hdisk15         000bf05d95422cb2                    None            
# extendvg myvg hdisk15
# lspv
hdisk0          000bf05d94f0e1a8                    rootvg          active
hdisk1          000bf05d94f0e27d                    rootvg          active

...

hdisk14         000bf05d981228ff                    myvg            active
hdisk15         000bf05d95422cb2                    myvg            active
# lsvg -l myvg
myvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
some_lv             jfs2       134   134   1    open/syncd    /somewhere
mylv                jfs2       25    25    1    open/syncd    /somewhere/else
# mklvcopy -s s mylv 2 hdisk15
# lsvg -l myvg
myvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
some_lv             jfs2       134   134   1    open/syncd    /somewhere
mylv                jfs2       25    50    2    open/syncd    /somewhere/else

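One thing to keep in mind: depending on the options used, the new copies may at first be marked "stale" instead of "syncd". If lsvg/lslv report stale partitions, a sketch like this should bring them in sync:

Code:
# syncvg -l mylv
# lsvg -l myvg

And since your goal is to move off the old (SSA) disks entirely: once the new copy is in sync, the old copy can be removed with "rmlvcopy" and the old disk taken out of the VG with "reducevg". See the respective man pages before doing so.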
I hope this helps.

bakunin
 
