Full Discussion: disk mirroring
Post 302276455 by webster5u on Tuesday 13th of January 2009 08:36:56 PM
I am also studying mirroring right now.

The Sun Fire T2000 supports a maximum of two RAID volumes. Can we assign two disks to each RAID volume, i.e. four disks across the two volumes?

If you want to use SVM, you need to create the state database replicas, then the submirrors and the mirror. You can get this information from the Solaris Volume Manager Administration Guide, available on Sun's documentation website.
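Roughly, that sequence looks like the sketch below. This is only an illustration: the disk names (c0t0d0, c0t1d0), the replica slice (s7) and the metadevice numbers (d10, d11, d12) are examples, not taken from this thread.

    # State database replicas, two copies on a small dedicated slice of each disk
    metadb -a -f -c 2 c0t0d0s7 c0t1d0s7

    # Submirrors for the root slice (-f is needed because the slice is mounted)
    metainit -f d11 1 1 c0t0d0s0
    metainit d12 1 1 c0t1d0s0

    # One-way mirror on the first submirror, then point the system at it
    metainit d10 -m d11
    metaroot d10

    # After rebooting, attach the second submirror to start the resync
    metattach d10 d12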
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

what is disk mirroring in unix?

Can anyone explain what disk mirroring is in Unix? It may be related to Unix online backup. (2 Replies)
Discussion started by: asutoshch

2. Filesystems, Disks and Memory

Disk Mirroring?? have any idea???

After reading some books, I came across the idea of keeping a duplicate of your current hard drive on a second hard drive, so that if the first drive crashes the system can be back up and running quickly. Is there anybody here who uses this method at work? If there is,... (3 Replies)
Discussion started by: IMPORTANT

3. UNIX for Dummies Questions & Answers

Disk mirroring under RedHat 8

I would like to build a new box that has its disk mirrored to another IDE disk on a different channel. Does anyone know if a RAID controller like the Promise is supported under Red Hat 8, or should I use software RAID? (1 Reply)
Discussion started by: 98_1LE

4. Shell Programming and Scripting

Disk mirroring

Good Morning :) I have a new challenge to solve: I am going to write a new backup disk-mirroring script. The current one, which uses 'dd', caused some stalled systems :( Currently I am experimenting with different methods; I was thinking about dump/restore, afio/cpio or... (3 Replies)
Discussion started by: malcom

5. Solaris

Disk mirroring

Hi, I have two raw disks that I want to mirror and then create soft partitions on. Could someone please help with the steps required? c0t1d0 c0t0d0 Thanks, Ajwat (2 Replies)
Discussion started by: Ajwat

6. Solaris

Help !! disk Mirroring

Hi, I have a Sun Fire X4100 box with a 4-disk chassis (although I only have 2 disks in it). I have been asked to add two more disks to the chassis so that I can mirror the original two using SVM. I've read through a couple of SVM docs but am finding it a little confusing, and if any of you... (1 Reply)
Discussion started by: hcclnoodles

7. UNIX for Advanced & Expert Users

Mirroring Disk Geometry

How can one mirror disk geometry from one hard disk to another in Solaris? Is a disk snapshot the same as a mirror? Please explain. (3 Replies)
Discussion started by: lexusujx

8. Solaris

Solaris 10 Disk Mirroring

Has anyone managed to set up disk mirroring in Solaris 10 yet? If so, can you point me in the direction of some useful documentation, please? Cheers (25 Replies)
Discussion started by: korfnz

9. Solaris

Disk Mirroring on solaris 5.8

Hi friends, I have Sun Solaris 5.8 installed on a machine with two different-sized hard disks: c0t0d0s0 (160 GB) and c0t2d0s0 (40 GB). The OS is installed on the 160 GB disk (c0t0d0s0). I have configured all the parameters required for disk mirroring, but when executing... (4 Replies)
Discussion started by: Vijayakumarpc

10. Red Hat

Disk Mirroring

Hi, how can I identify whether a disk is mirrored or not in RHEL? (2 Replies)
Discussion started by: gsiva
GRAID(8)						    BSD System Manager's Manual 						  GRAID(8)

NAME
graid -- control utility for software RAID devices

SYNOPSIS
graid label [-f] [-o fmtopt] [-S size] [-s strip] format label level prov ...
graid add [-f] [-S size] [-s strip] name label level
graid delete [-f] name [label | num]
graid insert name prov ...
graid remove name prov ...
graid fail name prov ...
graid stop [-fv] name ...
graid list
graid status
graid load
graid unload

DESCRIPTION
The graid utility is used to manage software RAID configurations supported by the GEOM RAID class. The GEOM RAID class uses on-disk metadata to provide access to software-RAID volumes defined by different RAID BIOSes. Depending on the RAID BIOS type and its metadata format, different subsets of configurations and features are supported. To allow booting from a RAID volume, the metadata format should match the RAID BIOS type and its capabilities. To guarantee that these match, it is recommended to create volumes via the RAID BIOS interface, while experienced users are free to do it using this utility.

The first argument to graid indicates an action to be performed:

label    Create an array with a single volume. The format argument specifies the on-disk metadata format to use for this array, such as "Intel". The label argument specifies the label of the created volume. The level argument specifies the RAID level of the created volume, such as "RAID0", "RAID1", etc. The subsequent list enumerates providers to use as array components. The special name "NONE" can be used to reserve space for absent disks. The order of components can be important, depending on the specific RAID level and metadata format.

         Additional options include:

         -f         Enforce specified configuration creation if it is officially unsupported, but technically can be created.
         -o fmtopt  Specifies metadata format options.
         -S size    Use size bytes on each component for this volume. Should be used if several volumes per array are planned, or if smaller components are going to be inserted later. Defaults to the size of the smallest component.
         -s strip   Specifies strip size in bytes. Defaults to 131072.

add      Create another volume on the existing array. The name argument is the name of the existing array, as reported by the label command. The rest of the arguments are the same as for the label command.

delete   Delete volume(s) from the existing array. When the last volume is deleted, the array is also deleted and its metadata erased. The name argument is the name of the existing array. The optional label or num argument specifies the volume to delete.

         Additional options include:

         -f         Delete volume(s) even if they are still open.

insert   Insert the specified provider(s) into the specified array in place of the first missing or failed components. If there are no such components, mark the disk(s) as spare.

remove   Remove the specified provider(s) from the specified array and erase their metadata. If there are spare disks present, the removed disk(s) will be replaced by spares.

fail     Mark the given disk(s) as failed, removing them from active use unless absolutely necessary due to exhausted redundancy. If there are spare disks present, failed disk(s) will be replaced with one of them.

stop     Stop the given array. The metadata will not be erased.

         Additional options include:

         -f         Stop the given array even if some of its volumes are open.

list     See geom(8).

status   See geom(8).

load     See geom(8).

unload   See geom(8).

Additional options include:

-v       Be more verbose.
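As an illustration (this example is not part of the original manual page; the device names ada0 and ada1 and the volume label "data" are placeholders), creating and checking a two-disk mirror using the Intel metadata format could look like:

    graid load                               # load the GEOM RAID class if not already loaded
    graid label Intel data RAID1 ada0 ada1   # format, volume label, RAID level, providers
    graid status                             # verify the new volume and its state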
SUPPORTED METADATA FORMATS
The GEOM RAID class follows a modular design, allowing different metadata formats to be used. Support is currently implemented for the following formats:

DDF      The format defined by the SNIA Common RAID Disk Data Format v2.0 specification. Used by some Adaptec RAID BIOSes and some hardware RAID controllers. Because of the format's high flexibility, different implementations support different sets of features and have different on-disk metadata layouts. To provide compatibility, the GEOM RAID class mimics the capabilities of the first detected DDF array. Respecting that, it may support a different number of disks per volume, volumes per array, partitions per disk, etc. The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks), RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks), RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks), RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks). The format supports two options, "BE" and "LE", meaning the big-endian byte order defined by the specification (default) and the little-endian order used by some Adaptec controllers.

Intel    The format used by Intel RAID BIOS. Supports up to two volumes per array. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks). Configurations not supported by Intel RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).

JMicron  The format used by JMicron RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID10 (4 disks), CONCAT (2+ disks). Configurations not supported by JMicron RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).

NVIDIA   The format used by NVIDIA MediaShield RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by NVIDIA MediaShield RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks).

Promise  The format used by Promise and AMD/ATI RAID BIOSes. Supports multiple volumes per array. Each disk can be split to be used by up to two arbitrary volumes. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by the RAID BIOSes, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+ disks).

SiI      The format used by SiliconImage RAID BIOS. Supports one volume per array. Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks). Configurations not supported by SiliconImage RAID BIOS, but enforceable at your own risk: RAID1 (3+ disks), RAID10 (6+ disks).
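For example, the list above marks a three-disk RAID1 as unsupported by the Intel RAID BIOS but enforceable at your own risk; creating such a volume therefore needs the -f flag of the label command (the device names and label are placeholders):

    graid label -f Intel data RAID1 ada0 ada1 ada2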
SUPPORTED RAID LEVELS
The GEOM RAID class follows a modular design, allowing different RAID levels to be used. Full support for the following RAID levels is currently implemented: RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT. The following RAID levels are supported as read-only for volumes in optimal state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF.

RAID LEVEL MIGRATION
The GEOM RAID class has no support for RAID level migration, as allowed by some metadata formats. If you have started a migration using the BIOS or in some other way, make sure to complete it there. Do not run the GEOM RAID class on migrating volumes under pain of possible data corruption!

2TiB BARRIERS
The NVIDIA metadata format does not support volumes above 2TiB.

SYSCTL VARIABLES
The following sysctl(8) variables can be used to control the behavior of the RAID GEOM class.

kern.geom.raid.aggressive_spare: 0
        Use any disks without metadata, connected to controllers of the vendor matching the volume metadata format, as spares. Use with much care to not lose data if connecting an unrelated disk!

kern.geom.raid.clean_time: 5
        Mark a volume as clean when it is idle for the specified number of seconds.

kern.geom.raid.debug: 0
        Debug level of the RAID GEOM class.

kern.geom.raid.enable: 1
        Enable on-disk metadata taste.

kern.geom.raid.idle_threshold: 1000000
        Time in microseconds to consider a volume idle for rebuild purposes.

kern.geom.raid.name_format: 0
        Provider name format: 0 -- raid/r{num}, 1 -- raid/{label}.

kern.geom.raid.read_err_thresh: 10
        Number of read errors equated to disk failure. Write errors are always considered as disk failures.

kern.geom.raid.start_timeout: 30
        Time to wait for missing array components on startup.

kern.geom.raid.X.enable: 1
        Enable taste for a specific metadata or transformation module.

kern.geom.raid.legacy_aliases: 0
        Enable GEOM RAID emulation of legacy /dev/ar%d devices. This should aid the upgrade of systems from legacy to modern releases.
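For instance (an illustrative command, not from the manual page itself), switching provider naming to the raid/{label} form at runtime:

    sysctl kern.geom.raid.name_format=1

If the variable is not writable at runtime on a given release, the same setting can be made persistent as a tunable in /boot/loader.conf.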
EXIT STATUS
Exit status is 0 on success, and non-zero if the command fails.

SEE ALSO
geom(4), geom(8), gvinum(8)

HISTORY
The graid utility appeared in FreeBSD 9.0.

AUTHORS
Alexander Motin <mav@FreeBSD.org>
M. Warner Losh <imp@FreeBSD.org>

BSD                                 April 4, 2013                                 BSD