Full Discussion: regarding RAID.....
Operating Systems Solaris regarding RAID..... Post 302255708 by DukeNuke2 on Friday 7th of November 2008 01:44:20 AM
1. I think the default stripe (interlace) size is 128K, but you can choose the value when you create the stripe.

2. You need three metadevices: one for the mirror itself and two for the submirrors.
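In SVM terms, the same idea can be expressed as a metassist volume-request (see the volume-request(4) man page below). This is only a sketch; the disk set name, metadevice names, sizes, and interlace value are made up for illustration:

```xml
<volume-request>
  <!-- hypothetical disk set name -->
  <diskset name="exampleset"/>
  <!-- a mirror (d10) built from two explicitly specified submirror stripes -->
  <mirror name="d10">
    <stripe name="d11" size="10GB" interlace="16KB"/>
    <stripe name="d12" size="10GB" interlace="16KB"/>
  </mirror>
</volume-request>
```

The mirror plus its two submirrors account for the three metadevices mentioned above.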
 

volume-request(4)						   File Formats 						 volume-request(4)

NAME
volume-request, volume-defaults - Solaris Volume Manager configuration information for top-down volume creation with metassist

SYNOPSIS
/usr/share/lib/xml/dtd/volume-request.dtd
/usr/share/lib/xml/dtd/volume-defaults.dtd
/etc/default/metassist.xml

DESCRIPTION
A volume request file, XML-based and compliant with the volume-request.dtd Document Type Definition, describes the characteristics of the volumes that metassist should produce. A system administrator would use the volume request file instead of providing options at the command line, to give more specific instructions about the characteristics of the volumes to create. A volume request file can request more than one volume, but all requested volumes must reside in the same disk set.

If you start metassist by providing a volume-request file as input, metassist can implement the configuration specified in the file, can generate a command file that sets up the configuration for you to inspect or edit, or can generate a volume configuration file for you to inspect or edit.

As a system administrator, you would want to create a volume request file if you need to reuse configurations (and do not want to reenter the same command arguments), or if you prefer to use a configuration file to specify volume characteristics.

Volume request files must be valid XML that complies with the document type definition in the volume-request.dtd file, located at /usr/share/lib/xml/dtd/volume-request.dtd. You create a volume request file and provide it as input to metassist to create volumes from the top down.

Defining Volume Request
The top-level element <volume-request> surrounds the volume request data. This element has no attributes. A volume request requires at least one <diskset> element, which must be the first element after <volume-request>.

Optionally, the <volume-request> element can include one or more <available> and <unavailable> elements to specify which controllers, or disks associated with a specific controller, can or cannot be used to create the volume.

Optionally, the <volume-request> element can include a <hsp> element to specify characteristics of a hot spare pool if fault recovery is used.
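Putting the ordering rules above together, a minimal volume-request skeleton might look like the following. This is a sketch only; the disk set, controller, and pool names are hypothetical:

```xml
<volume-request>
  <diskset name="myset"/>       <!-- required; must be the first element -->
  <available name="c1"/>        <!-- optional availability constraints -->
  <unavailable name="c1t2"/>
  <hsp name="hsp001"/>          <!-- optional hot spare pool -->
  <volume size="20GB"/>         <!-- one or more volumes to create -->
</volume-request>
```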
If not specified for a volume with fault recovery, the first hot spare pool found in the disk set is used. If no hot spare pool exists but one is required, a hot spare pool is created.

Optionally, the volume-request can include one or more <concat>, <stripe>, <mirror>, or <volume> elements to specify volumes to create.

Defining Disk Set
Within the <volume-request> element, a <diskset> element must exist. The <diskset> element, with the name attribute, specifies the name of the disk set to be used. If this disk set does not exist, it is created. This element and the name attribute are required.

Defining Availability
Within the <volume-request> element and within other elements, you can specify available or unavailable components (disks, or disks on a specific controller path) for use in, or exclusion from, a volume or hot spare pool. The <available> and <unavailable> elements require a name attribute, which specifies either a full ctd name, or a partial ctd name that is used with an implied wildcard to complete the expression. For example, specifying c3t2d0 as available would look like:

<available name="/dev/dsk/c3t2d0">

The <available> element also makes any unnamed components unavailable. Specifying all controllers except c1 as unavailable would look like:

<available name="c1">

Specifying all disks on controller 2 as unavailable would look like:

<unavailable name="c2">

The <unavailable> element can also be used to further restrict the list of available components. For example, specifying all controllers except c1 as unavailable, and also making all devices associated with c1t2 unavailable, would look like this:

<available name="c1">
<unavailable name="c1t2">

Components specified as available must be either part of the named disk set used for this volume creation, or unused and not in any disk set. If the components are selected for use but are not in the specified disk set, the metassist command automatically adds them to the disk set.
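Because <available> and <unavailable> can also appear inside other elements, a single volume can carry its own constraints. A sketch, with hypothetical disk set and controller names:

```xml
<volume-request>
  <diskset name="myset"/>
  <!-- only this volume is restricted to c1, minus everything under c1t2 -->
  <volume size="50GB" redundancy="1">
    <available name="c1"/>
    <unavailable name="c1t2"/>
  </volume>
</volume-request>
```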
It is unnecessary to specify components that are in other disk sets as unavailable; metassist automatically excludes them from consideration. However, unused components, or components that are not obviously used (for example, an unmounted slice that is reserved for different uses), must be explicitly specified as unavailable, or the metassist command can include them in the configuration.

Defining Hot Spare Pool
The next element within the <volume-request> element, after the <diskset> and, optionally, the <available> and <unavailable> elements, is the <hsp> element. Its sole attribute specifies the name of the hot spare pool:

<hsp name="hsp001">

Hot spare pool names must start with hsp and end with a number, following the existing Solaris Volume Manager hot spare pool naming requirements.

Within the <hsp> element, you can specify one or more <available> and <unavailable> elements to specify which disks, or disks associated with a specific controller, can or cannot be used to create the hot spares within the pool. Also within the <hsp> element, you can use the <slice> element to specify hot spares to be included in the hot spare pool (see Defining Slice). Depending on the requirements placed on the hot spare pool by other parts of the volume request, additional slices can be added to the hot spare pool.

Defining Slice
The <slice> element is used to define slices to include or exclude within other elements. It requires only a name attribute to specify the ctd name of the slice; the context of the <slice> element determines the function of the element. Sample slice elements might look like:

<slice name="c0t1d0s2" />
<slice name="c0t12938567201lkj29561sllkj381d0s2" />

Defining Stripe
The <stripe> element defines stripes (interlaced RAID 0 volumes) to be used in a volume.
It can contain either <slice> elements (to explicitly determine which slices are used), or appropriate combinations of <available> and <unavailable> elements if the specific determination of slices is to be left to the metassist command.

The <stripe> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected from available Solaris Volume Manager names. If possible, names for related components are related.

The <stripe> element takes an optional size attribute that specifies the size as value and units (for example, 10TB, 5GB). If slices for the <stripe> are explicitly specified, the size attribute is ignored.

The <available> and <unavailable> elements can be used to constrain slices for use in a stripe.

The <stripe> element takes optional mincomp and maxcomp attributes to specify the minimum and maximum number of components that can be included in it. As with size, if slices for the <stripe> are explicitly specified, the mincomp and maxcomp attributes are ignored.

The <stripe> element takes an optional interlace attribute as value and units (for example, 16KB, 5BLOCKS, 20KB). If this value is not specified, the Solaris Volume Manager default value is used.

The <stripe> element takes an optional usehsp attribute to specify whether a hot spare pool should be associated with this component. This attribute is specified as a boolean value, as usehsp="TRUE". If the component is not a submirror, this attribute is ignored.

Defining Concat
The <concat> element defines concats (non-interlaced RAID 0 volumes) to be used in a configuration. It is specified in the same way as a <stripe> element, except that the mincomp, maxcomp, and interlace attributes are not valid.

Defining Mirror
The <mirror> element defines mirrors (RAID 1 volumes) to be used in a volume configuration. It can contain combinations of <concat> and <stripe> elements (to explicitly determine which volumes are used as submirrors).
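A mirror with explicitly specified submirrors, as just described, might be sketched as the following fragment (to sit inside a <volume-request>); the name, sizes, component counts, and interlace are hypothetical. Note that size, mincomp, and maxcomp on a <stripe> would be ignored if its slices were listed explicitly:

```xml
<mirror name="d20">
  <!-- two submirror stripes, each built from 2-4 components, 32KB interlace -->
  <stripe size="20GB" mincomp="2" maxcomp="4" interlace="32KB"/>
  <stripe size="20GB" mincomp="2" maxcomp="4" interlace="32KB"/>
</mirror>
```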
Alternatively, it can have a size attribute specified, along with the appropriate combinations of <available> and <unavailable> elements, to leave the specific determination of components to the metassist command.

The <mirror> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected.

The <mirror> element takes an optional size attribute that specifies the size as value and units (for example, 10TB, 5GB). If <stripe> and <concat> elements for the mirror are not specified, this attribute is required. Otherwise, it is ignored.

The <mirror> element takes an optional nsubmirrors attribute to define the number of submirrors (1-4) to include. Like the size attribute, this attribute is ignored if the underlying <concat> and <stripe> submirrors are explicitly specified.

The <mirror> element takes an optional read attribute to define the mirror read options (ROUNDROBIN, GEOMETRIC, or FIRST) for the mirror. If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional write attribute to define the mirror write options (PARALLEL, SERIAL, or FIRST) for the mirror. If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional usehsp attribute to specify whether a hot spare pool should be associated with each submirror. This attribute is specified as a boolean value, as usehsp="TRUE". If the usehsp attribute is specified on the <stripe> or <concat> element used as a submirror, it overrides the value of the usehsp attribute for the mirror as a whole.

Defining Volume by Quality of Service
The <volume> element defines volumes (high-level) by the quality of service they should provide. (The <volume> element offers the same functionality that options on the metassist command line can provide.)
The <volume> element can contain combinations of <available> and <unavailable> elements to determine which components can be included in the configuration.

The <volume> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected.

The <volume> element takes a required size attribute that specifies the size as value and units (for example, 10TB, 5GB).

The <volume> element takes an optional redundancy attribute to define the number of additional copies of data (1-4) to include. In a worst-case scenario, a volume can suffer failure of n-1 components without data loss, where redundancy=n. With fault recovery options, the volume could withstand up to n+hsps-1 non-concurrent failures without data loss. Specifying redundancy=0 results in a RAID 0 volume (a stripe, specifically) being created.

The <volume> element takes an optional faultrecovery attribute to determine whether additional components should be allocated to recover from component failures in the volume. This attribute determines whether the volume is associated with a hot spare pool. The faultrecovery attribute is a boolean attribute, with a default value of FALSE.

The <volume> element takes an optional datapaths attribute to determine whether multiple data paths should be required to access the volume. The datapaths attribute should be set to a numeric value.

Defining Default Values Globally
Global defaults can be set in /etc/default/metassist.xml. This volume-defaults file can contain most of the same elements as a volume-request file, but differs structurally from a volume-request file:

o The container element must be <volume-defaults>, not <volume-request>.

o The <volume-defaults> element can contain <available>, <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume> elements. Attributes specified by these elements define global default values, unless overridden by the corresponding attributes and elements in a volume-request.
None of these elements is a container element.

o The <volume-defaults> element can contain one or more <diskset> elements to provide disk set-specific defaults. The <diskset> element can contain <available>, <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume> elements.

o Settings specified outside of a <diskset> element apply to all disk sets, but can be overridden within each <diskset> element.

EXAMPLES
Example 1 Creating a Redundant Volume

The following example shows a volume request file used to create a redundant, fault-tolerant volume of 1TB.

<volume-request>
    <diskset name="sparestorage"/>
    <volume size="1TB" redundancy="2" faultrecovery="TRUE">
        <available name="c2" />
        <available name="c3" />
        <unavailable name="c2t2d0" />
    </volume>
</volume-request>

Example 2 Creating a Complex Configuration

The following example shows a sample volume-request file that specifies a disk set name, and specifically itemizes characteristics of components to create.

<volume-request>
    <!-- Specify the disk set to use -->
    <diskset name="mailspool"/>

    <!-- Generally available devices -->
    <available name="c0"/>

    <!-- Create a 3-way mirror with redundant datapaths and HSPs via QoS -->
    <volume size="10GB" redundancy="3" datapaths="2" faultrecovery="TRUE"/>

    <!-- Create a 1-way mirror with a HSP via QoS -->
    <volume size="10GB" faultrecovery="TRUE"/>

    <!-- Create a stripe via QoS -->
    <volume size="100GB"/>
</volume-request>

BOUNDARY VALUES
Attribute      Minimum    Maximum
mincomp        1          N/A
maxcomp        N/A        32
nsubmirrors    1          4
passnum        0          9
datapaths      1          4
redundancy     0          4

FILES
/usr/share/lib/xml/dtd/volume-request.dtd
/usr/share/lib/xml/dtd/volume-defaults.dtd
/etc/default/metassist.xml

SEE ALSO
metassist(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metareplace(1M), metaroot(1M), metaset(1M), metasync(1M), metattach(1M), mount_ufs(1M), mddb.cf(4)

Solaris Volume Manager Administration Guide

SunOS 5.11    27 Apr 2005    volume-request(4)