Operating Systems > Linux > Raid0 recovery from external HD
Post 302385020 by pludi on Thursday, 7 January 2010, 01:18 AM
For this to work you'd need to know the parameters the hardware controller used, or at least the stripe size. If you can get that, it should be possible to use dd to create complete images of the disks and start reconstructing your data from those.
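Something like this, as an untested sketch: it assumes two member disks and a 64 KiB stripe size, and every device name, path, chunk size, and disk order below is a placeholder you'd have to determine yourself.

dd if=/dev/sdb of=/mnt/space/disk0.img bs=1M conv=noerror,sync   # image each member disk first;
dd if=/dev/sdc of=/mnt/space/disk1.img bs=1M conv=noerror,sync   # never work on the originals

# Expose the images as loop block devices.
losetup /dev/loop0 /mnt/space/disk0.img
losetup /dev/loop1 /mnt/space/disk1.img

# Rebuild the stripe in software. --build assembles an array without
# md superblocks, so nothing is written to the images. The chunk size
# and device order must match what the controller used; if the
# filesystem won't mount, retry with other combinations.
mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/loop0 /dev/loop1
mount -o ro /dev/md0 /mnt/recovered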
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Password recovery

We recently terminated a developer at my place of employment who created scripts on a Windows server (that I do not have access to) that invoke FTP sessions on my UnixWare 7.1.1 servers. I need to know the password that is being used. Does anyone know of a good password cracker? (8 Replies)
Discussion started by: rm -r *

2. Solaris

Solaris RAID0 doubt...

Friends, suppose I type the metastat command and it shows:

d100: Concat/Stripe
    Size: 369495 blocks (180 MB)
    Stripe 0: (interlace: 32 blocks)
        Device    Start Block    Dbase    Reloc
        c1d0s0    16065          Yes      Yes
        c1d0s1    0              No       Yes... (4 Replies)
Discussion started by: saagar

3. Solaris

Solaris recovery

Something happened to our Solaris 10 (SPARC) box and it is not coming up now. These are some of the console messages: I assume it is not able to find very basic system libraries, so I need to tell it somehow to find them under /lib:/usr/lib. I booted it from the CD but now I... (4 Replies)
Discussion started by: rajwinder

4. UNIX for Dummies Questions & Answers

Why is RAID0 faster?

I have read anecdotes about people setting up RAID 0 (see the Wikipedia article on RAID) on some of their machines because it gives a performance boost. Given that bandwidth on the motherboard is limited, can someone explain exactly why it should be faster? (7 Replies)
Discussion started by: figaro

5. UNIX for Advanced & Expert Users

live upgrade with raid0 soft partitions

Hi, I have this mirrored system with soft partitions. I'm having difficulty determining the lucreate command in this environment.

# metastat -p
d0 -m d10 d20 1
d10 1 1 c1t2d0s0
d20 1 1 c1t3d0s0
d1 -m d11 d21 1
d11 1 1 c1t2d0s5
d21 1 1 c1t3d0s5
d100 -p d1 -o 58720384 -b 8388608
d200 -p d1 -o... (1 Reply)
Discussion started by: chaandana

6. Hardware

HP9000 Server - Stuck on RAID0

Hey all, I've got an old HP9000 L1000 server with HP-UX installed. The drives the OS is running on are in RAID0, and I am concerned about the reliability of the server. The four hard drives in the front of the server are LVD 18.2GB drives. I know that with RAID0, if one drive fails, they all fail. (2 Replies)
Discussion started by: mroselli

7. Solaris

Cloning RAID0 drives, Solaris 10u11

Hello all, this is my first time posting here. Where I work we have multiple servers (X3-2s) running Solaris 10u11 with 2 drives configured as RAID0, 300GB each. There are 4-6 open slots for drives to clone to. Past attempts to clone/back up these drives have failed. One of the machines is... (1 Reply)
Discussion started by: eprlsguy

8. Gentoo

Data recovery of formatted external HDD

I accidentally formatted an ext3 external hard disk. I'm using the EaseUS tool on a Windows system to recover the data. Will this work? If yes, which file system should the other external hard disk be formatted with? Is there any other option? Please help me out. (1 Reply)
Discussion started by: rajeshz

9. Solaris

Solaris 11 recovery

Hi, I need to recover the Solaris 11 OS, and it was backed up via NetBackup 7.6 file-level backup only. Does anyone know the steps to recover it? Thanks. (3 Replies)
Discussion started by: freshmeat

10. UNIX for Dummies Questions & Answers

Raid0 array stresses only 1 disk out of 3

Hi there, I've set up a RAID 0 array of 3 identical disks using:

mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

I'm using dstat to monitor the disk activity:

dstat --epoch -D sdb,sdc,sdd --disk-util 30

The results show that the stress is not... (8 Replies)
Discussion started by: chebarbudo
volume-request(4)

NAME
volume-request, volume-defaults - Solaris Volume Manager configuration information for top down volume creation with metassist

SYNOPSIS
/usr/share/lib/xml/dtd/volume-request.dtd
/usr/share/lib/xml/dtd/volume-defaults.dtd
/etc/default/metassist.xml

DESCRIPTION
A volume request file, XML-based and compliant with the volume-request.dtd Document Type Definition, describes the characteristics of the volumes that metassist should produce. A system administrator would use a volume request file instead of providing options at the command line to give more specific instructions about the characteristics of the volumes to create. A volume request file can request more than one volume, but all requested volumes must reside in the same disk set.

If you start metassist by providing a volume-request file as input, metassist can implement the configuration specified in the file, can generate a command file that sets up the configuration for you to inspect or edit, or can generate a volume configuration file for you to inspect or edit.

As a system administrator, you would want to create a volume request file if you need to reuse configurations (and do not want to reenter the same command arguments), or if you prefer to use a configuration file to specify volume characteristics.

Volume request files must be valid XML that complies with the document type definition in the volume-request.dtd file, located at /usr/share/lib/xml/dtd/volume-request.dtd. You create a volume request file and provide it as input to metassist to create volumes from the top down.

Defining Volume Request
The top-level element <volume-request> surrounds the volume request data. This element has no attributes. A volume request requires at least one <diskset> element, which must be the first element after <volume-request>.

Optionally, the <volume-request> element can include one or more <available> and <unavailable> elements to specify which controllers, or disks associated with a specific controller, can or cannot be used to create the volume.

Optionally, the <volume-request> element can include a <hsp> element to specify the characteristics of a hot spare pool if fault recovery is used. If one is not specified for a volume with fault recovery, the first hot spare pool found in the disk set is used. If no hot spare pool exists but one is required, a hot spare pool is created.

Optionally, the volume-request can include one or more <concat>, <stripe>, <mirror>, or <volume> elements to specify the volumes to create.

Defining Disk Set
Within the <volume-request> element, a <diskset> element must exist. The <diskset> element, with its name attribute, specifies the name of the disk set to be used. If this disk set does not exist, it is created. This element and the name attribute are required.

Defining Availability
Within the <volume-request> element and within other elements, you can specify available or unavailable components (disks, or disks on a specific controller path) for use in, or exclusion from, a volume or hot spare pool. The <available> and <unavailable> elements require a name attribute, which specifies either a full ctd name or a partial ctd name that is used with an implied wildcard to complete the expression. For example, specifying c3t2d0 as available would look like:

    <available name="/dev/dsk/c3t2d0">

The <available> element also makes any unnamed components unavailable. Specifying all controllers except c1 as unavailable would look like:

    <available name="c1">

Specifying all disks on controller 2 as unavailable would look like:

    <unavailable name="c2">

The <unavailable> element can also be used to further restrict the list of available components. For example, specifying all controllers except c1 as unavailable, and making all devices associated with c1t2 unavailable as well, would look like this:

    <available name="c1">
    <unavailable name="c1t2">

Components specified as available must either be part of the named disk set used for this volume creation, or be unused and not in any disk set. If components are selected for use but are not in the specified disk set, the metassist command automatically adds them to the disk set. It is unnecessary to specify components that are in other disk sets as unavailable; metassist automatically excludes them from consideration. However, unused components, or components that are not obviously used (for example, an unmounted slice that is reserved for different uses), must be explicitly specified as unavailable, or the metassist command can include them in the configuration.
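As a concrete sketch of the request structure and availability constraints described so far, the following could be run from a shell; the disk set name, volume size, and file path are hypothetical, and the invocation assumes the -F option of metassist(1M):

cat > /var/tmp/request.xml <<'EOF'
<volume-request>
    <!-- required disk set, first element in the request -->
    <diskset name="demoset"/>
    <!-- use only controller c1, but never the disks under c1t2 -->
    <available name="c1"/>
    <unavailable name="c1t2"/>
    <!-- a 50GB volume, characteristics left to metassist -->
    <volume size="50GB"/>
</volume-request>
EOF

# Ask metassist to implement the requested configuration.
metassist create -F /var/tmp/request.xml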
Defining Hot Spare Pool
The next element within the <volume-request> element, after the <diskset> and, optionally, the <available> and <unavailable> elements, is the <hsp> element. Its sole attribute specifies the name of the hot spare pool:

    <hsp name="hsp001">

Hot spare pool names must start with hsp and end with a number, following the existing Solaris Volume Manager hot spare pool naming requirements.

Within the <hsp> element, you can specify one or more <available> and <unavailable> elements to specify which disks, or disks associated with a specific controller, can or cannot be used to create the hot spares within the pool. Also within the <hsp> element, you can use the <slice> element to specify hot spares to be included in the hot spare pool (see Defining Slice). Depending on the requirements placed on the hot spare pool by other parts of the volume request, additional slices can be added to the hot spare pool.

Defining Slice
The <slice> element is used to define slices to include or exclude within other elements. It requires only a name attribute to specify the ctd name of the slice; the context of the <slice> element determines its function. Sample slice elements might look like:

    <slice name="c0t1d0s2" />
    <slice name="c0t12938567201lkj29561sllkj381d0s2" />

Defining Stripe
The <stripe> element defines stripes (interlaced RAID 0 volumes) to be used in a volume. It can contain either <slice> elements (to explicitly determine which slices are used) or appropriate combinations of <available> and <unavailable> elements if the specific determination of slices is to be left to the metassist command.

The <stripe> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected from the available Solaris Volume Manager names. Where possible, names for related components are related.

The <stripe> element takes an optional size attribute that specifies the size as a value and units (for example, 10TB, 5GB). If slices for the <stripe> are explicitly specified, the size attribute is ignored.

The <available> and <unavailable> elements can be used to constrain the slices used in a stripe.

The <stripe> element takes optional mincomp and maxcomp attributes to specify the minimum and maximum number of components that can be included in it. As with size, if slices for the <stripe> are explicitly specified, the mincomp and maxcomp attributes are ignored.

The <stripe> element takes an optional interlace attribute as a value and units (for example, 16KB, 5BLOCKS, 20KB). If this value is not specified, the Solaris Volume Manager default value is used.

The <stripe> element takes an optional usehsp attribute to specify whether a hot spare pool should be associated with this component. This attribute is specified as a boolean value, as in usehsp="TRUE". If the component is not a submirror, this attribute is ignored.
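To make the stripe attributes concrete, a hypothetical request that lets metassist choose the slices but pins down the stripe geometry might look like the following; the disk set name, stripe name, size, and path are placeholders:

cat > /var/tmp/stripe-request.xml <<'EOF'
<volume-request>
    <diskset name="demoset"/>
    <!-- 20GB stripe of 2 to 4 components with a 32KB interlace -->
    <stripe name="d10" size="20GB" mincomp="2" maxcomp="4" interlace="32KB"/>
</volume-request>
EOF

metassist create -F /var/tmp/stripe-request.xml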
Defining Concat
The <concat> element defines concats (non-interlaced RAID 0 volumes) to be used in a configuration. It is specified in the same way as a <stripe> element, except that the mincomp, maxcomp, and interlace attributes are not valid.

Defining Mirror
The <mirror> element defines mirrors (RAID 1 volumes) to be used in a volume configuration. It can contain combinations of <concat> and <stripe> elements (to explicitly determine which volumes are used as submirrors). Alternatively, it can have a size attribute specified, along with appropriate combinations of <available> and <unavailable> elements, to leave the specific determination of components to the metassist command.

The <mirror> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected.

The <mirror> element takes an optional size attribute that specifies the size as a value and units (for example, 10TB, 5GB). If <stripe> and <concat> elements for the mirror are not specified, this attribute is required; otherwise, it is ignored.

The <mirror> element takes an optional nsubmirrors attribute to define the number of submirrors (1-4) to include. Like the size attribute, this attribute is ignored if the underlying <concat> and <stripe> submirrors are explicitly specified.

The <mirror> element takes an optional read attribute to define the mirror read option (ROUNDROBIN, GEOMETRIC, or FIRST). If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional write attribute to define the mirror write option (PARALLEL, SERIAL, or FIRST). If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional usehsp attribute to specify whether a hot spare pool should be associated with each submirror. This attribute is specified as a boolean value, as in usehsp="TRUE". If the usehsp attribute is specified on a <stripe> or <concat> element used as a submirror, it overrides the value of the usehsp attribute for the mirror as a whole.

Defining Volume by Quality of Service
The <volume> element defines volumes (high-level) by the quality of service they should provide. (The <volume> element offers the same functionality that options on the metassist command line can provide.) The <volume> element can contain combinations of <available> and <unavailable> elements to determine which components can be included in the configuration.

The <volume> element takes an optional name attribute to specify a name. If the name is not specified, an available name is automatically selected.

The <volume> element takes a required size attribute that specifies the size as a value and units (for example, 10TB, 5GB).

The <volume> element takes an optional redundancy attribute to define the number of additional copies of data (1-4) to include. In a worst-case scenario, a volume with redundancy=n can suffer the failure of n-1 components without data loss. With fault recovery options, the volume could withstand up to n+hsps-1 non-concurrent failures without data loss. Specifying redundancy=0 results in a RAID 0 volume (specifically, a stripe) being created.

The <volume> element takes an optional faultrecovery attribute to determine whether additional components should be allocated to recover from component failures in the volume. This attribute is used to determine whether the volume is associated with a hot spare pool. The faultrecovery attribute is a boolean attribute, with a default value of FALSE.

The <volume> element takes an optional datapaths attribute to determine whether multiple data paths should be required to access the volume. The datapaths attribute should be set to a numeric value.
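For example, a hypothetical request that builds a two-way mirror from explicitly specified submirror stripes (so the size and nsubmirrors attributes would be ignored, as noted above) might look like this; all names are placeholders:

cat > /var/tmp/mirror-request.xml <<'EOF'
<volume-request>
    <diskset name="demoset"/>
    <mirror name="d0" read="ROUNDROBIN" write="PARALLEL">
        <!-- each stripe below becomes one submirror -->
        <stripe>
            <slice name="c1t1d0s0"/>
        </stripe>
        <stripe>
            <slice name="c1t2d0s0"/>
        </stripe>
    </mirror>
</volume-request>
EOF

metassist create -F /var/tmp/mirror-request.xml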
Defining Default Values Globally
Global defaults can be set in /etc/default/metassist.xml. This volume-defaults file can contain most of the same elements as a volume-request file, but differs structurally from a volume-request file:

o   The container element must be <volume-defaults>, not <volume-request>.

o   The <volume-defaults> element can contain <available>, <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume> elements. Attributes specified by these elements define global default values, unless overridden by the corresponding attributes and elements in a volume-request. None of these elements is a container element.

o   The <volume-defaults> element can contain one or more <diskset> elements to provide disk set-specific defaults. The <diskset> element can contain <available>, <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume> elements.

o   Settings specified outside of a <diskset> element apply to all disk sets, but can be overridden within each <diskset> element.

EXAMPLES
Example 1: Creating a Redundant Volume

The following example shows a volume request file used to create a redundant, fault-tolerant volume of 1TB.

    <volume-request>
        <diskset name="sparestorage"/>
        <volume size="1TB" redundancy="2" faultrecovery="TRUE">
            <available name="c2" />
            <available name="c3" />
            <unavailable name="c2t2d0" />
        </volume>
    </volume-request>

Example 2: Creating a Complex Configuration

The following example shows a sample volume-request file that specifies a disk set name and specifically itemizes the characteristics of the components to create.

    <volume-request>
        <!-- Specify the disk set to use -->
        <diskset name="mailspool"/>

        <!-- Generally available devices -->
        <available name="c0"/>

        <!-- Create a 3-way mirror with redundant datapaths
             and HSPs via QoS -->
        <volume size="10GB" redundancy="3" datapaths="2"
            faultrecovery="TRUE"/>

        <!-- Create a 1-way mirror with an HSP via QoS -->
        <volume size="10GB" faultrecovery="TRUE"/>

        <!-- Create a stripe via QoS -->
        <volume size="100GB"/>
    </volume-request>
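As a companion illustration of the volume-defaults format described above, a minimal hypothetical /etc/default/metassist.xml could be created as follows; the constraints shown are illustrative, not shipped defaults:

cat > /etc/default/metassist.xml <<'EOF'
<volume-defaults>
    <!-- never consider disks on controller c0 (the boot disks) -->
    <unavailable name="c0"/>
    <!-- default all mirrors to two submirrors with parallel writes -->
    <mirror nsubmirrors="2" write="PARALLEL"/>
</volume-defaults>
EOF
# metassist reads this file for defaults; no option is needed to use it.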
BOUNDARY VALUES

    Attribute      Minimum    Maximum
    mincomp        1          N/A
    maxcomp        N/A        32
    nsubmirrors    1          4
    passnum        0          9
    datapaths      1          4
    redundancy     0          4

FILES
    /usr/share/lib/xml/dtd/volume-request.dtd
    /usr/share/lib/xml/dtd/volume-defaults.dtd
    /etc/default/metassist.xml

SEE ALSO
    metassist(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M),
    metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M),
    metarecover(1M), metareplace(1M), metaroot(1M), metaset(1M),
    metasync(1M), metattach(1M), mount_ufs(1M), mddb.cf(4)

                                 27 Apr 2005                 volume-request(4)