Data Centre meets Vacuum Cleaner
Posted by gull04, 7 March 2018

Hi Folks,

I have just spent a couple of days at the remote DR data centre, sorting out problems caused by the overzealous use of a vacuum cleaner, of all things.

We have a backup server, a Sun V480R with a StorEdge 3510 array and an expansion unit attached, which suffered a significant and unexplained failure - all tracked back to an ID selector switch being touched by the nozzle of said vacuum cleaner. It looks like things unfolded as follows over a period of time.

When the array was installed, disks 0-9 were set up as a 10-way stripe, with disks 10 and 11 as hot standby disks. Over a period of time, a disk in the expansion unit (disk 3) failed, and the first available spare (disk 10) was rebuilt from the surviving mirror.

At this point we were left exposed in a way that wasn't really appreciated: one of the arrays now held both mirrors of one part of the stripe. That was the part with the exposed ID selector switch - the one the vacuum cleaner nozzle could change, causing the failure of one whole stripe and one slice of another stripe. The result, as you can imagine, was somewhat unpredictable, which is exactly what the Sun manual for the array says.

To add insult to injury, the contents of the 3510 array were the Legato NetWorker backup catalogue for the 24-drive ATL - making the recovery somewhat awkward.

What's the point of the story? Don't let some idiot into a data centre with a vacuum cleaner.

Regards

Gull04
 

volume-request(4)                                            volume-request(4)

NAME
       volume-request, volume-defaults - Solaris Volume Manager configuration
       information for top-down volume creation with metassist

SYNOPSIS
       /usr/share/lib/xml/dtd/volume-request.dtd
       /usr/share/lib/xml/dtd/volume-defaults.dtd
       /etc/defaults/metassist.xml

DESCRIPTION
       A volume request file, XML-based and compliant with the
       volume-request.dtd Document Type Definition, describes the
       characteristics of the volumes that metassist should produce. A system
       administrator would use the volume request file instead of providing
       options at the command line to give more specific instructions about
       the characteristics of the volumes to create. A volume request file
       can request more than one volume, but all requested volumes must
       reside in the same disk set.

       If you start metassist by providing a volume-request file as input,
       metassist can implement the configuration specified in the file, can
       generate a command file that sets up the configuration for you to
       inspect or edit, or can generate a volume configuration file for you
       to inspect or edit.

       As a system administrator, you would want to create a volume request
       file if you need to reuse configurations (and do not want to reenter
       the same command arguments), or if you prefer to use a configuration
       file to specify volume characteristics.

       Volume request files must be valid XML that complies with the document
       type definition in the volume-request.dtd file, located at
       /usr/share/lib/xml/dtd/volume-request.dtd. You create a volume request
       file and provide it as input to metassist to create volumes from the
       top down.

   Defining Volume Request
       The top-level element <volume-request> surrounds the volume request
       data. This element has no attributes. A volume request requires at
       least one <diskset> element, which must be the first element after
       <volume-request>.

       Optionally, the <volume-request> element can include one or more
       <available> and <unavailable> elements to specify which controllers,
       or disks associated with a specific controller, can or cannot be used
       to create the volume.

       Optionally, the <volume-request> element can include a <hsp> element
       to specify characteristics of a hot spare pool if fault recovery is
       used. If not specified for a volume with fault recovery, the first hot
       spare pool found in the disk set is used. If no hot spare pool exists
       but one is required, a hot spare pool is created.

       Optionally, the volume-request can include one or more <concat>,
       <stripe>, <mirror>, or <volume> elements to specify volumes to create.

   Defining Disk Set
       Within the <volume-request> element, a <diskset> element must exist.
       The <diskset> element, with its name attribute, specifies the name of
       the disk set to be used. If this disk set does not exist, it is
       created. This element and the name attribute are required.

   Defining Availability
       Within the <volume-request> element and within other elements, you can
       specify available or unavailable components (disks, or disks on a
       specific controller path) for use or exclusion from use in a volume or
       hot spare pool. The <available> and <unavailable> elements require a
       name attribute, which specifies either a full ctd name, or a partial
       ctd name that is used with an implied wildcard to complete the
       expression. For example, specifying c3t2d0 as available would look
       like:

           <available name="/dev/dsk/c3t2d0">

       The <available> element also makes any unnamed components unavailable.
       Specifying all controllers except c1 as unavailable would look like:

           <available name="c1">

       Specifying all disks on controller 2 as unavailable would look like:

           <unavailable name="c2">

       The <unavailable> element can also be used to further restrict the
       list of available components. For example, specifying all controllers
       except c1 as unavailable, and making all devices associated with c1t2
       unavailable as well, would look like this:

           <available name="c1">
           <unavailable name="c1t2">

       Components specified as available must be either part of the named
       disk set used for this volume creation, or unused and not in any disk
       set. If the components are selected for use but are not in the
       specified disk set, the metassist command automatically adds them to
       the disk set.

       It is unnecessary to specify components that are in other disk sets as
       unavailable; metassist automatically excludes them from consideration.
       However, unused components, or components that are not obviously used
       (for example, an unmounted slice that is reserved for different uses),
       must be explicitly specified as unavailable, or the metassist command
       can include them in the configuration.
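       Putting these elements together, a skeleton request might look like
       the following sketch (the disk set, controller, and disk names here
       are purely illustrative); the completed file is then provided as input
       to metassist:

           <volume-request>
               <!-- Required first element: the disk set to use
                    (created if it does not already exist) -->
               <diskset name="exampleset"/>
               <!-- Only components on controller c1 are candidates... -->
               <available name="c1"/>
               <!-- ...except devices associated with c1t2 -->
               <unavailable name="c1t2"/>
               <!-- <hsp>, <concat>, <stripe>, <mirror>, or <volume>
                    elements follow here -->
           </volume-request>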
   Defining Hot Spare Pool
       The next element within the <volume-request> element, after the
       <diskset> and, optionally, the <available> and <unavailable> elements,
       is the <hsp> element. Its sole attribute specifies the name of the hot
       spare pool:

           <hsp name="hsp001">

       Hot spare pool names must start with hsp and end with a number, in
       keeping with the existing Solaris Volume Manager hot spare pool naming
       requirements.

       Within the <hsp> element, you can specify one or more <available> and
       <unavailable> elements to specify which disks, or disks associated
       with a specific controller, can or cannot be used to create the hot
       spares within the pool. Also within the <hsp> element, you can use the
       <slice> element to specify hot spares to be included in the hot spare
       pool (see Defining Slice). Depending on the requirements placed on the
       hot spare pool by other parts of the volume request, additional slices
       can be added to the hot spare pool.

   Defining Slice
       The <slice> element is used to define slices to include or exclude
       within other elements. It requires only a name attribute to specify
       the ctd name of the slice; the context of the <slice> element
       determines the function of the element. Sample slice elements might
       look like:

           <slice name="c0t1d0s2" />
           <slice name="c0t12938567201lkj29561sllkj381d0s2" />

   Defining Stripe
       The <stripe> element defines stripes (interlaced RAID 0 volumes) to be
       used in a volume. It can contain either <slice> elements (to
       explicitly determine which slices are used), or appropriate
       combinations of <available> and <unavailable> elements if the specific
       determination of slices is to be left to the metassist command.

       The <stripe> element takes an optional name attribute to specify a
       name. If the name is not specified, an available name is automatically
       selected from available Solaris Volume Manager names. If possible,
       names for related components are related.

       The <stripe> element takes an optional size attribute that specifies
       the size as a value and units (for example, 10TB, 5GB). If slices for
       the <stripe> are explicitly specified, the size attribute is ignored.

       The <available> and <unavailable> elements can be used to constrain
       the slices for use in a stripe.

       The <stripe> element takes optional mincomp and maxcomp attributes to
       specify the minimum and maximum number of components that can be
       included in it. As with size, if slices for the <stripe> are
       explicitly specified, the mincomp and maxcomp attributes are ignored.

       The <stripe> element takes an optional interlace attribute as a value
       and units (for example, 16KB, 5BLOCKS, 20KB). If this value is not
       specified, the Solaris Volume Manager default value is used.

       The <stripe> element takes an optional usehsp attribute to specify
       whether a hot spare pool should be associated with this component.
       This attribute is specified as a boolean value, as usehsp="TRUE". If
       the component is not a submirror, this attribute is ignored.
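       As an illustrative sketch (the slice names and the d10 volume name are
       hypothetical), an explicitly specified stripe alongside a hot spare
       pool might look like:

           <hsp name="hsp001">
               <!-- Explicit hot spare; further slices can be added to
                    satisfy the rest of the request -->
               <slice name="c1t5d0s0" />
           </hsp>
           <!-- Slices are explicit, so size, mincomp, and maxcomp would
                be ignored and are omitted -->
           <stripe name="d10" interlace="32KB">
               <slice name="c1t1d0s0" />
               <slice name="c1t2d0s0" />
           </stripe>

       Alternatively, omitting the <slice> elements and supplying size,
       mincomp, and maxcomp leaves the choice of slices to metassist.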
   Defining Concat
       The <concat> element defines concats (non-interlaced RAID 0 volumes)
       to be used in a configuration. It is specified in the same way as a
       <stripe> element, except that the mincomp, maxcomp, and interlace
       attributes are not valid.

   Defining Mirror
       The <mirror> element defines mirrors (RAID 1 volumes) to be used in a
       volume configuration. It can contain combinations of <concat> and
       <stripe> elements (to explicitly determine which volumes are used as
       submirrors). Alternatively, it can have a size attribute specified,
       along with the appropriate combinations of <available> and
       <unavailable> elements, to leave the specific determination of
       components to the metassist command.

       The <mirror> element takes an optional name attribute to specify a
       name. If the name is not specified, an available name is automatically
       selected.

       The <mirror> element takes an optional size attribute that specifies
       the size as a value and units (for example, 10TB, 5GB). If <stripe>
       and <concat> elements for the mirror are not specified, this attribute
       is required. Otherwise, it is ignored.

       The <mirror> element takes an optional nsubmirrors attribute to define
       the number of submirrors (1-4) to include. Like the size attribute,
       this attribute is ignored if the underlying <concat> and <stripe>
       submirrors are explicitly specified.

       The <mirror> element takes an optional read attribute to define the
       mirror read options (ROUNDROBIN, GEOMETRIC, or FIRST) for the mirror.
       If this attribute is not specified, the Solaris Volume Manager default
       value is used.

       The <mirror> element takes an optional write attribute to define the
       mirror write options (PARALLEL, SERIAL, or FIRST) for the mirror. If
       this attribute is not specified, the Solaris Volume Manager default
       value is used.

       The <mirror> element takes an optional usehsp attribute to specify
       whether a hot spare pool should be associated with each submirror.
       This attribute is specified as a boolean value, as usehsp="TRUE". If
       the usehsp attribute is specified in the configuration of the <stripe>
       or <concat> element used as a submirror, it overrides the value of the
       usehsp attribute for the mirror as a whole.
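       As a sketch (names again hypothetical), a two-way mirror with
       explicitly specified submirrors might look like the following; because
       the submirrors are explicit, the size and nsubmirrors attributes are
       omitted:

           <mirror name="d20" read="ROUNDROBIN" write="PARALLEL">
               <!-- usehsp on a submirror overrides any mirror-level
                    usehsp setting -->
               <stripe name="d21" usehsp="TRUE">
                   <slice name="c1t1d0s0" />
                   <slice name="c1t2d0s0" />
               </stripe>
               <stripe name="d22" usehsp="TRUE">
                   <slice name="c2t1d0s0" />
                   <slice name="c2t2d0s0" />
               </stripe>
           </mirror>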
   Defining Volume by Quality of Service
       The <volume> element defines volumes at a high level, by the quality
       of service they should provide. (The <volume> element offers the same
       functionality that options on the metassist command line can provide.)
       The <volume> element can contain combinations of <available> and
       <unavailable> elements to determine which components can be included
       in the configuration.

       The <volume> element takes an optional name attribute to specify a
       name. If the name is not specified, an available name is automatically
       selected.

       The <volume> element takes a required size attribute that specifies
       the size as a value and units (for example, 10TB, 5GB).

       The <volume> element takes an optional redundancy attribute to define
       the number of additional copies of data (1-4) to include. In a
       worst-case scenario, a volume can suffer the failure of n-1 components
       without data loss, where redundancy=n. With fault recovery options,
       the volume could withstand up to n+hsps-1 non-concurrent failures
       without data loss. Specifying redundancy=0 results in a RAID 0 volume
       (a stripe, specifically) being created.

       The <volume> element takes an optional faultrecovery attribute to
       determine whether additional components should be allocated to recover
       from component failures in the volume. This is used to determine
       whether the volume is associated with a hot spare pool. The
       faultrecovery attribute is a boolean attribute, with a default value
       of FALSE.

       The <volume> element takes an optional datapaths attribute to
       determine whether multiple data paths should be required to access the
       volume. The datapaths attribute should be set to a numeric value.

   Defining Default Values Globally
       Global defaults can be set in /etc/default/metassist.xml. This
       volume-defaults file can contain most of the same elements as a
       volume-request file, but differs structurally from a volume-request
       file:

       o  The container element must be <volume-defaults>, not
          <volume-request>.

       o  The <volume-defaults> element can contain <available>,
          <unavailable>, <hsp>, <concat>, <stripe>, <mirror>, or <volume>
          elements. Attributes specified by these elements define global
          default values, unless overridden by the corresponding attributes
          and elements in a volume-request. None of these elements is a
          container element.

       o  The <volume-defaults> element can contain one or more <diskset>
          elements to provide disk set-specific defaults. The <diskset>
          element can contain <available>, <unavailable>, <hsp>, <concat>,
          <stripe>, <mirror>, or <volume> elements.

       o  Settings specified outside of a <diskset> element apply to all disk
          sets, but can be overridden within each <diskset> element.

EXAMPLES
   Example 1: Creating a Redundant Volume
       The following example shows a volume request file used to create a
       redundant and fault-tolerant volume of 1TB.

           <volume-request>
               <diskset name="sparestorage"/>
               <volume size="1TB" redundancy="2" faultrecovery="TRUE">
                   <available name="c2" />
                   <available name="c3" />
                   <unavailable name="c2t2d0" />
               </volume>
           </volume-request>

   Example 2: Creating a Complex Configuration
       The following example shows a sample volume-request file that
       specifies a disk set name and specifically itemizes the
       characteristics of the components to create.

           <volume-request>
               <!-- Specify the disk set to use -->
               <diskset name="mailspool"/>

               <!-- Generally available devices -->
               <available name="c0"/>

               <!-- Create a 3-way mirror with redundant datapaths
                    and HSPs via QoS -->
               <volume size="10GB" redundancy="3" datapaths="2"
                   faultrecovery="TRUE"/>

               <!-- Create a 1-way mirror with an HSP via QoS -->
               <volume size="10GB" faultrecovery="TRUE"/>

               <!-- Create a stripe via QoS -->
               <volume size="100GB"/>
           </volume-request>
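       A volume-defaults counterpart (as described under Defining Default
       Values Globally; the disk set name and default values here are purely
       illustrative) might look like the following sketch:

           <volume-defaults>
               <!-- Global defaults for all disk sets -->
               <available name="c0"/>
               <mirror nsubmirrors="2" read="ROUNDROBIN"/>

               <!-- Defaults that apply only to the mailspool disk set -->
               <diskset name="mailspool">
                   <stripe interlace="64KB"/>
               </diskset>
           </volume-defaults>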
BOUNDARY VALUES
       Attribute      Minimum    Maximum
       mincomp        1          N/A
       maxcomp        N/A        32
       nsubmirrors    1          4
       passnum        0          9
       datapaths      1          4
       redundancy     0          4

FILES
       /usr/share/lib/xml/dtd/volume-request.dtd
       /usr/share/lib/xml/dtd/volume-defaults.dtd
       /etc/defaults/metassist.xml

SEE ALSO
       metassist(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M),
       metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M),
       metarecover(1M), metareplace(1M), metaroot(1M), metaset(1M),
       metasync(1M), metattach(1M), mount_ufs(1M), mddb.cf(4)

                                  27 Apr 2005                volume-request(4)