Data Centre meets Vacuum Cleaner
Posted by gull04 in The Lounge / War Stories, Wednesday 7th of March 2018

Hi Folks,

I have just spent a couple of days at the remote DR data centre, sorting out problems caused by the overzealous use of a vacuum cleaner, of all things.

We have a backup server, a Sun V480R with a StorEdge 3510 array and an attached expansion unit, which suffered a significant and unexplained failure, all tracked back to an ID selector switch being touched by the nozzle of said vacuum cleaner. It looks like things unfolded as follows over a period of time.

When the array was installed, disks 0-9 were set up as a 10-way stripe, with disks 10 and 11 as hot standby disks. Over a period of time, a disk in the expansion unit (disk 3) failed and the first available spare (disk 10) was rebuilt from the surviving mirror.

At this point the situation left us exposed in a way that wasn't really appreciated: one of the arrays now held both mirrors of one part of the stripe. That happened to be the array with the exposed ID selector switch, the one the vacuum cleaner nozzle could change, so a single nudge caused the failure of one whole stripe and one slice of another stripe. The result, as you can imagine, was somewhat unpredictable, which is exactly what the Sun manual for the array says.
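The exposure is easy enough to check for after any hot-spare rebuild. As a rough sketch only (the inventory format here is hypothetical rather than actual 3510 CLI output; you would populate it from whatever your array management tool reports), a few lines of shell and awk can flag any mirror whose members have ended up in the same enclosure:

#!/bin/sh
# mirror_check.sh - warn when both halves of a mirror share an enclosure.
# Hypothetical inventory format, one disk per line: <mirror> <disk> <enclosure>
cat > /tmp/mirror_map.txt <<'EOF'
m0 d0  array
m0 d10 array
m1 d1  array
m1 d6  expansion
EOF

awk '
{ members[$1 "," $3]++ }                  # count disks per (mirror, enclosure) pair
END {
    for (key in members)
        if (members[key] > 1) {
            split(key, part, ",")
            printf "WARNING: mirror %s has %d members in enclosure %s\n", part[1], members[key], part[2]
        }
}' /tmp/mirror_map.txt

Run against the hypothetical map above, it warns about m0, which is exactly the situation the rebuild quietly created here.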

To add insult to injury, the contents of the 3510 array were the Legato NetWorker backup catalogue for the 24-drive ATL, which made the recovery somewhat awkward.

What's the point of the story? Don't let some idiot into a data centre with a vacuum cleaner.

Regards

Gull04
 

6 More Discussions You Might Find Interesting

1. News, Links, Events and Announcements

The HP-UX Porting and Archive Centre

If you're looking for an easy source of free software for HP-UX, look no further. A consortium of major universities and HP user groups have banded together and operate what is simply the finest free software site in existence. The world's best HP-UX software engineers have ported each package... (0 Replies)
Discussion started by: Perderabo

2. Solaris

Sun Management Centre

Hi guys, has anyone used Sun MC before? If yes, can you share with me how to do the installation? Thanks. (1 Reply)
Discussion started by: raziayub

3. Shell Programming and Scripting

Cleaner method for this if-then statement?

I have a script that runs once per month. It performs a certain task ONLY if the month is January, April, July, or October. MONTH=`date +%m` if [ "$MONTH" = "01" ] || [ "$MONTH" = "04" ] || [ "$MONTH" = "07" ] || [ "$MONTH" = "10" ]; then do something else do a different thing fi Is there a neater way of doing it than my four separate "or" comparisons? That... (2 Replies)
Discussion started by: lupin..the..3rd
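For what it's worth, the usual cleaner pattern for that sort of month test is a case statement with alternation rather than a chain of test/|| clauses. A minimal sketch, assuming the same MONTH=`date +%m` variable as the quoted post, with the actions as placeholders:

#!/bin/sh
MONTH=`date +%m`

case "$MONTH" in
    01|04|07|10)
        # placeholder for the quarterly task
        echo "quarter month: run the special task"
        ;;
    *)
        # placeholder for the normal monthly task
        echo "ordinary month: run the usual task"
        ;;
esac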

4. Shell Programming and Scripting

Cleaner way to use shell variable in awk /X/,/Y/ syntax?

$ cat data Do NOT print me START_MARKER Print Me END_MARKER Do NOT print me $ cat awk.sh start=START_MARKER end=END_MARKER echo; echo Is this ugly syntax the only way? awk '/'"$start"'/,/'"$end"'/ { print }' data echo; echo Is there some modification of this that would work? awk... (2 Replies)
Discussion started by: hanson44
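A tidier variant (a sketch, not necessarily what that thread settled on) is to hand the markers to awk with -v and use dynamic regex matching, which avoids the quote splicing entirely:

#!/bin/sh
# Print the lines from START_MARKER to END_MARKER inclusive from the
# quoted post's "data" file, passing the patterns in as awk variables.
start=START_MARKER
end=END_MARKER

awk -v start="$start" -v end="$end" '$0 ~ start, $0 ~ end' data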

5. Shell Programming and Scripting

A cleaner way to rearrange column

Hello, I have some tab delimited text data, index name chg_p chg_m 1 name,1 1 0 2 name,2 1 1 3 name,3 1 0 4 name,4 1 0 5 name,5 1 1 I need to duplicate the "index" column, call it "id" and insert it after the... (8 Replies)
Discussion started by: LMHmedchem
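The post is truncated above, but the general shape of the task (duplicate the "index" column as a new "id" column placed right after it) is the sort of thing awk handles in one pass. A hedged sketch, with placeholder file names:

#!/bin/sh
# Duplicate column 1 of a tab-delimited file as an extra "id" column
# inserted immediately after it.
awk 'BEGIN { FS = OFS = "\t" }
     NR == 1 { $1 = $1 OFS "id"; print; next }   # extend the header row
             { $1 = $1 OFS $1;   print }         # copy the index value
    ' input.txt > output.txt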

6. Shell Programming and Scripting

Maybe a cleaner way to generate a file?

greetings, to be clear, i have a solution but i'm wondering if anyone has a cleaner way to accomplish the following: the variable: LSB_MCPU_HOSTS='t70c7n120 16 t70c7n121 16 t70c7n122 16 t70c7n123 16 t70c7n124 16 t70c7n125 16 t70c7n126 16 t70c7n127 16 t70c7n128 16 t70c7n129 16 t70c7n130 16... (2 Replies)
Discussion started by: crimso
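The requirement is cut off above, but LSB_MCPU_HOSTS is LSF's usual "host slots host slots ..." pairing, so here is a hedged sketch of one way to turn it into a file. The output format of one host and its slot count per line is an assumption, and the variable value is shortened from the post:

#!/bin/sh
# Write one "host slots" line per host from the LSF pair list.
LSB_MCPU_HOSTS='t70c7n120 16 t70c7n121 16 t70c7n122 16'

set -- $LSB_MCPU_HOSTS
while [ $# -ge 2 ]; do
    printf '%s %s\n' "$1" "$2"
    shift 2
done > hostfile.txt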
raidtab(5)                        File Formats Manual

NAME
       raidtab - configuration file for md (RAID) devices

DESCRIPTION
       /etc/raidtab is the default configuration file for the raid tools (raidstart and company). It defines how RAID devices are configured on a system.

FORMAT
       /etc/raidtab has multiple sections, one for each md device which is being configured. Each section begins with the raiddev keyword. The order of items in the file is important. Later raiddev entries can use earlier ones (which allows RAID-10, for example), and the parsing code isn't overly bright, so be sure to follow the ordering in this man page for best results.

       Here's a sample md configuration file:

              #
              # sample raiddev configuration file
              # 'old' RAID0 array created with mdtools.
              #
              raiddev /dev/md0
                  raid-level            0
                  nr-raid-disks         2
                  persistent-superblock 0
                  chunk-size            8
                  device                /dev/hda1
                  raid-disk             0
                  device                /dev/hdb1
                  raid-disk             1

              raiddev /dev/md1
                  raid-level            5
                  nr-raid-disks         3
                  nr-spare-disks        1
                  persistent-superblock 1
                  parity-algorithm      left-symmetric
                  device                /dev/sda1
                  raid-disk             0
                  device                /dev/sdb1
                  raid-disk             1
                  device                /dev/sdc1
                  raid-disk             2
                  device                /dev/sdd1
                  spare-disk            0

       Here is more information on the directives which are in raid configuration files; the options are listed in this file in the same order they should appear in the actual configuration file.

       raiddev device
              This introduces the configuration section for the stated device.

       nr-raid-disks count
              Number of raid devices in the array; there should be count raid-disk entries later in the file. (The current maximum limit for RAID devices, including spares, is 12 disks. This limit is already extended to 256 disks in experimental patches.)

       nr-spare-disks count
              Number of spare devices in the array; there should be count spare-disk entries later in the file. Spare disks may only be used with RAID4 and RAID5, and allow the kernel to automatically build new RAID disks as needed. It is also possible to add/remove spares at runtime via raidhotadd/raidhotremove; care has to be taken that the /etc/raidtab configuration exactly follows the actual configuration of the array. (raidhotadd/raidhotremove does not change the configuration file.)

       persistent-superblock 0/1
              Newly created RAID arrays should use a persistent superblock. A persistent superblock is a small disk area allocated at the end of each RAID device; this helps the kernel to safely detect RAID devices even if disks have been moved between SCSI controllers. It can be used for RAID0/LINEAR arrays too, to protect against accidental disk mixups. (The kernel will either correctly reorder disks, or will refuse to start up an array if something has happened to any member disk. Of course, for the 'fail-safe' RAID variants (RAID1/RAID5) spares are activated if any disk fails.)

              Every member disk/partition/device has a superblock, which carries all information necessary to start up the whole array. (For autodetection to work, all the 'member' RAID partitions should be marked type 0xfd via fdisk.) The superblock is not visible in the final RAID array and cannot be destroyed accidentally through usage of the md device files; all RAID data content is available for filesystem use.

       parity-algorithm which
              The parity algorithm to use with RAID5. It must be one of left-asymmetric, right-asymmetric, left-symmetric, or right-symmetric. left-symmetric is the one that offers maximum performance on typical disks with rotating platters.

       chunk-size size
              Sets the stripe size to size kilobytes. Has to be a power of 2 and has a compilation-time maximum of 4M (MAX_CHUNK_SIZE in the kernel driver). Typical values are anything from 4k to 128k; the best value should be determined by experimenting on a given array, as a lot depends on the SCSI and disk configuration.

       device devpath
              Adds the device devpath to the list of devices which comprise the raid system. Note that this command must be followed by one of raid-disk, spare-disk, or parity-disk. Also note that it's possible to recursively define RAID arrays, i.e. to set up a RAID5 array of RAID5 arrays (thus achieving two-disk failure protection, at the price of more disk space spent on RAID5 checksum blocks).

       raid-disk index
              The most recently defined device is inserted at position index in the raid array.

       spare-disk index
              The most recently defined device is inserted at position index in the spare disk array.

       parity-disk index
              The most recently defined device is moved to the end of the raid array, which forces it to be used for parity.

       failed-disk index
              The most recently defined device is inserted at position index in the raid array as a failed device. This allows you to create raid 1/4/5 devices in degraded mode - useful for installation. Don't use the smallest device in an array for this; put this after the raid-disk definitions!

NOTES
       The raidtools are derived from the md-tools and raidtools packages, which were originally written by Marc Zyngier, Miguel de Icaza, Gadi Oxman, Bradley Ward Allen, and Ingo Molnar.

SEE ALSO
       raidstart(8), raid0run(8), mkraid(8), raidstop(8)
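For context, a rough sketch of how a raidtab like the sample above was typically consumed by the tools listed under SEE ALSO (the device name follows the sample file; the whole raidtools suite is long obsolete and has been replaced by mdadm(8) on current systems):

#!/bin/sh
# Initialise and start the RAID5 array described by the /dev/md1
# section of /etc/raidtab, then inspect and stop it.
mkraid /dev/md1        # writes the persistent superblocks (destructive)
raidstart /dev/md1     # assemble and start the array
cat /proc/mdstat       # the kernel's view of active md devices
raidstop /dev/md1      # stop the array again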