Problem replacing a disk with RAID-5 volumes


 
# 1  
05-26-2008

Good morning,

I have a problem replacing a disk that holds RAID-5 volumes.
A hardware error occurred on disk c9t3, so all of its slices went into the maintenance state. Every slice is part of a RAID-5 volume, and no replica is present on that disk.
Following the Solaris Volume Manager manual's procedure for replacing a disk, I have (commands sketched below):

- physically replaced the failed disk
- logically replaced it with the command devfsadm -C
- updated the Volume Manager database with the command metadevadm -u <disk>
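
Roughly, the commands involved (the VTOC source disk c9t2d0 and slice s0 below are only examples on my side, and the last two commands are what I expect from the SVM documentation once the disk can be labelled again, not something I have run yet):

devfsadm -C                     # rebuild /dev links for the new disk and clean up the dangling ones
metadevadm -u c9t3d0            # record the new disk's device ID in the Volume Manager database
# expected follow-up, currently blocked because format refuses to label the disk:
prtvtoc /dev/rdsk/c9t2d0s2 | fmthard -s - /dev/rdsk/c9t3d0s2    # copy the VTOC from a surviving disk of identical geometry
metareplace -e d155 c9t3d0s0    # re-enable the replaced slice in each affected RAID-5 metadevice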

Then, when I tried to partition the disk with the "format" command, I saw this message:

selecting c9t3d0
[disk formatted]
/dev/dsk/c9t3d0s0 is part of SVM volume raid:d155. Please see metaclear(1M)

d155 holds only one of the six slices that are part of RAID-5 metadevices, but that alone is enough to block labelling the disk.

I don't want to destroy the metadevice d155, so I am looking for a workaround for this problem.

Could anyone help me, please?

Thanks a lot


Regards
GRAID(8)						    BSD System Manager's Manual 						  GRAID(8)

NAME
     graid -- control utility for software RAID devices

SYNOPSIS
     graid label [-f] [-o fmtopt] [-S size] [-s strip] format label level prov ...
     graid add [-f] [-S size] [-s strip] name label level
     graid delete [-f] name [label | num]
     graid insert name prov ...
     graid remove name prov ...
     graid fail name prov ...
     graid stop [-fv] name ...
     graid list
     graid status
     graid load
     graid unload

DESCRIPTION
     The graid utility is used to manage software RAID configurations, supported by the GEOM RAID class.  GEOM RAID class uses on-disk metadata to provide access to software-RAID volumes defined by different RAID BIOSes.  Depending on RAID BIOS type and its metadata format, different subsets of configurations and features are supported.  To allow booting from RAID volume, the metadata format should match the RAID BIOS type and its capabilities.  To guarantee that these match, it is recommended to create volumes via the RAID BIOS interface, while experienced users are free to do it using this utility.

     The first argument to graid indicates an action to be performed:

     label      Create an array with single volume.  The format argument specifies the on-disk metadata format to use for this array, such as "Intel".  The label argument specifies the label of the created volume.  The level argument specifies the RAID level of the created volume, such as: "RAID0", "RAID1", etc.  The subsequent list enumerates providers to use as array components.  The special name "NONE" can be used to reserve space for absent disks.  The order of components can be important, depending on specific RAID level and metadata format.

                Additional options include:

                -f          Enforce specified configuration creation if it is officially unsupported, but technically can be created.
                -o fmtopt   Specifies metadata format options.
                -S size     Use size bytes on each component for this volume.  Should be used if several volumes per array are planned, or if smaller components are going to be inserted later.  Defaults to size of the smallest component.
                -s strip    Specifies strip size in bytes.  Defaults to 131072.

     add        Create another volume on the existing array.  The name argument is the name of the existing array, reported by the label command.  The rest of the arguments are the same as for the label command.

     delete     Delete volume(s) from the existing array.  When the last volume is deleted, the array is also deleted and its metadata erased.  The name argument is the name of the existing array.  Optional label or num arguments allow specifying the volume for deletion.

                Additional options include:

                -f          Delete volume(s) even if still open.

     insert     Insert specified provider(s) into the specified array instead of the first missing or failed components.  If there are no such components, mark disk(s) as spare.

     remove     Remove the specified provider(s) from the specified array and erase metadata.  If there are spare disks present, the removed disk(s) will be replaced by spares.

     fail       Mark the given disk(s) as failed, removing from active use unless absolutely necessary due to exhausted redundancy.  If there are spare disks present, failed disk(s) will be replaced with one of them.

     stop       Stop the given array.  The metadata will not be erased.

                Additional options include:

                -f          Stop the given array even if some of its volumes are opened.

     list       See geom(8).

     status     See geom(8).

     load       See geom(8).

     unload     See geom(8).

     Additional options include:

     -v         Be more verbose.
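
     As an illustration only (the device names ada0/ada1 and the volume label "data" are placeholders, not part of this manual), an Intel-format array with a single RAID1 volume could be created and inspected like this:

     graid load                              # load the GEOM RAID class if it is not already loaded
     graid label Intel data RAID1 ada0 ada1  # create an Intel-metadata array with one RAID1 volume labelled "data"
     graid list                              # show the created array, its name and its components
     graid status                            # show volume state; the reported array name is what add, insert, remove, fail and stop expect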
SUPPORTED METADATA FORMATS
     The GEOM RAID class follows a modular design, allowing different metadata formats to be used.  Support is currently implemented for the following formats:

     DDF        The format defined by the SNIA Common RAID Disk Data Format v2.0 specification.  Used by some Adaptec RAID BIOSes and some hardware RAID controllers.  Because of high format flexibility different implementations support different set of features and have different on-disk metadata layouts.  To provide compatibility, the GEOM RAID class mimics capabilities of the first detected DDF array.  Respecting that, it may support different number of disks per volume, volumes per array, partitions per disk, etc.  The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks), RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks), RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks), RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).  Format supports two options "BE" and "LE", that mean big-endian byte order defined by specification (default) and little-endian used by some Adaptec controllers.

     Intel      The format used by Intel RAID BIOS.  Supports up to two volumes per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks).  Configurations not supported by Intel RAID BIOS, but enforceable on your own risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).

     JMicron    The format used by JMicron RAID BIOS.  Supports one volume per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID10 (4 disks), CONCAT (2+ disks).  Configurations not supported by JMicron RAID BIOS, but enforceable on your own risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).

     NVIDIA     The format used by NVIDIA MediaShield RAID BIOS.  Supports one volume per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).  Configurations not supported by NVIDIA MediaShield RAID BIOS, but enforceable on your own risk: RAID1 (3+ disks).

     Promise    The format used by Promise and AMD/ATI RAID BIOSes.  Supports multiple volumes per array.  Each disk can be split to be used by up to two arbitrary volumes.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).  Configurations not supported by RAID BIOSes, but enforceable on your own risk: RAID1 (3+ disks), RAID10 (6+ disks).

     SiI        The format used by SiliconImage RAID BIOS.  Supports one volume per array.  Supports configurations: RAID0 (2+ disks), RAID1 (2 disks), RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).  Configurations not supported by SiliconImage RAID BIOS, but enforceable on your own risk: RAID1 (3+ disks), RAID10 (6+ disks).
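
     A further hedged sketch of choosing a metadata format explicitly (device names again are placeholders): -o passes the per-format options described above, and -f forces a configuration the corresponding RAID BIOS would not create itself:

     graid label -o LE DDF vol0 RAID1E ada0 ada1 ada2    # DDF metadata in the little-endian variant used by some Adaptec controllers
     graid label -f JMicron vol1 RAID1 ada3 ada4 ada5    # three-disk RAID1 is not supported by the JMicron BIOS itself, so -f is needed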
SUPPORTED RAID LEVELS
     The GEOM RAID class follows a modular design, allowing different RAID levels to be used.  Full support for the following RAID levels is currently implemented: RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT.  The following RAID levels are supported as read-only for volumes in optimal state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF.

RAID LEVEL MIGRATION
     The GEOM RAID class has no support for RAID level migration, allowed by some metadata formats.  If you started migration using BIOS or in some other way, make sure to complete it there.  Do not run GEOM RAID class on migrating volumes under pain of possible data corruption!

2TiB BARRIERS
     NVIDIA metadata format does not support volumes above 2TiB.

SYSCTL VARIABLES
     The following sysctl(8) variables can be used to control the behavior of the RAID GEOM class.

     kern.geom.raid.aggressive_spare: 0
             Use any disks without metadata connected to controllers of the vendor matching to volume metadata format as spare.  Use it with much care to not lose data if connecting unrelated disk!

     kern.geom.raid.clean_time: 5
             Mark volume as clean when idle for the specified number of seconds.

     kern.geom.raid.debug: 0
             Debug level of the RAID GEOM class.

     kern.geom.raid.enable: 1
             Enable on-disk metadata taste.

     kern.geom.raid.idle_threshold: 1000000
             Time in microseconds to consider a volume idle for rebuild purposes.

     kern.geom.raid.name_format: 0
             Providers name format: 0 -- raid/r{num}, 1 -- raid/{label}.

     kern.geom.raid.read_err_thresh: 10
             Number of read errors equated to disk failure.  Write errors are always considered as disk failures.

     kern.geom.raid.start_timeout: 30
             Time to wait for missing array components on startup.

     kern.geom.raid.X.enable: 1
             Enable taste for specific metadata or transformation module.

     kern.geom.raid.legacy_aliases: 0
             Enable geom raid emulation of legacy /dev/ar%d devices.  This should aid the upgrade of systems from legacy to modern releases.
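
     For instance, some of the variables listed above could be adjusted with sysctl(8) (a few may be boot-time tunables that belong in /boot/loader.conf instead); the values below are purely illustrative:

     sysctl kern.geom.raid.name_format=1        # name providers raid/{label} instead of raid/r{num}
     sysctl kern.geom.raid.read_err_thresh=5    # treat a disk as failed after 5 read errors instead of 10
     sysctl kern.geom.raid.legacy_aliases=1     # expose legacy /dev/ar%d aliases while upgrading from an older release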
EXIT STATUS
     Exit status is 0 on success, and non-zero if the command fails.

SEE ALSO
     geom(4), geom(8), gvinum(8)

HISTORY
     The graid utility appeared in FreeBSD 9.0.

AUTHORS
     Alexander Motin <mav@FreeBSD.org>
     M. Warner Losh <imp@FreeBSD.org>

BSD                              April 4, 2013                              BSD