Solaris: Cannot remove disk added to zpool
Post 302930889 by LittleLebowski, Thursday 8 January 2015, 12:16 PM
Quote:
Originally Posted by Peasant
A hot spare will not protect your data in case one of the disks in a RAID-0 zpool fails.

Spares are used in RAID-protected setups (RAID-1, raidz, etc.): when one disk fails, the array is rebuilt using the hot spare, either automatically or manually depending on the zpool autoreplace policy.

You will need to go with jlliagre's suggestion, or add two more disks to the zpool as mirrors of the two devices currently present; otherwise you risk losing data when one of the disks in the zpool fails.
Agreed, and thanks to you both. Next week I'll be adding one more disk and adding redundancy. Glad this is a test box!
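For later readers: converting a striped pool's single-disk vdevs into mirrors is done with zpool attach, one vdev at a time. A minimal sketch, assuming the pool is named data and using placeholder device names (check yours with zpool status):

      # zpool attach data c0t1d0 c0t3d0      <- new disk c0t3d0 becomes a mirror of existing c0t1d0
      # zpool attach data c0t2d0 c0t4d0      <- new disk c0t4d0 becomes a mirror of existing c0t2d0
      # zpool add data spare c0t5d0          <- optional hot spare, useful now that the pool is redundant
      # zpool set autoreplace=on data        <- let ZFS swap the spare in automatically on failure
      # zpool status data                    <- wait for the resilver to finish

Once both attaches have resilvered, the pool consists of two 2-way mirrors, and a single disk failure no longer means data loss.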
 

10 More Discussions You Might Find Interesting

1. Solaris

Remove the exported zpool

I had a pool which was exported, and due to some issues on my SAN I was never able to import it again. Can anyone tell me how I can destroy the exported pool to free up the LUN? I tried to create a new pool on the same LUN but it gives me the following error: # zpool create emcpool4 emcpower0c... (0 Replies)
Discussion started by: fugitive

2. Solaris

Can't remove a LUN from a Zpool!

I am not seeing any way to remove a LUN from a zpool... Am I missing something, or do I have to destroy the zpool and recreate it? (2 Replies)
Discussion started by: BG_JrAdmin

3. Red Hat

Partitioning newly added disk to Redhat

Hi everyone, I have added a new virtual disk to the OS. The main point is that I need to bring this whole disk under LVM control: is it necessary to partition the disk using the fdisk command and assign partition type '8e', or can I directly add the disk into LVM by running the pvcreate command without... (2 Replies)
Discussion started by: bobby320

4. AIX

Remove the disk online

Hi, I have one disk missing in my NIMVG. My question is: can I remove this hdisk2 online? A few of the file systems seem to be spread over 7 PVs, which is why I'm worried. Can someone suggest whether I can replace this disk online? Also, how can I check if there is some data present on hdisk2 alone... (2 Replies)
Discussion started by: newtoaixos

5. Solaris

Bad exchange descriptor: not able to remove files under zpool

Hi, one of my zones went down, and when I booted it up I could see the pool in a degraded state with some checksum errors. We have brought the pool online after scrubbing, but a few files are showing the error "Bad exchange descriptor". Please let me know how to remove these files. (2 Replies)
Discussion started by: chidori

6. Solaris

Add disk to zpool

Hi, quick question. I have a data zpool that consists of 1 disk:
      pool: data
     state: ONLINE
     scrub: none requested
    config:
        NAME                      STATE   READ WRITE CKSUM
        data                      ONLINE     0     0     0
        c0t50002AC0014B06BEd0     ONLINE... (2 Replies)
Discussion started by: general_lee

7. AIX

LPAR cannot add disk

Dear all, I created a new partition through the Integrated Virtualization Manager, but there was an error when I added a new disk to the partition. The disk was created without any issue; the error below appeared when adding the disk to the partition: "An error occurred while modifying the assignments..." (5 Replies)
Discussion started by: lckdanny

8. Solaris

Exporting zpool sitting on different disk partition

Hello, I need some help recovering a ZFS pool. Here is the scenario. There are two disks: c0t0d0 - this is the good disk; I cloned it from another server and boot the server from this disk. c0t1d0 - this is the original disk of this server, which has errors. I am able to mount it on /mnt so that I can copy... (1 Reply)
Discussion started by: solaris_1977

9. Solaris

Replace zpool with another disk

Issue: I had a zpool which was full:
      pool_temp1   199G   197G   1.56G   99%   ONLINE   -
      pool_temp2   199G   196G   3.09G   98%   ONLINE   -
As you can see, full, so I replaced it with a larger disk:
      zpool replace pool_temp1 c3t600144F0FF8BA036000058CC1DB80008d0s0... (2 Replies)
Discussion started by: rrodgers

10. Solaris

How to clear a removed single-disk pool from being listed by zpool import?

On an OmniOS server, I removed a single-disk pool I was using for testing. Now, when I run zpool import, it shows the pool as FAULTED, since that single disk is not available anymore:
      # zpool import
        pool: fido
          id: 7452075738474086658
       state: FAULTED
      status: The pool was last... (11 Replies)
Discussion started by: priyadarshan
vxrelocd(1M)

NAME
vxrelocd - monitor Veritas Volume Manager for failure events and relocate failed subdisks

SYNOPSIS
/etc/vx/bin/vxrelocd [-o vxrecover_argument] [-O old_version] [-s save_max] [mail_address...]

DESCRIPTION
The vxrelocd command monitors Veritas Volume Manager (VxVM) by analyzing the output of the vxnotify command, and waits for a failure. When a failure occurs, vxrelocd sends mail via mailx to root (by default) or to other specified users and relocates failed subdisks. After completing the relocation, vxrelocd sends more mail indicating the status of each subdisk replacement. The vxrecover utility is then run on volumes with relocated subdisks to restore data. Mail is sent after vxrecover executes.
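To make the synopsis concrete, a typical startup invocation might look like this (a sketch; the recipient and delay value are placeholders):

      # nohup /etc/vx/bin/vxrelocd -o slow=500 root &

Here -o slow=500 is passed through to vxrecover, throttling recovery I/O to a 500-millisecond delay instead of the 250-millisecond default.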
OPTIONS
-o     The -o option and its argument are passed directly to vxrecover if vxrecover is called. This allows specifying -o slow[=iodelay] to keep vxrecover from overloading a busy system during recovery. The default value for the delay is 250 milliseconds.

-O     Reverts back to an older version. Specifying -O VxVM_version directs vxrelocd to use the relocation scheme in that version.

-s     Before vxrelocd attempts a relocation, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d. This option specifies the maximum number of configurations to keep for each disk group. The default is 32.

Mail Notification
By default, vxrelocd sends mail to root with information about a detected failure and the status of any relocation and recovery attempts. To send mail to other users, add the user login name to the vxrelocd startup line in the startup script /sbin/init.d/vxvm-recover, and reboot the system. For example, if the line appears as:

      nohup vxrelocd root &

and you want mail also to be sent to user1 and user2, change the line to read:

      nohup vxrelocd root user1 user2 &

Alternatively, you can kill the vxrelocd process and restart it as vxrelocd root mail_address, where mail_address is a user's login name. Do not kill the vxrelocd process while a relocation attempt is in progress.

The mail notification that is sent when a failure is detected follows this format:

      Failures have been detected by the Veritas Volume Manager:
      failed disks:       medianame ...
      failed plexes:      plexname ...
      failed log plexes:  plexname ...
      failing disks:      medianame ...
      failed subdisks:    subdiskname ...
      The Volume Manager will attempt to find spare disks, relocate failed subdisks and then recover the data in the failed plexes.

The medianame list under failed disks specifies disks that appear to have completely failed; the medianame list under failing disks indicates a partial disk failure or a disk that is in the process of failing. When a disk has failed completely, the same medianame list appears under both failed disks and failing disks. The plexname list under failed plexes shows plexes that were detached due to I/O failures that occurred while attempting to do I/O to subdisks they contain. The plexname list under failed log plexes indicates RAID-5 or DRL (dirty region logging) log plexes that have failed. The subdiskname list specifies subdisks in RAID-5 volumes that were detached due to I/O errors.

Spare Space
A disk can be marked as ``spare''. This makes the disk available as a site for relocating failed subdisks. Disks that are marked as spares are not used for normal allocations unless you explicitly specify them. This ensures that there is a pool of spare space available for relocating failed subdisks and that this space does not get consumed by normal operations. Spare space is the first space used to relocate failed subdisks. However, if no spare space is available, or the available spare space is not suitable or sufficient, free space is also used, except for disks marked with the nohotuse flag. See the vxedit(1M) and vxdiskadm(1M) manual pages for more information on marking a disk as spare or nohotuse.

Nohotuse Space
A disk can be marked as ``nohotuse''. This excludes the disk from being used by vxrelocd, but it is still available as free space. See the vxedit(1M) and vxdiskadm(1M) manual pages for more information on marking a disk as spare or nohotuse.
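As an illustration of marking disks for the spare and nohotuse behavior described above, the vxedit command referenced here can set either flag; a minimal sketch, with mydg, disk01, and disk02 as placeholder names:

      # vxedit -g mydg set spare=on disk01       <- reserve disk01 as hot-relocation spare space
      # vxedit -g mydg set nohotuse=on disk02    <- keep vxrelocd from using disk02's free space
      # vxedit -g mydg set spare=off disk01      <- return disk01 to normal use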
Replacement Procedure
After mail is sent, vxrelocd relocates failed subdisks (those listed in the subdisks list). This requires finding appropriate spare or free space in the same disk group as the failed subdisk. A disk is eligible as replacement space if it is a valid Veritas Volume Manager disk (VM disk) and contains enough space to hold the data contained in the failed subdisk. If no space is available on spare disks, the relocation uses free space that is not marked nohotuse.

To determine which of the eligible disks to use, vxrelocd first tries the disk that is closest to the failed disk. The value of ``closeness'' depends on the controller, target, and disk number of the failed disk. A disk on the same controller as the failed disk is closer than a disk on a different controller; a disk under the same target as the failed disk is closer than one under a different target. vxrelocd moves all subdisks from a failing drive to the same destination disk if possible.

If no spare or free space is found, mail is sent explaining the disposition of volumes that had storage on the failed disk:

      Hot-relocation was not successful for subdisks on disk dm_name in volume v_name in disk group dg_name. No replacement was made and the disk is still unusable.
      The following volumes have storage on medianame:
      volumename ...
      These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID-5 volumes with storage on the failed disk may become unusable in the face of further failures.

If any non-RAID-5 volumes were made unusable due to the disk failure, the following message is included:

      The following volumes:
      volumename ...
      have data on medianame but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.

If any RAID-5 volumes were made unavailable due to the disk failure, the following message is included:

      The following RAID-5 volumes:
      volumename ...
      had storage on medianame and have experienced other failures. These RAID-5 volumes are now unusable and data on them is unavailable. These RAID-5 volumes must have their data restored.

If there is spare space available, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d/dg_name.yymmdd_hhmmss.mpvsh before attempting a subdisk relocation. Relocation requires setting up a subdisk on the spare or free space not marked with nohotuse and using it to replace the failed subdisk. If this is successful, the vxrecover command runs in the background to recover the data in volumes that had storage on the disk. If the relocation fails, the following message is sent:

      Hot-relocation was not successful for subdisks on disk dm_name in volume v_name in disk group dg_name. No replacement was made and the disk is still unusable.

If any volumes (RAID-5 or otherwise) become unusable due to the failure, the following message is included:

      The following volumes:
      volumename ...
      have data on dm_name but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.

If the relocation procedure was successful and recovery has begun, the following mail message is sent:

      Volume v_name Subdisk sd_name relocated to newsd_name, but not yet recovered.

After recovery completes, a mail message is sent relaying the result of the recovery procedure. If the recovery is successful, the following message is included in the mail:

      Recovery complete for volume v_name in disk group dg_name.

If the recovery was not successful, the following message is included in the mail:

      Failure recovering v_name in disk group dg_name.
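If the recovery that vxrelocd normally launches needs to be re-run by hand, it can be approximated like this (a sketch; mydg is a placeholder disk group and the delay value is illustrative):

      # vxrecover -b -o slow=500 -g mydg         <- recover volumes in the background, throttling I/O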
Disabling vxrelocd
If you do not want automatic subdisk relocation, you can disable the hot-relocation feature by killing the relocation daemon, vxrelocd, and preventing it from restarting. However, do not kill the daemon while it is doing a relocation. To kill the daemon, run ps -ef from the command line and find the two entries for vxrelocd. Execute the command kill -9 PID1 PID2 (substituting PID1 and PID2 with the process IDs for the two vxrelocd processes). To prevent vxrelocd from being started again, you must comment out the line that starts up vxrelocd in the startup script /sbin/init.d/vxvm-recover.
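Putting the disable procedure into concrete commands (a sketch; PID1 and PID2 are placeholders to be replaced with the process IDs that ps reports):

      # ps -ef | grep vxrelocd | grep -v grep    <- find the two vxrelocd process IDs
      # kill -9 PID1 PID2                        <- substitute the two PIDs found above

Remember to also comment out the vxrelocd line in /sbin/init.d/vxvm-recover, or the daemon returns on the next reboot.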
FILES
/sbin/init.d/vxvm-recover
      The startup file for vxrelocd.
/etc/vx/saveconfig.d/dg_name.yymmdd_hhmmss.mpvsh
      File where vxrelocd saves a snapshot of the current configuration before performing a relocation.

SEE ALSO
kill(1), mailx(1), ps(1), vxdiskadm(1M), vxedit(1M), vxintro(1M), vxnotify(1M), vxrecover(1M), vxsparecheck(1M), vxunreloc(1M)

VxVM 5.0.31.1                         24 Mar 2008                         vxrelocd(1M)