Full Discussion: Sun T3-1 hardware RAID
Post 302526687 by soliberus on Wednesday 1st of June 2011 10:05:07 AM
What I would like to know is:

1.) If I create the RAID volumes from the OBP, how would the OS know when a disk has failed?

2.) In the event of a disk failure, do I have to bring the server down to the OBP to replace the disk, or can I simply hot-swap the drive and rely on the OBP being clever enough to update itself with the new device path, etc.?
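
For reference on question 1: on Solaris 10, a hardware RAID volume created on the onboard controller can usually be monitored from the OS with raidctl; the mirror appears as a single device, and a failed member shows up as a DEGRADED volume rather than as a missing disk. A rough sketch (the volume name below is an example, not taken from this box, and some newer SAS-2 controllers may need a different utility):

      # List the RAID volumes the controller knows about
      raidctl -l
      # Show the detail for one volume; a healthy mirror reports OPTIMAL,
      # a mirror with a failed member reports DEGRADED
      raidctl -l c1t0d0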

---------- Post updated at 03:05 PM ---------- Previous update was at 12:13 PM ----------

Things just got even more interesting. It seems I'm limited to 2 volumes if I configure RAID from OBP. What's that about?

The server can take 16 disks, yet I'm limited to only two RAID 1 volumes. Surely that can't be right?
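
For comparison, the same onboard controller can also be managed from a running Solaris 10 system with raidctl rather than from the OBP; a sketch, with hypothetical disk names. Note that if the two-volume ceiling is imposed by the controller firmware, creating the volumes from the OS will not raise it.

      # Create a hardware RAID 1 volume from two member disks
      # (this destroys any data on the disks)
      raidctl -c c1t0d0 c1t1d0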
 

9 More Discussions You Might Find Interesting

1. Solaris

Hardware RAID

I don't understand why SPARC platforms don't come with a RAID controller. Sorry for my bad English, but it's crazy to always have to set up software RAID! I want hardware RAID; where can I find a solution? (7 Replies)
Discussion started by: jess_t03
7 Replies

2. Solaris

how to hardware RAID 1 on T5120

Hi, I have a T5120 SPARC with two 146 GB drives in the system. I will be installing Solaris 10 and want the system mirrored using hardware RAID 1. The system came preinstalled from Sun and I have not done much on it. I booted the system using boot cdrom -s, ran format... (6 Replies)
Discussion started by: upengan78
6 Replies

3. Solaris

Sun T5120 hardware RAID question

Hi everyone, I've just purchased a Sun T5120 server with 2 internal disks. I've configured hardware RAID (mirror) and as a result the device tree in Solaris only contains 1 hard drive. My question is, how would I know when one of the drives becomes faulty? Thanks (2 Replies)
Discussion started by: soliberus
2 Replies

4. UNIX for Dummies Questions & Answers

RAID software vs hardware RAID

Hi, can someone tell me what the differences between software and hardware RAID are? Thanks for any help. (2 Replies)
Discussion started by: presul
2 Replies

5. Solaris

Hardware Raid - LiveUpgrade

Hi, I have a question: does LiveUpgrade support hardware RAID? How should the system disk be configured for Solaris 10 SPARC? 1) Hardware RAID-1 and UFS; 2) Hardware RAID-1 and ZFS; 3) SVM RAID-1 and UFS; 4) Software RAID-1 and ZFS. I care about this in the future to take... (1 Reply)
Discussion started by: bieszczaders
1 Replies

6. Hardware

Hardware RAID on Sun T2000 Server

Hi all, I have a Sun T2000 server. A couple of years ago I configured and mirrored the boot drive with another drive using hardware RAID 1 via the raidctl command. The following is the hardware RAID output. root@oracledatabaseserver / $ raidctl RAID Volume RAID RAID Disk... (0 Replies)
Discussion started by: Tirmazi
0 Replies

7. Solaris

Software RAID on top of Hardware RAID

Server model: T5120 with 4 x 146 GB disks. OS: Solaris 10, installed on c1t0d0. I plan to use software RAID (Veritas Volume Manager) on the c1t2d0 disk. After formatting and labeling the disk, I am still not able to detect it using vxdiskadm. Question: should I remove the hardware RAID on c1t2d0 first? My... (4 Replies)
Discussion started by: KhawHL
4 Replies
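
On the vxdiskadm question in discussion 7 above: one plausible approach (a sketch only; the disk name comes from the post, everything else is hypothetical) is to delete the hardware RAID volume so VxVM sees a bare disk, then re-initialize it:

      # Delete the hardware RAID volume on the second disk
      raidctl -d c1t2d0
      # Relabel the disk with format, then hand it to VxVM
      /etc/vx/bin/vxdisksetup -i c1t2d0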

8. Solaris

Hardware RAID does not recognize the new disk [Sun T6320]

We have hardware RAID configured on a Sun-Blade-T6320 and one of the disks failed, so we replaced it. But the hot-swapped disk is not recognized by the RAID. Kindly help with fixing this issue. We have 2 LDOMs configured on this server and this server is running on a single disk. #... (8 Replies)
Discussion started by: rock123
8 Replies
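
On discussion 8 above: a hot-swapped disk often has to be reconfigured into the OS before the controller will resynchronize it. A hedged sketch (the attachment point and device names are examples):

      # Check how the controller sees its attachment points
      cfgadm -al
      # Configure the replacement disk if it shows up unconfigured
      cfgadm -c configure c1::dsk/c1t1d0
      # Then confirm the RAID volume has started resyncing
      raidctl -l c1t0d0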

9. Solaris

Hardware raid patching

Dear all, we have hardware RAID 1 implemented on Solaris disks. We need to patch the servers. Kindly let me know how to patch servers that have hardware RAID implemented. Thanks... Rj (7 Replies)
Discussion started by: jegaraman
7 Replies
vxrelocd(1M)															      vxrelocd(1M)

NAME
       vxrelocd - monitor Veritas Volume Manager for failure events and relocate failed subdisks

SYNOPSIS
       /etc/vx/bin/vxrelocd [-o vxrecover_argument] [-O old_version] [-s save_max] [mail_address...]

DESCRIPTION
       The vxrelocd command monitors Veritas Volume Manager (VxVM) by analyzing the output of the vxnotify command, and waits for a failure. When a failure occurs, vxrelocd sends mail via mailx to root (by default) or to other specified users and relocates failed subdisks. After completing the relocation, vxrelocd sends more mail indicating the status of each subdisk replacement. The vxrecover utility is then run on volumes with relocated subdisks to restore data. Mail is sent after vxrecover executes.

OPTIONS
       -o     The -o option and its argument are passed directly to vxrecover if vxrecover is called. This allows specifying -o slow[=iodelay] to keep vxrecover from overloading a busy system during recovery. The default value for the delay is 250 milliseconds.

       -O     Reverts back to an older version. Specifying -O VxVM_version directs vxrelocd to use the relocation scheme in that version.

       -s     Before vxrelocd attempts a relocation, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d. This option specifies the maximum number of configurations to keep for each disk group. The default is 32.

   Mail Notification
       By default, vxrelocd sends mail to root with information about a detected failure and the status of any relocation and recovery attempts. To send mail to other users, add the user login name to the vxrelocd startup line in the startup script /sbin/init.d/vxvm-recover, and reboot the system. For example, if the line appears as:

              nohup vxrelocd root &

       and you want mail also to be sent to user1 and user2, change the line to read:

              nohup vxrelocd root user1 user2 &

       Alternatively, you can kill the vxrelocd process and restart it as vxrelocd root mail_address, where mail_address is a user's login name. Do not kill the vxrelocd process while a relocation attempt is in progress.

       The mail notification that is sent when a failure is detected follows this format:

              Failures have been detected by the Veritas Volume Manager:

              failed disks: medianame ...
              failed plexes: plexname ...
              failed log plexes: plexname ...
              failing disks: medianame ...
              failed subdisks: subdiskname ...

              The Volume Manager will attempt to find spare disks, relocate failed subdisks and then recover the data in the failed plexes.

       The medianame list under failed disks specifies disks that appear to have completely failed; the medianame list under failing disks indicates a partial disk failure or a disk that is in the process of failing. When a disk has failed completely, the same medianame list appears under both failed disks and failing disks. The plexname list under failed plexes shows plexes that were detached due to I/O failures that occurred while attempting to do I/O to subdisks they contain. The plexname list under failed log plexes indicates RAID-5 or DRL (dirty region logging) log plexes that have failed. The subdiskname list specifies subdisks in RAID-5 volumes that were detached due to I/O errors.

   Spare Space
       A disk can be marked as ``spare,'' which makes the disk available as a site for relocating failed subdisks. Disks that are marked as spares are not used for normal allocations unless you explicitly specify them. This ensures that there is a pool of spare space available for relocating failed subdisks and that this space does not get consumed by normal operations. Spare space is the first space used to relocate failed subdisks. However, if no spare space is available, or the available spare space is not suitable or sufficient, free space is also used, except for disks marked with the nohotuse flag. See the vxedit(1M) and vxdiskadm(1M) manual pages for more information on marking a disk as spare or nohotuse.

   Nohotuse Space
       A disk can be marked as ``nohotuse,'' which excludes the disk from being used by vxrelocd while leaving it available as free space. See the vxedit(1M) and vxdiskadm(1M) manual pages for more information on marking a disk as spare or nohotuse.
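       As a concrete illustration of the spare and nohotuse flags described above (a sketch; the disk group and disk media names are hypothetical):

              # Reserve disk01 as dedicated hot-relocation spare space
              vxedit -g mydg set spare=on disk01
              # Keep disk02 out of hot-relocation but leave it as free space
              vxedit -g mydg set nohotuse=on disk02
              # Restart vxrelocd so failure mail also reaches admin1, with
              # vxrecover throttled on a busy system (see -o above)
              nohup vxrelocd -o slow=500 root admin1 &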
   Replacement Procedure
       After mail is sent, vxrelocd relocates failed subdisks (those listed in the subdisks list). This requires finding appropriate spare or free space in the same disk group as the failed subdisk. A disk is eligible as replacement space if it is a valid Veritas Volume Manager disk (VM disk) and contains enough space to hold the data contained in the failed subdisk. If no space is available on spare disks, the relocation uses free space that is not marked nohotuse.

       To determine which of the eligible disks to use, vxrelocd first tries the disk that is closest to the failed disk. The value of ``closeness'' depends on the controller, target, and disk number of the failed disk. A disk on the same controller as the failed disk is closer than a disk on a different controller; a disk under the same target as the failed disk is closer than one under a different target. vxrelocd moves all subdisks from a failing drive to the same destination disk if possible.

       If no spare or free space is found, mail is sent explaining the disposition of volumes that had storage on the failed disk:

              Hot-relocation was not successful for subdisks on disk dm_name in volume v_name in disk group dg_name. No replacement was made and the disk is still unusable.

              The following volumes have storage on medianame:

              volumename ...

              These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID-5 volumes with storage on the failed disk may become unusable in the face of further failures.

       If any non-RAID-5 volumes were made unusable due to the disk failure, the following message is included:

              The following volumes:

              volumename ...

              have data on medianame but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.

       If any RAID-5 volumes were made unavailable due to the disk failure, the following message is included:

              The following RAID-5 volumes:

              volumename ...

              had storage on medianame and have experienced other failures. These RAID-5 volumes are now unusable and data on them is unavailable. These RAID-5 volumes must have their data restored.

       If there is spare space available, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d/dg_name.yymmdd_hhmmss.mpvsh before attempting a subdisk relocation. Relocation requires setting up a subdisk on the spare or free space not marked with nohotuse and using it to replace the failed subdisk. If this is successful, the vxrecover command runs in the background to recover the data in volumes that had storage on the disk. If the relocation fails, the following message is sent:

              Hot-relocation was not successful for subdisks on disk dm_name in volume v_name in disk group dg_name. No replacement was made and the disk is still unusable.

       If any volumes (RAID-5 or otherwise) become unusable due to the failure, the following message is included:

              The following volumes:

              volumename ...

              have data on dm_name but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.

       If the relocation procedure was successful and recovery has begun, the following mail message is sent:

              Volume v_name Subdisk sd_name relocated to newsd_name, but not yet recovered.

       After recovery completes, a mail message is sent relaying the result of the recovery procedure. If the recovery is successful, the following message is included in the mail:

              Recovery complete for volume v_name in disk group dg_name.

       If the recovery was not successful, the following message is included in the mail:

              Failure recovering v_name in disk group dg_name.
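       Once the failed disk has been physically replaced, relocated subdisks can usually be moved back to it with vxunreloc (see SEE ALSO below); a sketch with a hypothetical disk group and disk media name:

              # Move hot-relocated subdisks back to the repaired disk disk01
              vxunreloc -g mydg disk01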
   Disabling vxrelocd
       If you do not want automatic subdisk relocation, you can disable the hot-relocation feature by killing the relocation daemon, vxrelocd, and preventing it from restarting. However, do not kill the daemon while it is doing a relocation. To kill the daemon, run the command:

              ps -ef

       from the command line and find the two entries for vxrelocd. Execute the command:

              kill -9 PID1 PID2

       (substituting PID1 and PID2 with the process IDs of the two vxrelocd processes). To prevent vxrelocd from being started again, comment out the line that starts up vxrelocd in the startup script /sbin/init.d/vxvm-recover.
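       The disable procedure above can be condensed into two steps (a sketch; pgrep is the usual Solaris shortcut for finding the PIDs, and the kill must not be issued mid-relocation):

              # Find and kill both vxrelocd processes
              kill -9 `pgrep vxrelocd`
              # Then comment out the nohup vxrelocd line in
              # /sbin/init.d/vxvm-recover so it is not restarted at boot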
FILES
       /sbin/init.d/vxvm-recover
                   The startup file for vxrelocd.

       /etc/vx/saveconfig.d/dg_name.yymmdd_hhmmss.mpvsh
                   File where vxrelocd saves a snapshot of the current configuration before performing a relocation.
SEE ALSO
       kill(1), mailx(1), ps(1), vxdiskadm(1M), vxedit(1M), vxintro(1M), vxnotify(1M), vxrecover(1M), vxsparecheck(1M), vxunreloc(1M)

VxVM 5.0.31.1                           24 Mar 2008                           vxrelocd(1M)