Sun Fire V440: hard disk or controller broken? WARNING: /pci@1f,700000/scsi@2/sd@0,0 (sd1)
Post 303044057 by oliwei, Thursday 13th of February 2020, 05:25:14 AM
Hi,

The status is shown as OPTIMAL. I would have expected raidctl to report a disk as failed or failing, so how can I identify which of the two disks is bad when raidctl claims everything is OK? I have powered the server down many times. I have not yet reseated all the cables; I will give that a try.
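When raidctl still reports the volume as OPTIMAL, the per-device error counters and the kernel log usually point at the failing member. Below is a minimal diagnostic sketch; the Solaris-specific commands are guarded so it degrades gracefully elsewhere, and the sd1 instance name is simply taken from the WARNING in the subject line:

```shell
#!/bin/sh
# Sketch: identify a failing disk behind a healthy-looking RAID volume.
check_disk_errors() {
    inst=${1:-sd1}   # sd instance from the WARNING in the subject line
    # On Solaris, iostat -En prints per-device soft/hard/transport error
    # counters; rising "Hard Errors" on one member names the culprit.
    if command -v iostat >/dev/null 2>&1; then
        iostat -En 2>/dev/null | grep -iB2 'hard errors' | head -20
    fi
    # The kernel also logs WARNING lines naming the failing instance.
    if [ -r /var/adm/messages ]; then
        grep "WARNING.*$inst" /var/adm/messages | tail -5
    fi
    echo "checked $inst"
}

check_disk_errors sd1
```

If the counters and log are clean on both disks, the controller or cabling becomes the more likely suspect, which is consistent with the suggestion to reseat the cables.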

Last edited by hicksd8; 02-13-2020 at 06:32 AM.
 

vxrelocd(1M)

NAME
vxrelocd - monitor Veritas Volume Manager for failure events and relocate failed subdisks

SYNOPSIS
/etc/vx/bin/vxrelocd [-o vxrecover_argument] [-O old_version] [-s save_max] [mail_address...]

DESCRIPTION
The vxrelocd command monitors Veritas Volume Manager (VxVM) by analyzing the output of the vxnotify command, and waits for a failure. When a failure occurs, vxrelocd sends mail via mailx to root (by default) or to other specified users, and relocates failed subdisks. After completing the relocation, vxrelocd sends more mail indicating the status of each subdisk replacement. The vxrecover utility is then run on volumes with relocated subdisks to restore data. Mail is sent after vxrecover executes.

OPTIONS
-o   The -o option and its argument are passed directly to vxrecover if vxrecover is called. This allows specifying -o slow[=iodelay] to keep vxrecover from overloading a busy system during recovery. The default value for the delay is 250 milliseconds.

-O   Reverts to an older version. Specifying -O VxVM_version directs vxrelocd to use the relocation scheme in that version.

-s   Before vxrelocd attempts a relocation, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d. This option specifies the maximum number of configurations to keep for each disk group. The default is 32.

Mail Notification
By default, vxrelocd sends mail to root with information about a detected failure and the status of any relocation and recovery attempts. To send mail to other users, add the user login names to the vxrelocd startup line in the startup script /sbin/init.d/vxvm-recover, and reboot the system. For example, if the line appears as:

    nohup vxrelocd root &

and you want mail also to be sent to user1 and user2, change the line to read:

    nohup vxrelocd root user1 user2 &

Alternatively, you can kill the vxrelocd process and restart it as vxrelocd root mail_address, where mail_address is a user's login name. Do not kill the vxrelocd process while a relocation attempt is in progress.

The mail notification sent when a failure is detected follows this format:

    Failures have been detected by the Veritas Volume Manager:

    failed disks:      medianame ...
    failed plexes:     plexname ...
    failed log plexes: plexname ...
    failing disks:     medianame ...
    failed subdisks:   subdiskname ...

    The Volume Manager will attempt to find spare disks, relocate failed
    subdisks and then recover the data in the failed plexes.

The medianame list under failed disks specifies disks that appear to have completely failed; the medianame list under failing disks indicates a partial disk failure or a disk that is in the process of failing.
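The startup-line edit described above can be sketched as follows. This is demonstrated on a temporary copy rather than the real /sbin/init.d/vxvm-recover; slow=500, user1, and user2 are illustrative values, not defaults:

```shell
#!/bin/sh
# Sketch: add mail recipients and an I/O throttle to the vxrelocd
# startup line (option syntax per the OPTIONS section above).
add_recipients() {
    file=$1
    sed 's/vxrelocd root &/vxrelocd -o slow=500 root user1 user2 \&/' "$file"
}

tmp=$(mktemp)
printf 'nohup vxrelocd root &\n' > "$tmp"   # line as shipped
add_recipients "$tmp"                       # prints the edited line
rm -f "$tmp"
```

Remember that the change only takes effect after the system is rebooted, or after vxrelocd is killed (not during a relocation) and restarted by hand.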
When a disk has failed completely, the same medianame list appears under both failed disks and failing disks. The plexname list under failed plexes shows plexes that were detached due to I/O failures that occurred while attempting to do I/O to subdisks they contain. The plexname list under failed log plexes indicates RAID-5 or DRL (dirty region logging) log plexes that have failed. The subdiskname list specifies subdisks in RAID-5 volumes that were detached due to I/O errors.

Spare Space
A disk can be marked as ``spare.'' This makes the disk available as a site for relocating failed subdisks. Disks that are marked as spares are not used for normal allocations unless you explicitly specify them. This ensures that there is a pool of spare space available for relocating failed subdisks and that this space does not get consumed by normal operations. Spare space is the first space used to relocate failed subdisks. However, if no spare space is available, or the available spare space is not suitable or sufficient, free space is also used, except on disks marked with the nohotuse flag. See the vxedit(1M) and vxdiskadm(1M) manual pages for more information on marking a disk as spare or nohotuse.

Nohotuse Space
A disk can be marked as ``nohotuse.'' This excludes the disk from being used by vxrelocd, but it remains available as free space. See the vxedit(1M) and vxdiskadm(1M) manual pages for more information on marking a disk as spare or nohotuse.

Replacement Procedure
After mail is sent, vxrelocd relocates failed subdisks (those listed in the subdisks list). This requires finding appropriate spare or free space in the same disk group as the failed subdisk. A disk is eligible as replacement space if it is a valid Veritas Volume Manager disk (VM disk) and contains enough space to hold the data contained in the failed subdisk. If no space is available on spare disks, the relocation uses free space that is not marked nohotuse.
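Marking disks for the spare and nohotuse behavior described above can be sketched as below. The syntax follows vxedit(1M); the disk group "datadg" and disks "disk05"/"disk06" are made-up names, and the whole thing is guarded so it is a no-op on systems without VxVM:

```shell
#!/bin/sh
# Hypothetical vxedit invocations for reserving relocation space.
mark_relocation_space() {
    if command -v vxedit >/dev/null 2>&1; then
        vxedit -g datadg set spare=on    disk05  # dedicated hot-relocation spare
        vxedit -g datadg set nohotuse=on disk06  # free space, never used by vxrelocd
    else
        echo "VxVM not installed; shown for illustration only"
    fi
}

mark_relocation_space
```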
To determine which of the eligible disks to use, vxrelocd first tries the disk that is closest to the failed disk. The value of ``closeness'' depends on the controller, target, and disk number of the failed disk. A disk on the same controller as the failed disk is closer than a disk on a different controller; a disk under the same target as the failed disk is closer than one under a different target. vxrelocd moves all subdisks from a failing drive to the same destination disk if possible.

If no spare or free space is found, mail is sent explaining the disposition of volumes that had storage on the failed disk:

    Hot-relocation was not successful for subdisks on disk dm_name in
    volume v_name in disk group dg_name. No replacement was made and the
    disk is still unusable.

    The following volumes have storage on medianame:

    volumename ...

    These volumes are still usable, but the redundancy of those volumes is
    reduced. Any RAID-5 volumes with storage on the failed disk may become
    unusable in the face of further failures.

If any non-RAID-5 volumes were made unusable due to the disk failure, the following message is included:

    The following volumes:

    volumename ...

    have data on medianame but have no other usable mirrors on other
    disks. These volumes are now unusable and the data on them is
    unavailable. These volumes must have their data restored.

If any RAID-5 volumes were made unavailable due to the disk failure, the following message is included:

    The following RAID-5 volumes:

    volumename ...

    had storage on medianame and have experienced other failures. These
    RAID-5 volumes are now unusable and data on them is unavailable. These
    RAID-5 volumes must have their data restored.

If there is spare space available, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d/dg_name.yymmdd_hhmmss.mpvsh before attempting a subdisk relocation.
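The saved configuration snapshots mentioned above can be inspected directly. A small sketch, guarded because the directory exists only where VxVM is installed:

```shell
#!/bin/sh
# Sketch: list pre-relocation configuration snapshots, newest first.
list_saved_configs() {
    dir=${1:-/etc/vx/saveconfig.d}
    if [ -d "$dir" ]; then
        ls -lt "$dir"        # -t: most recent snapshot first
    else
        echo "no snapshots: $dir does not exist"
    fi
}

list_saved_configs
```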
Relocation requires setting up a subdisk on the spare or free space not marked with nohotuse and using it to replace the failed subdisk. If this is successful, the vxrecover command runs in the background to recover the data in volumes that had storage on the disk. If the relocation fails, the following message is sent:

    Hot-relocation was not successful for subdisks on disk dm_name in
    volume v_name in disk group dg_name. No replacement was made and the
    disk is still unusable.

If any volumes (RAID-5 or otherwise) become unusable due to the failure, the following message is included:

    The following volumes:

    volumename ...

    have data on dm_name but have no other usable mirrors on other disks.
    These volumes are now unusable and the data on them is unavailable.
    These volumes must have their data restored.

If the relocation procedure was successful and recovery has begun, the following mail message is sent:

    Volume v_name Subdisk sd_name relocated to newsd_name, but not yet
    recovered.

After recovery completes, a mail message is sent relaying the result of the recovery procedure. If the recovery is successful, the following message is included in the mail:

    Recovery complete for volume v_name in disk group dg_name.

If the recovery was not successful, the following message is included in the mail:

    Failure recovering v_name in disk group dg_name.

Disabling vxrelocd
If you do not want automatic subdisk relocation, you can disable the hot-relocation feature by killing the relocation daemon, vxrelocd, and preventing it from restarting. However, do not kill the daemon while it is performing a relocation. To kill the daemon, run ps -ef and find the two entries for vxrelocd, then execute:

    kill -9 PID1 PID2

(substituting PID1 and PID2 with the process IDs of the two vxrelocd processes). To prevent vxrelocd from being started again, comment out the line that starts vxrelocd in the startup script /sbin/init.d/vxvm-recover.

FILES
/sbin/init.d/vxvm-recover
    The startup file for vxrelocd.

/etc/vx/saveconfig.d/dg_name.yymmdd_hhmmss.mpvsh
    File where vxrelocd saves a snapshot of the current configuration before performing a relocation.

SEE ALSO
kill(1), mailx(1), ps(1), vxdiskadm(1M), vxedit(1M), vxintro(1M), vxnotify(1M), vxrecover(1M), vxsparecheck(1M), vxunreloc(1M)

VxVM 5.0.31.1                         24 Mar 2008                         vxrelocd(1M)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.