Sun Fire v440 Hard disk or controller broken? WARNING: /pci@1f,700000/scsi@2/sd@0,0 (sd1)
Post 303044055 by hicksd8 on Thursday 13th of February 2020 05:15:54 AM
The RAID controller is not showing "no problems", as you put it.

RESYNCING means that the controller is remirroring the RAID 1 disks because of a problem. Depending on the capacity of the RAID 1 disks (they will typically be exactly the same size), this resyncing shouldn't take very long; however, whilst it is in progress, system response time will be impacted. Once complete, the status should change to OPTIMAL.
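If the mirror in question is the V440's onboard RAID managed from Solaris with raidctl, you can watch the progress from the OS; a rough sketch only, since the exact output wording varies by Solaris release:

    # list RAID volumes and their current status (e.g. a resyncing state vs OPTIMAL)
    raidctl
    # re-check every five minutes until no volume still reports a sync in progress
    while raidctl | grep -i sync; do sleep 300; done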

If the resyncing is falling over for some reason, the process might be restarting over and over so that OPTIMAL is never achieved. Watch for that. If that is the case, I would be inclined, if at all possible, to first take the system down and re-seat all SCSI/SATA cables at both ends (disk and motherboard) and all disk power plugs. Reboot and see if the problem persists. If it does, then most likely one of the disks is faulty. It's possible, but unlikely, that the RAID controller itself is faulty; the disks are the only moving parts.
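After the reboot, it's also worth checking whether one particular disk keeps accumulating errors; a quick sketch (the sd1 name comes from the warning in the thread title, everything else is stock Solaris):

    # per-device soft/hard/transport error counters
    iostat -En
    # recent kernel messages mentioning the complaining device
    egrep -i 'sd1|scsi' /var/adm/messages | tail -20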

You could remove the faulty RAID 1 drive (the one continuously resyncing), put it in another machine, and run diagnostics on it. Perhaps completely reformat it and try again. Otherwise, a new disk is required.
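If you do pull the suspect drive and test it in another Solaris box, the stock format(1M) utility has a surface-analysis menu that will exercise it; an outline only (purge is destructive and wipes the disk):

    format               # then, at the interactive prompts:
                         #   choose the suspect disk from the disk menu
                         #   analyze -> read    (non-destructive read test)
                         #   analyze -> purge   (destructive write/verify pass)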
 

10 More Discussions You Might Find Interesting

1. Solaris

Sun Fire V440 and Patch 109147-39

Got an curious issue. I applied 109147-39 to, oh 15 or so various systems all running Jumpstarted Solaris 8. When I hit the first two V440s, they both failed with Return code 139. All non shell commands segfaulted from then on. The patch modified mainly the linker libraries and commands. ... (2 Replies)
Discussion started by: BOFH

2. Solaris

Sun Fire v440 keeps shutting down

Hello, I hope you can help me. I am new to Sun servers and we have a Sun Fire v440 server in which one power supply failed, we are waiting for new one. But now our server is shutting down constantly. Is there any setting with which we can prevent this behaviour? (1 Reply)
Discussion started by: Tibor

3. Solaris

USB Hard Disk Drive Supported by Sun Fire V890

Hi, Can anyone suggest me any USB Hard Disk Drive which I can connect to Sun Fire V890 and take backup at a quick speed. A test with SolidState USB Hard Drive for backup work was taking writing at 2GB per hour for a 75GB backup. Regards, Tushar Kathe (1 Reply)
Discussion started by: tushar_kathe

4. Solaris

Sun Fire v440 hardware problem (can't get ok>)

First of all it's shut down 60 second after power on and write on console : SC Alert: Correct SCC not replaced - shutting managed system down! This is cured by moving out battery from ALOM card. Now server start to loop during the testing. That's on the console: >@(#) Sun Fire V440,Netra... (14 Replies)
Discussion started by: Alisher

5. Solaris

error messages in Sun Fire V440

Hello, I am seeing error messages in V440 (OS = solaris 8). I have copied here : The system does not reboot constantly and it is up for last 67 days. One more interesting thing I found, I see errors start appearing at 4:52AM last until 6am and again start at 16:52am on same day.. I... (5 Replies)
Discussion started by: upengan78

6. AIX

SCSI PCI - X RAID Controller card RAID 5 AIX Disks disappeared

Hello, I have a scsi pci x raid controller card on which I had created a disk array of 3 disks when I type lspv ; I used to see 3 physical disks ( two local disks and one raid 5 disk ) suddenly the raid 5 disk array disappeared ; so the hardware engineer thought the problem was with SCSI... (0 Replies)
Discussion started by: filosophizer

7. Solaris

Firmware password Solaris Sun Fire v440

Hi: I bougth an used Sun Fire v440, and It have a firmware password. When I turn on the server, it ask for firmware password. (I don 't know what is the correct password). I can access to SC, but when I want to access to OBP, Firmware Password appears again. I remove the battery for two hours,... (1 Reply)
Discussion started by: mguazzardo

8. Solaris

Sun Fire v440 Over heat Problem.

Dear Team, I need some expert advice to my problem. We have a Sun Fire v440 in our customer Place. Server is working fine and no hardware deviations are found except one problem that processors generating too much heat. I have verified and found that the room temperature was 26-27 degree.... (5 Replies)
Discussion started by: sudhansu

9. Solaris

Sun-Fire V440 boot disk issue

Hi, I have Sun Fire V440. Boot disks are mirrored. system crashed and it's not coming up. Error message is Insufficient metadevice database replicas located. Use Metadb to delete databases which are broken. Boot disks are mirrored and other disks are ZFS configuration. Please... (2 Replies)
Discussion started by: samnyc

10. Solaris

Removing a disk from SUN Fire V440 running Solaris 8

Hi, I have a SUN Fire V440 server running Solaris 8. One of the 4 disks do not appear when issued the format command. The "ready to remove" LED is not on either. Metastat command warns that this disk "Needs maintenace". Can I just shutdown and power off the machine and then insert an... (5 Replies)
Discussion started by: Echo68
saconfig(1M)

NAME
   saconfig - Command Line Configuration Utility for the HP SmartArray RAID Controller Family

SYNOPSIS
   saconfig <device_file>
   saconfig <device_file> -N
   saconfig <device_file> -R <RAID_level> [-S <stripe_size>] -p <physical_drive_id> [-p <physical_drive_id> ...] [-s <physical_drive_id>] [-c <capacity>]
   saconfig <device_file> -e <low|medium|high>
   saconfig <device_file> -r <low|medium|high>
   saconfig <device_file> -P <default|shortest|longest>
   saconfig <device_file> -A <logical_drive_num> -s <physical_drive_id>
   saconfig <device_file> -D <logical_drive_num> -s <physical_drive_id>
   saconfig <device_file> -D <logical_drive_num> -s all
   saconfig <device_file> -D <logical_drive_num>
   saconfig <device_file> -D all
   saconfig <device_file> -C <read_caching_percentage>
   saconfig <device_file> -F on
   saconfig <device_file> -F off
   saconfig <device_file> -I -p <physical_drive_id> [-p <physical_drive_id> ...]
   saconfig <device_file> -I -l <logical_drive_num>
   saconfig <device_file> -E <logical_drive_num> -c <capacity>
   saconfig <device_file> -E <logical_drive_num> -p <physical_drive_id> [-p <physical_drive_id> ...]
   saconfig <device_file> -M <logical_drive_num> -R <RAID_level> [-S <stripe_size>]
   saconfig <device_file> -M <logical_drive_num> -S <stripe_size>

DESCRIPTION
   The saconfig command is a command line configuration tool for the HP SmartArray RAID Controller Family. It provides the ability to create a logical drive, add a spare drive to an existing logical drive, display the configuration, delete a logical drive, and clear the configuration. The command also enables the cache when it is used to create the first logical drive on the SmartArray RAID Controller. Furthermore, it allows the percentage of total cache size used for read caching and write caching to be changed. The auto-fail missing disks at boot feature can be enabled or disabled using this command.

 Prerequisites
   saconfig needs to be able to create the /tmp/saconfig.lock file when it starts executing. This prevents multiple users from running saconfig at the same time. The lock file is removed when saconfig exits.

 Security Restrictions
   saconfig requires either the superuser privilege or the appropriate privileges. See privileges(5) for more information about privileged access on systems that support fine-grained privileges.

 Options
   saconfig recognizes the following options and parameters, as indicated in the SYNOPSIS section above.

   <device_file>
          To specify the device file, as displayed by the system, for the appropriate SmartArray RAID Controller. With no other options, saconfig displays size and status for all physical drives, and RAID level, size, and status for all logical drives.

   -N     With the -N option, saconfig displays the same size and status information for all physical and logical drives, but persistent device files (see intro(7)) are displayed instead of the legacy device files shown in the absence of this option.

   -R <RAID_level>
          To create a logical drive with the specified RAID level. The supported RAID levels are 0, 1, 1+0, 5, and ADG. The lowest logical drive number not already in use, starting from 0, is given to the newly created logical drive; the logical drives are listed in the same order as in the system's device listing, which can be used to determine the hardware path and device file for the logical drive. A RAID 1+0 logical drive requires an even number of physical drives (minimum of 2). To achieve optimal fault tolerance, it is best to specify the same number of physical drives per channel, if possible, when creating a RAID 1+0 logical drive; this ensures that the physical drives in each mirrored pair are on different channels.

   -S <stripe_size>
          To specify the stripe size for the logical drive to be created. The supported stripe sizes are 8, 16, 32, 64, 128, and 256 (KB). The 128 and 256 stripe sizes are not available for RAID 5 or ADG logical drives. If this option is not used, the default stripe size is used: 128 for RAID 0 and 1+0 logical drives, and 16 for RAID 5 and ADG logical drives.

   -p <physical_drive_id>
          To specify a physical disk. A SCSI physical disk is represented as <channel_id>:<target_id> (e.g., 4:12); valid channel numbers are between 1 and 4, and valid target numbers are between 0 and 15. A SAS/SATA physical disk is represented as either <connector>:<enclosure>:<bay> (e.g., 2I:1:10) or <wwid> (e.g., 0x500000e010f16432).

   -s <physical_drive_id>
          To specify a spare drive to be included in the logical drive configuration. For the format of <physical_drive_id>, refer to the -p option.

   -c <capacity>
          To specify the size, in GB, of the logical drive to be created.

   -A <logical_drive_num>
          To specify the logical drive to which a spare drive is to be added.
   -e <low|medium|high>
          To change the expand priority of the controller to low, medium, or high. When there are no logical drives, the expand priority cannot be changed. When the configuration is cleared, or the last logical drive is deleted, the subsequent logical drive creation resets the expand priority of the controller to medium.

   -r <low|medium|high>
          To change the rebuild priority of the controller to low, medium, or high. When there are no logical drives, the rebuild priority cannot be changed. When the configuration is cleared, or the last logical drive is deleted, the subsequent logical drive creation resets the rebuild priority of the controller to high.

   -P <default|shortest|longest>
          To change the path selection method used by the controller firmware. If set to default, the firmware selects the Active path on its own. If set to shortest, the firmware selects the shortest path as the Active path. If set to longest, the firmware selects the longest path as the Active path. All multipath configurations are Active-Standby configurations.

   -D <logical_drive_num> -s <physical_drive_id>
          To delete the specified spare drive from the specified logical drive.

   -D <logical_drive_num> -s all
          To delete all spare drives from the specified logical drive.

   -D <logical_drive_num>
          To delete the specified logical drive.

   -D all
          To clear the configuration.

   -C <read_caching_percentage>
          To specify the percentage of total cache size to be used for read caching. The remaining percentage is used for write caching. Valid read caching percentages are 0, 25, 50, 75, and 100 (%).

   -F on
          To enable the auto-fail missing disks at boot feature.

   -F off
          To disable the auto-fail missing disks at boot feature.

   -I -p <physical_drive_id> [-p <physical_drive_id> ...]
          To identify SAS/SATA physical drive(s) by LED.

   -I -l <logical_drive_num>
          To identify a SAS/SATA logical drive by LED.

   -E <logical_drive_num> -c <capacity>
          To extend the capacity of the specified logical drive up to a larger capacity. The capacity is in GB.

   -E <logical_drive_num> -p <physical_drive_id> [-p <physical_drive_id> ...]
          To expand the specified logical drive, and others in the same array, by adding physical drive(s).

   -M <logical_drive_num> -R <RAID_level> [-S <stripe_size>]
          To perform a RAID level (and optionally stripe size) migration on the specified logical drive. The stripe size is in KB.

   -M <logical_drive_num> -S <stripe_size>
          To perform a stripe size migration on the specified logical drive. The stripe size is in KB.

 Logical Drive State Definitions
   The states reported for a logical drive correspond to the following conditions:

   - All physical disks in the logical drive are operational.

   - The logical drive has failed. Some possible causes: multiple physical disks in a fault-tolerant (RAID 1, 1+0, 5, ADG) logical drive have failed; one or more disks in a RAID 0 logical drive have failed; cache data loss has occurred; array expansion was aborted; or the logical drive is temporarily disabled because another logical drive on the controller had a missing disk at power-up.

   - Also known as the "degraded" state: a physical disk in a fault-tolerant logical drive has failed. For RAID 1, 1+0, or 5, data loss may result if a second disk should fail; for RAID ADG, data loss may result if two additional disks should fail.

   - A replacement disk is present, but the rebuild hasn't started yet (another logical drive may currently be rebuilding). The logical drive also returns to this state if a rebuild is aborted due to unrecoverable read errors from another disk.

   - One or more physical disks in this logical drive are being rebuilt.

   - While the logical drive was in a degraded state, the system was powered off and a disk other than the failed disk was replaced. Shut off the system and replace the correct (failed) disk.

   - While the system was off, one or more disks were removed. Note: the other logical drives are held in a temporary "failed" state when this occurs.

   - EXPANDING: the data in the logical drive is being reorganized because physical disks have been added to the array (capacity expansion), the stripe size is being changed (stripe-size migration), or the RAID level is being changed (RAID-level migration).

   - A capacity expansion operation is in progress (or is queued up) that will make room on the disks for this new logical drive. Until room has been made on the physical disks, this newly configured logical drive cannot be read or written.

   - The logical drive is waiting to undergo data reorganization (see EXPANDING above). Possible causes for the delay are that a rebuild or expansion operation may already be in progress.

 Physical Disk State Definitions
   The states reported for a physical disk correspond to the following conditions:

   - The physical disk is configured in one or more logical drives and is operational.

   - The physical disk is configured as a spare disk.

   - The physical disk has not been configured in any logical drive.

   - The configured physical disk has failed.

 Return Value
   saconfig returns a value indicating one of the following conditions:

   - Successful completion.
   - Command line syntax error.
   - Incompatible CISS driver API.
   - Failure opening a file.
   - Other error.

 Examples
   To display the configuration on the SmartArray RAID Controller /dev/ciss5:
       saconfig /dev/ciss5

   To create a RAID 1+0 logical drive including physical drives 4:0 and 4:1 and spare drive 4:2:
       saconfig <device_file> -R 1+0 -p 4:0 -p 4:1 -s 4:2

   To create a RAID ADG logical drive with stripe size 64 including physical drives 1:2, 2:2, 3:2, and 4:2:
       saconfig <device_file> -R ADG -S 64 -p 1:2 -p 2:2 -p 3:2 -p 4:2

   To create a RAID 5 logical drive with capacity 45 GB including SAS physical drives 1I:1:16, 1I:1:15, and 1I:1:14:
       saconfig <device_file> -R 5 -c 45 -p 1I:1:16 -p 1I:1:15 -p 1I:1:14

   To set the expand priority of logical drives on a controller to medium:
       saconfig <device_file> -e medium

   To set the rebuild priority of logical drives on a controller to high:
       saconfig <device_file> -r high

   To add spare drive 1:0 to logical drive 1:
       saconfig <device_file> -A 1 -s 1:0

   To delete spare drive 1:0 from logical drive 1:
       saconfig <device_file> -D 1 -s 1:0

   To delete all spare drives from logical drive 1:
       saconfig <device_file> -D 1 -s all

   To delete logical drive 0:
       saconfig <device_file> -D 0

   To clear the configuration:
       saconfig <device_file> -D all

   To change the read caching percentage to 75%:
       saconfig <device_file> -C 75

   To extend logical drive 2 up to 45 GB in capacity:
       saconfig <device_file> -E 2 -c 45

   To expand logical drive 0 with two SAS physical drives:
       saconfig <device_file> -E 0 -p <physical_drive_id> -p <physical_drive_id>

   To migrate logical drive 2 to stripe size 32:
       saconfig <device_file> -M 2 -S 32

   To migrate logical drive 2 to RAID level 5 with stripe size 64:
       saconfig <device_file> -M 2 -R 5 -S 64

AUTHOR
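   Putting a few of the options above together, a typical build-and-verify sequence might look like the following sketch (not taken from the manpage examples; /dev/ciss5 and the drive IDs are placeholders):

       saconfig /dev/ciss5 -R 1 -p 2I:1:1 -p 2I:1:2    # mirror two drives as a RAID 1 logical drive
       saconfig /dev/ciss5 -A 0 -s 2I:1:3              # add a spare to logical drive 0
       saconfig /dev/ciss5 -r high                     # rebuild at high priority after a failure
       saconfig /dev/ciss5                             # display the resulting configuration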
   saconfig was developed by HP.

FILES
   Executable file.
   /tmp/saconfig.lock    saconfig lock file.
   Device files.

SEE ALSO
   sautil(1M), privileges(5), intro(7).