Full Discussion: raidctl performance issues
Operating Systems » Solaris » raidctl performance issues
Post 302322463 by incredible on Wednesday 3rd of June 2009, 11:12:18 PM
Quote:
Originally Posted by skamal4u
Sometimes when one drive fails we don't face any issue and we replace the drive without any problem. But other times when a drive fails, the system becomes unresponsive and does not allow us to log in.
Why is that so? Most likely the systems you have at large are of different models and architectures.
You should check whether there is a known firmware bug for the affected model, and make sure your firmware is updated to the latest release.
Slot 0 and slot 1 can be used for the OS disks. You can even mirror slot 0 with slot 2 or 3; it doesn't really matter, as long as you point raidctl at the correct controller/target.
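For example, on a typical SPARC box with an onboard LSI controller, mirroring the slot 0 disk with the slot 2 disk might look like this. The c1t0d0/c1t2d0 names below are examples only; list your own controller/targets with raidctl -l first:

```shell
# List the disks and RAID volumes the controller knows about
raidctl -l

# Create a hardware RAID 1 volume from the disks in slot 0 and slot 2
# (example device names -- substitute your own controller/targets)
raidctl -c c1t0d0 c1t2d0

# Check the new volume while it resyncs
raidctl -l c1t0d0
```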

-----Post Update-----

And how do you usually confirm that a drive has failed in a hardware RAID?
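In our case we usually check the volume status with raidctl and scan the system log; a sketch (c1t0d0 below is just an example device):

```shell
# A healthy mirror reports OPTIMAL; a failed member shows the
# volume as DEGRADED
raidctl -l c1t0d0

# Controller/driver messages about the failure also land in the
# system log
egrep -i 'raid|degraded|failed' /var/adm/messages | tail
```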
 

MLX(4)							   BSD Kernel Interfaces Manual 						    MLX(4)

NAME
     mlx -- Mylex DAC-family RAID driver

SYNOPSIS
     To compile this driver into the kernel, place the following lines in
     your kernel configuration file:

           device pci
           device mlx

     Alternatively, to load the driver as a module at boot time, place the
     following line in loader.conf(5):

           mlx_load="YES"

DESCRIPTION
     The mlx driver provides support for Mylex DAC-family PCI to SCSI RAID
     controllers, including versions relabeled by Digital/Compaq.

HARDWARE
     Controllers supported by the mlx driver include:

     o   Mylex DAC960P
     o   Mylex DAC960PD / DEC KZPSC (Fast Wide)
     o   Mylex DAC960PDU
     o   Mylex DAC960PL
     o   Mylex DAC960PJ
     o   Mylex DAC960PG
     o   Mylex DAC960PU / DEC PZPAC (Ultra Wide)
     o   Mylex AcceleRAID 150 (DAC960PRL)
     o   Mylex AcceleRAID 250 (DAC960PTL1)
     o   Mylex eXtremeRAID 1100 (DAC1164P)
     o   RAIDarray 230 controllers, aka the Ultra-SCSI DEC KZPAC-AA (1-ch,
         4MB cache), KZPAC-CA (3-ch, 4MB) and KZPAC-CB (3-ch, 8MB cache)

     All major firmware revisions (2.x, 3.x, 4.x and 5.x) are supported;
     however, it is always advisable to upgrade to the most recent firmware
     available for the controller.  Compatible Mylex controllers not listed
     should work, but have not been verified.

DIAGNOSTICS
   Controller initialisation phase
     mlx%d: controller initialisation in progress...
     mlx%d: initialisation complete
             The controller firmware is performing/has completed
             initialisation.

     mlx%d: physical drive %d:%d not responding
             The drive at channel:target is not responding; it may have
             failed or been removed.

     mlx%d: spinning up drives...
             Drive startup is in progress; this may take several minutes.

     mlx%d: configuration checksum error
             The array configuration has become corrupted.

     mlx%d: mirror race recovery in progress
     mlx%d: mirror race on a critical system drive
     mlx%d: mirror race recovery failed
             These error codes are undocumented.

     mlx%d: physical drive %d:%d COD mismatch
             Configuration data on the drive at channel:target does not
             match the rest of the array.

     mlx%d: system drive installation aborted
             Errors occurred preventing one or more system drives from
             being configured.

     mlx%d: new controller configuration found
             The controller has detected a configuration on disk which
             supersedes the configuration in its nonvolatile memory.  It
             will reset and come up with the new configuration.

     mlx%d: FATAL MEMORY PARITY ERROR
             Firmware detected a fatal memory error; the driver will not
             attempt to attach to this controller.

     mlx%d: unknown firmware initialisation error %x:%x:%x
             An unknown error occurred during initialisation; it will be
             ignored.

   Driver initialisation/shutdown phase
     mlx%d: can't allocate scatter/gather DMA tag
     mlx%d: can't allocate buffer DMA tag
     mlx%d: can't allocate s/g table
     mlx%d: can't make initial s/g list mapping
     mlx%d: can't make permanent s/g list mapping
     mlx%d: can't allocate interrupt
     mlx%d: can't set up interrupt
             A resource allocation error occurred while initialising the
             driver; initialisation has failed and the driver will not
             attach to this controller.

     mlx%d: error fetching drive status
             The current status of all system drives could not be fetched;
             attachment of system drives will be aborted.

     mlx%d: device_add_child failed
     mlx%d: bus_generic_attach returned %d
             Creation of the system drive instances failed; attachment of
             one or more system drives may have been aborted.

     mlxd%d: detaching...
             The indicated system drive is being detached.

     mlxd%d: still open, can't detach
             The indicated system drive is still open or mounted; the
             controller cannot be detached.

     mlx%d: flushing cache...
             The controller cache is being flushed prior to detach or
             shutdown.

   Operational diagnostics
     mlx%d: ENQUIRY failed - %s
     mlx%d: ENQUIRY2 failed
     mlx%d: ENQUIRY_OLD failed
     mlx%d: FLUSH failed - %s
     mlx%d: CHECK ASYNC failed - %s
     mlx%d: REBUILD ASYNC failed - %s
     mlx%d: command failed - %s
             The controller rejected a command for the reason given.

     mlx%d: I/O beyond end of unit (%u,%d > %u)
     mlx%d: I/O error - %s
             An I/O error was reported by the controller.

     mlx%d: periodic enquiry failed - %s
             An attempt to poll the controller for status failed for the
             reason given.

     mlx%d: mlx_periodic_enquiry: unknown command %x
             The periodic status poll has issued a command which has become
             corrupted.

     mlxd%d: drive offline
     mlxd%d: drive online
     mlxd%d: drive critical
             The system disk indicated has changed state.

     mlx%d: physical drive %d:%d reset
     mlx%d: physical drive %d:%d killed %s
     mlx%d: physical drive %d:%d error log: sense = %d asc = %x asq = %x
     mlx%d: info %4D csi %4D
             The drive at channel:target has been reset, killed for the
             given reason, or experienced a SCSI error.

     mlx%d: unknown log message type %x
     mlx%d: error reading message log - %s
             An error occurred while trying to read the controller's
             message log.

     mlxd%d: consistency check started
     mlx%d: consistency check completed
             A user-initiated consistency check has started/completed.

     mlx%d: drive rebuild started for %d:%d
     mlx%d: drive rebuild completed
             A user-initiated physical drive rebuild has started/completed.

     mlx%d: background check/rebuild operation started
     mlx%d: background check/rebuild operation completed
             An automatic system drive consistency check or physical drive
             rebuild has started/completed.

     mlx%d: channel %d pausing for %d seconds
     mlx%d: channel %d resuming
     mlx%d: pause command failed - %s
     mlx%d: pause failed for channel %d
     mlx%d: resume command failed - %s
     mlx%d: resume failed for channel %d
             Controller/channel pause operation notification.  (Channel
             pause is not currently supported on any controller.)

     mlx%d: controller wedged (not taking commands)
             The controller is not responding to attempts to submit new
             commands.

     mlx%d: duplicate done event for slot %d
     mlx%d: done event for nonbusy slot %d
             Corruption has occurred in either the controller's onboard
             list of commands or in the driver.

SEE ALSO
     mlxcontrol(8)

AUTHORS
     The mlx driver was written by Michael Smith <msmith@FreeBSD.org>.

     This manual page was written by Jeroen Ruigrok van der Werven
     <asmodai@FreeBSD.org> and Michael Smith <msmith@FreeBSD.org>.

BUGS
     The driver does not yet support EISA adapters.

     The DEC KZPSC has insufficient flash ROM to hold any reasonably recent
     firmware; this has caused problems for this driver.

     The driver does not yet support the version 6.x firmware as found in
     the AcceleRAID 352 and eXtremeRAID 2000 and 3000 products.

BSD                             August 10, 2004                          BSD