Full Discussion: MPIO reliability
Operating Systems > AIX > MPIO reliability
Post 302448440 by funksen on Thursday, 26 August 2010, 04:05 AM
Hi, I know this problem; in that case you have to set the path online manually.
We use:

Code:
smitty mpio -> mpio path management -> enable paths for a device
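
If you prefer the command line to smitty, the same thing can be done with lspath and chpath (a sketch; hdisk2 and vscsi0 are placeholders, take the real disk and parent adapter names from your own lspath output):

Code:
lspath -l hdisk2                       # show the state of every path to the disk
chpath -l hdisk2 -p vscsi0 -s enable   # re-enable the failed path via parent vscsi0

Note that chpath only changes the path state; if the path is missing entirely, run cfgmgr first so the device is rediscovered.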

But in my case the paths come from two VIO servers, which are connected to an IBM DS8300.



When working directly on the VIO servers, there are driver commands for setting paths online again, for example after replacing a damaged adapter.

With SDDPCM it's:
Code:
pcmpath set adapter x online
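
To find the right adapter number x and to check the result afterwards, the SDDPCM query commands are useful (a sketch; adapter 1 is just an example, take the number from the query output):

Code:
pcmpath query adapter            # list adapters with their state and path counts
pcmpath set adapter 1 online     # bring the replaced adapter back online
pcmpath query device             # confirm all paths are back in OPEN state

On the VIO server you may need to switch from the restricted padmin shell to the root shell with oem_setup_env before pcmpath is available.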

 
