[root@server2] / > lsdev -Cc disk
hdisk0 Available 13-T1-01 IBM MPIO FC 2107
hdisk1 Available 13-T1-01 IBM MPIO FC 2107
hdisk2 Available 13-T1-01 IBM MPIO FC 2107
hdisk3 Available 13-T1-01 IBM MPIO FC 2107
hdisk4 Available 13-T1-01 IBM MPIO FC 2107
hdisk5 Available 13-T1-01 IBM MPIO FC 2107
hdisk6 Available 13-T1-01 IBM MPIO FC 2107
hdisk7 Available 13-T1-01 IBM MPIO FC 2107
hdisk8 Available 13-T1-01 IBM MPIO FC 2107
hdisk9 Available 13-T1-01 IBM MPIO FC 2107
hdisk10 Available 13-T1-01 IBM MPIO FC 2107
hdisk11 Available 13-T1-01 IBM MPIO FC 2107
hdisk12 Available 13-T1-01 IBM MPIO FC 2107
hdisk13 Available 13-T1-01 IBM MPIO FC 2107
hdisk14 Available 13-T1-01 IBM MPIO FC 2107
hdisk15 Available 13-T1-01 IBM MPIO FC 2107
hdisk16 Available 13-T1-01 IBM MPIO FC 2107
hdisk17 Available 13-T1-01 IBM MPIO FC 2107
hdisk18 Available 13-T1-01 MPIO Other FC SCSI Disk Drive
hdisk19 Defined 13-T1-01 Other FC SCSI Disk Drive
hdisk20 Defined 13-T1-01 Other FC SCSI Disk Drive
hdisk21 Defined 13-T1-01 Other FC SCSI Disk Drive
hdisk22 Available 13-T1-01 IBM MPIO FC 2107
hdisk23 Available 13-T1-01 IBM MPIO FC 2107
--> does the Defined state on those disks mean they are not available for use?
On hdisk0 it shows sddpcm is in use, but hdisk18 uses PCM/friend/fcpother (/usr/lib/drivers/aixdiskpcmke)
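For what it's worth: Available means the device is configured and usable, while Defined means AIX knows the device from the ODM but it is not currently configured (e.g. it was removed with `rmdev` without `-d`, or it failed to configure). A minimal sketch for picking out the Defined disks; the heredoc sample stands in for the live `lsdev` command, and the `cfgmgr`/`mkdev` lines in the comments are the usual way to bring a Defined device back to Available:

```shell
# Sample `lsdev -Cc disk` output (trimmed); on a live AIX box you would
# pipe the real command instead of this heredoc.
lsdev_output=$(cat <<'EOF'
hdisk18 Available 13-T1-01 MPIO Other FC SCSI Disk Drive
hdisk19 Defined 13-T1-01 Other FC SCSI Disk Drive
hdisk20 Defined 13-T1-01 Other FC SCSI Disk Drive
EOF
)

# List only the disks stuck in the Defined state (field 2 is the state).
printf '%s\n' "$lsdev_output" | awk '$2 == "Defined" { print $1 }'

# On AIX itself you would then try to configure them, e.g.:
#   cfgmgr               # walk the device tree and configure everything new
#   mkdev -l hdisk19     # or configure a single Defined device
```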
Code:
[root@server2] / > lsattr -El hdisk0
PCM PCM/friend/sddpcm PCM True
PR_key_value none Reserve Key True
algorithm load_balance Algorithm True
clr_q no Device CLEARS its Queue on error True
dist_err_pcnt 0 Distributed Error Percentage True
dist_tw_width 50 Distributed Error Sample Time True
hcheck_interval 60 Health Check Interval True
hcheck_mode nonactive Health Check Mode True
location Location Label True
lun_id 0x4091400000000000 Logical Unit Number ID False
lun_reset_spt yes Support SCSI LUN reset True
max_transfer 0x100000 Maximum TRANSFER Size True
node_name 0x5005076304ffd345 FC Node Name False
pvid 00c528f76cff27e10000000000000000 Physical volume identifier False
q_err yes Use QERR bit True
q_type simple Queuing TYPE True
qfull_dly 2 delay in seconds for SCSI TASK SET FULL True
queue_depth 20 Queue DEPTH True
reserve_policy no_reserve Reserve Policy True
retry_timeout 120 Retry Timeout True
rw_timeout 60 READ/WRITE time out value True
scbsy_dly 20 delay in seconds for SCSI BUSY True
scsi_id 0x146e00 SCSI ID False
start_timeout 180 START unit time out value True
unique_id 200B75BGT11910007210790003IBMfcp Device Unique Identification False
ww_name 0x500507630438d345 FC World Wide Name False
[root@server2] / > lsattr -El hdisk18
PCM PCM/friend/fcpother Path Control Module False
PR_key_value none Persistant Reserve Key Value True+
algorithm fail_over Algorithm True+
clr_q no Device CLEARS its Queue on error True
dist_err_pcnt 0 Distributed Error Percentage True
dist_tw_width 50 Distributed Error Sample Time True
hcheck_cmd test_unit_rdy Health Check Command True+
hcheck_interval 60 Health Check Interval True+
hcheck_mode nonactive Health Check Mode True+
location Location Label True+
lun_id 0x0 Logical Unit Number ID False
lun_reset_spt yes LUN Reset Supported True
max_coalesce 0x40000 Maximum Coalesce Size True
max_retry_delay 60 Maximum Quiesce Time True
max_transfer 0x80000 Maximum TRANSFER Size True
node_name 0x2ff70002ac0109f9 FC Node Name False
pvid 00c528f77c1379a90000000000000000 Physical volume identifier False
q_err yes Use QERR bit True
q_type simple Queuing TYPE True
queue_depth 16 Queue DEPTH True+
reassign_to 120 REASSIGN time out value True
reserve_policy single_path Reserve Policy True+
rw_timeout 30 READ/WRITE time out value True
scsi_id 0x144c00 SCSI ID False
start_timeout 60 START unit time out value True
timeout_policy fail_path Timeout Policy True+
unique_id 2510002A000109F9000002VV083PARdatafcp Unique device identifier False
ww_name 0x20120002ac0109f9 FC World Wide Name False
Can we choose the MPIO method (SDDPCM vs. the native AIX PCM) for whichever disk we want?
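As I understand it, the binding is not an arbitrary per-disk choice: SDDPCM claims the devices its host-attachment fileset recognises (here the IBM 2107/DS8000 LUNs), and everything else falls back to the generic fcpother PCM. For the generic-PCM disks you can still tune path behaviour with `chdev`; a sketch using the attribute names from the lsattr output above (the disk list is hypothetical, and we only print the commands instead of running them):

```shell
# Hypothetical list of disks on the generic AIX PCM.
disks="hdisk18"

for d in $disks; do
    # round_robin spreads I/O across paths instead of fail_over;
    # no_reserve is needed so more than one path can do I/O.
    echo "chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin"
done
```

On a real system you would run the printed commands with the disk offline (or add `-P` and reboot), since these attributes cannot change while the disk is open.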
We are looking at running MPIO for its redundancy and load-balancing benefits. Does anyone know what pieces of software or modules are needed on the VIO server to get load balancing to work? Remember, we are using EMC's DMX3500 storage system. We no longer want to use PowerPath. :rolleyes: ... (2 Replies)
Hi
I would like to ask what the benefits are of changing from RDAC to MPIO when connecting to a DS4000 on AIX 5.3? I have heard that IBM MPIO "might" support more than 1 active path to a LUN when connecting to a DS4800 through more than 1 host connection on the same AIX client. I understand that... (8 Replies)
Hi folks,
does anybody have a link to documentation on how to implement native MPIO on AIX? We are using EMC PowerPath and Datacore SanSymphony/Cambex for this so far, and I wasn't able to find a good description of that topic. All I know so far is that mkpath, chpath and lspath are used to... (3 Replies)
Can anyone recommend me some reading material surrounding how AIX handles LUNs:
- with and without MPIO installed
- with and without SDD or SDDPCM installed
Where does lspath sit in all of this (MPIO layer?). Can a system be built with just MPIO software? Is MPIO software even needed?
I guess... (0 Replies)
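To the lspath question above: lspath is part of the base MPIO framework in AIX itself, so it works with or without SDD/SDDPCM on top. A small sketch that summarises path health per disk; the heredoc lines are assumed sample output standing in for the real `lspath` command (default format: state, disk, parent adapter):

```shell
# Sample `lspath` output; on AIX just run `lspath` itself.
lspath_output=$(cat <<'EOF'
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi1
Failed hdisk1 fscsi0
Enabled hdisk1 fscsi1
EOF
)

# Summarise enabled/total paths per disk, e.g. "hdisk1 1/2 enabled".
printf '%s\n' "$lspath_output" | awk '
    { total[$2]++; if ($1 == "Enabled") ok[$2]++ }
    END { for (d in total) printf "%s %d/%d enabled\n", d, ok[d], total[d] }
' | sort
```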
On a particular LPAR, I was running AIX 5.3 TL 3. On Monday I did an update of the LPAR to 5.3 TL 9 SP2. The install was smooth, but then I ran into a problem.
The MPIO driver does not work with LSI's StoreAge (SVM4). I did some looking, and it looks like
5.3 TL3 = IBM.MPIO 5.3.0.30
5.3... (0 Replies)
Hi,
we have a few boxes using MPIO, and they are connected to some virtualization software managing some disk subsystems, offering volumes to the AIX boxes.
Sometimes when a cable has been pulled for a test, or when a real problem occurs, using lspath to show the state of the paths shows... (8 Replies)
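One way to script the recovery once the cable is back: walk the `lspath` output and emit a `chpath` re-enable for every Failed path (the heredoc is an assumed sample; on AIX, pipe the real `lspath` and run the printed commands, or simply run `cfgmgr`):

```shell
# Sample `lspath` output after a cable pull (fields: state, disk, parent).
lspath_output=$(cat <<'EOF'
Failed hdisk2 fscsi0
Enabled hdisk2 fscsi1
EOF
)

# Print a re-enable command for each Failed path.
printf '%s\n' "$lspath_output" |
    awk '$1 == "Failed" { printf "chpath -s enable -l %s -p %s\n", $2, $3 }'
```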
Hello,
we are planning to deploy some of our databases on AIX/LPAR based servers (we didn't bought it yet ...). IBM's engineers says that if we want to boot them from SAN the hardware array has to be compatible with MPIO but they don't want to deliver any document with list of arrays supported... (5 Replies)
We have AIX 6.1 system attached to SAN disks (DS4700 and DS8100) thru SVC.
Initially, when the system was set up, I forgot to install the sddpcm drivers, and I wanted to know how I can go about installing them.
My understanding going through the manual ... (3 Replies)
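The usual sequence is roughly the following; the fileset names here are assumptions (they vary by AIX level and storage family), so check them against the SDDPCM README that ships with the driver. The sketch only echoes the steps, so it is safe to run anywhere:

```shell
# Outline of an SDDPCM install on an already-running system (echoed, not run).
# Fileset names below are assumptions for AIX 6.1 with IBM FC storage.
for step in \
    'installp -acgXd . devices.fcp.disk.ibm.mpio.rte' \
    'installp -acgXd . devices.sddpcm.61.rte' \
    'shutdown -Fr' \
    'pcmpath query device'
do
    echo "$step"
done
```

The reboot matters because the disks must be reconfigured for SDDPCM to take over as their PCM; `pcmpath query device` afterwards confirms it owns the paths.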
This is getting very confusing for me, and appreciate if someone can help.
Platform: Power VM ( Virtual I/O Server)
ioslevel 2.1.3.10-FP23
# oslevel -s
6100-05-00-0000
Storage: IBM DS4300
Two HBAs - dual-port Fibre Channel adapters
Each card has two ports, so a total of 4 ports going... (3 Replies)