You seem to lack a solid understanding of how the virtualisation works:
First you create a virtual SCSI adapter on the VIOS and allow a certain LPAR to use it. To the LPAR it looks like any physical SCSI adapter, save for the fact that it is just a software construct in the VIOS. You do this on the HMC (see here), then run cfgdev on the VIOS.
Next you create (virtual) SCSI devices on the VIOS: see the command "mkvdev" on the VIOS for creating such devices. You create such a device by assigning a "backing device" (some disk space the VIOS can control) and connecting it to a virtual SCSI adapter.
For instance:
mkvdev -vdev hdisk10 -vadapter vhost5 -dev vtdisk_lpar5
This means: take "hdisk10" on the VIOS and use its disk space to create a virtual disk "vtdisk_lpar5", which is connected to a virtual SCSI adapter named "vhost5". If "vhost5" is connected to "lpar 5", you can run the cfgmgr command there and the newly connected disk will appear.
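Putting the steps above together, a minimal sketch of the whole flow as a session transcript (the device names hdisk10, vhost5 and vtdisk_lpar5 are taken from the example and are illustrative only):

```
$ cfgdev                          # VIOS: pick up the new virtual SCSI (vhost) adapter
$ lsdev -virtual                  # VIOS: confirm vhost5 is present
$ mkvdev -vdev hdisk10 -vadapter vhost5 -dev vtdisk_lpar5
$ lsmap -vadapter vhost5          # VIOS: verify the backing-device mapping
$ # then, on the client LPAR:
$ cfgmgr                          # the new hdisk appears there
```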
One of my zones is stuck in the down state; I am not able to boot it or halt it, not even detach it. Is there any way to recover without rebooting the whole system (global zone)? (3 Replies)
Hi All,
I'm preparing to migrate some servers from vscsi to pass-through NPIV. I am planning to have the SAN team remap the exact same LUNs from vio1/vio2 to those two virtual WWPNs through NPIV.
My question is about the partition itself. Right now, let's say I have hdisk0/1/2/3/4 that are part of datavg. They... (2 Replies)
Hi,
I want to change from vscsi to npiv. Is it possible to use both on the same adapter, so we can change the systems one by one, or must we place a second FC adapter in the VIO servers?
Thanks,
Ronald (2 Replies)
Hello,
I have a VIOS system and would like to map some hdisks, hdisk160 through hdisk165, to a vSCSI adapter. I am trying to do this in oem_setup_env like the following:
for i in $(lspv | grep hdisk | awk '{print $1}')
do
mkdev -V $i -p vhost20
done
There was a mapping with... (4 Replies)
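As posted, the loop is missing a closing parenthesis, and the unquoted `grep hdisk*` is expanded by the shell against files in the current directory rather than passed to grep. A minimal sketch of just the disk-selection logic, with lspv stubbed out so it runs anywhere (lspv_stub, the PVID values, and vhost20 are all illustrative; the real mapping command on the VIOS is shown only as a comment):

```shell
# lspv_stub stands in for the real `lspv` so the sketch runs anywhere.
lspv_stub() {
  printf '%s\n' \
    "hdisk159        00c8ca6b12345678                    None" \
    "hdisk160        00c8ca6b23456789                    None" \
    "hdisk161        00c8ca6b34567890                    None" \
    "hdisk165        00c8ca6b45678901                    None" \
    "hdisk166        00c8ca6b56789012                    None"
}

# The anchored awk pattern selects exactly hdisk160..hdisk165.
for i in $(lspv_stub | awk '$1 ~ /^hdisk16[0-5]$/ {print $1}')
do
  echo "would map $i to vhost20"
  # On the actual VIOS restricted shell this would be:
  # mkvdev -vdev "$i" -vadapter vhost20
done
```

The quoting matters: `awk '{print $1}'` keeps the awk program away from the shell, and the anchored regex avoids accidentally matching hdisk166 or hdisk1600.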
If you're familiar with vscsi mappings thru a VIO Server, you are probably aware, on an AIX 6.1 Client LPAR, that:
"print cvai | kdb" can provide useful information to you... like the VIO Server name & vhost #. But "cvai" does not appear to be part of the Kernel Debugger in AIX 5.3.
My question is... (3 Replies)
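For reference, an illustrative transcript of the AIX 6.1 technique the poster mentions (the exact columns vary by AIX/TL level, and the names vios1 and vhost5 are made up for this sketch):

```
# echo "cvai" | kdb
...
NAME      STATE   CMDS_ACTIVE  ACTIVE_QUEUE  HOST
vscsi0    0x...   0x0          0x0           vios1->vhost5
```

The HOST column is the useful part: it ties the client's vscsi adapter back to the serving VIOS partition and its vhost adapter.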
Hello,
When I assigned the CD-ROM from the IVM (VIOS) to an LPAR and then ran cfgmgr, I got the following message on the client LPAR:
#cfgmgr
cfgmgr: 0514-621 WARNING: The following device packages are required for device support but are not currently installed.
devices.vscsi.disk
searching... (0 Replies)
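One hedged way to resolve this class of warning (the media path /dev/cd0 is illustrative; the devices.vscsi.* filesets ship on the AIX base installation media):

```
# lslpp -l "devices.vscsi*"                   # check whether the fileset is installed
# installp -acXd /dev/cd0 devices.vscsi.disk  # install it from the install media
# cfgmgr                                      # rerun device configuration
```

Alternatively, cfgmgr -i /dev/cd0 can install missing device support from the media while it configures.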
Hi everybody,
I have a Power5 server with 4 internal hdisks of 70 GB each.
VIOS server was installed via Virtual I/O Server Image Repository on the HMC.
HMC release - 7.7.0
The VIOS rootvg is installed on 2 disks (these disks were merged into one storage pool during the VIOS install process), and the 2 other hdisks... (2 Replies)
Discussion started by: Ravil Khalilov
LEARN ABOUT NETBSD
trm
TRM(4)                   BSD Kernel Interfaces Manual                   TRM(4)

NAME
trm -- Tekram TRM-S1040 ASIC based PCI SCSI host adapter driver
SYNOPSIS
trm* at pci? dev ? function ?
scsibus* at trm?
DESCRIPTION
The trm driver supports PCI SCSI host adapters based on the Tekram TRM-S1040 SCSI ASIC.
HARDWARE
Supported SCSI controllers include:
Tekram DC-315 PCI Ultra SCSI adapter without flash BIOS and internal SCSI connector
Tekram DC-315U PCI Ultra SCSI adapter without flash BIOS
Tekram DC-395U PCI Ultra SCSI adapter with flash BIOS
Tekram DC-395UW PCI Ultra-Wide SCSI adapter with flash BIOS
Tekram DC-395F PCI Ultra-Wide SCSI adapter with flash BIOS and 68-pin external SCSI connector
For Tekram DC-390 PCI SCSI host adapter, use pcscp(4) driver.
For Tekram DC-310/U and DC-390U/UW/F PCI SCSI host adapters, use siop(4) driver.
SEE ALSO
cd(4), ch(4), intro(4), pci(4), scsi(4), sd(4), ss(4), st(4), uk(4), scsipi(9)
http://www.tekram.com/
AUTHORS
The trm driver was originally written for NetBSD 1.4/i386 by Erich Chen of Tekram Technology, and Rui-Xiang Guo rewrote the driver to use
bus_space(9) and bus_dma(9) for NetBSD 1.6.
BSD November 6, 2001 BSD