I have looked at a few previous posts on this, but none of them quite matched my issue.
I am out of local disk space on my LDOM manager, but I still have plenty of SAN space, vCPUs, and memory available, so I am trying to install a new LDOM OS on SAN.
I have tried exposing the SAN to the domain in two ways. The other LDOMs on this server expose SAN disks directly, without zpools. However, I believed a zpool would be the better method, so I set up a zpool on the disk, created a ZFS volume, and added that volume to the LDOM. When I started the install, right after the System Identification screen the installer said it could find no disks. I dropped to the console and checked format, and the disks were not there.
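For reference, the zvol-backed route I attempted looked roughly like this. A minimal sketch: the pool and volume names (`ldg5pool`, `rootdsk-ldg5`) and the 50g size are illustrative; the LUN, vds, and domain names are from my config. It only issues the ldm/zfs commands when run on a control domain where `ldm` exists.

```shell
# Hypothetical names: pool "ldg5pool", volume "rootdsk-ldg5"; adjust to taste.
POOL=ldg5pool
VOL=rootdsk-ldg5
ZVOL=/dev/zvol/dsk/$POOL/$VOL    # block-device path of the zvol

if command -v ldm >/dev/null 2>&1; then
    # Build a pool on the SAN LUN and carve a volume out of it
    zpool create "$POOL" c7t257000C0FFDA9A7Bd34
    zfs create -V 50g "$POOL/$VOL"

    # Export the zvol through the virtual disk server and attach it to the guest
    ldm add-vdsdev "$ZVOL" "$VOL@primary-vds5"
    ldm add-vdisk "$VOL" "$VOL@primary-vds5" ldg5
else
    echo "ldm not available; run these commands on the control domain" >&2
fi
```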
So I tried to set up the LDOM the same way the other VMs' SAN disks are allocated: I destroyed the ZFS volume and zpool to free the disk, removed all the links to the LDOM, and then bound the device by /dev/dsk instead.
Now when going through the install I see the disk with no problem; however, I get the following error:
Code:
One or more disks are found, but one of the
following problems exists:
> Hardware failure
> Unformatted disk.
So I dropped to the console and ran format -e, and there is my disk.
Code:
# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <Unknown-Unknown-0001-59.61GB>
/virtual-devices@100/channel-devices@200/disk@0
Specify disk (enter its number): 0
selecting c0d0
[disk formatted, no defect list found]
# ldm -V
Logical Domain Manager (v 2.0)
Hypervisor control protocol v 1.6
Using Hypervisor MD v 1.3
System PROM:
Hostconfig v. 1.0.1. @(#)Hostconfig 1.0.1.a 2010/11/01 17:13 [jumilla:release]
Hypervisor v. 1.9.1. @(#)Hypervisor 1.9.1.a 2010/11/01 16:15
OpenBoot v. 4.32.1 @(#)OpenBoot 4.32.1 2010/10/13 18:23
SPECIFIC LDOM
Code:
# ldm ls-bindings ldg5
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldg5 active -n---- 5003 16 16G 0.3% 23m
UUID
de7cd037-8f46-6d0e-d355-9c5c8e91a77e
MAC
00:14:4f:f8:b8:40
HOSTID
0x84f8b840
CONTROL
failure-policy=ignore
DEPENDENCY
master=
CORE
CID CPUSET
7 (56, 57, 58, 59, 60, 61, 62, 63)
8 (64, 65, 66, 67, 68, 69, 70, 71)
VCPU
VID PID CID UTIL STRAND
0 56 7 0.2% 100%
1 57 7 0.2% 100%
2 58 7 0.0% 100%
3 59 7 0.0% 100%
4 60 7 0.0% 100%
5 61 7 0.0% 100%
6 62 7 0.0% 100%
7 63 7 0.0% 100%
8 64 8 0.1% 100%
9 65 8 0.0% 100%
10 66 8 0.0% 100%
11 67 8 0.0% 100%
12 68 8 0.0% 100%
13 69 8 0.5% 100%
14 70 8 0.0% 100%
15 71 8 0.0% 100%
MEMORY
RA PA SIZE
0x10000000 0xc20000000 16128M
0x400000000 0x1010000000 256M
VARIABLES
auto-boot?=true
boot-device=vdisk
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet1 primary-vsw1@primary 0 network@0 00:14:4f:fa:00:26 1 1500
PEER MAC MODE PVID VID MTU LINKPROP
primary-vsw1@primary 00:14:4f:fb:2f:44 1 1500
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
iso iso@primary-vds5 1 disk@1 primary
rootdsk-ldg5 c7t257000C0FFDA9A7Bd34@primary-vds5 0 disk@0 primary
VCONS
NAME SERVICE PORT
ldg5 primary-vcc0@primary 5003
PRIMARY DOMAIN
Code:
# ldm ls-bindings primary
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 8 8448M 0.9% 35d 21h 29m
UUID
c601588b-1aa0-4751-f96e-cb03c9f80149
MAC
00:21:28:80:8e:58
HOSTID
0x85808e58
CONTROL
failure-policy=ignore
DEPENDENCY
master=
CORE
CID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
VCPU
VID PID CID UTIL STRAND
0 0 0 2.0% 100%
1 1 0 1.2% 100%
2 2 0 0.8% 100%
3 3 0 1.9% 100%
4 4 0 0.7% 100%
5 5 0 0.9% 100%
6 6 0 1.4% 100%
7 7 0 1.2% 100%
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
MEMORY
RA PA SIZE
0x1df0000000 0x1df0000000 8448M
CONSTRAINT
whole-core
max-cores=1
VARIABLES
boot-device=/pci@400/pci@1/pci@0/pci@2/LSI,sas@0/disk@w36167a6fb5d3c4cf,0:a disk net
keyboard-layout=US-English
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci
niu@480 niu
pci@400/pci@1/pci@0/pci@c /SYS/MB/FEM0
pci@400/pci@2/pci@0/pci@c /SYS/MB/FEM1
pci@400/pci@1/pci@0/pci@2 /SYS/MB/REM
pci@400/pci@2/pci@0/pci@4 /SYS/MB/PCI-EM0
pci@400/pci@2/pci@0/pci@2 /SYS/MB/NET0
pci@400/pci@1/pci@0/pci@4 /SYS/MB/PCI-EM1
VCC
NAME PORT-RANGE
primary-vcc0 5000-5100
CLIENT PORT
ldg1@primary-vcc0 5000
ldg3@primary-vcc0 5001
ldg4@primary-vcc0 5002
ldg5@primary-vcc0 5003
VSW
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 00:14:4f:f8:2d:8f e1000g1 0 switch@0 1 1 1500
PEER MAC PVID VID MTU LINKPROP
vnet1@ldg1 00:14:4f:fa:85:df 1 1500
vnet4@ldg4 00:14:4f:fb:b7:45 1 1500
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw1 00:14:4f:fb:2f:44 e1000g2 1 switch@1 1 1 1500
PEER MAC PVID VID MTU LINKPROP
vnet1@ldg5 00:14:4f:fa:00:26 1 1500
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw2 00:14:4f:f9:d4:6c e1000g3 2 switch@2 1 1 1500
PEER MAC PVID VID MTU LINKPROP
vnet3@ldg3 00:14:4f:fa:8f:a5 1 1500
VDS
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds1 c0t35DDB26D851A8E49d0s3 /dev/dsk/c0t35DDB26D851A8E49d0s3
CLIENT VOLUME
rootdsk-ldg1@ldg1 c0t35DDB26D851A8E49d0s3
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds2
primary-vds3 c0t35DDB26D851A8E49d0s4 /dev/dsk/c0t35DDB26D851A8E49d0s4
CLIENT VOLUME
rootdsk-ldg3@ldg3 c0t35DDB26D851A8E49d0s4
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds4 c0t36167A6FB5D3C4CFd0s3 /dev/dsk/c0t36167A6FB5D3C4CFd0s3
CLIENT VOLUME
rootdsk-ldg4@ldg4 c0t36167A6FB5D3C4CFd0s3
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-sanvds1 c6t257000C0FFDA9A7Bd0s0 /dev/dsk/c6t257000C0FFDA9A7Bd0s0
c6t257000C0FFDA9A7Bd20s0 /dev/dsk/c6t257000C0FFDA9A7Bd20s0
c7t217000C0FFDA9A7Bd21s0 /dev/dsk/c7t217000C0FFDA9A7Bd21s0
c7t217000C0FFDA9A7Bd1s0 /dev/dsk/c7t217000C0FFDA9A7Bd1s0
CLIENT VOLUME
sanu01-ldg1@ldg1 c6t257000C0FFDA9A7Bd0s0
sanu02-ldg1@ldg1 c6t257000C0FFDA9A7Bd20s0
sanu03-ldg1@ldg1 c7t217000C0FFDA9A7Bd21s0
sanu04-ldg1@ldg1 c7t217000C0FFDA9A7Bd1s0
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-sanvds2
primary-sanvds3 c6t257000C0FFDA9A7Bd2s0 /dev/dsk/c6t257000C0FFDA9A7Bd2s0
c6t257000C0FFDA9A7Bd4s0 /dev/dsk/c6t257000C0FFDA9A7Bd4s0
c7t217000C0FFDA9A7Bd3s0 /dev/dsk/c7t217000C0FFDA9A7Bd3s0
c7t217000C0FFDA9A7Bd5s0 /dev/dsk/c7t217000C0FFDA9A7Bd5s0
CLIENT VOLUME
sanu01-ldg3@ldg3 c6t257000C0FFDA9A7Bd2s0
sanu02-ldg3@ldg3 c6t257000C0FFDA9A7Bd4s0
sanu03-ldg3@ldg3 c7t217000C0FFDA9A7Bd3s0
sanu04-ldg3@ldg3 c7t217000C0FFDA9A7Bd5s0
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-sanvds4 c6t257000C0FFDA9A7Bd6s0 /dev/dsk/c6t257000C0FFDA9A7Bd6s0
c7t217000C0FFDA9A7Bd23s0 /dev/dsk/c7t217000C0FFDA9A7Bd23s0
c6t257000C0FFDA9A7Bd8s0 /dev/dsk/c6t257000C0FFDA9A7Bd8s0
c7t257000C0FFDA9A7Bd9s0 /dev/dsk/c7t257000C0FFDA9A7Bd9s0
CLIENT VOLUME
sanu01-ldg4@ldg4 c6t257000C0FFDA9A7Bd6s0
sanu02-ldg4@ldg4 c7t217000C0FFDA9A7Bd23s0
sanu03-ldg4@ldg4 c6t257000C0FFDA9A7Bd8s0
sanu04-ldg4@ldg4 c7t257000C0FFDA9A7Bd9s0
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds5 iso /opt/Patches/sol-10-u10-ga2-sparc-dvd.iso
c7t257000C0FFDA9A7Bd34 slice /dev/dsk/c7t257000C0FFDA9A7Bd34
CLIENT VOLUME
iso@ldg5 iso
rootdsk-ldg5@ldg5 c7t257000C0FFDA9A7Bd34
VCONS
NAME SERVICE PORT
UART
LDOM HOST RELEASE
Code:
# cat /etc/release
Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 11 August 2010
---------- Post updated at 03:18 PM ---------- Previous update was at 12:31 PM ----------
Solved: I needed to map the appropriate partition. I had the root disk mapped instead of the "usr" space at s6.
After updating my mapping, everything is working as expected.
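For anyone hitting the same thing, the fix amounted to tearing down the old mapping and re-exporting the backend with the correct slice. A hedged sketch: s0 is illustrative (use whichever slice actually holds the root filesystem), the volume/vds/domain names are from my config, and the commands only fire when `ldm` is present on the control domain.

```shell
# Pick the slice that actually holds the root filesystem (s0 is illustrative).
DEV=/dev/dsk/c7t257000C0FFDA9A7Bd34s0

if command -v ldm >/dev/null 2>&1; then
    # Tear down the old vdisk and backend, then re-export at the right slice
    ldm rm-vdisk rootdsk-ldg5 ldg5
    ldm rm-vdsdev c7t257000C0FFDA9A7Bd34@primary-vds5
    ldm add-vdsdev options=slice "$DEV" rootdsk-ldg5@primary-vds5
    ldm add-vdisk rootdsk-ldg5 rootdsk-ldg5@primary-vds5 ldg5
else
    echo "ldm not available; run these commands on the control domain" >&2
fi
```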