LDom OS on SAN-based ZFS volume


 
# 8  
Old 07-19-2009
Quote:
Originally Posted by fugitive
Details as you asked, SAMAR

Code:
ldm list-bindings ldom1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldom1            active     -n----  5000    16    8G       0.0%  21m

MAC
    00:14:4f:fb:45:e5

HOSTID
    0x84fb45e5

VCPU
    VID    PID    UTIL STRAND
    0      32     0.3%   100%
    1      33     0.0%   100%
    2      34     0.0%   100%
    3      35     0.0%   100%
    4      36     0.0%   100%
    5      37     0.0%   100%
    6      38     0.0%   100%
    7      39     0.0%   100%
    8      40     1.4%   100%
    9      41     0.0%   100%
    10     42     0.0%   100%
    11     43     0.0%   100%
    12     44     0.0%   100%
    13     45     0.0%   100%
    14     46     0.0%   100%
    15     47     0.0%   100%

MEMORY
    RA               PA               SIZE
    0x8000000        0x408000000      8G

VARIABLES
    autoboot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0

NETWORK
    NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID
    vnet0            primary-vsw0@primary        network@0  00:14:4f:fb:2e:78        1
        PEER                        MAC               MODE   PVID VID
        primary-vsw0@primary        00:14:4f:fb:da:f8        1

DISK
    NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
    iso              iso@primary-vds0                 disk@1  primary
    cdrom            cdrom@primary-vds0               disk@2  primary
    vdisk0           vol0@primary-vds0                disk@0  primary

VCONS
    NAME             SERVICE                     PORT
    ldom1            primary-vcc0@primary        5000

root@essapl020-u006 #

Code:
# ldm list-services primary
VCC
    NAME             LDOM             PORT-RANGE
    primary-vcc0     primary          5000-5100

VSW
    NAME             LDOM             MAC               NET-DEV   DEVICE     DEFAULT-VLAN-ID PVID VID                  MODE
    primary-vsw0     primary          00:14:4f:fb:da:f8 e1000g1   switch@0   1               1

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          iso                                            sol-10-u6-ga1-sparc-dvd.iso
                                      cdrom                                          /data03/sol-10-u6-ga1-sparc-dvd.iso
                                      vol0                                           /dev/dsk/c1t1d0s0

If you need more details, let me know. I already tried changing the ZFS volume to an internal disk slice, but I'm still getting:

NOTICE: [0] disk access failed.

Hi fugitive,

Your guest domain configuration looks fine; the problem is on the control domain side. In LDoms the control domain can run on the same operating system as your global OS, but guest domains cannot share it: each guest needs a free disk that is available for a fresh OS installation. In your case you have exported /dev/dsk/c1t1d0s0.
By adding a disk to the primary (control) domain's virtual disk service, you are offering that backing device to your logical domains, so it must be a free, unused disk on which a new OS can be installed.
Is that disk really free and not active? I suspect it is part of your system disk.
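
For example, instead of a slice that may belong to the system disk, you could back the guest's boot disk with a dedicated ZFS volume (or a free SAN LUN). A minimal sketch, assuming a pool with enough free space; the pool name, the size, and the names vol1/vdisk1 are only placeholders, not from your config:

Code:
# create a dedicated ZFS volume to back the guest's boot disk (pool and size are examples)
zfs create -V 20g newpool/ldom1-disk0

# export it through the existing virtual disk service and attach it to the guest
ldm add-vdsdev /dev/zvol/dsk/newpool/ldom1-disk0 vol1@primary-vds0
ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1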

Let me see the output of your format command as well as df -h.

good luck.
# 9  
Old 07-19-2009
Code:
root@essapl020-u006 # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0          15G    13G   1.4G    91%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    40G   1.6M    40G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1
                        15G    13G   1.4G    91%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        15G    13G   1.4G    91%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    40G    32K    40G     1%    /tmp
swap                    40G   104K    40G     1%    /var/run
/dev/md/dsk/d2          39G    34G   4.7G    88%    /zones
emcpool3/FMW6/FMW       98G   5.9G    53G    10%    /FMW
newpool/SAR            437G   7.1G   419G     2%    /SAR
emcpool3/swdump         98G    17G    53G    25%    /data03

Code:
root@essapl020-u006 # echo | format
Searching for disks...

done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c3t5006016841E0A08Dd0 <DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890>
          /pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w5006016841e0a08d,0
       3. c3t5006016041E0A08Dd0 <DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890>
          /pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w5006016041e0a08d,0
       4. c3t5006016041E0A08Dd1 <DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16>
          /pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w5006016041e0a08d,1
       5. c3t5006016841E0A08Dd1 <DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16>

# 10  
Old 07-19-2009
Quote:
Originally Posted by fugitive
Code:
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0          15G    13G   1.4G    91%    /


You have SVM configured; please give me the output of metastat d0.
It's looking more and more like the disk you have given to the LDom is already in use.
# 11  
Old 07-19-2009
Code:
d2 -m d12 1
d12 1 1 c1t0d0s3
d1 -m d11 1
d11 1 1 c1t0d0s1
d0 -m d10 1
d10 1 1 c1t0d0s0
d3 -m d13 1
d13 1 1 c1t0d0s4


And FYI, the disk I'm using is c1t1d0s0 ;-)
# 12  
Old 07-19-2009
OK then fugitive,
You are saying that the device in question is absolutely free and ready for use.
Try relabelling that disk and check that it is actually accessible at the system level:
try creating a filesystem on it, mounting it, and so on.
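
A quick sanity check could look something like the sketch below; the newfs step destroys data on the slice, so run it only if you are sure the slice is free, and the /mnt mount point is just an example:

Code:
# make sure the slice is not referenced by SVM, ZFS or swap
metastat -p | grep c1t1d0
zpool status | grep c1t1d0
swap -l | grep c1t1d0

# relabel the disk if needed, then try to create and mount a filesystem on the slice
format -d c1t1d0
newfs /dev/rdsk/c1t1d0s0
mount /dev/dsk/c1t1d0s0 /mnt
umount /mnt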

One more thing: look at OBP and see which devices are available there. Is the path of that disk visible at the ok prompt (for example via show-devs)?

good luck
# 13  
Old 07-19-2009
You need to look at your LDM version as well. Older versions do not support guest domains on slices.
# 14  
Old 07-20-2009
Code:
# ldm -V

Logical Domain Manager (v 1.1)
Hypervisor control protocol v 1.3
Using Hypervisor MD v 0.1

System PROM:
Hypervisor v. 1.7.2. @(#)Hypervisor 1.7.2.a 2009/05/05 19:32

OpenBoot v. 4.30.2 @(#)OBP 4.30.2 2009/04/21 09:28