I have a T5220 with the following configuration of hypervisor and OS
I have one LDom configured on the system with vdisks (ZFS, VxVM & an ISO image) assigned to it, but the LDom gives me the following error and does not boot from any of the assigned disks
I have a Netra T5220 Solaris 10 server with LDoms installed and enabled. RAID1 is enabled in the control domain with the following slices:
d50 -m d51 d52 1
d51 1 1 c1t0d0s5
d52 1 1 c1t1d0s5
d10 -m d11 d12 1
d11 1 1 c1t0d0s0
d12 1 1 c1t1d0s0
d20 -m d21 d22 1
d21 1 1 c1t0d0s1
d22 1 1... (14 Replies)
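Mirrors like the ones listed above are typically built with Solaris Volume Manager. A minimal sketch, assuming the device names from the listing (c1t0d0/c1t1d0) and using slice 0 as the example:

```shell
# Sketch: build the d10 RAID1 metadevice from the listing above with
# Solaris Volume Manager. Device names are taken from the post; adjust
# for your own system.
metainit d11 1 1 c1t0d0s0    # first submirror
metainit d12 1 1 c1t1d0s0    # second submirror
metainit d10 -m d11          # create a one-way mirror on d11
metattach d10 d12            # attach d12; a resync starts automatically
metastat d10                 # verify the mirror state
```

The same pattern repeats for d20/d21/d22 and d50/d51/d52 with the corresponding slices.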
We're just about to migrate a large application from an Enterprise 6900 to a pair of T5220s utilising LDOMs. Does anyone have any experience of LDOMs on this kit and can provide any recommendations or pitfalls to avoid?
I've heard that use of LDOMs can have an impact on I/O speeds as it's all... (9 Replies)
I've got Sun Fire T2000 with two LDoms - primary and ldom1, both being Solaris 10 u8. Both can be accessed over the network (ssh, ping), both can access the network, but they can't ping or ssh to each other.
I only use e1000g0 interface on T2000, the primary ldom has an address on it, ldm has a... (1 Reply)
Can anyone tell me how to add a virtual disk from an NFS share in LDom? I have one share, /VMshare/boot.img, shared/exported from one server, but I do not know how to add an NFS-based vdsdev, as it gives "primary domain cannot validate the disk". (1 Reply)
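The virtual disk server in the control domain often refuses to validate a backing file that lives on NFS. A common workaround is to wrap the image in a lofi block device first; a hedged sketch, assuming the share is mounted at /VMshare and the guest is named ldom1:

```shell
# Sketch, assuming /VMshare is the mounted NFS share on the control domain.
# Wrapping the image in a lofi device sidesteps the vds validation error
# seen with plain files on NFS.
lofiadm -a /VMshare/boot.img                     # prints a device, e.g. /dev/lofi/1
ldm add-vdsdev /dev/lofi/1 bootimg@primary-vds0  # export the lofi device
ldm add-vdisk vboot bootimg@primary-vds0 ldom1   # present it to the guest
```

The lofi device must be re-created after a reboot of the control domain, so copying the image to local storage is the more durable option.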
We are having a discussion on what order LDoms and control domains should be booted in.
I think it should be LDoms first, then the control domain. Can anyone tell me why I am wrong or why I am right.
Thanks,
:confused: (6 Replies)
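For reference, the control (primary) domain has to come up first: it hosts the virtual disk, network, and console services (vds, vsw, vcc) that the guests depend on. A hedged sketch of the resulting start-up order, assuming default service names and a guest called ldom1:

```shell
# Sketch: the primary domain boots first at power-on. Once its virtual
# services (primary-vds0, primary-vsw0, primary-vcc0) are online, the
# guests can be started. Names assume the common defaults.
svcs ldmd vntsd     # confirm the domain manager and console services are up
ldm list            # primary should show as active
ldm start ldom1     # then start each guest domain
```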
I have a T4-1 with Solaris 11.1, LDom ver 3.0.0.1 and I'm encountering two problems with zones.
I'd greatly appreciate help on these, especially the first one.
I have created multiple guest LDoms, all Solaris 11.1, and they all work fine.
The first problem is that I can not boot any... (9 Replies)
If I do ldm ls -l ldom1 the memory block is as follows:-
MEMORY
RA PA SIZE
0x20000000 0x20000000 64G
0x1420000000 0x1020000000 16G
0x1c20000000 0x1420000000 16G
0x2420000000 0x1820000000 16G
0x2c20000000... (1 Reply)
System: SPARC S7-2 Server; 2x8-core CPUs; 128GB RAM; 2x600GB HDD.
I have been experimenting on the above system, using ldmp2v to create "clones" of my physical systems as LDoms on the server when there was an unscheduled power outage. After the system came back up I had lost my LDoms, although... (7 Replies)
Hi,
I have a task of creating a UFS filesystem in an LDom. It is located under a hypervisor (control domain, CDOM).
The storage has been provisioned to the CDOM. How do I present it to the LDom, and then from there configure/set up the filesystem inside the LDom?
Please help. (1 Reply)
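The usual flow is to export the new backend from the control domain with ldm, present it to the guest as a vdisk, and then build the filesystem inside the guest. A hedged sketch; the device path c0t5d0, volume name vol1, and guest name ldom1 are hypothetical:

```shell
# --- On the control domain (CDOM) ---
# Export the provisioned LUN (hypothetical path) through the virtual
# disk server, then present it to the guest.
ldm add-vdsdev /dev/dsk/c0t5d0s2 vol1@primary-vds0
ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1

# --- Inside the guest (LDom) ---
# The disk typically appears as c0d1; label it with format(1M) if
# needed, then create and mount the UFS filesystem.
newfs /dev/rdsk/c0d1s0
mount /dev/dsk/c0d1s0 /mnt
```

Add a vfstab(4) entry in the guest if the filesystem should mount at boot.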
Discussion started by: anaigini45
LEARN ABOUT SUNOS
i2o_bs
i2o_bs(7D)                       Devices                        i2o_bs(7D)

NAME
i2o_bs - Block Storage OSM for I2O
SYNOPSIS
disk@local target id#:a through u
disk@local target id#:a through u raw
DESCRIPTION
The I2O Block Storage OSM abstraction (BSA, which also is referred to as block storage class) layer is the primary interface that Solaris
operating environments use to access block storage devices. A block storage device provides random access to a permanent storage medium.
The i2o_bs device driver uses I2O Block Storage class messages to control the block device, and provides the same functionality (ioctls,
for example) that is present in Solaris disk drivers such as cmdk and dadk on x86. The maximum disk size supported by i2o_bs is
the same as what is available on x86.
The i2o_bs driver currently implements version 1.5 of the Intelligent I/O specification.
The block files access the disk using the system's normal buffering mechanism and are read and written without regard to physical disk
records. There is also a "raw" interface that provides for direct transmission between the disk and the user's read or write buffer. A
single read or write call usually results in one I/O operation; raw I/O is therefore considerably more efficient when many bytes are
transmitted. The names of the block files are found in /dev/dsk; the names of the raw files are found in /dev/rdsk.
I2O associates each block storage device with a unique ID called a local target id that is assigned by I2O hardware. This information can
be acquired by the block storage OSM through I2O Block Storage class messages. For Block Storage OSM, nodes are created in
/devices/pci#/pci# which include the local target ID as one component of device name that the node refers to. However the /dev names and
the names in /dev/dsk and /dev/rdsk do not encode the local target id in any part of the name.
For example, you might have the following:
/devices/ name                                  /dev/dsk name
---------------------------------------------------------------
/devices/pci@0,0/pci101e,0@10,1/disk@10:a       /dev/dsk/c1d0s0
I/O requests to the disk must have an offset and transfer length that is a multiple of 512 bytes or the driver returns an EINVAL error.
Slice 0 is normally used for the root file system on a disk, slice 1 is used as a paging area (for example, swap), and slice 2 for backing
up the entire fdisk partition for Solaris software. Other slices may be used for usr file systems or system reserved area.
Fdisk partition 0 is to access the entire disk and is generally used by the fdisk(1M) program.
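The 512-byte alignment rule above can be illustrated with dd against the raw device; a hedged sketch, with a hypothetical device name:

```shell
# Sketch: raw-device I/O must use offsets and lengths that are multiples
# of 512 bytes, or the driver returns EINVAL. Device name is hypothetical.
dd if=/dev/rdsk/c1d0s0 of=/dev/null bs=512 count=8   # OK: multiple of 512
dd if=/dev/rdsk/c1d0s0 of=/dev/null bs=100 count=1   # expected to fail with EINVAL
```

The block device in /dev/dsk has no such restriction, since requests pass through the system's buffer cache.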
FILES
/dev/dsk/cndn[s|p]n     block device
/dev/rdsk/cndn[s|p]n    raw device
where:
cn    controller n
dn    instance number
sn    UNIX system slice n (0-15)
pn    fdisk partition (0)
/kernel/drv/i2o_bs i2o_bs driver
/kernel/drv/i2o_bs.conf Configuration file
ATTRIBUTES
See attributes(5)
for descriptions of the following attributes:
+-----------------------------+-----------------------------+
|ATTRIBUTE TYPE               |ATTRIBUTE VALUE              |
+-----------------------------+-----------------------------+
|Architecture                 |x86                          |
+-----------------------------+-----------------------------+
SEE ALSO
fdisk(1M), format(1M), mount(1M), lseek(2), read(2), write(2), readdir(3C), vfstab(4), acct.h(3HEAD), attributes(5), dkio(7I)

SunOS 5.10                    21 Jul 1998                       i2o_bs(7D)