I have a T5220 with the following hypervisor and OS configuration.
I have one LDom configured on the system with vdisks (ZFS, VxVM & ISO image) assigned to it, but the LDom gives the following error and does not boot from any of the assigned disks.
I have a Netra T5220 Solaris 10 server with LDoms installed and enabled. RAID1 is enabled in the control domain with the following slices:
d50 -m d51 d52 1
d51 1 1 c1t0d0s5
d52 1 1 c1t1d0s5
d10 -m d11 d12 1
d11 1 1 c1t0d0s0
d12 1 1 c1t1d0s0
d20 -m d21 d22 1
d21 1 1 c1t0d0s1
d22 1 1... (14 Replies)
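For readers unfamiliar with the md.tab notation above: each "dNN -m" line defines a RAID1 mirror, and the "1 1 cXtYdZsN" lines define its one-slice submirrors. As a hedged sketch only (device names taken from the post; the -f flag is needed only when a slice is in use), the equivalent SVM commands for the d50 mirror would be:

```shell
# Create the two one-way submirrors for the s5 slices
metainit -f d51 1 1 c1t0d0s5
metainit d52 1 1 c1t1d0s5

# Build the mirror on the first submirror, then attach the second;
# metattach starts the resync from d51 onto d52
metainit d50 -m d51
metattach d50 d52

# Verify the mirror and its sync status
metastat d50
```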
We're just about to migrate a large application from an Enterprise 6900 to a pair of T5220s utilising LDOMs. Does anyone have any experience of LDOMs on this kit and can provide any recommendations or pitfalls to avoid?
I've heard that use of LDOMs can have an impact on I/O speeds as it's all... (9 Replies)
I've got Sun Fire T2000 with two LDoms - primary and ldom1, both being Solaris 10 u8. Both can be accessed over the network (ssh, ping), both can access the network, but they can't ping or ssh to each other.
I only use e1000g0 interface on T2000, the primary ldom has an address on it, ldm has a... (1 Reply)
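A common cause of this symptom is that the two domains are not attached to the same virtual switch, or that the primary is plumbed on the physical e1000g0 directly rather than on the vsw interface. As a hedged sketch only (names such as primary-vsw0, vnet1, and ldom1 are illustrative, not from the post), the usual wiring looks like:

```shell
# In the control domain: create a virtual switch backed by the physical NIC
ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

# Give the guest domain a virtual NIC attached to that switch
ldm add-vnet vnet1 primary-vsw0 ldom1

# On Solaris 10, the primary must plumb the vsw interface (not e1000g0)
# for primary-to-guest traffic to flow through the switch:
#   ifconfig vsw0 plumb
#   ifconfig vsw0 <primary-ip> netmask <mask> up
```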
Can anyone tell me how to add a virtual disk from an NFS share in an LDom? I have one file, /VMshare/boot.img, shared/exported from another server, but I do not know how to add an NFS-based vdsdev, as it gives "primary domain cannot validate the disk". (1 Reply)
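For reference, a hedged sketch of the usual two-step pattern for exporting a file-backed volume to a guest, assuming the NFS share is already mounted in the control domain (the volume, service, and domain names here are illustrative):

```shell
# Mount the NFS share in the control domain first
mount -F nfs server:/VMshare /VMshare

# Register the image file as a virtual disk backend with the
# virtual disk service (assumed here to be primary-vds0)
ldm add-vdsdev /VMshare/boot.img bootimg@primary-vds0

# Present it to the guest domain as a virtual disk
ldm add-vdisk vdisk_boot bootimg@primary-vds0 ldom1
```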
We are having a discussion about the order in which the control domain and guest LDoms should be booted.
I think it should be the LDoms first, then the control domain. Can anyone tell me why I am wrong or why I am right.
Thanks,
:confused: (6 Replies)
I have a T4-1 with Solaris 11.1, LDom ver 3.0.0.1 and I'm encountering two problems with zones.
I'd greatly appreciate help on these, especially the first one.
I have created multiple guest LDoms, all Solaris 11.1, and they all work fine.
The first problem is that I cannot boot any... (9 Replies)
If I do ldm ls -l ldom1 the memory block is as follows:-
MEMORY
RA PA SIZE
0x20000000 0x20000000 64G
0x1420000000 0x1020000000 16G
0x1c20000000 0x1420000000 16G
0x2420000000 0x1820000000 16G
0x2c20000000... (1 Reply)
System: SPARC S7-2 Server; 2x8-core CPUs; 128 GB RAM; 2x600 GB HDD.
I had been experimenting on the above system, using ldmp2v to create "clones" of my physical systems as LDoms on the server, when there was an unscheduled power outage. After the system came back up, I had lost my LDoms, although... (7 Replies)
Hi,
I have a task of creating a UFS filesystem in an LDom. The LDom runs under a hypervisor (CDOM).
The storage has been provisioned to the CDOM. How do I make it visible to the LDom, and from there configure/set up the filesystem in the LDom?
Please help. (1 Reply)
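The general flow, sketched under assumed names (primary-vds0, the backing device path, and the guest domain name are all illustrative, not from the post), is: export the provisioned LUN from the control domain as a vdisk, then label and newfs it inside the guest:

```shell
# In the control domain: export the provisioned LUN to the guest
ldm add-vdsdev /dev/dsk/c0t5000CCA0ABCD1234d0s2 datavol@primary-vds0
ldm add-vdisk datadisk datavol@primary-vds0 myldom

# Inside the guest domain the disk appears as a new cXdN device.
# Label it with format(1M) if needed, then create and mount the UFS filesystem:
newfs /dev/rdsk/c0d1s0
mount /dev/dsk/c0d1s0 /data
```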
Discussion started by: anaigini45
LEARN ABOUT NETBSD
dm
DM(4)                    BSD Kernel Interfaces Manual                    DM(4)
NAME
dm -- Device-mapper disk driver
SYNOPSIS
pseudo-device dm
DESCRIPTION
The dm driver provides the capability of creating one or more virtual disks based on the target mapping.
This document assumes that you're familiar with how to generate kernels, how to properly configure disks and pseudo-devices in a kernel configuration file, and how to partition disks. This driver is used by the Linux lvm2tools to create and manage lvm in NetBSD.
Currently, the linear, zero, and error targets are implemented. Each component partition should be offset at least 2 sectors from the beginning of the component disk. This avoids potential conflicts between the component disk's disklabel and dm's disklabel. In i386 it is offset by 65 sectors, where 63 sectors are the initial boot sectors and 2 sectors are used for the disklabel, which is set to be read-only.
In order to compile in support for dm, you must add a line similar to the following to your kernel configuration file:
pseudo-device dm #device-mapper disk device
dm may create linear mapped devices, zero, and error block devices. Zero and error block devices are used mostly for testing. Linear devices are used to create virtual disks with virtual blocks mapped linearly to blocks on a real disk. Device-mapper devices are controlled through the /dev/mapper/control device. For controlling this device, ioctl(2) calls are used. For the implementation of the communication channel, the proplib(3) library is used. The protocol channel is defined as a proplib dictionary with the needed values. For more details, look at sys/dev/dm/netbsd-dm.h. Before any device can be used, every device-mapper disk device must be initialized. For initialization, one line must be passed to the kernel driver in the form of a proplib dictionary. Every device can have more than one table active. An example of such a line is:
     0 10240 linear /dev/wd1a 384
The first parameter is the start sector for the table defined with this line; the second is the length in sectors described by this table. The third parameter is the target name. All other parts of this line depend on the chosen target. For the linear target, there are two additional parameters: the first describes the disk device to which the device-mapper disk is mapped, and the second is the offset on that disk from the start of the disk/partition.
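In practice you rarely pass that table line by hand; the dmsetup(8) front end listed in SEE ALSO builds the proplib dictionary for you. A minimal sketch (the device name /dev/wd1a is illustrative, and this requires root plus a kernel with the dm pseudo-device compiled in):

```shell
# Create a 10240-sector virtual disk mapped linearly onto /dev/wd1a,
# starting 384 sectors into that partition
dmsetup create dmtest --table '0 10240 linear /dev/wd1a 384'

# List active device-mapper devices, then tear the mapping down
dmsetup ls
dmsetup remove dmtest
```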
SEE ALSO
     config(1), proplib(3), MAKEDEV(8), dmsetup(8), fsck(8), lvm(8), mount(8), newfs(8)
HISTORY
The device-mapper disk driver first appeared in NetBSD 6.0.
AUTHORS
Adam Hamsik <haad@NetBSD.org> implemented the device-mapper driver for NetBSD.
Brett Lymn <blymn@NetBSD.org>,
Reinoud Zandijk <reinoud@NetBSD.org>, and
Bill Stouder-Studenmund <wrstuden@NetBSD.org> provided guidance and answered questions about the NetBSD implementation.
BUGS
This driver is still work in progress; there can be bugs.
BSD August 30, 2008 BSD