07-24-2015
Consider the post about domains roughly in the middle of this thread; the following is copied from my previous post.
As for bare-metal domains (primary, secondary), let me offer a short explanation of domains as I understand it...
For instance, say you have a SPARC T4-2 with two CPU sockets, two 4-port network cards, and two 2-port FC cards.
You can create two hardware domains - primary and secondary - in which the actual I/O hardware is split between the two (each gets one network card, one FC card, one CPU socket, and half of the memory).
Now you have one SPARC T4-2 that is effectively two machines, separated at the hardware level. All LDoms created on the primary domain use its resources (half of the CPU and PCI), and LDoms on the secondary domain use the other half.
Basically, if one socket suffers a hardware failure, only that domain and the guest LDoms on it will fail, while the other domain and its guest LDoms continue to run.
Such setups complicate things considerably and are done on machines that have enough resources to be redundant (e.g. 4 cards or 4 sockets, so you can give 2 physical cards per domain for redundancy, etc.).
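As a rough sketch of how that I/O split is done with the `ldm` command from Oracle VM Server for SPARC - the bus name `pci_1`, the domain name, and the core/memory sizes here are illustrative and will differ on your machine:

```shell
# List the PCIe root complexes and which domain currently owns them
ldm list-io

# Create the secondary root domain and give it half the resources
ldm add-domain secondary
ldm set-core 8 secondary
ldm set-memory 64G secondary

# Move one PCIe root complex (with its network and FC cards)
# from the control domain to the secondary domain
ldm start-reconf primary        # delayed reconfiguration on primary
ldm remove-io pci_1 primary
# reboot the primary domain, then:
ldm add-io pci_1 secondary
```

Removing a root complex from the control domain takes effect only after the primary reboots, which is part of why these splits are planned up front.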
For your setup I guess you need (keep it simple - as per the scheme at the beginning):
One primary domain (bare metal).
One vsw created on top of the aggr0 interface in the primary domain.
One vnet interface added to each guest LDom, backed by that vsw in the primary domain.
One VDS (virtual disk service) in the primary domain per guest LDom (sneezy-vds@primary, otherguestldom-vds@primary, etc.), to which you add the disks for that LDom.
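The steps above can be sketched with `ldm` commands along these lines - `sneezy` is the guest name used in this thread, and the switch name and the backend device path are placeholders you would replace with your own:

```shell
# Virtual switch on top of the aggregated interface in the primary domain
ldm add-vsw net-dev=aggr0 primary-vsw0 primary

# Virtual network device for the guest, hanging off that switch
ldm add-vnet vnet0 primary-vsw0 sneezy

# One virtual disk service per guest, plus a disk backend and
# the corresponding virtual disk inside the guest
ldm add-vds sneezy-vds primary
ldm add-vdsdev /dev/zvol/dsk/rpool/sneezy-disk0 disk0@sneezy-vds
ldm add-vdisk vdisk0 disk0@sneezy-vds sneezy
```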
This is the entire topic:
Guest LDoms on the same subnet can't ping each other
Bear in mind that most of the network behaviour discussed there has since been patched, and there are some additional options now (like DLMP).
Hope that clears things up.
Regards
Peasant.