Operating Systems / Solaris: IPMP in the Service Domain (Oracle VM SPARC), Post 302892656 by Peasant on Friday, 14th of March 2014, 03:29:21 AM
I think you misunderstood the document.

You can use IPMP in the service domain (control domain, hypervisor) in the two ways I described earlier.

The document describes a third option (which I haven't tried, but I see no reason it wouldn't work): using vsw interfaces on the control/service domain to make an IPMP group.

So you have 8 physical interfaces (net0 to net7): two 4-port cards, each with its own driver (ixgbe, igb).

You use those interfaces to create VSWs (one VSW per interface). I see you have chosen to configure 4 VSWs using 2 ports from each card (hopefully each card is connected to a different physical switch); the other 4 interfaces remain unused, and 2 of those are not cabled.
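The VSW setup described above could be sketched roughly like this (the vswitch names and port assignments are assumptions for illustration, not your actual configuration):

```shell
# One virtual switch per physical port, alternating cards so that a later
# IPMP pair can span both cards (and thus both physical switches).
ldm add-vsw net-dev=net0 primary-vsw0 primary   # port on the ixgbe card
ldm add-vsw net-dev=net4 primary-vsw1 primary   # port on the igb card
ldm add-vsw net-dev=net1 primary-vsw2 primary   # second ixgbe port
ldm add-vsw net-dev=net5 primary-vsw3 primary   # second igb port
```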

This creates additional interfaces, which you can see under show-phys (DEVICE vsw).

After that, for the control/service/primary domain you can:

1. Create an IPMP group on the control domain consisting of two or more physical interfaces, preferably from different cards (DEVICE ixgbe, igb under show-phys).

2. Create an IPMP group on the control domain consisting of two or more vsw interfaces, preferably from different cards (DEVICE vsw under show-phys).

3. Use ldm commands to add vnets to the primary domain (control/service domain) and use those interfaces (DEVICE vnet under show-phys after adding via ldm) to create the IPMP group.
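Option 2 above might look like this on Solaris 11 (a sketch only; the vsw interface names, group name and address are assumptions):

```shell
# Link-based IPMP over two vsw interfaces backed by different cards.
ipadm create-ip vsw0
ipadm create-ip vsw1
ipadm create-ipmp -i vsw0 -i vsw1 ipmp0
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4
```

On Solaris 10 you would use ifconfig with group/test-address syntax instead of ipadm.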

For guest ldoms, you will need to add 2 or more vnets (each from a different VSW) and configure IPMP inside the ldom (same as the third option above, but for guest ldoms).
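A sketch of the guest setup (domain, vswitch and interface names are assumed for illustration):

```shell
# In the control domain: one vnet from each vswitch, so the guest's
# IPMP group spans both cards/physical switches.
ldm add-vnet vnet0 primary-vsw0 ldom1
ldm add-vnet vnet1 primary-vsw1 ldom1

# Inside the guest (Solaris 11): the vnets appear as net0/net1.
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a 192.0.2.20/24 ipmp0/v4
```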

Also, do you intend to use different VLANs inside your guest ldoms, now or in the future?
If you even suspect you will, configure them now, since it will be a hassle later.
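VLANs can be assigned when the vnet is created, for example (the VLAN IDs here are hypothetical):

```shell
# pvid = untagged (port) VLAN, vid = tagged VLANs visible inside the guest.
ldm add-vnet pvid=10 vid=20,30 vnet0 primary-vsw0 ldom1
```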

Hope that clears things up.

Regards
Peasant.

---------- Post updated at 08:29 ---------- Previous update was at 08:12 ----------

Just a couple more hints regarding VM.

For VDS, use one VDS per guest LDOM; don't put everything in primary-vds.
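For example, a dedicated VDS per guest could be set up like this (names and the backend device are assumptions):

```shell
# Dedicated virtual disk service for guest ldom1, instead of primary-vds0.
ldm add-vds ldom1-vds0 primary
ldm add-vdsdev /dev/dsk/c0t1d0s2 disk0@ldom1-vds0
ldm add-vdisk disk0 disk0@ldom1-vds0 ldom1
```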

Disable extended-mapin-space everywhere, since I noticed live migration sometimes fails with it on.
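Disabling it is a per-domain property (domain names here are examples):

```shell
# Turn off extended mapin space on the control domain and each guest.
ldm set-domain extended-mapin-space=off primary
ldm set-domain extended-mapin-space=off ldom1
```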

You might want to disable inter-vnet-link (set it to off).
I had a situation where the network between two guests, residing on the same physical machine and the same subnet, just stopped working.
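The property is set per virtual switch (the vswitch name is an assumption):

```shell
# Disable the direct guest-to-guest LDC channels; inter-guest traffic
# then flows through the virtual switch instead.
ldm set-vsw inter-vnet-link=off primary-vsw0
```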

I ran some internal tests with those two options on and off (they are supposed to improve network throughput) and didn't notice any performance gains, only the issue above.

Depending on which patchset you are on, perhaps those issues are now fixed.

Regards
Peasant.