Guest LDOMs on the same subnet can't ping each other


 
# 8  
Old 03-14-2015
This should not happen if everything is configured properly.

I checked your initial output more carefully (sorry for not catching this earlier).

What looks wrong to me is that you are using an L2 aggregation (the aggr0 interface), you have created two virtual switches from that one interface, and then you used the resulting vnet interfaces to create an IPMP group inside the ldom.

I don't think that is a supported configuration.

Since you have aggregated two interfaces (net0 and net1), which must be connected to the same physical switch, there is no need to use IPMP inside the LDOM (I don't think IPMP over two vnets backed by the same aggregate is a supported configuration at all, and it is possibly why you are having MAC collisions), or to create multiple virtual switches over one interface (aggr0).

This schematic should be more illuminating :

Primary domain (hypervisor - bare metal)
---> net0 <> net1 [aggr0, L2] ---> primary-vsw50 (on primary, created from aggr0 with the add-vsw command) ---> vnet0 for guest ldom1, ldom2 (add-vnet command)

Only one vnet is enough, since if net0 fails, all you will lose is the bandwidth of one interface.

There is no need to tag the interfaces at the hypervisor OS level (aggr5000, dladm create-vlan), since for LDOMs this is done at the vsw/vnet level (PVID, VID).
Tagging at the OS level would work, but it is the legacy way to implement VLAN tagging with LDOMs.
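Put together, the schematic above boils down to a few commands. This is a minimal sketch run from the primary domain; aggr0, primary-vsw50 and ldom1 are the names used in this thread, while the VLAN IDs (pvid=50, vid=60) are made-up examples you would replace with your own.

```shell
# L2 aggregate over the two on-board ports
# (both must be cabled to the same LAN switch)
dladm create-aggr -P L2 -l net0 -l net1 aggr0

# One virtual switch on top of the aggregate; VLAN tagging is done
# here (pvid/vid), not at the OS level
ldm add-vsw pvid=50 vid=60 net-dev=aggr0 primary-vsw50 primary

# One vnet per guest is enough
ldm add-vnet vnet0 primary-vsw50 ldom1
```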

As for bare-metal domains (primary, secondary), let me offer a short explanation of domains as I understand them...
For instance, you have a SPARC T4-2 with two sockets, two 4-port network cards and two 2-port FC cards.

You can create two hardware domains, primary and secondary, in which the actual I/O hardware is split between the two (each gets one network card, one FC card, one CPU socket and its memory).

Now you have a situation where one T4-2 SPARC is actually two machines separated at the hardware level. All LDOMs created on the primary domain will use its resources (CPU, PCI - half of them) and LDOMs on the secondary will use the other half.

Basically, if one socket fails due to a hardware failure, only the primary domain and the guest LDOMs on it will fail, while the secondary and its guest LDOMs will continue to run.
Those setups complicate things considerably and are done on machines which have resources in a redundant manner (like 4 cards or 4 sockets, 2 physical cards per domain for redundancy etc.).

For your setup I guess you need (keep it simple, as per the scheme at the beginning):

One primary domain (bare metal)
One vsw created on top of the aggr0 interface in primary domain.
One vnet interface added to LDOM from primary-vsw on primary domain.
One VDS (virtual disk service) in the primary domain per guest ldom (sneezy-vds@primary, otherguestldom-vds@primary etc.), to which you add the disks for the ldoms.
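The list above maps to commands like these. A hedged sketch: sneezy is a guest name from this thread, and the backend device path is a placeholder for a real LUN or file.

```shell
# One virtual disk service per guest ldom, in the primary domain
ldm add-vds sneezy-vds primary

# Export a backend (a whole LUN here; path is a placeholder)
ldm add-vdsdev /dev/dsk/c0t5000CCA0DEADBEEFd0s2 sneezy-disk0@sneezy-vds

# Attach it to the guest as its disk
ldm add-vdisk disk0 sneezy-disk0@sneezy-vds sneezy
```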

Hope that clears things up.

Regards
Peasant.
# 9  
Old 03-14-2015
Hi Peasant, again, I can't thank you enough for your input.

So what we actually have is a T5-2 which has two sockets, 2x two-port FC cards and 4x Gigabit Ethernet ports.

As you said, the machine is split right down the middle, with each root complex owning exactly half of the hardware, including the local hard drives.

What we have is:
1x primary control domain (control, I/O, service). Obviously all LDOMs are managed from the primary.

1x secondary (or what some people call "alternate") I/O/service domain, which can see bare-metal storage.


I'm sure I'm telling you what you already know, but it helps me to explain it out. The idea of us having two I/O/service domains (primary and secondary) is that we can actually take one of them down (i.e. for patching) and all guest LDOMs will continue to run, route traffic in/out, see LUNs etc.

And this is the case. When I init 6 or shut down the primary LDOM, all guests continue to operate via the secondary (alternate) domain. And vice versa.



So when I create a guest LDOM, I make sure to create two vnets, one pointing to the primary vsw and the other to the secondary vsw. And when creating new LDOMs, I alternate which switch vnet0 points to, so that traffic doesn't always go through the same switch.

And the same principle applies to disks: I use multipathing groups (mpgroup) to ensure that guests can see LUNs from both I/O/service domains.
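For reference, an mpgroup setup like the one described here looks roughly like this: the same LUN is exported once through a VDS in each service domain, and the shared mpgroup name ties the two paths together so the guest keeps its disk if either service domain goes down. The names (the VDS names, the device path) are illustrative, not taken from the actual config.

```shell
# Same backend LUN, exported via both service domains' disk services,
# joined into one multipath group
ldm add-vdsdev mpgroup=sneezy-mp /dev/dsk/c0t5000CCA0DEADBEEFd0s2 disk0-pri@primary-vds
ldm add-vdsdev mpgroup=sneezy-mp /dev/dsk/c0t5000CCA0DEADBEEFd0s2 disk0-sec@secondary-vds

# Only one vdisk is added to the guest; failover follows the mpgroup
ldm add-vdisk disk0 disk0-pri@primary-vds sneezy
```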


I think you are correct about the guest IPMP settings; I am just reading up more about that.

I also don't pretend to completely understand the difference between the aggregation policies (L2, L3 etc.). I am also doing some more reading on that.


FYI, we also have some T5-2 servers which not only have 2x two-port FC cards but also 2x two-port Ethernet cards in addition to the 4x on-board Ethernet ports. These servers follow the same principle as the one I used in the original post, but obviously each root complex has 4 Ethernet ports for the trunk.
# 10  
Old 03-15-2015
You do have more than one T5-2 machine?

If you have only one root complex on each (only a primary domain) and take care you don't overcommit resources (CPU/memory), you should be able to live-migrate an ldom from one machine to another.
All you have to watch is that the names of all the virtual devices are the same, and that the backend devices are the same (naturally).
I am not sure if you can have 2 root complexes (primary/secondary) and do a live migration of a guest ldom inside the same machine.

That is not a use case at all, but I could be wrong (I have never made such a setup).

The use case for root complexes is to have complete hardware separation of, for instance, production and test, and it is used on machines with more sockets, more cards etc.

Take this example: sneezy is a production ldom and sloppy is a test ldom.
We are using only the primary domain (only one root complex), with production and test logically separated.

Both T5-2 machines have the same names and configuration.

FC (2x 2-port FC cards)

1 port from each FC card is for production usage (zoned on switch, production host group on storage)
1 port from each FC card is for test usage (zoned on switch, test host group on storage)

This way, if one FC card dies, production and test will continue to operate.

You just prefix the name with prod or test when creating a VDS (virtual disk service), as appropriate.

So you have sneezy-prodvds and sloppy-testvds on both SPARCs in the primary domains, with disks added to them according to the layout above (test and prod host groups and paths).

Remember, you have the freedom here to add either kind of disk (test or production) to any ldom; only the naming policy tells you which VDS each disk belongs to.
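As an illustration of that naming policy (the VDS names are from the example above; the device paths are placeholders for the LUNs zoned to each host group):

```shell
# One prod-prefixed and one test-prefixed disk service in the primary domain
ldm add-vds sneezy-prodvds primary
ldm add-vds sloppy-testvds primary

# Production LUN (zoned on the production FC ports) goes to the prod VDS
ldm add-vdsdev /dev/dsk/c0t5000PRODLUN00d0s2 sneezy-d0@sneezy-prodvds

# Test LUN (zoned on the test FC ports) goes to the test VDS
ldm add-vdsdev /dev/dsk/c0t5000TESTLUN00d0s2 sloppy-d0@sloppy-testvds
```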

NETWORK (2x 4-port LAN cards, 8 ports total; see dladm show-phys)

Now you make a choice: will you use aggregation, IPMP or DLMP?

Example here is with aggregation (aggr0,aggr1).

You take 2 ports from one card and 2 ports from the other card, all leading to the same LAN switch, and create an aggr0 interface; then you create a production-vsw from that interface (with the production VLAN tags configured as pvid/vid).
You add a vnet to the sneezy ldom from production-vsw (or to any other production ldom).

From the other network ports (2 from each card remain), you create an aggr1 interface, then create a test-vsw from that interface (with the test VLAN tags configured on the vsw as pvid/vid).
You add a vnet to the sloppy ldom from test-vsw (or to any other test ldom).

This way, if one card dies, you will lose 2 production and 2 test paths, but both test and production will continue to operate at lower bandwidth (2x instead of 4x).
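A sketch of that 2+2 port split. The physical link names (net0 through net7) and the VLAN IDs are assumptions; check dladm show-phys for your real link names.

```shell
# Production: one port pair from each card -> aggr0 -> production-vsw
dladm create-aggr -P L2 -l net0 -l net1 -l net4 -l net5 aggr0
ldm add-vsw pvid=100 net-dev=aggr0 production-vsw primary

# Test: the remaining pair from each card -> aggr1 -> test-vsw
dladm create-aggr -P L2 -l net2 -l net3 -l net6 -l net7 aggr1
ldm add-vsw pvid=200 net-dev=aggr1 test-vsw primary

# vnets for the guests
ldm add-vnet vnet0 production-vsw sneezy
ldm add-vnet vnet0 test-vsw sloppy
```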

VSW and VDS are named the same in the primary domain on both SPARC machines (this is a requirement for migration, live or cold).

Now you have the sneezy production ldom, with production-vsw --> vnet for networking, and a disk presented via the proper FC path, added to sneezy-prodvds and attached to the sneezy ldom.

Likewise, you have the sloppy test ldom, with test-vsw --> vnet for networking, and a disk presented via the proper FC path, added to sloppy-testvds and attached to the sloppy ldom.

PRIMARY DOMAIN network with tagging, on both SPARCs:

I recommend using a separate VLAN for the primary domain IP addressing (the control domain can be isolated at the VLAN layer on the network for security reasons).

Since you have tagging on the switch and on the aggr0/1 interfaces, you will have to create a tagged interface for the primary domain:
dladm create-vlan -l aggr1 -v <your vlanid> vlan-link
If you don't provide the vlan-link name, it will create an interface named automatically from the VLAN ID (in the style of aggr5000 mentioned earlier).

This is the interface you will use to configure the IP address of the primary domain on both machines. We are using aggr1 here (the test network) for live migration of all ldoms (both production and test), but you are free to choose either (0 or 1) depending on the network topology and bandwidth required.
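The tagged interface and its address are created along these lines. The VLAN ID (30), the link name mgmt0 and the address are made-up examples, not values from this thread.

```shell
# Tagged VLAN link for the primary domain, on top of aggr1
dladm create-vlan -l aggr1 -v 30 mgmt0

# Bring it up and assign the primary domain's address
ipadm create-ip mgmt0
ipadm create-addr -T static -a 192.0.2.10/24 mgmt0/v4
```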

Now you can issue a test live migration from host1 to host2 with the command:
ldm migrate -n sloppy host2
The -n switch only checks whether the migration would work, which also makes it a great way to verify that the configuration on both physical machines is the same.
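Once the dry run passes, the same command without -n performs the actual live migration (host2 and sloppy are the names from this example; ldm will prompt for credentials on the target machine).

```shell
# Dry run: checks feasibility, moves nothing
ldm migrate -n sloppy host2

# Actual live migration of the running guest
ldm migrate sloppy host2
```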

The final result is that you can migrate any production or test guest LDOM to any SPARC T5-2 machine without interrupting the service (live migrate), or cold-migrate it (with the ldom down), while having LAN and FC resources separated for production and test, and a more or less "keep it simple" configuration.

Be sure the firmware levels are the same on all your SPARC T5-2 machines for live migration to work (cold migration does not have this limit).

Hope that helps.

Regards
Peasant.