

Solaris 11 link aggregation with multiple VLANs - not working


 
# 1  
Old 03-04-2020
Solaris 11 link aggregation with multiple VLANs - not working

Hello,

I need help fixing the network configuration on a SPARC S7 server. I have configured it (perhaps incorrectly) and can't make it work: I can't ping the gateway.

I need to create an aggregation of two NICs, and on that aggr0 there will be multiple tagged VLANs. The network admin has enabled LACP on the switch side for both NICs, and VLAN tagging is enabled on the switch side as well. If I run snoop on net0 and net7, I can see packets arriving from those VLANs (example below).

Code:
root@ovmi-host1:~# dladm show-phys
LINK            MEDIA         STATE      SPEED  DUPLEX    DEVICE
net0            Ethernet      up         1000   full      i40e0
net1            Ethernet      up         1000   full      i40e1
net2            Ethernet      down       0      unknown   i40e2
net3            Ethernet      down       0      unknown   i40e3
net4            Ethernet      unknown    0      unknown   igb0
net5            Ethernet      unknown    0      unknown   igb1
net6            Ethernet      unknown    0      unknown   igb2
net7            Ethernet      up         1000   full      igb3
net8            Ethernet      unknown    0      unknown   igb4
net9            Ethernet      unknown    0      unknown   igb5
net10           Ethernet      unknown    0      unknown   igb6
net11           Ethernet      unknown    0      unknown   igb7
net12           Ethernet      unknown    0      unknown   igb8
net13           Ethernet      unknown    0      unknown   igb9
net14           Ethernet      unknown    0      unknown   igb10
net15           Ethernet      unknown    0      unknown   igb11
net16           Ethernet      up         1000   full      vsw0
sp-phys0        Ethernet      up         10     full      usbecm2
root@ovmi-host1:~# dladm show-aggr -L
LINK                PORT         AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
aggr0               net0         yes          yes  yes  yes  no        no
--                  net7         yes          yes  yes  yes  no        no
root@ovmi-host1:~# dladm show-aggr -x
LINK       PORT           SPEED DUPLEX   STATE     ADDRESS            PORTSTATE
aggr0      --             1000Mb full    up        0:10:e0:e2:dc:8c   --
           net0           1000Mb full    up        0:10:e0:e2:dc:8c   attached
           net7           1000Mb full    up        b4:96:91:4c:ae:93  attached
root@ovmi-host1:~# dladm show-link
LINK                CLASS     MTU    STATE    OVER
aggr0               aggr      1500   up       net0 net7
net0                phys      1500   up       --
net1                phys      1500   up       --
net2                phys      1500   down     --
net3                phys      1500   down     --
net4                phys      1500   unknown  --
net5                phys      1500   unknown  --
net6                phys      1500   unknown  --
net7                phys      1500   up       --
net8                phys      1500   unknown  --
net9                phys      1500   unknown  --
net10               phys      1500   unknown  --
net11               phys      1500   unknown  --
net12               phys      1500   unknown  --
net13               phys      1500   unknown  --
net14               phys      1500   unknown  --
net15               phys      1500   unknown  --
net16               phys      1500   up       --
sp-phys0            phys      1500   up       --
vlan2154            vlan      1500   up       aggr0
vlan2160            vlan      1500   up       aggr0
vlan2161            vlan      1500   up       aggr0
vlan2170            vlan      1500   up       aggr0
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-vlan
LINK                VID  SVID PVLAN-TYPE  FLAGS  OVER
vlan2154            2154 --   --          -----  aggr0
vlan2160            2160 --   --          -----  aggr0
vlan2161            2161 --   --          -----  aggr0
vlan2170            2170 --   --          -----  aggr0
root@ovmi-host1:~#
root@ovmi-host1:~# snoop -d net0
Using device net0 (promiscuous mode)
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2170: 192.168.244.190 -> (broadcast)  ARP C Who is 192.168.244.151, 192.168.244.151 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2161: 192.168.245.138 -> (broadcast)  ARP C Who is 192.168.245.138, 192.168.245.138 ?
VLAN#2170: 192.168.244.190 -> (broadcast)  ARP C Who is 192.168.244.151, 192.168.244.151 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  Bridge PDU T:2 L:118
VLAN#2170: 192.168.244.190 -> (broadcast)  ARP C Who is 192.168.244.151, 192.168.244.151 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
root@ovmi-host1:~# snoop -d net7
Using device net7 (promiscuous mode)
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2170: 192.168.23.246 -> (broadcast)  ARP C Who is 192.168.23.220, 192.168.23.220 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2170: 192.168.23.246 -> (broadcast)  ARP C Who is 192.168.23.220, 192.168.23.220 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  LLDP PDU Chassis ID = dc:38:e1:54:46:40  Port ID = 552  TTL = 120
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2161: 192.168.245.204 -> (broadcast)  ARP C Who is 192.168.245.204, 192.168.245.204 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2170: 192.168.23.246 -> (broadcast)  ARP C Who is 192.168.23.220, 192.168.23.220 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2170: 192.168.23.242 -> (broadcast)  ARP C Who is 192.168.23.230, 192.168.23.230 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
VLAN#2170: 192.168.23.246 -> (broadcast)  ARP C Who is 192.168.23.220, 192.168.23.220 ?
           ? -> (multicast)  ETHER Type=8809 (Unknown), size=124 bytes
^Croot@ovmi-host1:~#
root@ovmi-host1:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
sp-phys0/v4       static   ok           169.254.182.77/24
aggr0/v4          static   ok           192.168.244.161/26
lo0/v6            static   ok           ::1/128
root@ovmi-host1:~#
root@ovmi-host1:~# netstat -nrv

IRE Table: IPv4
  Destination             Mask           Gateway          Device  MTU  Ref Flg  Out  In/Fwd
-------------------- --------------- -------------------- ------ ----- --- --- ----- ------
default              0.0.0.0         192.168.244.129                 0   2 UG       0      0
127.0.0.1            255.255.255.255 127.0.0.1            lo0     8232  30 UH     375    375
169.254.182.0        255.255.255.0   169.254.182.77       sp-phys0  1500   3 U     6203      0
192.168.69.0         255.255.255.0   192.168.244.129                 0   1 UG       0      0
192.168.78.0         255.255.255.0   192.168.244.129                 0   1 UG       0      0
192.168.110.0        255.255.255.0   192.168.244.129                 0   1 UG       0      0
192.168.115.0        255.255.255.0   192.168.244.129                 0   1 UG       0      0
192.168.244.128      255.255.255.192 192.168.244.161      aggr0   1500   5 U       88      0

IRE Table: IPv6
  Destination/Mask            Gateway                    If    MTU  Ref Flags  Out   In/Fwd
--------------------------- --------------------------- ----- ----- --- ----- ------ ------
::1                         ::1                         lo0    8252   2 UH         7      7
root@ovmi-host1:~# ping 192.168.244.129
no answer from 192.168.244.129
root@ovmi-host1:~#

Please help; what am I missing, or what have I configured incorrectly?

Thanks
# 2  
Old 03-05-2020
You do not define the IP address of a tagged VLAN on aggr0 itself.

In your case you should:

Delete the address
Code:
aggr0/v4          static   ok           192.168.244.161/26

Define the above address on the VLAN interface vlan2170
Then try pinging the gateway.

Only the native (untagged) VLAN will work directly on aggr0, while tagged VLANs such as 2170 will work on the vlan2170 interface.
The vlanNUM interfaces you created with dladm create-vlan ... over aggr0 are the ones on which you configure IP addresses for the tagged VLANs.
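The steps above might look like this; a sketch only, assuming the addrobj names (aggr0/v4, vlan2170/v4) shown in the output earlier in the thread, run as root:

```shell
# Remove the address that was placed directly on the aggregation
ipadm delete-addr aggr0/v4

# Create an IP interface on the tagged VLAN link (vlan2170 already exists
# per dladm show-vlan above) and move the address there
ipadm create-ip vlan2170
ipadm create-addr -T static -a 192.168.244.161/26 vlan2170/v4
```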

Hope that helps
Regards
Peasant.
# 3  
Old 03-05-2020
Thanks for the suggestion.

Here is the setup I am expecting, as the whole picture; I hope it makes sense.

A single interface (whether aggr or IPMP) with one IP will be presented to OVM (Oracle VM Manager) as the host, with multiple VLANs tagged on it. After finishing the repository and other post-work, I should be able to create VMs.
For example, I may get a request such as "create a Solaris 10 VM with 2 interfaces, one on VLAN#2170 and one on VLAN#2161". This interface will be visible in OVM Manager as bond0.

We have more than 20 VLANs, so I don't want to configure 20 IPs, one per VLAN. Ideally, all of those VLANs should be tagged on this single interface. Any suggestion on how I can achieve this?
Please ask if I have not explained it well.
# 4  
Old 03-05-2020
You will then create a vsw (virtual switch) on top of aggr0, and use that vsw to add interfaces to the VMs.

There is no need to create vlanNUM interfaces on the hypervisor in that case.
You only define VLAN interfaces on the hypervisor if you wish to assign an IP address to the hypervisor itself.
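A sketch of that approach; the vsw, vnet, and domain names (primary-vsw1, vnet1, ldom1) and the pvid are examples, not taken from this thread, so substitute your own. Run from the control domain as root:

```shell
# Create a virtual switch backed by the aggregation
ldm add-vsw net-dev=aggr0 primary-vsw1 primary

# Give a guest domain a vnet on that switch, tagged for VLANs 2170 and 2161
ldm add-vnet pvid=1 vid=2170,2161 vnet1 primary-vsw1 ldom1
```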

Check out this post :
LDOM Solaris 11 add Network vsw (Virtual switch)

Hope that helps
Regards
Peasant.
# 5  
Old 03-05-2020
Solaris 11 link aggregation - not working - can't ping gateway

That link was helpful, and I got the idea that the vlanNUM interfaces would be created at the OVM/hypervisor level.
I deleted the VLAN interfaces and recreated aggr0 with net0 and net7, but I can't ping its gateway. I can still see packets arriving from the two VLANs if I snoop on net0, net7, and aggr0.
Per the network team, their side of the configuration is okay, but I will check again whether my configuration looks right.
Code:
root@ovmi-host1:~# netstat -nrv | grep default
default              0.0.0.0         192.168.244.129                 0   2 UG       0      0
root@ovmi-host1:~# ping 192.168.244.129
no answer from 192.168.244.129
root@ovmi-host1:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
sp-phys0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
        inet 169.254.182.77 netmask ffffff00 broadcast 169.254.182.255
        ether 2:21:28:57:47:17
aggr0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 3
        inet 192.168.244.161 netmask ffffffc0 broadcast 192.168.244.191
        ether 0:10:e0:e2:dc:8c
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
sp-phys0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
        inet6 ::/0
        ether 2:21:28:57:47:17
aggr0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 3
        inet6 ::/0
        ether 0:10:e0:e2:dc:8c
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-phys | grep up
net0            Ethernet      up         1000   full      i40e0
net1            Ethernet      up         1000   full      i40e1
net7            Ethernet      up         1000   full      igb3
net16           Ethernet      up         1000   full      vsw0
sp-phys0        Ethernet      up         10     full      usbecm2
root@ovmi-host1:~# dladm show-aggr -x
LINK       PORT           SPEED DUPLEX   STATE     ADDRESS            PORTSTATE
aggr0      --             1000Mb full    up        0:10:e0:e2:dc:8c   --
           net0           1000Mb full    up        0:10:e0:e2:dc:8c   attached
           net7           1000Mb full    up        b4:96:91:4c:ae:93  attached
root@ovmi-host1:~# dladm show-aggr -L
LINK                PORT         AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
aggr0               net0         yes          yes  yes  yes  no        no
--                  net7         yes          yes  yes  yes  no        no
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-link | grep up
aggr0               aggr      1500   up       net0 net7
net0                phys      1500   up       --
net1                phys      1500   up       --
net7                phys      1500   up       --
net16               phys      1500   up       --
sp-phys0            phys      1500   up       --
root@ovmi-host1:~#

Am I missing something in this config?

Last edited by solaris_1977; 03-05-2020 at 03:03 AM.. Reason: Corrected title
# 6  
Old 03-05-2020
On aggr0 you can only define an IP address from the native VLAN on those ports.
If you wish to define a tagged-VLAN IP on the hypervisor, you need to create a VLAN interface first, then define the IP on that.

If you wish to use the interface for virtual machines, you define a VSW over the aggr0 link.
Then you add a vnet to the guest via the ldm command using that VSW.

You can have both on the hypervisor, VLAN interfaces and a VSW above aggr0, but of course with different IP addresses (VM versus hypervisor).
I would recommend reading about Oracle VM Server for SPARC, and a bit about networking as well.
It looks like you are mixing up the hypervisor and VM sides of the network setup.
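The two setups can coexist side by side; a combined sketch, reusing the VLAN ID and address from earlier in the thread and example names (primary-vsw1, vnet1, ldom1) for the guest side:

```shell
# Hypervisor's own tagged IP: a VLAN link over aggr0 plus an address on it
dladm create-vlan -l aggr0 -v 2170 vlan2170
ipadm create-ip vlan2170
ipadm create-addr -T static -a 192.168.244.161/26 vlan2170/v4

# Guest networking: a virtual switch over the same aggr0, vnets tagged per VM
ldm add-vsw net-dev=aggr0 primary-vsw1 primary
ldm add-vnet pvid=1 vid=2170,2161 vnet1 primary-vsw1 ldom1
```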

Regards
Peasant.
# 7  
Old 03-11-2020
I was able to figure it out and configure it. Thanks for the help.

