Solaris 11 link aggregation - not working - can't ping gateway

That link was helpful; I took from it that the vlanNUM interfaces should be created at the OVM/hypervisor level.
I deleted the VLAN interfaces and recreated aggr0 with net0 and net7, but I can't ping its gateway. If I snoop on net0, net7, or aggr0, I can see packets coming in from both VLANs.
The network team says their side of the configuration is fine, but I will check again whether mine looks okay.
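For reference, this is roughly how I rebuilt it; the exact flags are from my notes, so the LACP mode and prefix length may not match what I actually typed:
Code:
root@ovmi-host1:~# dladm delete-vlan vlanNUM                            # repeated for each old VLAN link (vlanNUM is a placeholder)
root@ovmi-host1:~# dladm create-aggr -L active -l net0 -l net7 aggr0    # LACP mode has to match the switch side
root@ovmi-host1:~# ipadm create-ip aggr0
root@ovmi-host1:~# ipadm create-addr -T static -a 192.168.244.161/26 aggr0/v4
root@ovmi-host1:~# route -p add default 192.168.244.129

And this is the current state: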
Code:
root@ovmi-host1:~# netstat -nrv | grep default
default              0.0.0.0         192.168.244.129                 0   2 UG       0      0
root@ovmi-host1:~# ping 192.168.244.129
no answer from 192.168.244.129
root@ovmi-host1:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
sp-phys0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
        inet 169.254.182.77 netmask ffffff00 broadcast 169.254.182.255
        ether 2:21:28:57:47:17
aggr0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 3
        inet 192.168.244.161 netmask ffffffc0 broadcast 192.168.244.191
        ether 0:10:e0:e2:dc:8c
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
sp-phys0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
        inet6 ::/0
        ether 2:21:28:57:47:17
aggr0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 3
        inet6 ::/0
        ether 0:10:e0:e2:dc:8c
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-phys | grep up
net0            Ethernet      up         1000   full      i40e0
net1            Ethernet      up         1000   full      i40e1
net7            Ethernet      up         1000   full      igb3
net16           Ethernet      up         1000   full      vsw0
sp-phys0        Ethernet      up         10     full      usbecm2
root@ovmi-host1:~# dladm show-aggr -x
LINK       PORT           SPEED DUPLEX   STATE     ADDRESS            PORTSTATE
aggr0      --             1000Mb full    up        0:10:e0:e2:dc:8c   --
           net0           1000Mb full    up        0:10:e0:e2:dc:8c   attached
           net7           1000Mb full    up        b4:96:91:4c:ae:93  attached
root@ovmi-host1:~# dladm show-aggr -L
LINK                PORT         AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
aggr0               net0         yes          yes  yes  yes  no        no
--                  net7         yes          yes  yes  yes  no        no
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-link | grep up
aggr0               aggr      1500   up       net0 net7
net0                phys      1500   up       --
net1                phys      1500   up       --
net7                phys      1500   up       --
net16               phys      1500   up       --
sp-phys0            phys      1500   up       --
root@ovmi-host1:~#
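The next thing I plan to check is whether the gateway ever answers ARP on the aggregation, roughly along these lines (just standard arp/snoop, nothing exotic):
Code:
root@ovmi-host1:~# arp -a | grep 192.168.244.129        # is there an ARP entry for the gateway at all?
root@ovmi-host1:~# snoop -d aggr0 arp                   # watch ARP requests/replies on the aggregation
root@ovmi-host1:~# snoop -d aggr0 host 192.168.244.129  # any traffic to/from the gateway itself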

Am I missing something in this config?
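One more thing I'm considering: if the switch side is still tagging the frames, I may test putting the VLAN links back on top of aggr0 on the Solaris side instead of the hypervisor, something like this (the VLAN ID here is a placeholder, not my real value):
Code:
root@ovmi-host1:~# dladm create-vlan -l aggr0 -v 100 vlan100    # 100 is a placeholder VLAN ID
root@ovmi-host1:~# ipadm create-ip vlan100
root@ovmi-host1:~# ipadm create-addr -T static -a 192.168.244.161/26 vlan100/v4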
