Solaris 11 link aggregation with multiple VLANs - not working
Post 303044837 by solaris_1977, 03-05-2020
Solaris 11 link aggregation - not working - can't ping gateway

That link was helpful; it gave me the idea that the vlanNUM interfaces should be created at the OVM/hypervisor level.
I deleted the VLAN interfaces and recreated aggr0 with net0 and net7, but now I can't ping its gateway. If I snoop on net0, net7, or aggr0, I can see packets coming in from two VLANs.
Per the network team, their side of the configuration is fine, but I will double-check that mine looks right.
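For reference, this is roughly how I recreated the aggregation (reconstructed from memory, so exact flags and names are approximate; LACP active mode assumed, which matches the show-aggr -L output below):
Code:
# delete the leftover VLAN links first (vlan244 is a hypothetical name)
dladm delete-vlan vlan244
# recreate the trunk aggregation with LACP active on both ports
dladm create-aggr -L active -l net0 -l net7 aggr0
# plumb IP and assign the static /26 address
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.168.244.161/26 aggr0/v4
# persistent default route
route -p add default 192.168.244.129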
Code:
root@ovmi-host1:~# netstat -nrv | grep default
default              0.0.0.0         192.168.244.129                 0   2 UG       0      0
root@ovmi-host1:~# ping 192.168.244.129
no answer from 192.168.244.129
root@ovmi-host1:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
sp-phys0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
        inet 169.254.182.77 netmask ffffff00 broadcast 169.254.182.255
        ether 2:21:28:57:47:17
aggr0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 3
        inet 192.168.244.161 netmask ffffffc0 broadcast 192.168.244.191
        ether 0:10:e0:e2:dc:8c
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
sp-phys0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
        inet6 ::/0
        ether 2:21:28:57:47:17
aggr0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 3
        inet6 ::/0
        ether 0:10:e0:e2:dc:8c
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-phys | grep up
net0            Ethernet      up         1000   full      i40e0
net1            Ethernet      up         1000   full      i40e1
net7            Ethernet      up         1000   full      igb3
net16           Ethernet      up         1000   full      vsw0
sp-phys0        Ethernet      up         10     full      usbecm2
root@ovmi-host1:~# dladm show-aggr -x
LINK       PORT           SPEED DUPLEX   STATE     ADDRESS            PORTSTATE
aggr0      --             1000Mb full    up        0:10:e0:e2:dc:8c   --
           net0           1000Mb full    up        0:10:e0:e2:dc:8c   attached
           net7           1000Mb full    up        b4:96:91:4c:ae:93  attached
root@ovmi-host1:~# dladm show-aggr -L
LINK                PORT         AGGREGATABLE SYNC COLL DIST DEFAULTED EXPIRED
aggr0               net0         yes          yes  yes  yes  no        no
--                  net7         yes          yes  yes  yes  no        no
root@ovmi-host1:~#
root@ovmi-host1:~# dladm show-link | grep up
aggr0               aggr      1500   up       net0 net7
net0                phys      1500   up       --
net1                phys      1500   up       --
net7                phys      1500   up       --
net16               phys      1500   up       --
sp-phys0            phys      1500   up       --
root@ovmi-host1:~#
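In case it helps, these are roughly the checks I ran while snooping (invocations approximate):
Code:
# make sure no stale VLAN links survived the cleanup
dladm show-vlan
# watch the aggregation for the gateway's ICMP replies
snoop -d aggr0 icmp
# no answer to ping often means ARP never resolves, so watch that too
snoop -d aggr0 arp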

Am I missing something in this config?
