Solaris 11 link aggregation - not working - can't ping gateway
This link was helpful, and it gave me the idea that the vlanNUM interfaces would be created at the OVM/hypervisor level.
I deleted the VLAN interfaces and recreated aggr0 with net0 and net7, but I can't ping its gateway. If I snoop on net0, net7, and aggr0, I can see packets coming in from both VLANs.
Per the network guys, the configuration on their side is okay, but I will check again whether my configuration looks right.
Am I missing something in this config?
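For reference, a minimal sketch of how the aggregation-plus-VLAN setup might look on Solaris 11. The link names come from the post above; the VLAN ID and IP addresses are placeholders, not values from this thread:

```shell
# Remove any leftover IP/VLAN objects, then rebuild the aggregation.
ipadm delete-ip aggr0 2>/dev/null
dladm delete-aggr aggr0 2>/dev/null

# Trunk-mode LACP aggregation over both physical links.
dladm create-aggr -L active -l net0 -l net7 aggr0

# If the switch delivers tagged frames, the VLAN must exist on top of
# aggr0 (VLAN ID 123 and the addresses below are placeholders).
dladm create-vlan -l aggr0 -v 123 vlan123
ipadm create-ip vlan123
ipadm create-addr -T static -a 192.0.2.10/24 vlan123/v4

# Verify LACP state on both ports before testing the gateway.
dladm show-aggr -x aggr0
ping 192.0.2.1
```

If `show-aggr -x` reports the ports as anything other than attached, the LACP negotiation with the switch is the first thing to revisit.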
Last edited by solaris_1977; 03-05-2020 at 02:03 AM..
Reason: Corrected title
Hi there
I have a requirement to provide failover to our customer boxes in case of an interface or switch failure. I have been looking at Solaris Link Aggregation with LACP, and I wanted to ask a question.
I've seen multiple websites that say the following:
Does this also mean that if the... (2 Replies)
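One way to try this out on Solaris (a sketch; the link names are assumptions, and the switch ports must be configured for LACP on the other end):

```shell
# LACP in active mode: the host negotiates with the switch, and traffic
# fails over to the surviving port if a link or switch port dies.
dladm create-aggr -L active -l net0 -l net1 aggr1

# Watch per-port LACP state; healthy ports show as attached.
dladm show-aggr -x aggr1
```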
Hi,
I need to set up an HACMP cluster (my first one; we usually use VCS on AIX), but I require more network bandwidth than a normal gigabit EtherChannel setup can provide, so I am thinking about using link aggregation - two active adapters to one switch and a single backup adapter to another switch... (4 Replies)
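On AIX, an EtherChannel with a backup adapter to a second switch can be sketched roughly like this (adapter names are placeholders; verify the attribute names against your AIX level before using):

```shell
# Two active 802.3ad adapters to the primary switch, one backup adapter
# to the second switch; AIX fails over to ent2 if both actives die.
mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names='ent0,ent1' \
      -a backup_adapter=ent2 \
      -a mode=8023ad
```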
Me again :)
I'm trying to find a page describing the L2, L3 and L4 modes of dladm.
It's nice to read "hashed by IP header", but how should I use that?
On the file server it's fine to have the six interfaces each serving six clients on their own. But an rsync connection via switch between two... (8 Replies)
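As a sketch of how the policies differ (aggregation name is a placeholder): the `-P` policy selects which packet headers feed the outbound hash, so it changes how *flows* are spread, but any single flow still rides one link:

```shell
# L2 hashes MAC addresses, L3 IP addresses, L4 TCP/UDP ports.
# One rsync stream between two hosts is one flow and will use one link
# under any policy; L3,L4 at least spreads multiple concurrent flows
# between the same pair of hosts across the ports.
dladm modify-aggr -P L3,L4 aggr0
dladm show-aggr aggr0
```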
Hello,
I've been using mode 4 with four slaves; however, looking at ifconfig showed that the traffic was not balanced correctly between the interfaces - the outgoing traffic has been a lot higher on the last slave.
Example:
eth0 RX 123.2 GiB TX 22.5 GiB
eth1 RX 84.8 GiB TX 8.3 GiB
eth2... (3 Replies)
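A common cause of this kind of imbalance with the Linux bonding driver is the transmit hash policy. A sketch, assuming the bond is configured via modprobe options (file path and bond name are assumptions):

```shell
# With 802.3ad (mode 4), the default xmit_hash_policy is layer2, which
# hashes only MAC addresses - all traffic toward one gateway/peer MAC
# picks the same slave. layer3+4 hashes IPs and ports instead,
# spreading distinct flows more evenly across the slaves.
cat >> /etc/modprobe.d/bonding.conf <<'EOF'
options bonding mode=802.3ad xmit_hash_policy=layer3+4
EOF

# Inspect the policy on a live bond via sysfs:
cat /sys/class/net/bond0/bonding/xmit_hash_policy
```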
Hi all, I am trying to aggregate two NICs (igb2 and igb3; igb0 is used by the physical system and igb1 by primary-vsw0) to create domains on top of the aggregation for faster data transfer. I followed the process for creating the aggregation: dladm create-aggr -d igb2 -d igb3 1
After creating the... (2 Replies)
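For the LDoms side, the aggregation then becomes the backing device of a new virtual switch. A sketch using Solaris 10-era syntax; the vsw, vnet, and domain names below are placeholders:

```shell
# Create the aggregation (key 1 -> device aggr1), then back a new
# virtual switch with it and hand a vnet to the guest domain.
dladm create-aggr -d igb2 -d igb3 1
ldm add-vsw net-dev=aggr1 secondary-vsw0 primary
ldm add-vnet vnet1 secondary-vsw0 mydomain
```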
I have set up link aggregation with three interfaces on my Solaris 10 system.
All looks good, but my problem is that the traffic is only going out over bge0 and not the other two links.
bash-4.3# dladm show-aggr -s
key:33 ipackets rbytes opackets obytes %ipkts %opkts
... (3 Replies)
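If most of the traffic is a single flow (one source/destination IP:port pair), it will legitimately ride one link no matter what. Otherwise, broadening the hash policy is worth a try - a sketch, assuming aggregation key 1 as in the output above... wait, the key shown is 33, so use that:

```shell
# Check the current policy, then combine L2/L3/L4 headers in the hash
# so different flows spread across all three ports.
dladm show-aggr 33
dladm modify-aggr -P L2,L3,L4 33
dladm show-aggr -s 33   # per-port counters should now grow on all links
```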
Hi
I have three internet links, and I want to put one Linux box in front of a firewall so that the Linux box spreads the three links for the firewall, providing load balancing and failover, without closing any port or protocol, and with only the firewall having internet access. What approach can I use for this?
Are there any scripts or services that do that... (0 Replies)
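One common building block for this on Linux is an iproute2 multipath default route. A sketch; every interface name and gateway address below is a placeholder for the three uplinks:

```shell
# Balance outbound traffic per-flow across three uplinks.
ip route replace default scope global \
    nexthop via 203.0.113.1 dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1 \
    nexthop via 192.0.2.1 dev eth2 weight 1
```

Note this balances per flow, not per packet, and on its own it does not detect a dead upstream; real failover needs link monitoring on top.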
Hi,
This is Solaris-10 x86 platform.
I am not able to ping the gateway associated with aggr50001. I have no idea where the issue could be. Please advise.
# netstat -nr
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
--------------------... (10 Replies)
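A checklist of things worth verifying in this situation on Solaris 10, sketched as commands (replace `<gateway>` with the actual gateway address from the routing table):

```shell
dladm show-aggr            # are all ports of the aggregation attached?
dladm show-link            # is aggr50001 listed and up?
ifconfig aggr50001         # correct address, netmask, and UP flag?
netstat -nr                # does the default route point out aggr50001?
arp -a | grep <gateway>    # did ARP for the gateway resolve?
snoop -d aggr50001 arp     # watch ARP requests/replies on the wire
```

If ARP never resolves, the problem is usually at layer 2 (VLAN tagging or the switch-side aggregation config) rather than in the routing table.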
Discussion started by: solaris_1977
10 Replies
LEARN ABOUT FREEBSD
ntb
NTB(4)                  BSD Kernel Interfaces Manual                  NTB(4)

NAME
ntb, ntb_hw, if_ntb -- Intel(R) Non-Transparent Bridge driver
SYNOPSIS
To compile this driver into your kernel, place the following lines in your kernel configuration file:
device ntb_hw
device if_ntb
Or, to load the driver as a module at boot, place the following line in loader.conf(5):
if_ntb_load="YES"
DESCRIPTION
The ntb driver provides support for the Non-Transparent Bridge (NTB) in the Intel S1200, Xeon E3 and Xeon E5 processor families.
The NTB allows you to connect two computer systems using a PCI-e link if they have the correct equipment and connectors.
CONFIGURATION
The NTB memory windows need to be configured by the BIOS. If your BIOS allows you to set their size, you should set the size of both memory
windows to 1 MiB. This needs to be done on both systems.
Each system needs to have a different IP address assigned. The MAC address is randomly generated. Also for maximum performance, the MTU
should be set to 16 kiB. This can be done by adding the line below to rc.conf(5):
ifconfig_ntb0="inet 192.168.1.10 netmask 255.255.255.0 mtu 16384"
And on the second system :
ifconfig_ntb0="inet 192.168.1.11 netmask 255.255.255.0 mtu 16384"
If you are using the UDP protocol, you may want to increase the net.inet.udp.maxdgram sysctl(8) variable.
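For example (the value shown is an illustration, not a recommendation from the man page):

```shell
# Raise the largest UDP datagram the stack will send.
sysctl net.inet.udp.maxdgram=65536
# Persist the setting across reboots via sysctl.conf(5):
echo 'net.inet.udp.maxdgram=65536' >> /etc/sysctl.conf
```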
SEE ALSO
rc.conf(5), sysctl(8)
AUTHORS
The ntb driver was developed by Intel and originally written by Carl Delsey <carl@FreeBSD.org>.
BUGS
If the driver is unloaded, it cannot be reloaded without a system reboot.
The network support is limited. It isn't fully configurable yet. It also isn't integrated into netgraph(4) or bpf(4).
NTB to Root Port mode is not yet supported.
There is no way to protect your system from malicious behavior on the other system once the link is brought up. Anyone with root or kernel
access on the other system can read or write to any location on your system. In other words, only connect two systems that completely trust
each other.
BSD Apr 11, 2013 BSD