Operating Systems > Solaris
Post 303034020 by solaris_1977 on Tuesday 16th of April 2019 06:25:59 PM
How can I test link aggregation?

Hi,
I have a Solaris 10 server, and link aggregation is configured on it as shown below:
Code:
# dladm show-aggr
key: 1 (0x0001) policy: L4      address: 3c:d9:2b:ee:a8:a (auto)
           device       address                 speed           duplex  link    state
           bnx1         3c:d9:2b:ee:a8:a          1000  Mbps    full    up      attached
           igb2         f4:ce:46:a7:eb:92         1000  Mbps    full    up      attached
key: 2 (0x0002) policy: L4      address: 3c:d9:2b:ee:a8:8 (auto)
           device       address                 speed           duplex  link    state
           bnx0         3c:d9:2b:ee:a8:8          1000  Mbps    full    up      attached
           igb3         f4:ce:46:a7:eb:93         1000  Mbps    full    up      attached
# dladm show-link
bnx0            type: non-vlan  mtu: 1500       device: bnx0
bnx1            type: non-vlan  mtu: 1500       device: bnx1
bnx2            type: non-vlan  mtu: 1500       device: bnx2
bnx3            type: non-vlan  mtu: 1500       device: bnx3
igb0            type: non-vlan  mtu: 1500       device: igb0
igb1            type: non-vlan  mtu: 1500       device: igb1
igb2            type: non-vlan  mtu: 1500       device: igb2
igb3            type: non-vlan  mtu: 1500       device: igb3
aggr1           type: non-vlan  mtu: 1500       aggregation: key 1
aggr2           type: non-vlan  mtu: 1500       aggregation: key 2
aggr150002      type: vlan 150  mtu: 1500       aggregation: key 2
aggr50001       type: vlan 50   mtu: 1500       aggregation: key 1
aggr55001       type: vlan 55   mtu: 1500       aggregation: key 1
aggr60001       type: vlan 60   mtu: 1500       aggregation: key 1
aggr62001       type: vlan 62   mtu: 1500       aggregation: key 1
aggr64001       type: vlan 64   mtu: 1500       aggregation: key 1
aggr66001       type: vlan 66   mtu: 1500       aggregation: key 1
aggr81001       type: vlan 81   mtu: 1500       aggregation: key 1

# dladm show-dev
bnx0            link: up        speed: 1000  Mbps       duplex: full
bnx1            link: up        speed: 1000  Mbps       duplex: full
bnx2            link: unknown   speed: 0     Mbps       duplex: unknown
bnx3            link: unknown   speed: 0     Mbps       duplex: unknown
igb0            link: unknown   speed: 0     Mbps       duplex: half
igb1            link: unknown   speed: 0     Mbps       duplex: half
igb2            link: up        speed: 1000  Mbps       duplex: full
igb3            link: up        speed: 1000  Mbps       duplex: full
#
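Both aggregations show both ports up at 1000 Mbps full duplex. To confirm that traffic is actually being spread across both ports of each aggregation, the per-port counters can be watched (a sketch using the statistics options of Solaris 10 dladm show-aggr; 1 and 2 are the aggregation keys from the output above):
Code:
# Per-port traffic statistics for key 1, refreshed every 5 seconds
dladm show-aggr -s -i 5 1
# Same for key 2
dladm show-aggr -s -i 5 2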

There will be a switch replacement, so the links will go down one at a time on one side. Before that activity, is there any way to check/test whether the server will keep working when one side goes down? Something equivalent to testing an IPMP group's failover with if_mpadm -d.
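One idea is to rehearse the failover with dladm itself, by pulling one port out of an aggregation and putting it back (an untested sketch; remove-aggr and add-aggr are the Solaris 10 subcommands for changing an aggregation's ports, and the ping target is a placeholder address):
Code:
# Drop bnx1 out of aggregation key 1; igb2 should stay attached
dladm remove-aggr -d bnx1 1
dladm show-aggr 1
# Confirm traffic still flows, e.g. to the default gateway (placeholder IP)
ping -s 192.0.2.1 56 5
# Re-add the port once satisfied
dladm add-aggr -d bnx1 1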

Thanks
 

igb(7D)                            Devices                            igb(7D)

NAME
    igb - Intel 82575 1Gb PCI Express NIC Driver

SYNOPSIS
    /dev/igb*

DESCRIPTION
    The igb Gigabit Ethernet driver is a multi-threaded, loadable, clonable, GLD-based STREAMS driver supporting the Data Link Provider Interface, dlpi(7P), on Intel 82575 Gigabit Ethernet controllers. The igb driver functions include controller initialization, frame transmit and receive, promiscuous and multicast support, and error recovery and reporting.

    The igb driver and hardware support auto-negotiation, a protocol specified by the 1000Base-T standard. Auto-negotiation allows each device to advertise its capabilities and discover those of its peer (link partner). The highest common denominator supported by both link partners is automatically selected, yielding the greatest available throughput while requiring no manual configuration. The igb driver also allows you to configure the advertised capabilities to less than the maximum (where the full speed of the interface is not required), or to force a specific mode of operation, irrespective of the link partner's advertised capabilities.

APPLICATION PROGRAMMING INTERFACE
    The cloning character-special device, /dev/igb, is used to access all Intel 82575 Gigabit devices installed within the system. The igb driver is managed by the dladm(1M) command line utility, which allows VLANs to be defined on top of igb instances and for igb instances to be aggregated. See dladm(1M) for more details.

    You must send an explicit DL_ATTACH_REQ message to associate the opened stream with a particular device (PPA). The PPA ID is interpreted as an unsigned integer data type and indicates the corresponding device instance (unit) number. The driver returns an error (DL_ERROR_ACK) if the PPA field value does not correspond to a valid device instance number for the system. The device is initialized on first attach and de-initialized (stopped) at last detach.

    The values returned by the driver in the DL_INFO_ACK primitive in response to your DL_INFO_REQ are:

    o  Maximum SDU is 9000.
    o  Minimum SDU is 0.
    o  DLSAP address length is 8.
    o  MAC type is DL_ETHER.
    o  SAP (Service Access Point) length value is -2, meaning the physical address component is followed immediately by a 2-byte SAP component within the DLSAP address.
    o  Broadcast address value is the Ethernet/IEEE broadcast address (FF:FF:FF:FF:FF:FF).

    Once in the DL_ATTACHED state, you must send a DL_BIND_REQ to associate a particular SAP with the stream.
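For example, aggregating two igb instances and plumbing a VLAN over the result might look like the sketch below (the instance numbers, aggregation key, VLAN ID, and IP address are hypothetical; the aggrNNNNN name follows the VID*1000+key pattern visible in the dladm show-link output earlier on this page):
Code:
# Aggregate two igb instances under (hypothetical) aggregation key 3
dladm create-aggr -d igb0 -d igb1 3
# A VLAN over an aggregation is named aggr<VID*1000+key>, so VLAN 100
# over key 3 is aggr100003; 192.0.2.10 is a placeholder address
ifconfig aggr100003 plumb 192.0.2.10 netmask 255.255.255.0 up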
CONFIGURATION
    By default, the igb driver performs auto-negotiation to select the link speed and mode. Link speed and mode can be any one of the following, as described in the IEEE 802.3 standard:

        1000 Mbps, full-duplex
        100 Mbps, full-duplex
        100 Mbps, half-duplex
        10 Mbps, full-duplex
        10 Mbps, half-duplex

    The auto-negotiation protocol automatically selects speed (1000 Mbps, 100 Mbps, or 10 Mbps) and operation mode (full-duplex or half-duplex) as the highest common denominator supported by both link partners. Alternatively, you can set the capabilities advertised by the igb device using ndd(1M). The driver supports a number of parameters whose names begin with adv_ (see below). Each of these parameters contains a boolean value that determines if the device advertises that mode of operation. For example, the adv_1000fdx_cap parameter indicates whether 1000 Mbps full duplex is advertised to the link partner. The adv_autoneg_cap parameter controls whether auto-negotiation is performed. If adv_autoneg_cap is set to 0, the driver forces the mode of operation selected by the first non-zero parameter in priority order, as shown below:

        (highest priority/greatest throughput)
        adv_1000fdx_cap    1000 Mbps full duplex
        adv_100fdx_cap     100 Mbps full duplex
        adv_100hdx_cap     100 Mbps half duplex
        adv_10fdx_cap      10 Mbps full duplex
        adv_10hdx_cap      10 Mbps half duplex
        (lowest priority/least throughput)

    All capabilities default to enabled. Note that changing any capability parameter causes the link to go down while the link partners renegotiate the link speed/duplex using the newly changed capabilities.
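As a hedged example of forcing 100 Mbps full duplex with these parameters (the /dev/igb0 node name, i.e. instance 0, is an assumption; substitute the instance being tuned):
Code:
# Check whether 1000 Mbps full duplex is currently advertised (1 = yes)
ndd -get /dev/igb0 adv_1000fdx_cap
# Force 100 Mbps full duplex: disable auto-negotiation and clear the
# higher-priority 1000fdx capability so 100fdx is the first non-zero one
ndd -set /dev/igb0 adv_autoneg_cap 0
ndd -set /dev/igb0 adv_1000fdx_cap 0
ndd -set /dev/igb0 adv_100fdx_cap 1
# Expect the link to drop and renegotiate after each change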
FILES
    /dev/igb*                  Special character device.
    /kernel/drv/igb            32-bit device driver (x86).
    /kernel/drv/amd64/igb      64-bit device driver (x86).
    /kernel/drv/sparcv9/igb    64-bit device driver (SPARC).
    /kernel/drv/igb.conf       Configuration file.

SEE ALSO
    dladm(1M), ndd(1M), netstat(1M), driver.conf(4), attributes(5), streamio(7I), dlpi(7P)

    Writing Device Drivers
    STREAMS Programming Guide
    Network Interfaces Programmer's Guide

SunOS 5.11                       20 Jul 2007                        igb(7D)