Link aggregation
Posted by PatrickBaer on 2 December 2009, 08:31 AM

Me again :)

I'm trying to find a page that describes the L2, L3, and L4 policies of dladm.

It's nice to read "hashed by IP header", but how do I actually make use of that?

On the file server it's fine to have the six interfaces serving six clients, each on its own link. But an rsync connection between two machines through a switch still uses only one link, is that true?
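
For reference, a minimal sketch of where the policy is applied, assuming Solaris 10 dladm syntax (the nxge0/nxge1 device names and key 1 are placeholders, not from this thread):

    # dladm create-aggr -P L4 -d nxge0 -d nxge1 1    (create with an L4 policy: hash on transport ports)
    # dladm modify-aggr -P L4 1                      (or change the policy of an existing aggregation)
    # dladm show-aggr                                (verify which policy is in effect)

The policy only selects which header fields feed the hash (L2 = MAC addresses, L3 = IP addresses, L4 = TCP/UDP ports; combinations such as L2,L3 are also accepted), and each individual flow still maps to exactly one link. So a single rsync/TCP connection between two machines never uses more than one link, whatever the policy.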
 

nxge(7D)							      Devices								  nxge(7D)

NAME
     nxge - Sun 10/1 Gigabit Ethernet network driver

SYNOPSIS
     /dev/nxge*

DESCRIPTION
     The nxge Gigabit Ethernet driver is a multi-threaded, loadable, clonable, GLD-based STREAMS driver supporting the Data Link Provider Interface, dlpi(7P), on Sun Gigabit Ethernet hardware (NIU, Sun x8 Express Dual 10 Gigabit Ethernet fiber XFP low profile adapter, and the 10/100/1000BASE-T x8 Express low profile adapter).

     The nxge driver functions include chip initialization, frame transmit and receive, flow classification, multicast and promiscuous support, and error recovery and reporting. The nxge device provides fully-compliant IEEE 802.3ae 10Gb/s full duplex operation using XFP-based 10GigE optics (NIU, dual 10 Gigabit fiber XFP adapter). The Sun Ethernet hardware supports the IEEE 802.3x frame-based flow control capabilities.

     For the 10/100/1000BASE-T adapter, the nxge driver and hardware support auto-negotiation, a protocol specified by the 1000 Base-T standard. Auto-negotiation allows each device to advertise its capabilities and discover those of its peer (link partner). The highest common denominator supported by both link partners is automatically selected, yielding the greatest available throughput while requiring no manual configuration. The nxge driver also allows you to configure the advertised capabilities to less than the maximum (where the full speed of the interface is not required) or to force a specific mode of operation, irrespective of the link partner's advertised capabilities.

APPLICATION PROGRAMMING INTERFACE
     The cloning character-special device, /dev/nxge, is used to access all Sun Neptune NIU devices installed within the system.

     The nxge driver is managed by the dladm(1M) command line utility, which allows VLANs to be defined on top of nxge instances and for nxge instances to be aggregated. See dladm(1M) for more details.

     You must send an explicit DL_ATTACH_REQ message to associate the opened stream with a particular device (PPA). The PPA ID is interpreted as an unsigned integer data type and indicates the corresponding device instance (unit) number. The driver returns an error (DL_ERROR_ACK) if the PPA field value does not correspond to a valid device instance number for the system. The device is initialized on first attach and de-initialized (stopped) at last detach.

     The values returned by the driver in the DL_INFO_ACK primitive in response to a DL_INFO_REQ are:

     o   Maximum SDU (default 1500).

     o   Minimum SDU (default 0). The driver pads to the mandatory 60-octet minimum packet size.

     o   DLSAP address length is 8.

     o   MAC type is DL_ETHER.

     o   SAP length value is -2, meaning the physical address component is followed immediately by a 2-byte SAP component within the DLSAP address.

     o   Broadcast address value is the Ethernet/IEEE broadcast address (FF:FF:FF:FF:FF:FF).

     Due to the nature of link address definition for IPoIB, the DL_SET_PHYS_ADDR_REQ DLPI primitive is not supported. In the transmit case for streams that have been put in raw mode via the DLIOCRAW ioctl, the dlpi application must prepend the 20 byte IPoIB destination address to the data it wants to transmit over-the-wire. In the receive case, applications receive the IP/ARP datagram along with the IETF defined 4 byte header.

     Once in the DL_ATTACHED state, you must send a DL_BIND_REQ to associate a particular Service Access Point (SAP) with the stream.

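     For illustration, a minimal sketch of managing nxge instances with dladm(1M) as described above, assuming Solaris 10 command syntax (the instance numbers, key value, and address are placeholders):

         # dladm show-dev                           (list nxge and other available devices)
         # dladm create-aggr -d nxge0 -d nxge1 1    (aggregate two nxge instances under key 1)
         # dladm show-aggr 1                        (inspect the resulting aggregation)
         # ifconfig aggr1 plumb 192.0.2.10 netmask 255.255.255.0 up

     The aggregation created with key 1 then appears as the aggr1 interface; 192.0.2.10 is a documentation address, not a suggested value.
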
CONFIGURATION
     For the 10/100/1000BASE-T adapter, the nxge driver performs auto-negotiation to select the link speed and mode. Link speed and mode may be 10000 Mbps full-duplex (10 Gigabit adapter), 1000 Mbps full-duplex, 100 Mbps full-duplex, or 10 Mbps full-duplex, depending on the hardware adapter type. See the IEEE 802.3 standard for more information.

     The auto-negotiation protocol automatically selects the 1000 Mbps, 100 Mbps, or 10 Mbps operation mode (full-duplex only) as the highest common denominator supported by both link partners. Because the nxge device supports all modes, the effect is to select the highest throughput mode supported by the other device.

     You can also set the capabilities advertised by the nxge device using dladm(1M). The driver supports a number of parameters whose names begin with en_ (see below). Each of these parameters contains a boolean value that determines if the device advertises that mode of operation. The adv_autoneg_cap parameter controls whether auto-negotiation is performed. If adv_autoneg_cap is set to 0, the driver forces the mode of operation selected by the first non-zero parameter in priority order as shown below:

         (highest priority/greatest throughput)
         en_1000fdx_cap     1000 Mbps full duplex
         en_100fdx_cap      100 Mbps full duplex
         en_10fdx_cap       10 Mbps full duplex
         (lowest priority/least throughput)

     All capabilities default to enabled. Note that changing any capability parameter causes the link to go down while the link partners renegotiate the link speed/duplex using the newly changed capabilities.

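     As an illustration of forcing a mode, a hypothetical fragment of /kernel/drv/nxge.conf (listed under FILES below), assuming the driver accepts these capability parameters as driver.conf(4)-style name=value properties; the values shown are illustrative, not defaults:

         # Force 1000 Mbps full duplex: disable auto-negotiation and
         # advertise only the 1000fdx capability.
         adv_autoneg_cap=0;
         en_1000fdx_cap=1;
         en_100fdx_cap=0;
         en_10fdx_cap=0;

     Changes to driver.conf files generally take effect only after the driver is reloaded or the system rebooted; per the text above, the same capabilities can be changed on a live system with dladm(1M), at the cost of the link renegotiating.
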
FILES
     /dev/nxge*                   Special character device.
     /kernel/drv/nxge             32-bit device driver (x86).
     /kernel/drv/sparcv9/nxge     64-bit device driver (SPARC).
     /kernel/drv/amd64/nxge       64-bit device driver (x86).
     /kernel/drv/nxge.conf        Configuration file.

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     | Architecture                | SPARC, x86                  |
     +-----------------------------+-----------------------------+

SEE ALSO
     dladm(1M), netstat(1M), attributes(5), streamio(7I), dlpi(7P), driver.conf(4)

     Writing Device Drivers

     STREAMS Programming Guide

     Network Interfaces Programmer's Guide

     IEEE 802.3ae Specification -- 2002

SunOS 5.11                        12 Apr 2008                         nxge(7D)