Operating Systems :: AIX :: Link aggregation with HACMP? Post 302345937 by zxmaus on Thursday 20th of August 2009, 03:23:18 PM
Link aggregation with HACMP?

Hi,

I need to set up an HACMP cluster (my first one; we usually use VCS on AIX), but I require more network bandwidth than a normal gigabit EtherChannel setup can provide, so I am thinking about using link aggregation: two active adapters to one switch and a single backup adapter to another switch. Would this work, or am I thinking wrong somewhere?
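In AIX terms, what I have in mind would look roughly like this (the adapter names ent0-ent2 are placeholders, and I am assuming the switch side of the two active ports is configured for 802.3ad/LACP):

```shell
# Sketch only: create an EtherChannel/link-aggregation pseudo-device with
# two active adapters (ent0, ent1) aggregated via 802.3ad/LACP, and a
# backup adapter (ent2) cabled to a second switch for failover.
# Adapter names and hash_mode are placeholders; adjust to the system.
mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names=ent0,ent1 \
      -a backup_adapter=ent2 \
      -a mode=8023ad \
      -a hash_mode=src_dst_port

# Verify the resulting aggregate (it shows up as a new entX device):
lsdev -Cc adapter
entstat -d entX
```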

Kind regards
zxmaus
 

SK(4)                     BSD Kernel Interfaces Manual                    SK(4)

NAME
     sk, skc -- SysKonnect XMAC II and Marvell GMAC based gigabit ethernet

SYNOPSIS
     skc* at pci? dev ? function ?
     sk* at skc?
     mskc* at pci? dev ? function ?
     msk* at skc?

DESCRIPTION
     The sk driver provides support for SysKonnect based gigabit ethernet
     adapters and Marvell based gigabit ethernet adapters, including the
     following:

           o   SK-9821 SK-NET GE-T single port, copper adapter
           o   SK-9822 SK-NET GE-T dual port, copper adapter
           o   SK-9841 SK-NET GE-LX single port, single mode fiber adapter
           o   SK-9842 SK-NET GE-LX dual port, single mode fiber adapter
           o   SK-9843 SK-NET GE-SX single port, multimode fiber adapter
           o   SK-9844 SK-NET GE-SX dual port, multimode fiber adapter
           o   SK-9521 V2.0 single port, copper adapter (32-bit)
           o   SK-9821 V2.0 single port, copper adapter
           o   SK-9843 V2.0 single port, copper adapter
           o   3Com 3c940 single port, copper adapter
           o   Belkin Gigabit Desktop Network PCI Card, single port, copper
               (32-bit)
           o   D-Link DGE-530T single port, copper adapter
           o   Linksys EG1032v2 single-port, copper adapter
           o   Linksys EG1064v2 single-port, copper adapter

     The msk driver provides support for the Marvell Yukon-2 based Gigabit
     Ethernet adapters, including the following:

           o   Marvell Yukon 88E8035, copper adapter
           o   Marvell Yukon 88E8036, copper adapter
           o   Marvell Yukon 88E8038, copper adapter
           o   Marvell Yukon 88E8050, copper adapter
           o   Marvell Yukon 88E8052, copper adapter
           o   Marvell Yukon 88E8053, copper adapter
           o   Marvell Yukon 88E8055, copper adapter
           o   SK-9E21 1000Base-T single port, copper adapter
           o   SK-9E22 1000Base-T dual port, copper adapter
           o   SK-9E81 1000Base-SX single port, multimode fiber adapter
           o   SK-9E82 1000Base-SX dual port, multimode fiber adapter
           o   SK-9E91 1000Base-LX single port, single mode fiber adapter
           o   SK-9E92 1000Base-LX dual port, single mode fiber adapter
           o   SK-9S21 1000Base-T single port, copper adapter
           o   SK-9S22 1000Base-T dual port, copper adapter
           o   SK-9S81 1000Base-SX single port, multimode fiber adapter
           o   SK-9S82 1000Base-SX dual port, multimode fiber adapter
           o   SK-9S91 1000Base-LX single port, single mode fiber adapter
           o   SK-9S92 1000Base-LX dual port, single mode fiber adapter
           o   SK-9E21D 1000Base-T single port, copper adapter

     The SysKonnect based adapters consist of two main components: the XaQti
     Corp. XMAC II Gigabit MAC (sk) and the SysKonnect GEnesis controller
     ASIC (skc).  The XMAC provides the Gigabit MAC and PHY support while
     the GEnesis provides an interface to the PCI bus, DMA support, packet
     buffering and arbitration.  The GEnesis can control up to two XMACs
     simultaneously, allowing dual-port NIC configurations.  The Marvell
     based adapters are a single integrated circuit, but are still presented
     as a separate MAC (sk) and controller ASIC (skc).  At this time, there
     are no dual-port Marvell based NICs.

     The sk driver configures dual port SysKonnect adapters such that each
     XMAC is treated as a separate logical network interface.  Both ports
     can operate independently of each other and can be connected to
     separate networks.  The SysKonnect driver software currently only uses
     the second port on dual port adapters for failover purposes: if the
     link on the primary port fails, the SysKonnect driver will
     automatically switch traffic onto the second port.

     The XaQti XMAC II supports full and half duplex operation with
     autonegotiation.  The XMAC also supports unlimited frame sizes.
     Support for jumbo frames is provided via the interface MTU setting.
     Selecting an MTU larger than 1500 bytes with the ifconfig(8) utility
     configures the adapter to receive and transmit jumbo frames.  Using
     jumbo frames can greatly improve performance for certain tasks, such as
     file transfers and data streaming.  Hardware TCP/IP checksum offloading
     for IPv4 is supported.

     The following media types and options (as given to ifconfig(8)) are
     supported:

     media autoselect
             Enable autoselection of the media type and options.  The user
             can manually override the autoselected mode.

     media 1000baseSX mediaopt full-duplex
             Set 1000Mbps (Gigabit Ethernet) operation on fiber and force
             full-duplex mode.

     media 1000baseSX mediaopt half-duplex
             Set 1000Mbps (Gigabit Ethernet) operation on fiber and force
             half-duplex mode.

     media 1000baseT mediaopt full-duplex
             Set 1000Mbps (Gigabit Ethernet) operation and force full-duplex
             mode.

     For more information on configuring this device, see ifconfig(8).  To
     view a list of media types and options supported by the card, try
     ifconfig -m <device>.  For example, ifconfig -m sk0.

DIAGNOSTICS
     sk%d: couldn't map memory
             A fatal initialization error has occurred.

     sk%d: couldn't map ports
             A fatal initialization error has occurred.

     sk%d: couldn't map interrupt
             A fatal initialization error has occurred.

     sk%d: failed to enable memory mapping!
             The driver failed to initialize PCI shared memory mapping.
             This might happen if the card is not in a bus-master slot.

     sk%d: no memory for jumbo buffers!
             The driver failed to allocate memory for jumbo frames during
             initialization.

     sk%d: watchdog timeout
             The device has stopped responding to the network, or there is a
             problem with the network connection (cable).

SEE ALSO
     ifmedia(4), intro(4), netintro(4), pci(4), ifconfig(8)

     XaQti XMAC II datasheet, http://www.xaqti.com.

     SysKonnect GEnesis programming manual, http://www.syskonnect.com.

HISTORY
     The sk device driver first appeared in FreeBSD 3.0.  OpenBSD support
     was added in OpenBSD 2.6.  NetBSD support was added in NetBSD 2.0.  The
     msk driver first appeared in OpenBSD 4.0, and was ported to NetBSD 4.0.

AUTHORS
     The sk driver was written by Bill Paul <wpaul@ctr.columbia.edu>.
     Support for the Marvell Yukon-2 was added by Mark Kettenis
     <kettenis@openbsd.org>.

BUGS
     This driver is experimental.  Support for checksum offload is
     unimplemented.  Performance with at least some Marvell-based adapters
     is poor, especially on loaded PCI buses or when the adapters are behind
     PCI-PCI bridges.  It is believed that this is because the Marvell parts
     have significantly less buffering than the original SysKonnect cards
     had.

BSD                            September 9, 2006                           BSD
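A quick usage sketch of the media and jumbo-frame options documented above; the device name sk0 and the 9000-byte MTU are assumptions for illustration, not values taken from the manual:

```shell
# List the media types and options the card reports, as the manual suggests:
ifconfig -m sk0

# Force gigabit full-duplex operation on a copper adapter:
ifconfig sk0 media 1000baseT mediaopt full-duplex

# Enable jumbo frames by raising the MTU above 1500 bytes
# (9000 is a common choice; both ends of the link must agree):
ifconfig sk0 mtu 9000 up
```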