Full Discussion: VIOS Configuration Question
AIX, posted by zaxxon on Friday 12th of August 2016, 07:53:32 AM
Hi gull04,

you're absolutely welcome. It also makes me revisit these very interesting things again :)

I don't want to be a nitpicker, but each VIOS is its own OS/LPAR. VIOS #2 will start counting devices from ent0 again.
You wrote you have 1 HMC - quite risky if it fails. The LPARs and VIOS will not complain and will keep running while the HMC is down, but you cannot do anything on the Managed Systems during that time.
Also remember that each HMC runs a dhcpd that provides internal IP addresses for the service processors (SPs). If you have more than one HMC, their DHCP networks must not see each other. The SPs are attached via a NIC per Managed System, often simply over a plain network hub.
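Coming back to the device numbering: you can verify it on each VIOS separately. A minimal check from the padmin restricted shell (output will of course differ per system):

```shell
# Run on each VIOS from the padmin restricted shell.
# Each VIOS keeps its own device tree, so both VIOS #1 and VIOS #2
# will list their physical Ethernet adapters starting at ent0.
lsdev -type adapter | grep -i ent
```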

I do not completely understand the layout, tbh.

Do the same colors mean that they are in the same EtherChannel? For instance the yellow ones:

VIOS #1: ent0, ent1
VIOS #2: ent16

Which of the 3 is the main adapter, and which is the backup? Are 2 of them supposed to form an aggregation as the main adapter so you get higher bandwidth?

In terms of hardware failure, it would not make sense to use 1 port as main and 1 as backup when they are on the same physical adapter.
This might work if only 1 port fails, but usually the whole adapter says goodbye and then there would be nothing left.

Not sure if this is intended, but you cannot form an EtherChannel from adapters of VIOS #1 and VIOS #2. You can only create an EtherChannel with adapters from the same OS/LPAR/VIOS.
Later, on the upper layers, you can hand traffic over to a vNIC on the other VIOS with the HASEA (highly available Shared Ethernet Adapter).

For clarity - an EtherChannel is good for 2 things:
  • Having a backup NIC available when one of the physical NICs fails.
  • Getting higher throughput by aggregating physical adapters as the "main" adapter in the EtherChannel. The backup adapter can only ever be a single physical NIC.
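As a sketch of the second case: assuming ent0 and ent1 sit on different physical adapters and ent2 is the single backup NIC (these device names are placeholders, not your actual layout), the EtherChannel would be created on one VIOS roughly like this:

```shell
# padmin shell on VIOS #1: aggregate ent0+ent1 as the main adapter
# (802.3ad/LACP - the switch ports must be configured accordingly)
# and use ent2 as the single backup adapter.
mkvdev -lnagg ent0,ent1 -attr mode=8023ad backup_adapter=ent2
```

For the backup adapter to be useful against hardware failure, ent2 should of course sit on a different physical card than ent0/ent1.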

If the whole EtherChannel fails, i.e. the main and backup physical adapters in it, or the whole VIOS goes down - then the HASEA comes into action. It hands the traffic over to VIOS #2, which is hopefully still up and running :)
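A hedged sketch of how such an HASEA pair is typically set up. The device names ent3/ent4/ent5 are assumptions for illustration; the trunk priorities of the virtual adapters are set in the partition profiles on the HMC:

```shell
# padmin shell on VIOS #1: build the SEA on top of the EtherChannel (ent3),
# bridging the virtual trunk adapter ent4 (PVID 1). ha_mode=auto makes the
# SEA fail over automatically to its partner on VIOS #2; ent5 is the
# control channel virtual adapter the two SEAs use to talk to each other.
mkvdev -sea ent3 -vadapter ent4 -default ent4 -defaultid 1 \
       -attr ha_mode=auto ctl_chan=ent5
# Repeat on VIOS #2 with its own device numbers; the virtual trunk adapter
# there gets the lower trunk priority (e.g. 2) in its profile.
```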

If you describe the requirements regarding hardware failure, possible aggregation of adapters for higher throughput, and how many networks are needed, we can maybe assist with the design.
You wrote you have 4 x 4-port NICs per Managed System - the other 2 are not used in the plan yet - maybe we can use these to make the setup more failsafe.

Cheers
zaxxon

Last edited by zaxxon; 08-12-2016 at 09:45 AM..
 
