10-01-2015
Hi DukeNuke2,
I have read that documentation, but it doesn't say how the load spreading is done.
Will the multiple interfaces use the same MAC address / IP address?
How does the system choose the interfaces for load sharing?
Will a single file be split across 2 interfaces when sending?
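(For anyone finding this later: Oracle's IPMP docs describe outbound load spreading as per-destination. Each interface keeps its own MAC address, the group's data IP addresses are spread across the active interfaces, and all traffic to a given destination stays on one interface, so a single file transfer is not split across two links. A toy sketch of that per-destination policy, not Solaris code, with made-up names:

```python
import zlib

# Toy model of IPMP-style per-destination outbound spreading.
# Real IPMP does this in the kernel; this only shows the policy shape.
interfaces = ["net0", "net1"]  # hypothetical IPMP group members

def pick_interface(dst_ip: str) -> str:
    # Stable hash: the same destination always maps to the same link,
    # so one file transfer never spans two NICs.
    return interfaces[zlib.crc32(dst_ip.encode()) % len(interfaces)]
```

Traffic to many different destinations spreads across both links; traffic to a single destination does not.)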
Regards,
Alan
10 More Discussions You Might Find Interesting
1. Solaris
Hello All,
I work for a health care company at a local trauma hospital. I maintain a Picture Archiving and Communication System (PACS). Basically, any medical images (X-ray, CT, MRI, mammo, etc.) are stored digitally on the servers for viewing and dictation from diagnostic stations. I took over... (10 Replies)
Discussion started by: mainegeek
2. Solaris
Does Veritas Cluster work with IPMP on Solaris 10?
If anyone has set it up do you have a doc or tips?
I have heard several different statements, ranging from "it does not work at all" to "Yes, it works!" Great, how?
* Test and Base IPs????
* configure the MultiNICB agent ?
I can give details... (1 Reply)
Discussion started by: dfezz1
3. Solaris
Hi friends ,
can anyone provide the complete steps to configure IPMP in Solaris 9 or 10, given that I have two NIC cards?
regards
jagan (4 Replies)
Discussion started by: jaganblore
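A minimal link-based IPMP sketch for Solaris 10 (the interface names ce0/ce1 and the address are made up; a probe-based setup with test addresses is more involved):

```
# /etc/hostname.ce0  (hypothetical primary interface, carries the data address)
192.168.10.5 netmask + broadcast + group ipmp0 up
# /etc/hostname.ce1  (second group member, no data address)
group ipmp0 up
```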
4. Shell Programming and Scripting
Hi All,
I checked the old posts here. But could not find a solution for my question.
I have a file created by one application on HP-UX. My client wants it converted into an ANSI PC version. I have heard about unixtodos and have worked with it, but I am totally unaware of this ANSI... (0 Replies)
Discussion started by: Tuxidow
5. Solaris
Can anyone please explain the concept behind IPMP in Solaris clustering? A basic explanation would be really appreciated...
Thanks in Advance
vks (2 Replies)
Discussion started by: vks47
6. Solaris
Hi,
This may have already been raised previously, so sorry for the duplication. What I want to achieve is to have a physical server using link-based IPMP set up in the global zone (no problem doing that) and then create a zone set as Shared-IP, so when the server's NIC has an issue the IP will... (0 Replies)
Discussion started by: giles.cardew
7. Solaris
All.
I am trying to create a Solaris 10 branded zone on a Sol 11.1 T5. The global zone is using IPMP... so aggregating is out of the question. Has anyone successfully created a branded zone with IPMP? If so, can you please show me the steps you took to get this to run.
Thanks (4 Replies)
Discussion started by: aeroforce
8. Solaris
hi all,
I am starting with Solaris 11 and I am disappointed by the changes in IP management.
I want to set up IPMP over two aggregates, but I can't find any doc and I am lost with the new commands.
switch1
net0 aggregate1 |
net1 aggregate1 |-----|
|... (1 Reply)
Discussion started by: sylvain
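On Solaris 11 the rough shape, assuming made-up link names and address, is to build the two aggregates with dladm and then put an IPMP group on top with ipadm:

```
dladm create-aggr -l net0 -l net1 aggr1
dladm create-aggr -l net2 -l net3 aggr2
ipadm create-ip aggr1
ipadm create-ip aggr2
ipadm create-ipmp -i aggr1 -i aggr2 ipmp0
ipadm create-addr -T static -a 192.168.10.5/24 ipmp0/v4
```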
9. Solaris
Hi all,
Just a few questions ->
Is an "OFFLINE" interface going back to "ONLINE" considered a failback by IPMP?
I have "FAILBACK=no" in my /etc/default/mpathd; however, when I do the following
(igb0 and igb7 are in the same link-based IPMP group)
q1) why does "if_mpadm -r igb7" cause... (0 Replies)
Discussion started by: javanoob
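For context, the in.mpathd tunables live in /etc/default/mpathd; a fragment matching the poster's setup (the values other than FAILBACK are the defaults):

```
FAILURE_DETECTION_TIME=10000
FAILBACK=no
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
```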
10. Solaris
Hi,
I have Solaris-9 server, V240.
I got an alert that one of the interfaces in the IPMP configuration has failed. Found that two IPs (192.168.120.32 and 192.168.120.35) are not pingable from this server. These two IPs were plumbed on another server, which is decommissioned now. That is the reason... (5 Replies)
Discussion started by: solaris_1977
LEARN ABOUT DEBIAN
if_hme
HME(4) BSD Kernel Interfaces Manual HME(4)
NAME
hme -- Sun Microelectronics STP2002-STQ Ethernet interfaces device driver
SYNOPSIS
To compile this driver into the kernel, place the following lines in your kernel configuration file:
device miibus
device hme
Alternatively, to load the driver as a module at boot time, place the following line in loader.conf(5):
if_hme_load="YES"
DESCRIPTION
The hme driver supports Sun Microelectronics STP2002-STQ ``Happy Meal Ethernet'' Fast Ethernet interfaces.
All controllers supported by the hme driver have TCP checksum offload capability for both receive and transmit, support for the reception and
transmission of extended frames for vlan(4) and a 128-bit multicast hash filter.
HARDWARE
The hme driver supports the on-board Ethernet interfaces of many Sun UltraSPARC workstation and server models.
Cards supported by the hme driver include:
o Sun PCI SunSwift Adapter (``SUNW,hme'')
o Sun SBus SunSwift Adapter (``hme'' and ``SUNW,hme'')
o Sun PCI Sun100BaseT Adapter 2.0 (``SUNW,hme'')
o Sun SBus Sun100BaseT 2.0 (``SUNW,hme'')
o Sun PCI Quad FastEthernet Controller (``SUNW,qfe'')
o Sun SBus Quad FastEthernet Controller (``SUNW,qfe'')
NOTES
On sparc64 the hme driver respects the local-mac-address? system configuration variable which can be set in the Open Firmware boot monitor
using the setenv command or by eeprom(8). If set to ``false'' (the default), the hme driver will use the system's default MAC address for
all of its devices. If set to ``true'', the unique MAC address of each interface is used if present rather than the system's default MAC
address.
Supported interfaces having their own MAC address include on-board versions on boards equipped with more than one Ethernet interface and all
add-on cards except the single-port SBus versions.
SEE ALSO
altq(4), intro(4), miibus(4), netintro(4), vlan(4), eeprom(8), ifconfig(8)
Sun Microelectronics, STP2002QFP Fast Ethernet, Parallel Port, SCSI (FEPS) User's Guide, April 1996,
http://mediacast.sun.com/users/Barton808/media/STP2002QFP-FEPs_UG.pdf.
HISTORY
The hme driver first appeared in NetBSD 1.5. The first FreeBSD version to include it was FreeBSD 5.0.
AUTHORS
The hme driver was written by Paul Kranenburg <pk@NetBSD.org>.
BSD
June 14, 2009 BSD