IP Networking: bonding, lacp and link aggregation
Post 302187691 by ramen_noodle, 04-21-2008, 05:43 PM
If you have a source DTE with four 1 Gb/s interfaces trunked into a duplex-capable 10 Gb/s switch, then you should easily attain your desired throughput. Whether the end host can actually receive 320 MB/s depends on its configuration. Note that LACP is far from perfect even before you factor in the latency added by NFS and/or VM translation. Also, your back end's potential throughput is higher than that of a lot of local disk buses.
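
As a rough sketch only (not from the original post; interface names, the address, and the layer3+4 hash policy are my assumptions), an 802.3ad/LACP bond of four 1 Gb/s NICs on a Linux host of that era could look something like this:

# /etc/modprobe.d/bonding.conf -- load the bonding driver in LACP mode (mode 4)
options bonding mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4

# bring up the bond and enslave the four 1 Gb/s interfaces (names illustrative)
modprobe bonding
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2 eth3

# verify the negotiated LACP state and per-slave counters
cat /proc/net/bonding/bond0

Even then, a single TCP flow hashes onto one physical slave, so per-flow throughput stays at roughly 1 Gb/s; the aggregate only shows up across many concurrent flows.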
 

10 More Discussions You Might Find Interesting

1. Solaris

Link Aggregation

Hi there, I have a requirement to provide failover to our customer boxes in case of interface/switch failure. I have been looking at Solaris Link Aggregation with LACP and I wanted to ask a question. I've seen multiple websites that say the following. Does this also mean that if the... (2 Replies)
Discussion started by: hcclnoodles

2. UNIX for Advanced & Expert Users

Link Aggregation and LACP

Hi there, I have a requirement to provide failover to our customer boxes in case of interface/switch failure. I have been looking at Solaris Link Aggregation with LACP and I wanted to ask a question. I've seen multiple websites that say the following. Does this also mean that if the... (1 Reply)
Discussion started by: hcclnoodles

3. IP Networking

LACP aggregation on separates switches

Hello, I'm working on an LACP architecture. I would like to know if it's possible to aggregate two links on two separate switches. Here is an example of what I want: aggregation of link1 and link2 to obtain a logical 2 Gbit/s link, plus redundancy, so that if one of them is down the traffic goes through the... (1 Reply)
Discussion started by: jbemonet

4. IP Networking

bonding without switch link aggregation

I have some Linux machines on which I am trying to increase single-connection throughput. They connect to a switch with two 1 GbE lines and the switch does not have link aggregation support for these machines. I have tried bonding with balance-rr and balance-alb, but the machines can only... (4 Replies)
Discussion started by: Eruditass
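
For the no-switch-support case above, a hedged sketch (interface names and address are assumptions): balance-alb needs no switch-side configuration, but it balances per destination, so a single connection is still limited to one NIC:

# /etc/modprobe.d/bonding.conf -- adaptive load balancing, no switch-side LAG required
options bonding mode=balance-alb miimon=100

modprobe bonding
ifconfig bond0 10.0.0.5 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1

# confirm the active mode and slave states
cat /proc/net/bonding/bond0

balance-rr is the only mode that stripes one connection across slaves, but without a static trunk on the switch side it tends to cause MAC flapping, and TCP reordering eats much of the gain.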

5. Solaris

Link aggregation

Me again :) I'm trying to find a page describing the L2, L3 and L4 modes of dladm. It's nice to read "hashed by IP header", but how should I use that? On the file server it's OK to have the six interfaces serving six clients, each on its own. But an rsync connection via switch between two... (8 Replies)
Discussion started by: PatrickBaer
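
As an illustration of the dladm policies asked about above (Solaris 11-style syntax; link and aggregation names are assumptions, and earlier releases use -d device arguments with a numeric key instead):

# hash outgoing traffic on IP addresses plus transport ports (L3 and L4 headers)
dladm create-aggr -P L3,L4 -l net0 -l net1 aggr0

# switch an existing aggregation to MAC-based (L2) hashing
dladm modify-aggr -P L2 aggr0

# inspect the aggregation and its ports
dladm show-aggr aggr0

The policy only chooses which headers feed the hash; a single flow (one rsync between two hosts) always maps to one port, so it never exceeds one link's speed.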

6. IP Networking

Interface bonding / Link aggregation (Multiple)

Hello, I've been using mode 4 with four slaves; however, looking at ifconfig showed that the traffic was not balanced correctly between the interfaces: the outgoing traffic has been a lot higher on the last slave. Example: eth0 RX 123.2 GiB TX 22.5 GiB, eth1 RX 84.8 GiB TX 8.3 GiB, eth2... (3 Replies)
Discussion started by: TehOne
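
For the imbalance described above, one common culprit is the default layer2 transmit hash, which can map many flows onto the same slave. A hedged sketch of how to check and adjust it (bond name assumed; some kernels only accept the change while the bond is down):

# show the transmit hash policy currently in use
grep -i "hash policy" /proc/net/bonding/bond0

# hash on IP addresses and ports instead, for a better spread across slaves
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

# or make it persistent via the module options:
# options bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4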

7. Red Hat

Bonding a Bond with LACP

Does anyone know if it's possible to bond two bonds together? My situation is that I have two older Cisco switches that cannot carry an LACP (bonding mode 4) aggregate between them, but separate aggregates can be set up on the switches themselves. In order to have redundancy across two switches I would... (0 Replies)
Discussion started by: christr

8. Solaris

Link Aggregation without LACP

Hi, I'm not from the Solaris world and some of these things are new to me. Can someone tell me if it is possible to configure link aggregation without using LACP? I am told EtherChannel was set up without LACP. (3 Replies)
Discussion started by: techy1
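
Yes, this is possible: dladm creates a static trunk unless LACP is explicitly enabled. A sketch in Solaris 11 syntax (link and aggregation names are assumptions; earlier releases use -d device arguments and a numeric key instead):

# static (non-LACP) aggregation; the LACP mode defaults to "off" anyway
dladm create-aggr -L off -l net0 -l net1 aggr0

# show the aggregation; the switch side then needs a matching static trunk (EtherChannel mode "on")
dladm show-aggr aggr0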

9. IP Networking

Link Aggregation

Hi, I have three internet links and I want to put one Linux box in front of a firewall so that these three links are spread across the firewall for load balancing and failover, without blocking any port or protocol, and with only the firewall reaching the internet. What approach can I use for this? Are there any scripts or services that do that... (0 Replies)
Discussion started by: mnnn

10. UNIX for Advanced & Expert Users

Bonding IEEE 802.3ad Dynamic link aggregation : Bond showing less than desired throughput

Hi All, I have done an IEEE 802.3ad dynamic link aggregation bond configuration named bond0, which has 4 slaves (each 25 Gb/s) in it, on CentOS 6.8. The issue I am facing is that the bonding throughput is only 50 Gb/s, not 100 Gb/s. Below are the configuration files: DEVICE=bond0 IPADDR=xx.xx.xx.xx... (1 Reply)
Discussion started by: omkar.jadhav
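
Beyond the truncated config above, a typical CentOS 6 mode-4 setup looks roughly like this sketch (all values other than DEVICE=bond0 and the xx.xx placeholder are illustrative assumptions):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=xx.xx.xx.xx
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
# each slave's ifcfg-ethN would carry MASTER=bond0 and SLAVE=yes (not shown)

# check how many slaves actually joined the active LACP aggregator
grep -i aggregator /proc/net/bonding/bond0

Seeing exactly half the expected figure often means only two of the four slaves joined the active aggregator (a switch-side limit or LACP negotiation issue); in any case, a single flow is hashed onto one 25 Gb/s slave, so the full aggregate only appears across many parallel streams.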