Extremely low throughput between AIX 7.2 and RHEL Maipo
Post by ubu389, 02-25-2020, 12:11 PM
Hi, thank you for your kind reply.

Here is my output, as requested:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 42:01:0a:99:c8:ad brd ff:ff:ff:ff:ff:ff

Actually, I don't have a VPN but a Partner Interconnect; I don't know whether this changes anything.
What I have been doing these past few days is working on the AIX side: basically I focused on TCP performance and the congestion window, modifying some of the parameters listed by "no -a" (let me know if you need the output). After that, I ran some scp transfers between AIX and Google GCP and got a higher throughput (around 240 Mbit/s). Unfortunately, in some of my attempts the throughput drops very quickly, especially when I send larger files (around 1 or 2 GB), and I end up back at my usual 16 Mbit/s. In other attempts, instead, the connection speed keeps increasing (up to 400 Mbit/s) or stays stable.
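
To give an idea of what I changed, this is roughly the kind of thing I have been doing with no on the AIX side (the tunables and values below are only illustrative examples, not the exact settings I ended up with):

Code:
# list all network tunables (this is the "no -a" output I can post if needed)
no -a

# show the current TCP socket buffer sizes
no -o tcp_sendspace
no -o tcp_recvspace

# illustrative example: raise the send/receive buffers and enable RFC 1323 window scaling
# (-p applies the change now and also keeps it across reboots)
no -p -o tcp_sendspace=1048576
no -p -o tcp_recvspace=1048576
no -p -o rfc1323=1

If I remember correctly, interface-specific network options (ISNO) on the adapter can override these global values, so I kept an eye on those as well.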

Concerning the traceroute, I tried a traceroute toward the same Google GCP machine from several on-premises Linux hosts and didn't get any "fragmentation required" message. On the other hand, I always get the "fragmentation required" message when I traceroute from an AIX host. I'm focusing on this because I suspect it is correlated with the low-throughput problem.
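
To see where the path MTU actually drops, I have been probing from the Linux side with something like this (the hostname is just a placeholder, and 1432 is simply 1460 minus the 28 bytes of IP + ICMP headers, since the eth0 output above reports mtu 1460):

Code:
# show the discovered path MTU toward the GCP VM (placeholder hostname)
tracepath my-gcp-vm.example.com

# send non-fragmentable probes: -M do sets the DF bit, -s is the ICMP payload size
# 1432 = 1460 - 20 (IP header) - 8 (ICMP header); raise it until the probes start
# failing with a fragmentation-needed / "message too long" error
ping -M do -s 1432 -c 3 my-gcp-vm.example.com

The idea is to compare what an on-premises Linux host sees with what the AIX machines see, since AIX uses the standard 1500-byte Ethernet MTU by default while the GCP interface above is at 1460.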

Thank you all for your time!
 
