Full Discussion: How to improve throughput?
Post 302471547 by jim mcnamara, Saturday 13 November 2010, 02:20:19 PM
Ah. That means homework. We have a homework forum. Please set up your post over there... When you do that you will see why we have you do that. Be sure to mention TCP performance in your title.
 

9 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

How can I test my tape throughput in Mb/sec?

Is there a tool or command-line program I can use to test my tape throughput in Mb/sec? Thank you. (2 Replies)
Discussion started by: progressdll
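One rough way to take such a measurement, assuming a tape device at /dev/rmt/0 (the device path, block size, and count below are placeholders):

       # write 1 GB of data to the tape and time it;
       # throughput in MB/sec is roughly 1024 / elapsed seconds
       time dd if=/dev/zero of=/dev/rmt/0 bs=1024k count=1024

Because /dev/zero is highly compressible, drives with hardware compression will report inflated figures; repeating the test with a file of real data gives a more realistic number.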

2. High Performance Computing

IBM Scheduler for High Throughput Computing on IBM Blue Gene P

A lightweight scheduler that supports high-throughput computing (HTC) applications on Blue Gene/P. (NEW: 06/12/2008 in grid) More... (0 Replies)
Discussion started by: Linux Bot

3. UNIX for Advanced & Expert Users

Tool to monitor throughput

I was wondering if there is a tool or program I could run to measure throughput on our CentOS 4.x server. Our current dedicated host provider is charging us by how much throughput we are using, and I just want to see if their numbers add up to whatever I get using a throughput tool of some kind. ... (6 Replies)
Discussion started by: mcraul
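A minimal sketch of one way to cross-check the provider's numbers, assuming a Linux kernel that exposes interface counters under /sys/class/net (the interface name eth0 and the 10-second interval are placeholders):

       IF=eth0
       RX1=$(cat /sys/class/net/$IF/statistics/rx_bytes)
       TX1=$(cat /sys/class/net/$IF/statistics/tx_bytes)
       sleep 10
       RX2=$(cat /sys/class/net/$IF/statistics/rx_bytes)
       TX2=$(cat /sys/class/net/$IF/statistics/tx_bytes)
       # average byte rate over the sampling window
       echo "in:  $(( (RX2 - RX1) / 10 )) bytes/sec"
       echo "out: $(( (TX2 - TX1) / 10 )) bytes/sec"

Sampling over a longer window, or logging the counters periodically from cron, gives totals that can be compared against the provider's billing figures.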

4. IP Networking

Gigabit Link throughput

As a rule of thumb for calculations, what figure would you use in Mbytes/sec? I know the answer varies greatly with the topology of the network, but I wonder what network engineers use as a rough rule of thumb? Many thanks. (1 Reply)
Discussion started by: debd

5. Solaris

Network writes constantly spiking in throughput

Hey guys, first post... and I'm not exactly a Solaris guru, but here goes. I've set up a Solaris 10 box with a raidz2 set of 6 disks... I have also set up Samba with open shares for some CIFS access... now my issue is that when I transfer large files to it the network performance constantly... (8 Replies)
Discussion started by: silicoon

6. Solaris

Throughput problems with Solaris aggregation

Hello gurus, I have the following configuration on the server side:
# dladm show-aggr
key: 33 (0x0021)    policy: L4    address: 0:14:4f:6c:11:8 (auto)
        device    address            speed        duplex  link    state
        nxge0     0:14:4f:6c:11:8    1000  Mbps   ... (3 Replies)
Discussion started by: FERCA

7. IP Networking

Issue with ns2 - no throughput data

Hello, first-time poster here hoping to get some help with ns2. I've recently started using ns2 (first-time user) but I'm having difficulty getting the results I'm after. I am trying to set up a network with wireless nodes (5-15 nodes) and then use xgraph to display a timing diagram,... (0 Replies)
Discussion started by: UnicksMan

8. IP Networking

OID for Bandwidth and Throughput Measurement

Hey guys, does anybody know which OIDs of Net-SNMP are used to collect the throughput and bandwidth usage of a machine? I got these OIDs:
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOutOctets
.1.3.6.1.2.1.2.2.1.16
... (1 Reply)
Discussion started by: franzramadhan
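For illustration, those counters can be read with the Net-SNMP command-line tools; the host name, community string, and interface index (.2) below are assumptions, not values from the post:

       # read the output and input octet counters for interface index 2
       snmpget -v 2c -c public router.example.com \
               .1.3.6.1.2.1.2.2.1.16.2 \
               .1.3.6.1.2.1.2.2.1.10.2

       # throughput in bits/sec = (difference between two readings * 8)
       #                          / seconds between the readings

Note that ifInOctets/ifOutOctets are 32-bit counters that wrap quickly on fast links; the 64-bit ifHCInOctets/ifHCOutOctets variants in IF-MIB are preferable where available.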

9. UNIX for Advanced & Expert Users

Bonding IEEE 802.3ad dynamic link aggregation: bond showing less than desired throughput

Hi all, I have configured an IEEE 802.3ad dynamic link aggregation bond named bond0, which has 4 slaves (each 25GB/s), on CentOS 6.8. The issue I am facing is that the bonding throughput is only 50GB/s, not 100GB/s. Below are the configuration files:
DEVICE=bond0
IPADDR=xx.xx.xx.xx... (1 Reply)
Discussion started by: omkar.jadhav
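For context, a typical 802.3ad bond definition on CentOS 6 looks roughly like the sketch below; the option values shown are common illustrative defaults, not taken from the original post, and the per-slave ifcfg files are omitted:

       # /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative)
       DEVICE=bond0
       IPADDR=xx.xx.xx.xx
       BOOTPROTO=none
       ONBOOT=yes
       BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"

Keep in mind that 802.3ad hashes each flow onto a single slave, so any one stream is limited to the speed of one link; only the aggregate of many concurrent flows can approach the full bonded capacity.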
IPERF(1)                          User Manuals                          IPERF(1)

NAME
       iperf - perform network throughput tests

SYNOPSIS
       iperf -s [ options ]
       iperf -c server [ options ]
       iperf -u -s [ options ]
       iperf -u -c server [ options ]

DESCRIPTION
       iperf is a tool for performing network throughput measurements. It can
       test either TCP or UDP throughput. To perform an iperf test the user
       must establish both a server (to discard traffic) and a client (to
       generate traffic).
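A minimal example of such a test, assuming two hosts that can reach each other (the host name, port, and duration below are placeholders, not part of the manual):

       # on the receiving host: run a server that discards incoming traffic
       iperf -s -p 5001

       # on the sending host: generate TCP traffic toward the server for
       # 30 seconds, reporting interim results every 5 seconds
       iperf -c server.example.com -p 5001 -t 30 -i 5

The client prints the measured throughput when the transfer finishes; the options used here are described below.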
GENERAL OPTIONS
       -f, --format [kmKM]
              format to report: Kbits, Mbits, KBytes, MBytes
       -h, --help
              print a help synopsis
       -i, --interval n
              pause n seconds between periodic bandwidth reports
       -l, --len n[KM]
              set length of read/write buffer to n (default 8 KB)
       -m, --print_mss
              print TCP maximum segment size (MTU - TCP/IP header)
       -o, --output <filename>
              output the report or error message to this specified file
       -p, --port n
              set server port to listen on/connect to n (default 5001)
       -u, --udp
              use UDP rather than TCP
       -w, --window n[KM]
              TCP window size (socket buffer size)
       -B, --bind <host>
              bind to <host>, an interface or multicast address
       -C, --compatibility
              for use with older versions; does not send extra msgs
       -M, --mss n
              set TCP maximum segment size (MTU - 40 bytes)
       -N, --nodelay
              set TCP no delay, disabling Nagle's Algorithm
       -v, --version
              print version information and quit
       -V, --IPv6Version
              set the domain to IPv6
       -x, --reportexclude [CDMSV]
              exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
       -y, --reportstyle C|c
              if set to C or c, report results as CSV (comma separated values)
SERVER SPECIFIC OPTIONS
       -s, --server
              run in server mode
       -U, --single_udp
              run in single threaded UDP mode
       -D, --daemon
              run the server as a daemon
CLIENT SPECIFIC OPTIONS
       -b, --bandwidth n[KM]
              set target bandwidth to n bits/sec (default 1 Mbit/sec). This
              setting requires UDP (-u).
       -c, --client <host>
              run in client mode, connecting to <host>
       -d, --dualtest
              do a bidirectional test simultaneously
       -n, --num n[KM]
              number of bytes to transmit (instead of -t)
       -r, --tradeoff
              do a bidirectional test individually
       -t, --time n
              time in seconds to transmit for (default 10 secs)
       -F, --fileinput <name>
              input the data to be transmitted from a file
       -I, --stdin
              input the data to be transmitted from stdin
       -L, --listenport n
              port to receive bidirectional tests back on
       -P, --parallel n
              number of parallel client threads to run
       -T, --ttl n
              time-to-live, for multicast (default 1)
       -Z, --linux-congestion <algo>
              set TCP congestion control algorithm (Linux only)
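To illustrate two of the client options above (the host name and values are again placeholders): a UDP test at a fixed 10 Mbit/sec target rate, and a TCP test using four parallel streams, could be started as follows:

       # UDP: the server must also be started with -u
       iperf -u -s
       iperf -u -c server.example.com -b 10M

       # TCP with 4 parallel client threads, useful when a single stream
       # cannot fill the link on its own
       iperf -c server.example.com -P 4

These invocations are only sketches; the option list above gives the exact semantics of -b and -P.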
ENVIRONMENT
       TCP_WINDOW_SIZE
              Controls the size of TCP buffers.
DIAGNOSTICS
       This section needs to be filled in.
BUGS
       Exit statuses are inconsistent. The threading implementation is rather
       heinous.
AUTHORS
       Iperf was originally written by Mark Gates and Alex Warshavsky. Man page
       and maintenance by Jon Dugan <jdugan at x1024 dot net>. Other
       contributions from Ajay Tirumala, Jim Ferguson, Feng Qin, Kevin Gibbs,
       John Estabrook <jestabro at ncsa.uiuc.edu>, Andrew Gallatin
       <gallatin at gmail.com>, Stephen Hemminger
       <shemminger at linux-foundation.org>
SEE ALSO
       http://iperf.sourceforge.net/

NLANR/DAST                         April 2008                          IPERF(1)