Iperf: Iperf3 reports low network throughput compared to iperf on a 40Gb network

Created on 4 May 2016  ·  12 Comments  ·  Source: esnet/iperf

Hi

I am testing a 40G network, and from this testing I am seeing that iperf3 v3.1.2 reports lower network throughput than the older iperf 2.0.5.

Here are the benchmark results with iperf3.

iperf3 -A 8,8 -c 192.168.110.135 -Z -P 4

Connecting to host 192.168.110.135, port 5201
[ 4] local 192.168.110.136 port 57275 connected to 192.168.110.135 port 5201
[ 6] local 192.168.110.136 port 57276 connected to 192.168.110.135 port 5201
[ 8] local 192.168.110.136 port 57277 connected to 192.168.110.135 port 5201
[ 10] local 192.168.110.136 port 57278 connected to 192.168.110.135 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 559 MBytes 4.68 Gbits/sec 0 201 KBytes
[ 6] 0.00-1.00 sec 559 MBytes 4.68 Gbits/sec 0 236 KBytes
[ 8] 0.00-1.00 sec 559 MBytes 4.68 Gbits/sec 0 219 KBytes
[ 10] 0.00-1.00 sec 559 MBytes 4.68 Gbits/sec 0 245 KBytes
[SUM] 0.00-1.00 sec 2.18 GBytes 18.7 Gbits/sec 0


[ 4] 1.00-2.00 sec 622 MBytes 5.22 Gbits/sec 0 298 KBytes
[ 6] 1.00-2.00 sec 622 MBytes 5.22 Gbits/sec 0 306 KBytes
[ 8] 1.00-2.00 sec 620 MBytes 5.20 Gbits/sec 0 359 KBytes
[ 10] 1.00-2.00 sec 619 MBytes 5.19 Gbits/sec 0 359 KBytes
[SUM] 1.00-2.00 sec 2.43 GBytes 20.8 Gbits/sec 0


[ 4] 2.00-3.00 sec 635 MBytes 5.33 Gbits/sec 0 298 KBytes
[ 6] 2.00-3.00 sec 635 MBytes 5.33 Gbits/sec 0 324 KBytes
[ 8] 2.00-3.00 sec 635 MBytes 5.33 Gbits/sec 0 359 KBytes
[ 10] 2.00-3.00 sec 635 MBytes 5.33 Gbits/sec 0 359 KBytes
[SUM] 2.00-3.00 sec 2.48 GBytes 21.3 Gbits/sec 0


[ 4] 3.00-4.00 sec 635 MBytes 5.33 Gbits/sec 0 350 KBytes
[ 6] 3.00-4.00 sec 634 MBytes 5.31 Gbits/sec 0 332 KBytes
[ 8] 3.00-4.00 sec 635 MBytes 5.33 Gbits/sec 0 394 KBytes
[ 10] 3.00-4.00 sec 635 MBytes 5.33 Gbits/sec 0 385 KBytes
[SUM] 3.00-4.00 sec 2.48 GBytes 21.3 Gbits/sec 0


[ 4] 4.00-5.00 sec 635 MBytes 5.32 Gbits/sec 0 350 KBytes
[ 6] 4.00-5.00 sec 635 MBytes 5.32 Gbits/sec 0 332 KBytes
[ 8] 4.00-5.00 sec 635 MBytes 5.32 Gbits/sec 0 394 KBytes
[ 10] 4.00-5.00 sec 635 MBytes 5.32 Gbits/sec 0 385 KBytes
[SUM] 4.00-5.00 sec 2.48 GBytes 21.3 Gbits/sec 0


[ 4] 5.00-6.00 sec 632 MBytes 5.31 Gbits/sec 0 350 KBytes
[ 6] 5.00-6.00 sec 632 MBytes 5.31 Gbits/sec 0 332 KBytes
[ 8] 5.00-6.00 sec 632 MBytes 5.31 Gbits/sec 0 394 KBytes
[ 10] 5.00-6.00 sec 632 MBytes 5.31 Gbits/sec 0 385 KBytes
[SUM] 5.00-6.00 sec 2.47 GBytes 21.2 Gbits/sec 0


[ 4] 6.00-7.00 sec 634 MBytes 5.32 Gbits/sec 0 350 KBytes
[ 6] 6.00-7.00 sec 634 MBytes 5.32 Gbits/sec 0 332 KBytes
[ 8] 6.00-7.00 sec 634 MBytes 5.32 Gbits/sec 0 394 KBytes
[ 10] 6.00-7.00 sec 634 MBytes 5.32 Gbits/sec 0 385 KBytes
[SUM] 6.00-7.00 sec 2.48 GBytes 21.3 Gbits/sec 0


[ 4] 7.00-8.00 sec 624 MBytes 5.23 Gbits/sec 0 350 KBytes
[ 6] 7.00-8.00 sec 624 MBytes 5.23 Gbits/sec 0 332 KBytes
[ 8] 7.00-8.00 sec 624 MBytes 5.23 Gbits/sec 0 394 KBytes
[ 10] 7.00-8.00 sec 624 MBytes 5.23 Gbits/sec 0 394 KBytes
[SUM] 7.00-8.00 sec 2.44 GBytes 20.9 Gbits/sec 0


[ 4] 8.00-9.00 sec 635 MBytes 5.32 Gbits/sec 0 350 KBytes
[ 6] 8.00-9.00 sec 635 MBytes 5.32 Gbits/sec 0 332 KBytes
[ 8] 8.00-9.00 sec 635 MBytes 5.32 Gbits/sec 0 394 KBytes
[ 10] 8.00-9.00 sec 635 MBytes 5.32 Gbits/sec 0 394 KBytes
[SUM] 8.00-9.00 sec 2.48 GBytes 21.3 Gbits/sec 0


[ 4] 9.00-10.00 sec 631 MBytes 5.30 Gbits/sec 0 350 KBytes
[ 6] 9.00-10.00 sec 631 MBytes 5.30 Gbits/sec 0 332 KBytes
[ 8] 9.00-10.00 sec 631 MBytes 5.30 Gbits/sec 0 394 KBytes
[ 10] 9.00-10.00 sec 631 MBytes 5.30 Gbits/sec 0 394 KBytes
[SUM] 9.00-10.00 sec 2.47 GBytes 21.2 Gbits/sec 0


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 6.10 GBytes 5.24 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 6.10 GBytes 5.24 Gbits/sec receiver
[ 6] 0.00-10.00 sec 6.09 GBytes 5.24 Gbits/sec 0 sender
[ 6] 0.00-10.00 sec 6.09 GBytes 5.24 Gbits/sec receiver
[ 8] 0.00-10.00 sec 6.09 GBytes 5.23 Gbits/sec 0 sender
[ 8] 0.00-10.00 sec 6.09 GBytes 5.23 Gbits/sec receiver
[ 10] 0.00-10.00 sec 6.09 GBytes 5.23 Gbits/sec 0 sender
[ 10] 0.00-10.00 sec 6.09 GBytes 5.23 Gbits/sec receiver
[SUM] 0.00-10.00 sec 24.4 GBytes 20.9 Gbits/sec 0 sender
[SUM] 0.00-10.00 sec 24.4 GBytes 20.9 Gbits/sec receiver

iperf Done.

Here are the benchmark results with iperf 2.0.5.

iperf -c 192.168.110.135 -P 4


Client connecting to 192.168.110.135, TCP port 5001

TCP window size: 92.6 KByte (default)

[ 4] local 192.168.110.136 port 53855 connected with 192.168.110.135 port 5001
[ 3] local 192.168.110.136 port 53854 connected with 192.168.110.135 port 5001
[ 5] local 192.168.110.136 port 53856 connected with 192.168.110.135 port 5001
[ 6] local 192.168.110.136 port 53857 connected with 192.168.110.135 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 7.99 GBytes 6.86 Gbits/sec
[ 3] 0.0-10.0 sec 10.9 GBytes 9.34 Gbits/sec
[ 5] 0.0-10.0 sec 14.7 GBytes 12.6 Gbits/sec
[ 6] 0.0-10.0 sec 12.1 GBytes 10.4 Gbits/sec
[SUM] 0.0-10.0 sec 45.6 GBytes 39.2 Gbits/sec

So here iperf reports 39.2 Gbits/sec, which is close to the 40 Gbit line rate, but iperf3 reports only 20.9 Gbits/sec.

Am I missing some switch when using iperf3? I have also tried to tune CPU affinity using this URL: https://fasterdata.es.net/host-tuning/40g-tuning/.

Thank You,
Manish

question


All 12 comments

Hi,

Looking at the Ethernet port traffic, it seems iperf3 is not pushing enough traffic to fill the network port. Is there a switch for iperf3 that makes it send enough traffic to fill the port, or am I missing something in how the network test is run with iperf3 compared to iperf?

I'm seeing the same thing here:
ubuntu@ubuntu:~$ iperf3 -c 10.10.10.3 -t 10 -P8 -i0
Connecting to host 10.10.10.3, port 5201
[ 4] local 10.10.10.2 port 46132 connected to 10.10.10.3 port 5201
[ 6] local 10.10.10.2 port 46134 connected to 10.10.10.3 port 5201
[ 8] local 10.10.10.2 port 46136 connected to 10.10.10.3 port 5201
[ 10] local 10.10.10.2 port 46138 connected to 10.10.10.3 port 5201
[ 12] local 10.10.10.2 port 46140 connected to 10.10.10.3 port 5201
[ 14] local 10.10.10.2 port 46142 connected to 10.10.10.3 port 5201
[ 16] local 10.10.10.2 port 46144 connected to 10.10.10.3 port 5201
[ 18] local 10.10.10.2 port 46146 connected to 10.10.10.3 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-10.00 sec 692 MBytes 580 Mbits/sec 548 717 KBytes
[ 6] 0.00-10.00 sec 689 MBytes 578 Mbits/sec 352 629 KBytes
[ 8] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 416 655 KBytes
[ 10] 0.00-10.00 sec 686 MBytes 575 Mbits/sec 428 17.5 KBytes
[ 12] 0.00-10.00 sec 686 MBytes 575 Mbits/sec 433 612 KBytes
[ 14] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 431 533 KBytes
[ 16] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 426 533 KBytes
[ 18] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 422 533 KBytes
[SUM] 0.00-10.00 sec 5.36 GBytes 4.61 Gbits/sec 3456


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 692 MBytes 580 Mbits/sec 548 sender
[ 4] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[ 6] 0.00-10.00 sec 689 MBytes 578 Mbits/sec 352 sender
[ 6] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[ 8] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 416 sender
[ 8] 0.00-10.00 sec 673 MBytes 565 Mbits/sec receiver
[ 10] 0.00-10.00 sec 686 MBytes 575 Mbits/sec 428 sender
[ 10] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[ 12] 0.00-10.00 sec 686 MBytes 575 Mbits/sec 433 sender
[ 12] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[ 14] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 431 sender
[ 14] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[ 16] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 426 sender
[ 16] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[ 18] 0.00-10.00 sec 685 MBytes 575 Mbits/sec 422 sender
[ 18] 0.00-10.00 sec 674 MBytes 565 Mbits/sec receiver
[SUM] 0.00-10.00 sec 5.36 GBytes 4.61 Gbits/sec 3456 sender
[SUM] 0.00-10.00 sec 5.26 GBytes 4.52 Gbits/sec receiver

ubuntu@ubuntu:~$ iperf -c 10.10.10.3 -t 10 -P8

Client connecting to 10.10.10.3, TCP port 5001

TCP window size: 325 KByte (default)

[ 10] local 10.10.10.2 port 44030 connected with 10.10.10.3 port 5001
[ 5] local 10.10.10.2 port 44018 connected with 10.10.10.3 port 5001
[ 6] local 10.10.10.2 port 44020 connected with 10.10.10.3 port 5001
[ 8] local 10.10.10.2 port 44024 connected with 10.10.10.3 port 5001
[ 7] local 10.10.10.2 port 44022 connected with 10.10.10.3 port 5001
[ 9] local 10.10.10.2 port 44026 connected with 10.10.10.3 port 5001
[ 4] local 10.10.10.2 port 44028 connected with 10.10.10.3 port 5001
[ 3] local 10.10.10.2 port 44016 connected with 10.10.10.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 10] 0.0-10.0 sec 4.82 GBytes 4.14 Gbits/sec
[ 5] 0.0-10.0 sec 2.28 GBytes 1.96 Gbits/sec
[ 6] 0.0-10.0 sec 2.47 GBytes 2.12 Gbits/sec
[ 8] 0.0-10.0 sec 5.16 GBytes 4.43 Gbits/sec
[ 7] 0.0-10.0 sec 4.81 GBytes 4.13 Gbits/sec
[ 9] 0.0-10.0 sec 4.92 GBytes 4.23 Gbits/sec
[ 4] 0.0-10.0 sec 2.33 GBytes 2.00 Gbits/sec
[ 3] 0.0-10.0 sec 2.32 GBytes 1.99 Gbits/sec
[SUM] 0.0-10.0 sec 29.1 GBytes 25.0 Gbits/sec

I've tried various things (jumbo frames, changing the window size, more or fewer parallel connections, etc.), but iperf3 never gets higher than about 10 Gb/s, and that's only when it's pinned to a single CPU using -A.

Same here
Connecting to host thunder-mojo-2-2, port 5201
[ 4] local 192.168.128.213 port 43273 connected to 192.168.128.212 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 499 MBytes 4.18 Gbits/sec 0 571 KBytes
[ 4] 1.00-2.00 sec 496 MBytes 4.17 Gbits/sec 0 708 KBytes
[ 4] 2.00-3.00 sec 498 MBytes 4.18 Gbits/sec 0 844 KBytes
[ 4] 3.00-4.00 sec 498 MBytes 4.17 Gbits/sec 0 889 KBytes
[ 4] 4.00-5.00 sec 499 MBytes 4.18 Gbits/sec 0 988 KBytes
[ 4] 5.00-6.00 sec 499 MBytes 4.18 Gbits/sec 0 1.07 MBytes
[ 4] 6.00-7.00 sec 499 MBytes 4.18 Gbits/sec 0 1.07 MBytes
[ 4] 7.00-8.00 sec 498 MBytes 4.18 Gbits/sec 0 1.07 MBytes
[ 4] 8.00-9.00 sec 500 MBytes 4.19 Gbits/sec 0 1.07 MBytes
[ 4] 9.00-10.00 sec 499 MBytes 4.18 Gbits/sec 0 1.18 MBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 4.87 GBytes 4.18 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 4.85 GBytes 4.17 Gbits/sec receiver

FWIW, with iperf2, I can reliably hit about 34-35Gb/s on exactly the same config.

The reason for this performance difference is that iperf3 is single-threaded, so all parallel streams will use a single core. At 40G you will be core-limited.

To test 40G with iperf3, I do the following:

Start 3 servers:
iperf3 -s -p 5101
iperf3 -s -p 5102
iperf3 -s -p 5103

and then run 3 clients, using the "-T" flag to label the output:
iperf3 -c hostname -T s1 -p 5101 &; iperf3 -c hostname -T s2 -p 5102 &; iperf3 -c hostname -T s3 -p 5103 &;
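
Note that the literal "&;" in the one-liner above is not valid bash (the client output later in this thread shows the resulting syntax error); a plain "&" between the commands works. As an illustration only, here is a minimal Python sketch of the same pattern, using the example host and ports from this thread: it launches one labelled iperf3 client per server port and waits for them all to finish.

#!/usr/bin/env python3
# Minimal sketch: launch several iperf3 clients in parallel, one per server port,
# mirroring the labelled shell commands above. Assumes iperf3 servers are already
# listening on ports 5101-5103 of the target host (adjust HOST/PORTS as needed).
import subprocess

HOST = "192.168.110.94"          # example server address from this thread
PORTS = [5101, 5102, 5103]

procs = []
for i, port in enumerate(PORTS, start=1):
    cmd = ["iperf3", "-c", HOST, "-T", f"s{i}", "-p", str(port)]
    # Optionally pin each instance to its own core, e.g. cmd += ["-A", str(i)]
    procs.append(subprocess.Popen(cmd))

for p in procs:                  # wait for all clients to finish
    p.wait()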

We have also updated the Fasterdata website:

https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf-and-iperf3/iperf3-at-speeds-about-10gbps

If you could confirm that solves the issue for you I will close this issue.

Hi

I tried the above suggestions and that also didn't work; each port was able to push only about 3.3 Gbits/sec on the 40Gb network card. Here are the test results:

On the server node:

~]# iperf3 -s -p 5101

Server listening on 5101

Accepted connection from 192.168.110.95, port 45420
[ 5] local 192.168.110.94 port 5101 connected to 192.168.110.95 port 45424
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 375 MBytes 3.14 Gbits/sec
[ 5] 1.00-2.00 sec 392 MBytes 3.29 Gbits/sec
[ 5] 2.00-3.00 sec 397 MBytes 3.33 Gbits/sec
[ 5] 3.00-4.00 sec 396 MBytes 3.32 Gbits/sec
[ 5] 4.00-5.00 sec 393 MBytes 3.30 Gbits/sec
[ 5] 5.00-6.00 sec 394 MBytes 3.31 Gbits/sec
[ 5] 6.00-7.00 sec 400 MBytes 3.35 Gbits/sec
[ 5] 7.00-8.00 sec 399 MBytes 3.35 Gbits/sec
[ 5] 8.00-9.00 sec 398 MBytes 3.34 Gbits/sec
[ 5] 9.00-10.00 sec 397 MBytes 3.33 Gbits/sec
[ 5] 10.00-10.04 sec 15.3 MBytes 3.32 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender

[ 5] 0.00-10.04 sec 3.86 GBytes 3.31 Gbits/sec receiver

Server listening on 5101

~]# iperf3 -s -p 5102

Server listening on 5102

Accepted connection from 192.168.110.95, port 42242
[ 5] local 192.168.110.94 port 5102 connected to 192.168.110.95 port 42245
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 376 MBytes 3.16 Gbits/sec
[ 5] 1.00-2.00 sec 388 MBytes 3.26 Gbits/sec
[ 5] 2.00-3.00 sec 387 MBytes 3.25 Gbits/sec
[ 5] 3.00-4.00 sec 388 MBytes 3.25 Gbits/sec
[ 5] 4.00-5.00 sec 393 MBytes 3.29 Gbits/sec
[ 5] 5.00-6.00 sec 394 MBytes 3.31 Gbits/sec
[ 5] 6.00-7.00 sec 390 MBytes 3.27 Gbits/sec
[ 5] 7.00-8.00 sec 387 MBytes 3.24 Gbits/sec
[ 5] 8.00-9.00 sec 386 MBytes 3.24 Gbits/sec
[ 5] 9.00-10.00 sec 387 MBytes 3.25 Gbits/sec
[ 5] 10.00-10.04 sec 15.0 MBytes 3.26 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender

[ 5] 0.00-10.04 sec 3.80 GBytes 3.25 Gbits/sec receiver

Server listening on 5102

~]# iperf3 -s -p 5103

Server listening on 5103

Accepted connection from 192.168.110.95, port 36757
[ 5] local 192.168.110.94 port 5103 connected to 192.168.110.95 port 36758
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 386 MBytes 3.24 Gbits/sec
[ 5] 1.00-2.00 sec 401 MBytes 3.36 Gbits/sec
[ 5] 2.00-3.00 sec 397 MBytes 3.33 Gbits/sec
[ 5] 3.00-4.00 sec 398 MBytes 3.34 Gbits/sec
[ 5] 4.00-5.00 sec 395 MBytes 3.32 Gbits/sec
[ 5] 5.00-6.00 sec 393 MBytes 3.29 Gbits/sec
[ 5] 6.00-7.00 sec 392 MBytes 3.29 Gbits/sec
[ 5] 7.00-8.00 sec 396 MBytes 3.32 Gbits/sec
[ 5] 8.00-9.00 sec 397 MBytes 3.33 Gbits/sec
[ 5] 9.00-10.00 sec 397 MBytes 3.33 Gbits/sec
[ 5] 10.00-10.04 sec 15.1 MBytes 3.34 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender

[ 5] 0.00-10.04 sec 3.87 GBytes 3.31 Gbits/sec receiver

Server listening on 5103

On the client node:
[root@server35 ~]# iperf3 -c 192.168.110.94 -T s1 -p 5101 &; iperf3 -c 192.168.110.94 -T s2 -p 5102 &; iperf3 -c 192.168.110.94 -T s3 -p 5103 &;
bash: syntax error near unexpected token `;'
[root@server35 ~]# iperf3 -c 192.168.110.94 -T s1 -p 5101 & iperf3 -c 192.168.110.94 -T s2 -p 5102 & iperf3 -c 192.168.110.94 -T s3 -p 5103 &
[1] 38008
[2] 38009
[3] 38010
[root@server35 ~]# s1: Connecting to host 192.168.110.94, port 5101
s2: Connecting to host 192.168.110.94, port 5102
s3: Connecting to host 192.168.110.94, port 5103
s3: [ 4] local 192.168.110.95 port 36758 connected to 192.168.110.94 port 5103
s2: [ 4] local 192.168.110.95 port 42245 connected to 192.168.110.94 port 5102
s1: [ 4] local 192.168.110.95 port 45424 connected to 192.168.110.94 port 5101
s3: [ ID] Interval Transfer Bandwidth Retr Cwnd
s3: [ 4] 0.00-1.00 sec 401 MBytes 3.37 Gbits/sec 0 446 KBytes
s2: [ ID] Interval Transfer Bandwidth Retr Cwnd
s2: [ 4] 0.00-1.00 sec 391 MBytes 3.28 Gbits/sec 0 446 KBytes
s1: [ ID] Interval Transfer Bandwidth Retr Cwnd
s1: [ 4] 0.00-1.00 sec 390 MBytes 3.27 Gbits/sec 0 446 KBytes
s3: [ 4] 1.00-2.00 sec 400 MBytes 3.36 Gbits/sec 0 446 KBytes
s2: [ 4] 1.00-2.00 sec 388 MBytes 3.26 Gbits/sec 0 446 KBytes
s1: [ 4] 1.00-2.00 sec 393 MBytes 3.29 Gbits/sec 0 446 KBytes
s3: [ 4] 2.00-3.00 sec 397 MBytes 3.33 Gbits/sec 0 446 KBytes
s2: [ 4] 2.00-3.00 sec 387 MBytes 3.24 Gbits/sec 0 446 KBytes
s1: [ 4] 2.00-3.00 sec 397 MBytes 3.33 Gbits/sec 0 446 KBytes
s3: [ 4] 3.00-4.00 sec 398 MBytes 3.34 Gbits/sec 0 446 KBytes
s2: [ 4] 3.00-4.00 sec 388 MBytes 3.25 Gbits/sec 0 446 KBytes
s1: [ 4] 3.00-4.00 sec 396 MBytes 3.32 Gbits/sec 0 446 KBytes
s3: [ 4] 4.00-5.00 sec 395 MBytes 3.31 Gbits/sec 0 446 KBytes
s2: [ 4] 4.00-5.00 sec 393 MBytes 3.30 Gbits/sec 0 446 KBytes
s1: [ 4] 4.00-5.00 sec 393 MBytes 3.30 Gbits/sec 0 446 KBytes
s3: [ 4] 5.00-6.00 sec 393 MBytes 3.29 Gbits/sec 0 446 KBytes
s2: [ 4] 5.00-6.00 sec 394 MBytes 3.31 Gbits/sec 0 455 KBytes
s1: [ 4] 5.00-6.00 sec 395 MBytes 3.31 Gbits/sec 0 446 KBytes
s3: [ 4] 6.00-7.00 sec 392 MBytes 3.29 Gbits/sec 0 446 KBytes
s2: [ 4] 6.00-7.00 sec 389 MBytes 3.27 Gbits/sec 0 455 KBytes
s1: [ 4] 6.00-7.00 sec 400 MBytes 3.35 Gbits/sec 0 446 KBytes
s3: [ 4] 7.00-8.00 sec 396 MBytes 3.32 Gbits/sec 0 446 KBytes
s2: [ 4] 7.00-8.00 sec 387 MBytes 3.24 Gbits/sec 0 455 KBytes
s1: [ 4] 7.00-8.00 sec 399 MBytes 3.35 Gbits/sec 0 446 KBytes
s3: [ 4] 8.00-9.00 sec 397 MBytes 3.33 Gbits/sec 0 446 KBytes
s2: [ 4] 8.00-9.00 sec 386 MBytes 3.24 Gbits/sec 0 455 KBytes
s1: [ 4] 8.00-9.00 sec 399 MBytes 3.34 Gbits/sec 0 446 KBytes
s3: [ 4] 9.00-10.00 sec 397 MBytes 3.33 Gbits/sec 0 446 KBytes

s3: - - - - - - - - - - - - - - - - - - - - - - - - -
s3: [ ID] Interval Transfer Bandwidth Retr
s3: [ 4] 0.00-10.00 sec 3.87 GBytes 3.33 Gbits/sec 0 sender
s3: [ 4] 0.00-10.00 sec 3.87 GBytes 3.33 Gbits/sec receiver
s3:
s3: iperf Done.
s1: [ 4] 9.00-10.00 sec 397 MBytes 3.33 Gbits/sec 0 446 KBytes
s1: - - - - - - - - - - - - - - - - - - - - - - - - -
s1: [ ID] Interval Transfer Bandwidth Retr
s2: [ 4] 9.00-10.00 sec 387 MBytes 3.24 Gbits/sec 0 455 KBytes
s1: [ 4] 0.00-10.00 sec 3.86 GBytes 3.32 Gbits/sec 0 sender
s2: - - - - - - - - - - - - - - - - - - - - - - - - -
s1: [ 4] 0.00-10.00 sec 3.86 GBytes 3.32 Gbits/sec receiver
s2: [ ID] Interval Transfer Bandwidth Retr
s1:
s2: [ 4] 0.00-10.00 sec 3.80 GBytes 3.26 Gbits/sec 0 sender
s1: iperf Done.
s2: [ 4] 0.00-10.00 sec 3.80 GBytes 3.26 Gbits/sec receiver
s2:
s2: iperf Done.

[1] Done iperf3 -c 192.168.110.94 -T s1 -p 5101
[2]- Done iperf3 -c 192.168.110.94 -T s2 -p 5102
[3]+ Done iperf3 -c 192.168.110.94 -T s3 -p 5103

Do you still have this issue? I have no problem filling a 100G pipe with 4 iperf3 processes using v3.1.5.

This is now addressed in the new FAQ.

This is now addressed in the new FAQ.

Can you link it here?

Oh boy! It seems like a downgrade from iperf2 to iperf3. May I ask what the reason was for making iperf3 single-threaded?

iperf3 parallel stream performance is much less than iperf2. Why?
iperf3 is single threaded, and iperf2 is multi-threaded. We recommend using iperf2 for parallel streams. If you want to use multiple iperf3 streams use the method described here.

Oh boy! It seems like a downgrade from iperf2 to iperf3. May I ask what the reason was for making iperf3 single-threaded?

iperf3 parallel stream performance is much less than iperf2. Why?
iperf3 is single threaded, and iperf2 is multi-threaded. We recommend using iperf2 for parallel streams. If you want to use multiple iperf3 streams use the method described here.

Yeah, pretty much. I had the same question, and the only answer I could ever seem to find was "because". iperf2 is abandoned, yet if you want to reliably test 40+ Gb/s devices, the iperf3 developers suggest either using that same abandoned iperf2 that iperf3 was written to replace, or using a kludge. We went with the kludge because I need to test network devices from 1 to 100+ Gb/s, I don't want to introduce two different tools for this, and iperf2 doesn't have the newer features that iperf3 has, such as CPU utilization reports.

So, unfortunately, we've had to hack around this limitation by running multiple iperf3 instances from a multi-threaded Python script, with upwards of 10 threads for 100Gb controllers. The sweet spot we found seems to be one thread for every 10-20 Gb/s of bandwidth (so 40 Gb/s would be 4 threads, etc.).

Then you have to capture all the output, parse it to get each thread's total, and add it all up to get the aggregate throughput. It's hacky and overly complicated, but we were left with little choice, unfortunately.

As to why? Who knows. Maybe there's some technical reason for not making it a multi-threaded application.
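
For what it's worth, the parse-and-sum part of that kludge is fairly small if iperf3's JSON output (-J) is used. The sketch below is only an illustration under that assumption; the host, ports, and duration are placeholders, and one iperf3 server is expected to be listening on each port:

#!/usr/bin/env python3
# Rough sketch of the aggregation kludge described above: run one iperf3 client
# per server port with JSON output (-J), then sum the per-instance totals.
# HOST, PORTS, and DURATION are placeholders; servers must already be running.
import json
import subprocess

HOST = "192.168.110.94"
PORTS = [5101, 5102, 5103, 5104]   # one iperf3 server per port
DURATION = 10                      # seconds, passed to iperf3 as -t

procs = [
    subprocess.Popen(
        ["iperf3", "-c", HOST, "-p", str(port), "-t", str(DURATION), "-J"],
        stdout=subprocess.PIPE,
    )
    for port in PORTS
]

total_sent = 0.0
total_received = 0.0
for p in procs:
    out, _ = p.communicate()
    result = json.loads(out)
    # iperf3 -J puts the end-of-test summaries under "end"
    total_sent += result["end"]["sum_sent"]["bits_per_second"]
    total_received += result["end"]["sum_received"]["bits_per_second"]

print(f"aggregate sender throughput:   {total_sent / 1e9:.2f} Gbits/sec")
print(f"aggregate receiver throughput: {total_received / 1e9:.2f} Gbits/sec")

Launching all the processes first and only collecting their output afterwards keeps the clients running concurrently, which is what makes the aggregate figure meaningful.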
