On Fri, Nov 26, 2021 at 10:28:19AM +0100, Arnd Bergmann wrote:
> On Fri, Nov 26, 2021 at 9:28 AM Jean-Philippe Brucker
> <jean-philippe@linaro.org> wrote:
> > On Thu, Nov 18, 2021 at 03:05:27PM +0000, Alex Bennée via Stratos-dev wrote:
> > > 2.1 Test Setup
> > > ──────────────
> > >   The test setup will require two machines. The test controller will
> > >   be the source of the test packets and will measure the round-trip
> > >   latency of getting a reply from the test client. The test client
> > >   will be set up in multiple configurations so the latency of each
> > >   can be compared.
> > >     +-----------------------+            +-----------------------+
> > >     |c1AB                   |            |c1AB                   |
> > >     |     Test Control      |            |      Test Client      |
> > >     |                       |            |                       |
> > >     +-+--------+-+--------+-+            +-+--------+-+--------+-+
> > >       |{mo}    | |{mo}    |                |{mo}    | |{mo}    |
> > >       |  eth0  | |  eth1  |                |  eth1  | |  eth0  |
> > >       |cRED    | |cPNK    |                |cPNK    | |cRED    |
> > >       +--------+ +--------+                +--------+ +--------+
> > >           |          ^                        ^          |
> > >           |          |       test link        |          |
> > >           :          +------------------------+          :
> > >           |                   10GbE                      |
> > >           |                                              |
> > >       /---+--------------------------------=-------------+---\
> > >       |                          LAN                          |
> > >       \---------------------------------=--------------------/
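As a concrete illustration of the round-trip measurement the quoted plan
describes, here is a minimal sketch using a plain UDP echo over the test
link. The default address, port and probe count are assumptions for
illustration, not anything from the plan:

#!/usr/bin/env python3
# Minimal UDP round-trip latency probe (illustrative sketch only).
# Run with --echo on the test client, then without it on the test
# controller, pointing --host at the client's test-link address.
import argparse
import socket
import time

def echo(port):
    # Test client side: bounce every datagram straight back.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def probe(host, port, count):
    # Test controller side: timestamp each probe, measure the round trip.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for seq in range(count):
        t0 = time.monotonic_ns()
        sock.sendto(seq.to_bytes(4, "big"), (host, port))
        sock.recvfrom(2048)
        rtts.append((time.monotonic_ns() - t0) / 1e3)  # microseconds
    rtts.sort()
    print("min %.1fus  median %.1fus  max %.1fus"
          % (rtts[0], rtts[len(rtts) // 2], rtts[-1]))

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--echo", action="store_true")
    p.add_argument("--host", default="192.168.1.2")  # assumed client addr
    p.add_argument("--port", type=int, default=9000)
    p.add_argument("--count", type=int, default=1000)
    args = p.parse_args()
    if args.echo:
        echo(args.port)
    else:
        probe(args.host, args.port, args.count)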
> > One problem I've had with network benchmarks is reaching the link
> > limit. On a recent Arm server the bottleneck was the 10GbE link, so
> > I was unable to compare different software optimizations (I was
> > mainly measuring bandwidth, though, not latency). If that happens
> > and you do need a physical link (e.g. testing VFIO or HW
> > acceleration), then 50/100+ GbE will be needed, and that's a
> > different budget. Currently I do VM <-> host userspace for network
> > benchmarks, so everything is CPU-bound.
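A quick way to tell whether a path is link-bound or CPU-bound is to
blast TCP from one end and watch the achieved rate; a rough sketch
below, where the port, duration and rate threshold are made-up
assumptions. (A single-threaded Python sender is itself CPU-bound well
below 10GbE, which rather proves the point; iperf3 or netperf are the
usual tools for a serious run.)

#!/usr/bin/env python3
# Crude TCP throughput check (sketch). If the reported rate pins at
# wire speed (~9.4 Gbit/s of TCP goodput on 10GbE) the link is the
# bottleneck; if it scales with available CPU, the path is CPU-bound.
import socket
import time

CHUNK = b"\0" * (1 << 20)  # 1 MiB of payload per send

def sink(port=9001):
    # Receiver: accept one connection and discard everything.
    srv = socket.create_server(("0.0.0.0", port))
    conn, _ = srv.accept()
    while conn.recv(1 << 20):
        pass

def blast(host, port=9001, seconds=10.0):
    # Sender: push data as fast as possible, then report the rate.
    sock = socket.create_connection((host, port))
    sent = 0
    t0 = time.monotonic()
    while time.monotonic() - t0 < seconds:
        sock.sendall(CHUNK)
        sent += len(CHUNK)
    print("%.2f Gbit/s" % (sent * 8 / (time.monotonic() - t0) / 1e9))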
> I would think that latency testing is more useful than throughput
> testing then. I'd focus on the maximum latency in a bufferbloat
> scenario here, flooding the link with large packets while measuring
> the round-trip ping time.
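A sketch of that bufferbloat-style measurement, reusing the UDP echo
from the earlier sketch: one thread floods the link with near-MTU
datagrams while another keeps timing small round-trip probes. The host,
port and payload size are illustrative assumptions:

#!/usr/bin/env python3
# Latency-under-load sketch: flood the test link with large packets
# while timing small round-trip probes against the same UDP echo as
# in the earlier sketch. In a real run the flood would be a separate
# process (or tool) so the GIL doesn't perturb the probe timing.
import socket
import threading
import time

HOST = "192.168.1.2"  # assumed test-link address of the test client
PORT = 9000           # assumed UDP echo port (see earlier sketch)

def flood(stop):
    # Background load: near-MTU datagrams as fast as we can send them.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\0" * 1400
    while not stop.is_set():
        sock.sendto(payload, (HOST, PORT))

def probe_under_load(count=1000):
    stop = threading.Event()
    threading.Thread(target=flood, args=(stop,), daemon=True).start()
    # The flood uses its own socket, so its echoes never reach the
    # probe socket below and only queueing delay is measured.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    worst = 0.0
    for seq in range(count):
        t0 = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), (HOST, PORT))
        try:
            sock.recvfrom(2048)
            worst = max(worst, time.monotonic() - t0)
        except socket.timeout:
            print("probe %d lost" % seq)
    stop.set()
    print("worst-case RTT under load: %.2f ms" % (worst * 1e3))

if __name__ == "__main__":
    probe_under_load()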
While 'latency' can mean different things, is the round-trip time
really what we want to know? For instance, jitter among (one-way)
packets may be more important.
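If one-way jitter is the metric of interest, the RFC 3550
interarrival-jitter estimator is one option: it only uses differences
of consecutive transit times, so the unknown clock offset between
sender and receiver cancels out. A minimal sketch, where the 8-byte
timestamp packet format and port are made up for illustration:

#!/usr/bin/env python3
# One-way jitter sketch: the sender stamps each datagram with its
# local monotonic send time; the receiver applies the RFC 3550
# (section 6.4.1) interarrival-jitter estimator.
import socket
import struct
import time

def send_stamped(host, port=9002, count=1000):
    # Sender: 8-byte big-endian monotonic nanosecond timestamp per packet.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(count):
        sock.sendto(struct.pack("!q", time.monotonic_ns()), (host, port))
        time.sleep(0.001)  # pace the stream at roughly 1 kHz

def receive_jitter(port=9002, count=1000):
    # Receiver: transit = recv_time - send_time is polluted by the
    # clock offset between the two machines, but the offset cancels
    # in the difference of consecutive transits, which is all we use.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    jitter = 0.0
    prev = None
    for _ in range(count):
        data, _ = sock.recvfrom(64)
        (send_ns,) = struct.unpack("!q", data[:8])
        transit = time.monotonic_ns() - send_ns
        if prev is not None:
            jitter += (abs(transit - prev) - jitter) / 16.0  # RFC 3550
        prev = transit
    print("interarrival jitter: %.1f us" % (jitter / 1e3))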
-Takahiro Akashi
>        Arnd