This series fixes issues in the devlink_rate_tc_bw.py selftest that made its checks unreliable and its documentation inconsistent with the actual configuration.
V2:
- Dropped the patch that relaxed the total bandwidth check. Jakub suggested
  addressing the instability with interval-based measurement and by migrating
  to load.py. That will be handled in a follow-up.
- Link to V1: https://lore.kernel.org/netdev/20250831080641.1828455-1-cjubran@nvidia.com/
Thanks
Carolina Jubran (2):
  selftests: drv-net: Fix and clarify TC bandwidth split in devlink_rate_tc_bw.py
  selftests: drv-net: Fix tolerance calculation in devlink_rate_tc_bw.py
 .../drivers/net/hw/devlink_rate_tc_bw.py | 100 ++++++++----------
 1 file changed, 43 insertions(+), 57 deletions(-)
Correct the documented bandwidth distribution between TC3 and TC4 from 80/20 to 20/80. Update test descriptions and printed messages to consistently reflect the intended split.
Fixes: 23ca32e4ead4 ("selftests: drv-net: Add test for devlink-rate traffic class bandwidth distribution")
Tested-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Nimrod Oren <noren@nvidia.com>
---
 .../drivers/net/hw/devlink_rate_tc_bw.py | 26 +++++++++----------
 1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py b/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py
index ead6784d1910..4da91e3292bf 100755
--- a/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py
+++ b/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py
@@ -21,21 +21,21 @@ Test Cases:
 ----------
 1. test_no_tc_mapping_bandwidth:
    - Verifies that without TC mapping, bandwidth is NOT distributed according to
-     the configured 80/20 split between TC4 and TC3
-   - This test should fail if bandwidth matches the 80/20 split without TC
+     the configured 20/80 split between TC3 and TC4
+   - This test should fail if bandwidth matches the 20/80 split without TC
      mapping
-   - Expected: Bandwidth should NOT be distributed as 80/20
+   - Expected: Bandwidth should NOT be distributed as 20/80

 2. test_tc_mapping_bandwidth:
    - Configures TC mapping using mqprio qdisc
    - Verifies that with TC mapping, bandwidth IS distributed according to the
-     configured 80/20 split between TC3 and TC4
-   - Expected: Bandwidth should be distributed as 80/20
+     configured 20/80 split between TC3 and TC4
+   - Expected: Bandwidth should be distributed as 20/80

 Bandwidth Distribution:
 ----------------------
-- TC3 (VLAN 101): Configured for 80% of total bandwidth
-- TC4 (VLAN 102): Configured for 20% of total bandwidth
+- TC3 (VLAN 101): Configured for 20% of total bandwidth
+- TC4 (VLAN 102): Configured for 80% of total bandwidth
 - Total bandwidth: 1Gbps
 - Tolerance: +-12%
@@ -413,10 +413,10 @@ def run_bandwidth_distribution_test(cfg, set_tc_mapping):

 def test_no_tc_mapping_bandwidth(cfg):
     """
-    Verifies that bandwidth is not split 80/20 without traffic class mapping.
+    Verifies that bandwidth is not split 20/80 without traffic class mapping.
     """
-    pass_bw_msg = "Bandwidth is NOT distributed as 80/20 without TC mapping"
-    fail_bw_msg = "Bandwidth matched 80/20 split without TC mapping"
+    pass_bw_msg = "Bandwidth is NOT distributed as 20/80 without TC mapping"
+    fail_bw_msg = "Bandwidth matched 20/80 split without TC mapping"
     is_mlx5 = "driver: mlx5" in ethtool(f"-i {cfg.ifname}").stdout

     if run_bandwidth_distribution_test(cfg, set_tc_mapping=False):
@@ -430,13 +430,13 @@ def test_no_tc_mapping_bandwidth(cfg):

 def test_tc_mapping_bandwidth(cfg):
     """
-    Verifies that bandwidth is correctly split 80/20 between TC3 and TC4
+    Verifies that bandwidth is correctly split 20/80 between TC3 and TC4
     when traffic class mapping is set.
     """
     if run_bandwidth_distribution_test(cfg, set_tc_mapping=True):
-        ksft_pr("Bandwidth is distributed as 80/20 with TC mapping")
+        ksft_pr("Bandwidth is distributed as 20/80 with TC mapping")
     else:
-        raise KsftFailEx("Bandwidth did not match 80/20 split with TC mapping")
+        raise KsftFailEx("Bandwidth did not match 20/80 split with TC mapping")


 def main() -> None:
On Tue, Sep 09, 2025 at 01:13:52PM +0300, Carolina Jubran wrote:
Correct the documented bandwidth distribution between TC3 and TC4 from 80/20 to 20/80. Update test descriptions and printed messages to consistently reflect the intended split.
Fixes: 23ca32e4ead4 ("selftests: drv-net: Add test for devlink-rate traffic class bandwidth distribution")
Tested-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Nimrod Oren <noren@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Currently, tolerance is computed against the TC’s expected percentage, making TC3 (20%) validation overly strict and TC4 (80%) overly loose.
Update BandwidthValidator to take a dict of shares and compute bounds relative to the overall total, so that all shares are validated consistently.
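As a minimal sketch of the difference (illustration only, not part of the patch), here is what the two schemes give for the 20/80 split with the 12% tolerance:

    tolerance = 12
    shares = {"tc3": 20.0, "tc4": 80.0}
    total = sum(shares.values())

    for name, exp in shares.items():
        per_share = exp * tolerance / 100     # old: window scales with the share
        per_total = total * tolerance / 100   # new: same window for every share
        print(f"{name}: old +/-{per_share:.1f}, new +/-{per_total:.1f}")

    # tc3: old +/-2.4, new +/-12.0
    # tc4: old +/-9.6, new +/-12.0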
Fixes: 23ca32e4ead4 ("selftests: drv-net: Add test for devlink-rate traffic class bandwidth distribution")
Tested-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Nimrod Oren <noren@nvidia.com>
---
 .../drivers/net/hw/devlink_rate_tc_bw.py | 74 ++++++++-----------
 1 file changed, 30 insertions(+), 44 deletions(-)
diff --git a/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py b/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py
index 4da91e3292bf..abc20bc4a34a 100755
--- a/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py
+++ b/tools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py
@@ -68,39 +68,35 @@ from lib.py import cmd, defer, ethtool, ip

 class BandwidthValidator:
     """
-    Validates bandwidth totals and per-TC shares against expected values
-    with a tolerance.
+    Validates total bandwidth and individual shares with tolerance
+    relative to the overall total.
     """

-    def __init__(self):
+    def __init__(self, shares):
         self.tolerance_percent = 12
-        self.expected_total_gbps = 1.0
-        self.total_min_expected = self.min_expected(self.expected_total_gbps)
-        self.total_max_expected = self.max_expected(self.expected_total_gbps)
-        self.tc_expected_percent = {
-            3: 20.0,
-            4: 80.0,
-        }
+        self.expected_total = sum(shares.values())
+        self.bounds = {}
+
+        for name, exp in shares.items():
+            self.bounds[name] = (self.min_expected(exp), self.max_expected(exp))

     def min_expected(self, value):
         """Calculates the minimum acceptable value based on tolerance."""
-        return value - (value * self.tolerance_percent / 100)
+        return value - (self.expected_total * self.tolerance_percent / 100)

     def max_expected(self, value):
         """Calculates the maximum acceptable value based on tolerance."""
-        return value + (value * self.tolerance_percent / 100)
-
-    def bound(self, expected, value):
-        """Returns True if value is within expected tolerance."""
-        return self.min_expected(expected) <= value <= self.max_expected(expected)
+        return value + (self.expected_total * self.tolerance_percent / 100)

-    def tc_bandwidth_bound(self, value, tc_ix):
+    def bound(self, values):
         """
-        Returns True if the given bandwidth value is within tolerance
-        for the TC's expected bandwidth.
+        Return True if all given values fall within tolerance.
         """
-        expected = self.tc_expected_percent[tc_ix]
-        return self.bound(expected, value)
+        for name, value in values.items():
+            low, high = self.bounds[name]
+            if not low <= value <= high:
+                return False
+        return True


 def setup_vf(cfg, set_tc_mapping=True):
@@ -364,38 +360,26 @@ def verify_total_bandwidth(bw_data, validator):
     """
     total = bw_data['total_bw']

-    if validator.bound(validator.expected_total_gbps, total):
+    if validator.bound({"total": total}):
         return

-    if total < validator.total_min_expected:
+    low, high = validator.bounds["total"]
+
+    if total < low:
         raise KsftSkipEx(
             f"Total bandwidth {total:.2f} Gbps < minimum "
-            f"{validator.total_min_expected:.2f} Gbps; "
-            f"parent tx_max ({validator.expected_total_gbps:.1f} G) "
+            f"{low:.2f} Gbps; "
+            f"parent tx_max ({validator.expected_total:.1f} G) "
             f"not reached, cannot validate share"
         )

     raise KsftFailEx(
         f"Total bandwidth {total:.2f} Gbps exceeds allowed ceiling "
-        f"{validator.total_max_expected:.2f} Gbps "
-        f"(VF tx_max set to {validator.expected_total_gbps:.1f} G)"
+        f"{high:.2f} Gbps "
+        f"(VF tx_max set to {validator.expected_total:.1f} G)"
     )


-def check_bandwidth_distribution(bw_data, validator):
-    """
-    Checks whether the measured TC3 and TC4 bandwidth percentages
-    fall within their expected tolerance ranges.
-
-    Returns:
-        bool: True if both TC3 and TC4 percentages are within bounds.
-    """
-    tc3_valid = validator.tc_bandwidth_bound(bw_data['tc3_percentage'], 3)
-    tc4_valid = validator.tc_bandwidth_bound(bw_data['tc4_percentage'], 4)
-
-    return tc3_valid and tc4_valid
-
-
 def run_bandwidth_distribution_test(cfg, set_tc_mapping):
     """
     Runs parallel iperf3 tests for both TCs and collects results.
@@ -406,9 +390,10 @@ def run_bandwidth_distribution_test(cfg, set_tc_mapping):
     test_name = "with TC mapping" if set_tc_mapping else "without TC mapping"
     print_bandwidth_results(bw_data, test_name)

-    verify_total_bandwidth(bw_data, cfg.bw_validator)
+    verify_total_bandwidth(bw_data, cfg.traffic_bw_validator)

-    return check_bandwidth_distribution(bw_data, cfg.bw_validator)
+    return cfg.tc_bw_validator.bound({"tc3": bw_data['tc3_percentage'],
+                                      "tc4": bw_data['tc4_percentage']})


 def test_no_tc_mapping_bandwidth(cfg):
@@ -453,7 +438,8 @@ def main() -> None:
         raise KsftSkipEx("Could not get PCI address of the interface")
     cfg.require_cmd("iperf3", local=True, remote=True)

-    cfg.bw_validator = BandwidthValidator()
+    cfg.traffic_bw_validator = BandwidthValidator({"total": 1})
+    cfg.tc_bw_validator = BandwidthValidator({"tc3": 20, "tc4": 80})

     cases = [test_no_tc_mapping_bandwidth, test_tc_mapping_bandwidth]
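For reference, a minimal usage sketch of the updated validator as defined above (the measured percentages here are made up):

    validator = BandwidthValidator({"tc3": 20, "tc4": 80})
    # expected_total is 100, so every share gets a +/-12 point window:
    # tc3 must fall in [8, 32], tc4 in [68, 92]
    print(validator.bound({"tc3": 25.0, "tc4": 75.0}))  # True
    print(validator.bound({"tc3": 5.0, "tc4": 95.0}))   # False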
On Tue, Sep 09, 2025 at 01:13:53PM +0300, Carolina Jubran wrote:
Currently, tolerance is computed against the TC’s expected percentage, making TC3 (20%) validation overly strict and TC4 (80%) overly loose.
Update BandwidthValidator to take a dict of shares and compute bounds relative to the overall total, so that all shares are validated consistently.
Fixes: 23ca32e4ead4 ("selftests: drv-net: Add test for devlink-rate traffic class bandwidth distribution")
Tested-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Nimrod Oren <noren@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
On Tue, Sep 09, 2025 at 01:13:51PM +0300, Carolina Jubran wrote:
This series fixes issues in the devlink_rate_tc_bw.py selftest that made its checks unreliable and its documentation inconsistent with the actual configuration.
V2:
- Dropped the patch that relaxed the total bandwidth check. Jakub suggested addressing the instability with interval-based measurement and by migrating to load.py. That will be handled in a follow-up.
- Link to V1: https://lore.kernel.org/netdev/20250831080641.1828455-1-cjubran@nvidia.com/
Thanks
Carolina Jubran (2):
  selftests: drv-net: Fix and clarify TC bandwidth split in devlink_rate_tc_bw.py
  selftests: drv-net: Fix tolerance calculation in devlink_rate_tc_bw.py
 .../drivers/net/hw/devlink_rate_tc_bw.py | 100 ++++++++----------
 1 file changed, 43 insertions(+), 57 deletions(-)
Hi Carolina,
It isn't strictly related to these changes, but CI flags that devlink_rate_tc_bw.py should be present in the Makefile in the same directory.
Given the wildcard in the Makefile, I'm unsure whether that is true or not. Could you take a look?