The seccomp benchmark runs five scenarios: one calibration run with no seccomp filters enabled, then four further runs, each adding a filter. The calibration run times itself for 15s and each subsequent run then executes the same number of system calls.
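To make that structure concrete, here is a minimal sketch of the calibrate-then-replay pattern (illustrative only, not the actual seccomp_benchmark.c code; the helper names here are invented for the sketch):

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Count how many getpid() calls complete in the calibration window. */
static uint64_t calibrate(uint64_t seconds)
{
        uint64_t end = now_ns() + seconds * 1000000000ULL;
        uint64_t calls = 0;

        while (now_ns() < end) {
                getpid();
                calls++;
        }
        return calls;
}

/* Replay the calibrated number of calls and report the elapsed time,
 * so each added filter shows up as a longer wall-clock run. */
static uint64_t timed_run(uint64_t calls)
{
        uint64_t i, start = now_ns();

        for (i = 0; i < calls; i++)
                getpid();
        return now_ns() - start;
}

int main(void)
{
        uint64_t calls = calibrate(15);

        /* The real benchmark does one timed run per installed filter. */
        printf("%llu calls, replay took %llu ns\n",
               (unsigned long long)calls,
               (unsigned long long)timed_run(calls));
        return 0;
}

Each filtered run therefore takes at least as long as the 15s calibration run, which is why the per-run times in the table below only ever grow.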
Currently the seccomp tests, including the benchmark, run with an extended 120s timeout, but this is not sufficient to robustly run the tests on many platforms. Sample timings from some recent runs:
Platform            Run 1   Run 2   Run 3   Run 4
-----------------   -----   -----   -----   -----
PowerEdge R200      16.6s   16.6s   31.6s   37.4s
BBB (arm)           20.4s   20.4s   54.5s
Synquacer (arm64)   20.7s   23.7s   40.3s
The x86 runs from the PowerEdge are quite marginal and routinely fail; for the successful run reported here, the timed portions come to 117.2s, leaving less than 3s of margin, which is frequently breached. The added overhead of each filter on the other platforms means there is no prospect of their runs fitting into the 120s timeout, especially on 32-bit arm where there is no BPF JIT.
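For reference, each benchmark run installs one more filter along these lines (a hedged sketch, not the exact code from the selftest; the function name is invented here). With no JIT, every installed filter costs an interpreted BPF pass on every syscall:

#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>

/* Install one allow-everything seccomp filter; each additional
 * filter adds a BPF pass to every subsequent syscall, interpreted
 * rather than JITed on 32-bit arm. */
static int add_allow_all_filter(void)
{
        struct sock_filter insns[] = {
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
                .len = (unsigned short)(sizeof(insns) / sizeof(insns[0])),
                .filter = insns,
        };

        /* The caller must have set PR_SET_NO_NEW_PRIVS (or have
         * CAP_SYS_ADMIN) for this to succeed. */
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}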
While we could lower the time we calibrate for, I'm also already seeing the currently completing runs report issues with the per-filter overheads not matching expectations:
Let's instead raise the timeout to 180s, which is only a 50% increase on the current timeout and is itself not *too* large given that there are only two tests in this suite.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
Changes in v2:
- Rebase onto v6.9-rc1.
- Link to v1: https://lore.kernel.org/r/20231219-b4-kselftest-seccomp-benchmark-timeout-v1...
---
 tools/testing/selftests/seccomp/settings | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/seccomp/settings b/tools/testing/selftests/seccomp/settings
index 6091b45d226b..a953c96aa16e 100644
--- a/tools/testing/selftests/seccomp/settings
+++ b/tools/testing/selftests/seccomp/settings
@@ -1 +1 @@
-timeout=120
+timeout=180
---
base-commit: 4cece764965020c22cff7665b18a012006359095
change-id: 20231219-b4-kselftest-seccomp-benchmark-timeout-05b66e7d29d1
Best regards,
On 3/25/24 10:57, Mark Brown wrote:
Applied to linux-kselftest fixes for next rc.
thanks,
-- Shuah