On 8/7/22 06:44, Leo Yan wrote:
On Thu, Jul 28, 2022 at 03:52:54PM +0100, carsten.haitzler@foss.arm.com wrote:
From: "Carsten Haitzler (Rasterman)" raster@rasterman.com
This adds scripts to drive the unroll thread tests to compare perf output against a minimum bar of content/quality.
Signed-off-by: Carsten Haitzler <carsten.haitzler@arm.com>
 .../shell/coresight/unroll_loop_thread_10.sh | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100755 tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh

diff --git a/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh b/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
new file mode 100755
index 000000000000..f48c85230b15
--- /dev/null
+++ b/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh
@@ -0,0 +1,18 @@
+#!/bin/sh -e
+# CoreSight / Unroll Loop Thread 10
+# SPDX-License-Identifier: GPL-2.0
+# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
+TEST="unroll_loop_thread"
+. $(dirname $0)/../lib/coresight.sh
+ARGS="10"
+DATV="10"
+DATA="$DATD/perf-$TEST-$DATV.data"
+perf record $PERFRECOPT -o "$DATA" "$BIN" $ARGS
+perf_dump_aux_verify "$DATA" 10 10 10
Just a minor comment about checking the trace data quality:

The unroll program loops 10000 times per thread, and this test creates 10 threads; so passing the parameters "10 10 10" to perf_dump_aux_verify seems very conservative to me?
Correct. It's very conservative. It's essentially saying "I need just SOME data... something minimal to say it caught something executing." It SHOULD catch it, but the more I raise these numbers, the more likely it is that you sometimes get failures. At the start my numbers were chosen empirically at about 20% of the minimum run of the code. It's not 100% pure ASM, so some of it is compiler-generated code and thus may vary per binary produced by different compilers and options, so there has to be some leeway.

So I lowered the bar to "some data - just some" as opposed to "a reasonable amount of data", which would mean larger numbers. The csv files still store this side-band data anyway.
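
Roughly what I mean, as a sketch (the packet-name patterns below and the assumption that perf_dump_aux_verify's numbers are minimum packet counts are illustrative; the real check lives in tests/shell/lib/coresight.sh):

  # Illustrative sketch only, not the series' helper: count decoded
  # CoreSight packets in the recorded AUX data and require a minimum
  # number of each. The grep patterns are assumptions about the
  # decoder's dump output.
  DATA="perf-unroll_loop_thread-10.data"
  MIN=10   # the conservative bar discussed above

  DUMP=$(mktemp)
  perf report --stdio --dump-raw-trace -i "$DATA" > "$DUMP"
  for pat in I_ATOM I_ASYNC I_TRACE_INFO; do
          n=$(grep -c "$pat" "$DUMP" || true)
          if [ "$n" -lt "$MIN" ]; then
                  echo "Too few $pat packets ($n < $MIN)"
                  rm -f "$DUMP"
                  exit 1
          fi
  done
  rm -f "$DUMP"

Raising MIN toward ~20% of a full run makes the check stricter, but as I said above, also more likely to fail spuriously across compilers and build options.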
I would like to hear Mike's opinion on these quality metrics; the patch itself is fine for me, so you could add my review tag:
Reviewed-by: Leo Yan <leo.yan@linaro.org>
P.S. It's off-topic, but just a reminder to use the "b4" tool when you spin the next version of the patch set; e.g. you could use the commands below:
  $ b4 am 20220728145256.2985298-1-carsten.haitzler@foss.arm.com
          ^
          ` I get the message ID from the page:
            https://lore.kernel.org/lkml/20220728145256.2985298-1-carsten.haitzler@foss....

  $ git am ./v5_20220728_carsten_haitzler_a_patch_series_improving_data_quality_of_perf_test_for_coresight.mbx
We can benefit from this because "b4" can automatically append tags to the patches; this helps us track which patches have been reviewed and tested in previous versions.
OK - that's news to me. I'll look into it.
Thanks, Leo
+err=$?
+exit $err