On 06/10/2022 15:48, Leo Yan wrote:
Hi James,
On Wed, Oct 05, 2022 at 03:05:08PM +0100, James Clark wrote:
This test commonly fails on Arm Juno because the instruction interval is large enough to miss generating any samples for Perf in system-wide mode.
Fix this by lowering the interval until a comfortable number of instruction samples are generated for the perf process. The test is still quick to run because only a small amount of trace is gathered.
Before:
sudo ./perf test coresight -vvv
...
Recording trace with system wide mode
Looking at perf.data file for dumping branch samples:
Looking at perf.data file for reporting branch samples:
Looking at perf.data file for instruction samples:
  CoreSight system wide testing: FAIL
...
After:
sudo ./perf test coresight -vvv
...
Recording trace with system wide mode
Looking at perf.data file for dumping branch samples:
Looking at perf.data file for reporting branch samples:
Looking at perf.data file for instruction samples:
  CoreSight system wide testing: PASS
...
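For reference on the option the patch changes: the "i" itrace modifier makes the decoder synthesize instruction samples, and the trailing "<N>i" sets the sampling period in instructions. A minimal illustration (the perf.data path and commands here are examples, not part of the patch):

  # i1000i = one synthesized instruction sample per 1000 instructions,
  # i20i   = one per 20 instructions, i.e. a much finer period.
  perf report --itrace=i1000i --stdio -i perf.data
  perf report --itrace=i20i --stdio -i perf.data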
Since the Arm Juno board has zero timestamps for CoreSight, I don't think arm_cs_etm.sh can really work on it now.
If we want the test to pass on the Juno board, we need to add the option "--itrace=Zi1000i" to "perf report" and "perf script"; but "--itrace=Z..." doesn't seem to me like a general case for testing ...
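(A quick sketch of what that suggestion would look like; "Z" is the itrace modifier that asks the decoder to prefer ignoring timestamps, i.e. "timeless" decoding. These commands are illustrative only and were not run on Juno here:)

  # Decode while ignoring timestamps, with a 1000-instruction sample period.
  perf report --itrace=Zi1000i --stdio -i perf.data
  perf script --itrace=Zi1000i -i perf.data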
Unfortunately I now think that adding the Z option didn't improve anything in CoreSight decoding other than removing the warning. I've never seen the zero timestamp issue on Juno, though. I thought that was on some Qualcomm device? I'm not getting the warning on this test anyway.
The problem is that timeless mode assumes per-thread mode, and in per-thread mode there is a separate buffer per thread, so the CoreSight channel IDs are ignored. In system-wide mode the channel ID is needed to know which CPU the trace came from; if that information is thrown away, not much works correctly.
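To make the distinction concrete, a minimal sketch of the two recording modes (the workloads are arbitrary examples):

  # System-wide: trace from every CPU lands in per-CPU AUX buffers, and the
  # CoreSight trace/channel ID is what maps decoded trace back to a CPU.
  perf record -e cs_etm// -a -- sleep 1

  # Per-thread: each traced thread has its own buffer, so decoding can get
  # away with ignoring the channel ID.
  perf record -e cs_etm// --per-thread -- ls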
I plan to overhaul the whole decoder and remove all the assumptions about per-thread and timeless mode. It would be better if they were completely separate concepts.
Signed-off-by: James Clark <james.clark@arm.com>
 tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/test_arm_coresight.sh b/tools/perf/tests/shell/test_arm_coresight.sh
index e4cb4f1806ff..daad786cf48d 100755
--- a/tools/perf/tests/shell/test_arm_coresight.sh
+++ b/tools/perf/tests/shell/test_arm_coresight.sh
@@ -70,7 +70,7 @@ perf_report_instruction_samples() {
 	#   68.12%  touch    libc-2.27.so   [.] _dl_addr
 	#    5.80%  touch    libc-2.27.so   [.] getenv
 	#    4.35%  touch    ld-2.27.so     [.] _dl_fixup
-	perf report --itrace=i1000i --stdio -i ${perfdata} 2>&1 | \
+	perf report --itrace=i20i --stdio -i ${perfdata} 2>&1 | \
 		egrep " +[0-9]+.[0-9]+% +$1" > /dev/null 2>&1
So here I suspect that changing to "--itrace=i20i" allows the test to pass on the Juno board. Could you confirm this?
On Juno:
./perf record -e cs_etm// -a -- ls
With interval 20, 23 instruction samples are generated:
./perf report --stdio --itrace=i20i | egrep " +[0-9]+.[0-9]+% +perf " | wc -l
23
With interval 1000, 0 are generated:
./perf report --stdio --itrace=i1000i | egrep " +[0-9]+.[0-9]+% +perf " | wc -l
Error:
The perf.data data has no samples!
0
I think the issue is that ls is quite quick to run, so not much trace is generated for the perf process, and whether any samples come out just depends on the scheduling, which is slightly different on Juno. I don't think it's a bug. On N1SDP only 134 samples are generated with i1000i, so a random run could probably end up generating 0 there too.
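A hypothetical way to double-check that reasoning is to trace a longer-running workload, where even the coarse i1000i period should still produce samples for the perf process; dd below is just an arbitrary example workload:

  # More trace gathered -> more chances for the 1000-instruction period to fire.
  perf record -e cs_etm// -a -- dd if=/dev/zero of=/dev/null bs=1M count=500
  perf report --stdio --itrace=i1000i | egrep " +[0-9]+.[0-9]+% +perf " | wc -l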
Thanks, Leo