On Fri, May 06, 2022 at 04:50:41PM +0200, Cornelia Huck wrote:
> I'm currently trying to run the MTE selftests on the FVP simulator (Base Model)[1], mainly to verify things are sane on the host before wiring up the KVM support in QEMU. However, I'm seeing some failures (the non-mte tests seemed all fine):
> Are the MTE tests supposed to work on the FVP model? Something broken in my config? Anything I can debug?
I would expect them to work; they seemed happy when I was doing the async mode support, IIRC, and in a quick spin with -next in qemu everything seems fine. I'm travelling, so I don't have the environment for models to hand right now.
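As a first sanity check it may be worth confirming that the kernel on the model actually advertises MTE to userspace before digging into individual tests; a minimal sketch (assuming a standard arm64 kernel, with the config check only working if IKCONFIG_PROC is enabled):

    # "mte" should appear in the Features line once HWCAP2_MTE is set
    grep -m1 Features /proc/cpuinfo

    # the kernel side also needs CONFIG_ARM64_MTE=y
    zcat /proc/config.gz | grep ARM64_MTE

If "mte" is missing there, the problem is in the setup rather than in the selftests themselves.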
> [1] Command line:
>
>     "$MODEL" \
>       -C cache_state_modelled=0 \
>       -C bp.refcounter.non_arch_start_at_default=1 \
>       -C bp.secure_memory=false \
>       -C cluster0.has_arm_v8-1=1 \
>       -C cluster0.has_arm_v8-2=1 \
>       -C cluster0.has_arm_v8-3=1 \
>       -C cluster0.has_arm_v8-4=1 \
>       -C cluster0.has_arm_v8-5=1 \
>       -C cluster0.has_amu=1 \
>       -C cluster0.NUM_CORES=4 \
>       -C cluster0.memory_tagging_support_level=2 \
>       -a "cluster0.*=$AXF" \
>
> where $AXF contains a kernel at v5.18-rc5-16-g107c948d1d3e[2] and an initrd built by mbuto[3] from that level with a slightly tweaked "kselftests" profile (adding /dev/shm).
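For reference, the tweak mentioned above is needed because some of the MTE selftests expect to be able to create files under /dev/shm; in a minimal initramfs that roughly amounts to (an illustrative sketch, not the exact mbuto profile change):

    # provide a tmpfs-backed /dev/shm for the tests that map shared files
    mkdir -p /dev/shm
    mount -t tmpfs -o mode=1777 tmpfs /dev/shm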
What are you using for EL3 with the model? Both TF-A and boot-wrapper are in regular use; TF-A gets *way* more testing than boot-wrapper, which is mostly used by individual developers.
On Fri, May 06 2022, Mark Brown <broonie@kernel.org> wrote:
> On Fri, May 06, 2022 at 04:50:41PM +0200, Cornelia Huck wrote:
>> I'm currently trying to run the MTE selftests on the FVP simulator (Base Model)[1], mainly to verify things are sane on the host before wiring up the KVM support in QEMU. However, I'm seeing some failures (the non-mte tests seemed all fine):
>> Are the MTE tests supposed to work on the FVP model? Something broken in my config? Anything I can debug?
> I would expect them to work; they seemed happy when I was doing the async mode support, IIRC, and in a quick spin with -next in qemu everything seems fine. I'm travelling, so I don't have the environment for models to hand right now.
Thanks; I think that points to some setup/config problem on my side, then :/ (I ran the selftests under QEMU's tcg emulation, and while it looks better, I still get timeouts for check_gcr_el1_cswitch and check_user_mem.)
>> [1] Command line:
>>
>>     "$MODEL" \
>>       -C cache_state_modelled=0 \
>>       -C bp.refcounter.non_arch_start_at_default=1 \
>>       -C bp.secure_memory=false \
>>       -C cluster0.has_arm_v8-1=1 \
>>       -C cluster0.has_arm_v8-2=1 \
>>       -C cluster0.has_arm_v8-3=1 \
>>       -C cluster0.has_arm_v8-4=1 \
>>       -C cluster0.has_arm_v8-5=1 \
>>       -C cluster0.has_amu=1 \
>>       -C cluster0.NUM_CORES=4 \
>>       -C cluster0.memory_tagging_support_level=2 \
>>       -a "cluster0.*=$AXF" \
>>
>> where $AXF contains a kernel at v5.18-rc5-16-g107c948d1d3e[2] and an initrd built by mbuto[3] from that level with a slightly tweaked "kselftests" profile (adding /dev/shm).
> What are you using for EL3 with the model? Both TF-A and boot-wrapper are in regular use; TF-A gets *way* more testing than boot-wrapper, which is mostly used by individual developers.
I'm building the .axf via boot-wrapper-aarch64 (enabling psci and gicv3, if that matters). I haven't tried to make use of TF-A yet beyond the dtb (I'm still in the process of getting familiar with the arm64 world, so I'm currently starting out with the setups that others had shared with me).
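For comparison, a TF-A based run of the Base Platform model usually loads BL1 and a FIP image through the flashloader parameters instead of a single .axf; a rough sketch (parameter names are the usual Base Platform FVP ones, the bl1.bin/fip.bin paths are placeholders for a TF-A build):

    # plus the cluster/bp options from the command line quoted above
    "$MODEL" \
        -C bp.secureflashloader.fname=bl1.bin \
        -C bp.flashloader0.fname=fip.bin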
On Mon, May 09 2022, Cornelia Huck <cohuck@redhat.com> wrote:
> On Fri, May 06 2022, Mark Brown <broonie@kernel.org> wrote:
>> On Fri, May 06, 2022 at 04:50:41PM +0200, Cornelia Huck wrote:
>>> I'm currently trying to run the MTE selftests on the FVP simulator (Base Model)[1], mainly to verify things are sane on the host before wiring up the KVM support in QEMU. However, I'm seeing some failures (the non-mte tests seemed all fine):
>>> Are the MTE tests supposed to work on the FVP model? Something broken in my config? Anything I can debug?
>> I would expect them to work; they seemed happy when I was doing the async mode support, IIRC, and in a quick spin with -next in qemu everything seems fine. I'm travelling, so I don't have the environment for models to hand right now.
> Thanks; I think that points to some setup/config problem on my side, then :/ (I ran the selftests under QEMU's tcg emulation, and while it looks better, I still get timeouts for check_gcr_el1_cswitch and check_user_mem.)
...so these two tests are simply very slow; if I run them directly, they take longer than 45s, but eventually finish. So all seems good (in a slow way) on QEMU + tcg.
On the simulator, running check_gcr_el1_cswitch directly finishes successfully after several minutes as well; however, I get all the other failures in tests that I reported in my first mail even when I run them directly.
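For the slow-but-passing cases, the 45s ceiling is the default kselftest per-test timeout; two ways around it (a sketch, assuming the in-tree kselftest runner, run from the kernel tree):

    # either run the binary directly, with no runner-imposed timeout
    tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch

    # or raise the per-directory timeout that the runner reads from a
    # "settings" file next to the tests (300 is just an illustrative value)
    echo "timeout=300" > tools/testing/selftests/arm64/mte/settings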
On Mon, May 09, 2022 at 11:59:59AM +0200, Cornelia Huck wrote:
> On Fri, May 06 2022, Mark Brown <broonie@kernel.org> wrote:
>> I would expect them to work; they seemed happy when I was doing the async mode support, IIRC, and in a quick spin with -next in qemu everything seems fine. I'm travelling, so I don't have the environment for models to hand right now.
> Thanks; I think that points to some setup/config problem on my side, then :/ (I ran the selftests under QEMU's tcg emulation, and while it looks better, I still get timeouts for check_gcr_el1_cswitch and check_user_mem.)
That might just be an actual timeout depending on the performance of the host system.
>>> where $AXF contains a kernel at v5.18-rc5-16-g107c948d1d3e[2] and an initrd built by mbuto[3] from that level with a slightly tweaked "kselftests" profile (adding /dev/shm).
>> What are you using for EL3 with the model? Both TF-A and boot-wrapper are in regular use; TF-A gets *way* more testing than boot-wrapper, which is mostly used by individual developers.
> I'm building the .axf via boot-wrapper-aarch64 (enabling psci and gicv3, if that matters). I haven't tried to make use of TF-A yet beyond the dtb (I'm still in the process of getting familiar with the arm64 world, so I'm currently starting out with the setups that others had shared with me).
I'm now back with the models, and it turns out that while qemu is happy I can reproduce what you're seeing with the model, at least as far back as v5.15, which suggests it's likely to be more operator error than a bug. Trying to figure it out now.
On Mon, May 09 2022, Mark Brown <broonie@kernel.org> wrote:
> On Mon, May 09, 2022 at 11:59:59AM +0200, Cornelia Huck wrote:
>> On Fri, May 06 2022, Mark Brown <broonie@kernel.org> wrote:
>>> I would expect them to work; they seemed happy when I was doing the async mode support, IIRC, and in a quick spin with -next in qemu everything seems fine. I'm travelling, so I don't have the environment for models to hand right now.
>> Thanks; I think that points to some setup/config problem on my side, then :/ (I ran the selftests under QEMU's tcg emulation, and while it looks better, I still get timeouts for check_gcr_el1_cswitch and check_user_mem.)
> That might just be an actual timeout depending on the performance of the host system.
Our mails may have crossed mid-air; the tests finish for me eventually.
>>>> where $AXF contains a kernel at v5.18-rc5-16-g107c948d1d3e[2] and an initrd built by mbuto[3] from that level with a slightly tweaked "kselftests" profile (adding /dev/shm).
>>> What are you using for EL3 with the model? Both TF-A and boot-wrapper are in regular use; TF-A gets *way* more testing than boot-wrapper, which is mostly used by individual developers.
>> I'm building the .axf via boot-wrapper-aarch64 (enabling psci and gicv3, if that matters). I haven't tried to make use of TF-A yet beyond the dtb (I'm still in the process of getting familiar with the arm64 world, so I'm currently starting out with the setups that others had shared with me).
> I'm now back with the models, and it turns out that while qemu is happy I can reproduce what you're seeing with the model, at least as far back as v5.15, which suggests it's likely to be more operator error than a bug. Trying to figure it out now.
Ok, thanks for looking.