On Thu, Oct 26, 2023 at 05:39:11PM +0200, Ard Biesheuvel wrote:
On Thu, 26 Oct 2023 at 17:30, Mark Rutland <mark.rutland@arm.com> wrote:
On Thu, Oct 26, 2023 at 08:11:26PM +0530, Naresh Kamboju wrote:
The following kernel crash was noticed on qemu-arm64 while running the LTP syscalls set_robust_list test case on Linux next 6.6.0-rc7-next-20231026 and 6.6.0-rc7-next-20231025.
BAD:  next-20231025
GOOD: next-20231024
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Log:
<1>[  203.119139] Unable to handle kernel unknown 43 at virtual address 0001ffff9e2e7d78
<1>[  203.119838] Mem abort info:
<1>[  203.120064]   ESR = 0x000000009793002b
<1>[  203.121040]   EC = 0x25: DABT (current EL), IL = 32 bits
set_robust_list01    1  TPASS  :  set_robust_list: retval = -1 (expected -1), errno = 22 (expected 22)
set_robust_list01    2  TPASS  :  set_robust_list: retval = 0 (expected 0), errno = 0 (expected 0)
<1>[  203.124496]   SET = 0, FnV = 0
<1>[  203.124778]   EA = 0, S1PTW = 0
<1>[  203.125029]   FSC = 0x2b: unknown 43
It looks like this is fallout from the LPA2 enablement.
According to the latest ARM ARM (ARM DDI 0487J.a, page D19-6475), that "unknown 43" (0x2b / 0b101011) is the DFSC for a level -1 translation fault:
0b101011 When FEAT_LPA2 is implemented: Translation fault, level -1.
It's triggered here by an LDTR in a get_user() on a bogus userspace address. The exception is expected, and it's supposed to be handled via the exception fixups, but the LPA2 patches didn't update the fault_info table entries for all the level -1 faults, and so those all get handled by do_bad() and don't call fixup_exception(), causing them to be fatal.
It should be relatively simple to update the fault_info table for the level -1 faults, but given the other issues we're seeing I think it's probably worth dropping the LPA2 patches for the moment.
Thanks for the analysis Mark.
I agree that this should not be difficult to fix, but given the other CI problems and identified loose ends, I am not going to object to dropping this partially or entirely at this point. I'm sure everybody will be thrilled to go over those 60 patches again after I rebase them onto v6.7-rc1 :-)
FWIW, I'm more than happy to try; the issue has largely been finding the time. Hopefully that'll be a bit easier after LPC!
Mark.