Currently, the __is_lm_address() check just masks out the top 12 bits of the address, but if they are 0, it still yields a true result. As a side effect, virt_addr_valid() returns true even for invalid virtual addresses (e.g. 0x0).
Fix the detection by checking that it's actually a kernel address starting at PAGE_OFFSET.
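To illustrate the failure mode, here is a minimal userspace sketch (not part of this patch) contrasting the two checks. The PAGE_OFFSET/PAGE_END values below assume a 52-bit VA configuration with a 48-bit minimum, and is_lm_address_old()/is_lm_address_new() are hypothetical stand-ins for the kernel macro:

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  /* Assumed 52-bit VA layout: the linear map is [PAGE_OFFSET, PAGE_END). */
  #define PAGE_OFFSET 0xfff0000000000000ULL
  #define PAGE_END    0xffff800000000000ULL

  /* Old check: masking clears the top 12 bits, so any address whose
   * top bits are already 0 (e.g. NULL) passes the comparison. */
  static int is_lm_address_old(uint64_t addr)
  {
          return (addr & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
  }

  /* New check: the XOR only yields a small offset when the top bits
   * actually match PAGE_OFFSET, so low addresses are rejected. */
  static int is_lm_address_new(uint64_t addr)
  {
          return (addr ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
  }

  int main(void)
  {
          uint64_t addrs[] = { 0x0, 0x1000, PAGE_OFFSET + 0x1000, PAGE_END };

          for (size_t i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++)
                  printf("%#018" PRIx64 ": old=%d new=%d\n", addrs[i],
                         is_lm_address_old(addrs[i]),
                         is_lm_address_new(addrs[i]));
          return 0;
  }

With the assumed layout this prints old=1 for both 0x0 and 0x1000 even though neither lies in the linear map, while the XOR-based check rejects them and still accepts PAGE_OFFSET + 0x1000. The XOR form costs the same single instruction as the AND but additionally requires the top bits to match PAGE_OFFSET.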
Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
Cc: <stable@vger.kernel.org> # 5.4.x
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/include/asm/memory.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 18fce223b67b..99d7e1494aaa 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -247,9 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 
 
 /*
- * The linear kernel range starts at the bottom of the virtual address space.
+ * Check whether an arbitrary address is within the linear map, which
+ * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
+ * kernel's TTBR1 address range.
  */
-#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
+#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)