Currently, the __is_lm_address() check just masks out the top 12 bits of the address, but if they are 0, it still yields a true result. This has the side effect that virt_addr_valid() returns true even for invalid virtual addresses (e.g. 0x0).

Fix the detection by checking that it's actually a kernel address starting at PAGE_OFFSET.
Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
Cc: stable@vger.kernel.org # 5.4.x
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/include/asm/memory.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 18fce223b67b..99d7e1494aaa 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -247,9 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 
 /*
- * The linear kernel range starts at the bottom of the virtual address space.
+ * Check whether an arbitrary address is within the linear map, which
+ * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
+ * kernel's TTBR1 address range.
  */
-#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
+#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
On Tue, Jan 26, 2021 at 01:40:56PM +0000, Vincenzo Frascino wrote:
> Currently, the __is_lm_address() check just masks out the top 12 bits of the address, but if they are 0, it still yields a true result. This has the side effect that virt_addr_valid() returns true even for invalid virtual addresses (e.g. 0x0).
>
> Fix the detection by checking that it's actually a kernel address starting at PAGE_OFFSET.
>
> Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
> Cc: stable@vger.kernel.org # 5.4.x
Not sure what happened with the Fixes tag but that's definitely not what it fixes. The above is a 5.11 commit that preserves the semantics of an older commit. So it should be:
Fixes: 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using __is_lm_address()")
The above also had a fix for another commit but no need to add two entries, we just fix the original fix: 14c127c957c1 ("arm64: mm: Flip kernel VA space").
Anyway, no need to repost, I can update the fixes tag myself.
In terms of stable backports, it may be cleaner to backport 7bc1a0f9e176 ("arm64: mm: use single quantity to represent the PA to VA translation") which has a Fixes tag already but never made it to -stable. On top of this, we can backport Ard's latest f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations"). I just tried these locally and the conflicts were fairly trivial.
On 1/26/21 4:36 PM, Catalin Marinas wrote:
> On Tue, Jan 26, 2021 at 01:40:56PM +0000, Vincenzo Frascino wrote:
>> Currently, the __is_lm_address() check just masks out the top 12 bits of the address, but if they are 0, it still yields a true result. This has the side effect that virt_addr_valid() returns true even for invalid virtual addresses (e.g. 0x0).
>>
>> Fix the detection by checking that it's actually a kernel address starting at PAGE_OFFSET.
>>
>> Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
>> Cc: stable@vger.kernel.org # 5.4.x
>
> Not sure what happened with the Fixes tag but that's definitely not what it fixes. The above is a 5.11 commit that preserves the semantics of an older commit. So it should be:
>
> Fixes: 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using __is_lm_address()")
Yes, that is correct. I moved back the release this applies to, but I suppose I forgot to update the Fixes tag.
...
> Anyway, no need to repost, I can update the fixes tag myself.
Thank you for this.
> In terms of stable backports, it may be cleaner to backport 7bc1a0f9e176 ("arm64: mm: use single quantity to represent the PA to VA translation") which has a Fixes tag already but never made it to -stable. On top of this, we can backport Ard's latest f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations"). I just tried these locally and the conflicts were fairly trivial.
Ok, thank you for digging this out. I will give it a try tomorrow.
On Tue, 26 Jan 2021 13:40:56 +0000, Vincenzo Frascino wrote:
> Currently, the __is_lm_address() check just masks out the top 12 bits of the address, but if they are 0, it still yields a true result. This has the side effect that virt_addr_valid() returns true even for invalid virtual addresses (e.g. 0x0).
>
> Fix the detection by checking that it's actually a kernel address starting at PAGE_OFFSET.
Applied to arm64 (for-next/fixes), thanks!
[1/1] arm64: Fix kernel address detection of __is_lm_address()
      https://git.kernel.org/arm64/c/519ea6f1c82f