The patch titled
     Subject: arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     arm-memremap-dont-abuse-pfn_valid-to-ensure-presence-of-linear-map.patch
This patch should soon appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Mike Rapoport <rppt@linux.ibm.com>
Subject: arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map
The semantics of pfn_valid() is to check presence of the memory map for a PFN and not whether a PFN is covered by the linear map. The memory map may be present for NOMAP memory regions, but they won't be mapped in the linear mapping. Accessing such regions via __va() when they are memremap()'ed will cause a crash.
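For illustration, a minimal sketch (not code from this patch; arm64 helpers assumed, and the function name sketch_nomap_access() is made up) of how the two checks disagree for a MEMBLOCK_NOMAP range such as the UEFI/DMI tables:

	#include <linux/mm.h>
	#include <linux/printk.h>
	#include <linux/string.h>

	/* Illustrative only: 'phys' points into a MEMBLOCK_NOMAP region. */
	static void sketch_nomap_access(phys_addr_t phys, void *buf, size_t len)
	{
		unsigned long pfn = PHYS_PFN(phys);

		pr_info("pfn_valid=%d pfn_is_map_memory=%d\n",
			pfn_valid(pfn),			/* 1: memory map present    */
			pfn_is_map_memory(pfn));	/* 0: not in the linear map */

		if (pfn_valid(pfn))			/* wrong check for __va()   */
			memcpy(buf, __va(phys), len);	/* this is what faults      */
	}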
On v5.4.y the crash happens on qemu-arm with UEFI [1]:
<1>[ 0.084476] 8<--- cut here ---
<1>[ 0.084595] Unable to handle kernel paging request at virtual address dfb76000
<1>[ 0.084938] pgd = (ptrval)
<1>[ 0.085038] [dfb76000] *pgd=5f7fe801, *pte=00000000, *ppte=00000000
...
<4>[ 0.093923] [<c0ed6ce8>] (memcpy) from [<c16a06f8>] (dmi_setup+0x60/0x418)
<4>[ 0.094204] [<c16a06f8>] (dmi_setup) from [<c16a38d4>] (arm_dmi_init+0x8/0x10)
<4>[ 0.094408] [<c16a38d4>] (arm_dmi_init) from [<c0302e9c>] (do_one_initcall+0x50/0x228)
<4>[ 0.094619] [<c0302e9c>] (do_one_initcall) from [<c16011e4>] (kernel_init_freeable+0x15c/0x1f8)
<4>[ 0.094841] [<c16011e4>] (kernel_init_freeable) from [<c0f028cc>] (kernel_init+0x8/0x10c)
<4>[ 0.095057] [<c0f028cc>] (kernel_init) from [<c03010e8>] (ret_from_fork+0x14/0x2c)
On kernels v5.10.y and newer the same crash won't reproduce on ARM because commit b10d6bca8720 ("arch, drivers: replace for_each_membock() with for_each_mem_range()") changed the way memory regions are registered in the resource tree, but that merely covers up the problem.
On ARM64, memory resources are registered in yet another way, and there the issue of wrongly using pfn_valid() to ensure availability of the linear map is also covered up.
Implement arch_memremap_can_ram_remap() on ARM and ARM64 to prevent access to NOMAP regions via the linear mapping in memremap().
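To see why the hook helps, here is a paraphrased sketch of the generic MEMREMAP_WB path (modelled on try_ram_remap() in kernel/iomem.c; the _sketch name and exact details are illustrative, and without an arch override the hook simply returns true):

	#include <linux/io.h>
	#include <linux/mm.h>

	static void *try_ram_remap_sketch(resource_size_t offset, size_t size,
					  unsigned long flags)
	{
		unsigned long pfn = PHYS_PFN(offset);

		/* Only hand out a linear-map address when the arch agrees. */
		if (pfn_valid(pfn) && !PageHighMem(pfn_to_page(pfn)) &&
		    arch_memremap_can_ram_remap(offset, size, flags))
			return __va(offset);

		return NULL;	/* caller falls back to a real remap */
	}

With arch_memremap_can_ram_remap() reporting false for NOMAP ranges, memremap() no longer hands out an unmapped __va() pointer and instead creates a proper mapping.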
Link: https://lore.kernel.org/all/Yl65zxGgFzF1Okac@sirena.org.uk
Link: https://lkml.kernel.org/r/20220426060107.7618-1-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Will Deacon <will@kernel.org>
Cc: stable@vger.kernel.org [5.4+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 arch/arm/include/asm/io.h   |    3 +++
 arch/arm/mm/ioremap.c       |    8 ++++++++
 arch/arm64/include/asm/io.h |    4 ++++
 arch/arm64/mm/ioremap.c     |    8 ++++++++
 4 files changed, 23 insertions(+)
--- a/arch/arm64/include/asm/io.h~arm-memremap-dont-abuse-pfn_valid-to-ensure-presence-of-linear-map
+++ a/arch/arm64/include/asm/io.h
@@ -192,4 +192,8 @@ extern void __iomem *ioremap_cache(phys_
 extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
 
+extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+					unsigned long flags);
+#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
+
 #endif	/* __ASM_IO_H */
--- a/arch/arm64/mm/ioremap.c~arm-memremap-dont-abuse-pfn_valid-to-ensure-presence-of-linear-map
+++ a/arch/arm64/mm/ioremap.c
@@ -99,3 +99,11 @@ void __init early_ioremap_init(void)
 {
 	early_ioremap_setup();
 }
+
+bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+				 unsigned long flags)
+{
+	unsigned long pfn = PHYS_PFN(offset);
+
+	return pfn_is_map_memory(pfn);
+}
--- a/arch/arm/include/asm/io.h~arm-memremap-dont-abuse-pfn_valid-to-ensure-presence-of-linear-map
+++ a/arch/arm/include/asm/io.h
@@ -440,6 +440,9 @@ extern void pci_iounmap(struct pci_dev *
 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
 extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
+extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+					unsigned long flags);
+#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
 #endif
 
 /*
--- a/arch/arm/mm/ioremap.c~arm-memremap-dont-abuse-pfn_valid-to-ensure-presence-of-linear-map
+++ a/arch/arm/mm/ioremap.c
@@ -493,3 +493,11 @@ void __init early_ioremap_init(void)
 {
 	early_ioremap_setup();
 }
+
+bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+				 unsigned long flags)
+{
+	unsigned long pfn = PHYS_PFN(offset);
+
+	return memblock_is_map_memory(pfn);
+}
_
Patches currently in -mm which might be from rppt@linux.ibm.com are
arm-memremap-dont-abuse-pfn_valid-to-ensure-presence-of-linear-map.patch