On Wed, Apr 08, 2020 at 01:59:08PM +0200, Christoph Hellwig wrote:
> This allows unexporting map_vm_area and unmap_kernel_range, which are
> rather deep internals and should not be available to modules.
Even though I don't know how many use cases there are for building
zsmalloc as a module (I have heard of only one, and for a dubious
reason), this could affect existing users. Thus, please include a
concrete explanation in the patch to justify the change for when
complaints come in.
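
For readers less familiar with that code, this is roughly how the
CONFIG_ZSMALLOC_PGTABLE_MAPPING path depends on the two symbols being
unexported. It is an illustrative sketch only, not the exact
mm/zsmalloc.c code; the structure layout and helper names here are
simplified:

/*
 * Sketch only: a modular zsmalloc taking this path would fail to link
 * against map_vm_area()/unmap_kernel_range() once the exports are gone.
 */
#include <linux/mm.h>
#include <linux/vmalloc.h>

struct mapping_area {
	struct vm_struct *vm;	/* two-page kernel VA range reserved at init */
	char *vm_addr;
};

/* Map an object that spans two pages into the reserved VA range. */
static void *zs_pgtable_map(struct mapping_area *area,
			    struct page *pages[2])
{
	if (map_vm_area(area->vm, PAGE_KERNEL, pages))
		return NULL;
	area->vm_addr = area->vm->addr;
	return area->vm_addr;
}

/* Tear the mapping down again when the object is unmapped. */
static void zs_pgtable_unmap(struct mapping_area *area)
{
	unmap_kernel_range((unsigned long)area->vm_addr, 2 * PAGE_SIZE);
}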
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  mm/Kconfig   | 2 +-
>  mm/vmalloc.c | 2 --
>  2 files changed, 1 insertion(+), 3 deletions(-)
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 36949a9425b8..614cc786b519 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -702,7 +702,7 @@ config ZSMALLOC
>  
>  config ZSMALLOC_PGTABLE_MAPPING
>  	bool "Use page table mapping to access object in zsmalloc"
> -	depends on ZSMALLOC
> +	depends on ZSMALLOC=y
>  	help
>  	  By default, zsmalloc uses a copy-based object mapping method to
>  	  access allocations that span two pages. However, if a particular
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 3375f9508ef6..9183fc0d365a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2046,7 +2046,6 @@ void unmap_kernel_range(unsigned long addr, unsigned long size)
>  	vunmap_page_range(addr, end);
>  	flush_tlb_kernel_range(addr, end);
>  }
> -EXPORT_SYMBOL_GPL(unmap_kernel_range);
>  
>  int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
>  {
> @@ -2058,7 +2057,6 @@ int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
>  
>  	return err > 0 ? 0 : err;
>  }
> -EXPORT_SYMBOL_GPL(map_vm_area);
>  
>  static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
>  	struct vmap_area *va, unsigned long flags, const void *caller)
> -- 
> 2.25.1
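
For completeness, the copy-based default that the Kconfig help text
above refers to, and that a modular zsmalloc keeps using, looks roughly
like this. Again a simplified sketch under the assumption of a
preallocated per-mapping buffer (called vm_buf here), not the exact
upstream code:

/*
 * Simplified sketch of the default copy-based mapping: the two halves
 * of an object that straddles a page boundary are copied into a
 * preallocated buffer instead of being mapped through page tables,
 * so no vmalloc internals are required.
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static void *zs_copy_map(char *vm_buf, struct page *pages[2],
			 int off, int size)
{
	size_t first = PAGE_SIZE - off;	/* bytes that live in the first page */
	void *src;

	src = kmap_atomic(pages[0]);
	memcpy(vm_buf, (char *)src + off, first);
	kunmap_atomic(src);

	src = kmap_atomic(pages[1]);
	memcpy(vm_buf + first, src, size - first);
	kunmap_atomic(src);

	return vm_buf;
}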