The patch titled
     Subject: mm/vmalloc: fix vbq->free breakage
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-vmalloc-fix-vbq-free-breakage.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: "hailong.liu" <hailong.liu@oppo.com>
Subject: mm/vmalloc: fix vbq->free breakage
Date: Thu, 30 May 2024 17:31:08 +0800
The function xa_for_each() in _vm_unmap_aliases() loops through all vbs.  However, since commit 062eacf57ad9 ("mm: vmalloc: remove a global vmap_blocks xarray"), the vb found via the xarray may not be on the corresponding CPU's vmap_block_queue.  Consequently, purge_fragmented_block() might take the wrong vbq->lock to protect the free list, leading to vbq->free breakage.  (A standalone sketch of this mismatch follows the patch below.)
Link: https://lkml.kernel.org/r/20240530093108.4512-1-hailong.liu@oppo.com
Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
Signed-off-by: Hailong.Liu <liuhailong@oppo.com>
Reported-by: Guangye Yang <guangye.yang@mediatek.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Gao Xiang <xiang@kernel.org>
Cc: Guangye Yang <guangye.yang@mediatek.com>
Cc: liuhailong <liuhailong@oppo.com>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/vmalloc.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/mm/vmalloc.c~mm-vmalloc-fix-vbq-free-breakage
+++ a/mm/vmalloc.c
@@ -2830,10 +2830,9 @@ static void _vm_unmap_aliases(unsigned l
 	for_each_possible_cpu(cpu) {
 		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
 		struct vmap_block *vb;
-		unsigned long idx;

 		rcu_read_lock();
-		xa_for_each(&vbq->vmap_blocks, idx, vb) {
+		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 			spin_lock(&vb->lock);

 			/*
_
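The mismatch described in the changelog can be modeled in a few lines of
userspace C.  This is an illustrative sketch only, not kernel code:
NR_CPUS_MODEL, home_cpu and addr_to_xa_cpu() are made-up stand-ins for
num_possible_cpus(), the allocating CPU and addr_to_vb_xa().  Since
062eacf57ad9 the xarray that indexes a vb is chosen by hashing the
block's address across CPUs, while the block's free_list stays on the
queue of the CPU that allocated it, so the two can disagree:

#include <stdio.h>

#define NR_CPUS_MODEL	4
#define VMAP_BLOCK_SIZE	(4096UL * 64)	/* stand-in for the kernel constant */

/* loosely mirrors addr_to_vb_xa(): pick a per-CPU xarray by address hash */
static int addr_to_xa_cpu(unsigned long addr)
{
	return (addr / VMAP_BLOCK_SIZE) % NR_CPUS_MODEL;
}

int main(void)
{
	unsigned long addr = 5 * VMAP_BLOCK_SIZE;	/* hashes to CPU 1 */
	int home_cpu = 3;	/* CPU whose vbq->free actually holds the vb */
	int xa_cpu = addr_to_xa_cpu(addr);

	if (xa_cpu != home_cpu)
		printf("vb found via CPU %d's xarray, but its free_list is on "
		       "CPU %d's queue: taking vbq[%d].lock is the wrong lock\n",
		       xa_cpu, home_cpu, xa_cpu);
	return 0;
}

Locking the iterating CPU's vbq->lock therefore serializes against the
wrong queue, which is exactly the vbq->free corruption the changelog
describes.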
Patches currently in -mm which might be from hailong.liu@oppo.com are
mm-vmalloc-fix-vbq-free-breakage.patch
On Fri, May 31, 2024 at 4:12 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> The patch titled
>      Subject: mm/vmalloc: fix vbq->free breakage
> has been added to the -mm mm-hotfixes-unstable branch.
> [...]
> -		xa_for_each(&vbq->vmap_blocks, idx, vb) {
> +		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
No, this is wrong, as the fully used vbs' TLBs will be kept since they
are not on vbq->free.  I have sent patch v2 out.
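For illustration, the objection can be modeled in userspace (a minimal
sketch with made-up fields loosely mirroring struct vmap_block, not
kernel code): a fully utilized block is dropped from vbq->free, yet it
can still carry a dirty, not-yet-flushed range, so a walk of the free
list alone never reaches it.

#include <stdio.h>

/* illustrative model only -- not the kernel's struct vmap_block */
struct vb_model {
	unsigned long free;	/* allocatable space left in the block */
	unsigned long dirty;	/* unmapped pages awaiting a TLB flush */
	int on_free_list;	/* still linked on vbq->free? */
};

int main(void)
{
	/* fully used: no free space left, hence removed from vbq->free ... */
	struct vb_model vb = { .free = 0, .dirty = 32, .on_free_list = 0 };

	/* ... but its dirty range still needs a TLB flush */
	if (vb.dirty && !vb.on_free_list)
		printf("stale TLB: %lu dirty pages unreachable via vbq->free\n",
		       vb.dirty);
	return 0;
}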
On Fri, 31. May 08:51, Zhaoyang Huang wrote:
> On Fri, May 31, 2024 at 4:12 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> > [...]
> > -		xa_for_each(&vbq->vmap_blocks, idx, vb) {
> > +		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> No, this is wrong, as the fully used vbs' TLBs will be kept since they
> are not on vbq->free.  I have sent patch v2 out.
As in https://lore.kernel.org/linux-mm/877csxn6ls.ffs@tglx/, a vb on either the purge_list or the free_list may not be flushed in vm_unmap_aliases(); the vb's flush is deferred.

In fact, we don't necessarily need to flush here, and doing so could lead to flushing twice: once via the xarray walk and once via the purge_list.

So IMO looping with list_for_each_entry_rcu() is more reasonable.
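The double-flush concern can be sketched the same way (illustrative
only; merge() and the blk fields are made up, loosely following the
dirty_min/dirty_max tracking in struct vmap_block): if a block's dirty
range is merged into the pending flush window during the xarray walk
and again when the local purge_list is processed, the same range is
accounted twice.

#include <stdio.h>

struct blk { unsigned long dirty_min, dirty_max; int purged; };

/* widen the pending flush window to cover [lo, hi] */
static void merge(unsigned long *start, unsigned long *end,
		  unsigned long lo, unsigned long hi)
{
	if (lo < *start) *start = lo;
	if (hi > *end)   *end = hi;
}

int main(void)
{
	struct blk b = { .dirty_min = 0x1000, .dirty_max = 0x2000, .purged = 1 };
	unsigned long start = ~0UL, end = 0;
	int visits = 0;

	/* pass 1: the xarray walk sees the block */
	merge(&start, &end, b.dirty_min, b.dirty_max);
	visits++;

	/* pass 2: the same block, now on the purge_list, is seen again */
	if (b.purged) {
		merge(&start, &end, b.dirty_min, b.dirty_max);
		visits++;
	}

	printf("flush window [%#lx, %#lx], block visited %d times\n",
	       start, end, visits);
	return 0;
}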
--
Best Regards,
Hailong.
On 5/31/2024 8:51 AM, Zhaoyang Huang wrote:
> On Fri, May 31, 2024 at 4:12 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> > [...]
> > -		xa_for_each(&vbq->vmap_blocks, idx, vb) {
> > +		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> No, this is wrong, as the fully used vbs' TLBs will be kept since they
> are not on vbq->free.  I have sent patch v2 out.
My bad, I should have looked at the context for why xa_for_each() is used: https://lore.kernel.org/linux-mm/20230523140002.634591885@linutronix.de/

Waiting for Zhaoyang's patch.
Brs, Hailong.