From: Kairui Song kasong@tencent.com
The order check and fallback loop updates the index value on every iteration, so the index can stay rounded down to a larger order's boundary even after the loop has shrunk the order.

This may result in inserting and returning a folio at the wrong index, causing data corruption with some userspace workloads [1].
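For example (with illustrative values, not taken from the report): on a fault at index 530 with orders 9 and 4 enabled, the first iteration computes round_down(530, 512) = 512 and the order-9 allocation fails; the next iteration then computes round_down(512, 16) = 512 instead of round_down(530, 16) = 528, so the order-4 folio ends up allocated and inserted at index 512 and does not even cover the faulting index.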
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy... [1]
Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Kairui Song kasong@tencent.com
---
Changes from V2:
- Introduce a temporary variable to improve code, no behavior change, generated code is identical.
- Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
Changes from V1:
- Remove unnecessary cleanup and simplify the commit message.
- Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
---
 mm/shmem.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index b50ce7dbc84a..e1dc2d8e939c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long suitable_orders = 0;
 	struct folio *folio = NULL;
+	pgoff_t aligned_index;
 	long pages;
 	int error, order;
@@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 		order = highest_order(suitable_orders);
 		while (suitable_orders) {
 			pages = 1UL << order;
-			index = round_down(index, pages);
-			folio = shmem_alloc_folio(gfp, order, info, index);
-			if (folio)
+			aligned_index = round_down(index, pages);
+			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
+			if (folio) {
+				index = aligned_index;
 				goto allocated;
+			}
 			if (pages == HPAGE_PMD_NR)
 				count_vm_event(THP_FILE_FALLBACK);
On 23.10.25 08:59, Kairui Song wrote:
From: Kairui Song kasong@tencent.com

[...]

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy... [1]
Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Kairui Song kasong@tencent.com

[...]
Was this found by code inspection or was there a report about this?
Acked-by: David Hildenbrand david@redhat.com
On 23.10.25 18:13, David Hildenbrand wrote:
On 23.10.25 08:59, Kairui Song wrote:
[...]
Was this found by code inspection or was there a report about this?
Answering my own question, the "Link:" above should be
Closes: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy...
On Fri, Oct 24, 2025 at 12:14 AM David Hildenbrand david@redhat.com wrote:
On 23.10.25 18:13, David Hildenbrand wrote:
On 23.10.25 08:59, Kairui Song wrote:
From: Kairui Song kasong@tencent.com
The order check and fallback loop is updating the index value on every loop, this will cause the index to be wrongly aligned by a larger value while the loop shrinks the order.
This may result in inserting and returning a folio of the wrong index and cause data corruption with some userspace workloads [1].
Cc: stable@vger.kernel.org Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy... [1] Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem") Signed-off-by: Kairui Song kasong@tencent.com
Changes from V2:
- Introduce a temporary variable to improve code, no behavior change, generated code is identical.
- Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
Changes from V1:
- Remove unnecessary cleanup and simplify the commit message.
- Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
mm/shmem.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c index b50ce7dbc84a..e1dc2d8e939c 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf, struct shmem_inode_info *info = SHMEM_I(inode); unsigned long suitable_orders = 0; struct folio *folio = NULL;
- pgoff_t aligned_index; long pages; int error, order;
@@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf, order = highest_order(suitable_orders); while (suitable_orders) { pages = 1UL << order;
index = round_down(index, pages);folio = shmem_alloc_folio(gfp, order, info, index);if (folio)
Was this found by code inspection or was there a report about this?
Answering my own question, the "Link:" above should be
Closes: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy...
Thanks for the review. It was reported and fixed by me, so I didn't include an extra Reported-by & Closes; I thought that would be kind of redundant. Do we need that? Maybe Andrew can help add it :) ?
Answering my own question, the "Link:" above should be
Closes: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy...
Thanks for the review. It was reported and fixed by me, so I didn't include an extra Reported-by & Closes; I thought that would be kind of redundant. Do we need that? Maybe Andrew can help add it :) ?
I also think it’s better to use “Closes:”. In that case, we might need to slightly adjust the commit log to remove "[1]" here.
" This may result in inserting and returning a folio of the wrong index and cause data corruption with some userspace workloads [1]."
With that, Reviewed-by: Barry Song baohua@kernel.org
Thanks Barry
On 23 Oct 2025, at 2:59, Kairui Song wrote:
[...]
Acked-by: Zi Yan ziy@nvidia.com
--
Best Regards,
Yan, Zi
On 2025/10/23 14:59, Kairui Song wrote:
[...]
LGTM. Thanks.

Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com
On Thu, Oct 23, 2025 at 02:59:13PM +0800, Kairui Song wrote:
[...]
Yikes... LGTM so:
Reviewed-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
See below for a small nit.
[...]
diff --git a/mm/shmem.c b/mm/shmem.c
index b50ce7dbc84a..e1dc2d8e939c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long suitable_orders = 0;
 	struct folio *folio = NULL;
+	pgoff_t aligned_index;
Nit, but can't we just declare this in the loop? That makes it even clearer that we don't reuse the value.
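Something like this, just as an untested sketch of that idea (not an actual posted patch), reusing the function's existing variables:

		while (suitable_orders) {
			/* scoped to one iteration, so it clearly cannot be reused */
			pgoff_t aligned_index;

			pages = 1UL << order;
			aligned_index = round_down(index, pages);
			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
			if (folio) {
				index = aligned_index;
				goto allocated;
			}
			/* ... fall back to the next lower order ... */
		}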
[...]