Buffer bouncing is needed only when memory exists above the lowmem region, i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >= max_pfn) was inverted and prevented bouncing when it could actually be required.
Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled on 32-bit ARM where not all memory is permanently mapped into the kernel’s lowmem region.
Branch-Specific Note:
This fix is specific to the 6.6.y branch. In the upstream (“tip”) kernel, bounce buffer support for highmem pages was removed entirely after kernel version 6.12, so this modification is neither possible nor relevant there.
Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
Cc: stable@vger.kernel.org
Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
---
 block/blk.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk.h b/block/blk.h
index 67915b04b3c1..f8a1d64be5a2 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -383,7 +383,7 @@ static inline bool blk_queue_may_bounce(struct request_queue *q)
 {
 	return IS_ENABLED(CONFIG_BOUNCE) &&
 		q->limits.bounce == BLK_BOUNCE_HIGH &&
-		max_low_pfn >= max_pfn;
+		max_low_pfn < max_pfn;
 }
 
 static inline struct bio *blk_queue_bounce(struct bio *bio,
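Spelled out, the intent of the corrected check is the following (a sketch with an illustrative helper name, not the literal blk.h source):

/*
 * max_low_pfn is the highest page frame number that is permanently
 * mapped into the kernel's lowmem region; max_pfn is the highest
 * page frame number present in the system.  Pages that may need
 * bouncing exist only when some PFNs lie above the lowmem boundary.
 */
static inline bool queue_has_highmem_to_bounce(struct request_queue *q)
{
	return IS_ENABLED(CONFIG_BOUNCE) &&
	       q->limits.bounce == BLK_BOUNCE_HIGH &&
	       max_low_pfn < max_pfn;	/* pages exist above lowmem */
}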
On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
Buffer bouncing is needed only when memory exists above the lowmem region, i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >= max_pfn) was inverted and prevented bouncing when it could actually be required.
Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled on 32-bit ARM where not all memory is permanently mapped into the kernel’s lowmem region.
Branch-Specific Note:
This fix is specific to the 6.6.y branch. In the upstream (“tip”) kernel, bounce buffer support for highmem pages was removed entirely after kernel version 6.12, so this modification is neither possible nor relevant there.
Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
Cc: stable@vger.kernel.org
Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
Why do you say this is only for 6.6.y, yet your Fixes: line is older than that?
And why wasn't this ever found or noticed before?
Also, why can't we just remove all of the bounce buffering code in this older kernel tree? What is wrong with doing that instead?
And finally, how was this tested?
thanks,
greg k-h
On 8/14/2025 2:33 PM, Greg KH wrote:
On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
Buffer bouncing is needed only when memory exists above the lowmem region, i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >= max_pfn) was inverted and prevented bouncing when it could actually be required.
Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled on 32-bit ARM where not all memory is permanently mapped into the kernel’s lowmem region.
Branch-Specific Note:
This fix is specific to the 6.6.y branch. In the upstream (“tip”) kernel, bounce buffer support for highmem pages was removed entirely after kernel version 6.12, so this modification is neither possible nor relevant there.
Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
Cc: stable@vger.kernel.org
Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
Why do you say this is only for 6.6.y, yet your Fixes: line is older than that?
[Hardeep Sharma]::
Yes, the original commit was merged in kernel 5.13-rc1, as indicated by the Fixes: line. However, we are currently working with kernel 6.6, where we encountered the issue. While it could be merged into 6.12 and then backported to earlier versions, our focus is on addressing it in 6.6.y, where the problem was observed.
And why wasn't this ever found or noticed before?
[Hardeep Sharma] ::
This issue likely remained unnoticed because the bounce buffering logic is only triggered under specific hardware and configuration conditions, primarily on 32-bit ARM systems with CONFIG_HIGHMEM enabled and devices requiring DMA from lowmem. Many platforms either do not use highmem or have hardware that does not require bounce buffering, so the bug did not manifest widely.
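To make that concrete, here is an illustrative evaluation of both checks (the numbers describe a hypothetical 2GB board with a 3GB/1GB split, not a measured device):

/*
 * Hypothetical 32-bit ARM board: 2GB DDR, 3GB/1GB user/kernel
 * split, 4KB pages.  Only roughly the first ~760MB of RAM can be
 * permanently mapped, so max_low_pfn < max_pfn, and:
 *
 *   old check: max_low_pfn >= max_pfn  -> false -> bouncing skipped
 *   new check: max_low_pfn <  max_pfn  -> true  -> bouncing allowed
 *
 * With no highmem (all RAM fits in lowmem), max_low_pfn == max_pfn:
 * the old check enabled bouncing exactly when there was nothing to
 * bounce, and the new check correctly disables it.
 */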
Also, why can't we just remove all of the bounce buffering code in this older kernel tree? What is wrong with doing that instead?
[Hardeep Sharma]::
It's too intrusive: I'd need to backport 40+ dependency patches, and I'm unsure about the instability this might introduce in the block layer on kernel 6.6. Plus, we don't know whether it will work reliably on 32-bit with 1GB+ DDR and highmem enabled. So I'd prefer to push just this single tested patch to kernel 6.6 and older affected versions.
Removing bounce buffering code from older kernel trees is not feasible for all use cases. Some legacy platforms and drivers still rely on bounce buffering to support DMA operations with highmem pages, especially on 32-bit systems.
And finally, how was this tested?
[Hardeep Sharma]:
The patch was tested on a 32-bit ARM platform with CONFIG_HIGHMEM enabled and a storage device requiring DMA from lowmem.
thanks,
greg k-h
On Thu, Aug 14, 2025 at 04:24:25PM +0530, Hardeep Sharma wrote:
On 8/14/2025 2:33 PM, Greg KH wrote:
On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
Buffer bouncing is needed only when memory exists above the lowmem region, i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >= max_pfn) was inverted and prevented bouncing when it could actually be required.
Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled on 32-bit ARM where not all memory is permanently mapped into the kernel’s lowmem region.
Branch-Specific Note:
This fix is specific to the 6.6.y branch. In the upstream (“tip”) kernel, bounce buffer support for highmem pages was removed entirely after kernel version 6.12, so this modification is neither possible nor relevant there.
Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
Cc: stable@vger.kernel.org
Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
Why do you say this is only for 6.6.y, yet your Fixes: line is older than that?
Yes, the original commit was merged in kernel 5.13-rc1, as indicated by the Fixes: line. However, we are currently working with kernel 6.6, where we encountered the issue. While it could be merged into 6.12 and then backported to earlier versions, our focus is on addressing it in 6.6.y, where the problem was observed.
For obvious reasons, we can not take a patch only for one older kernel and not a newer (or the older ones if possible), otherwise you will have a regression when you move forward to the new version as you will be doing eventually.
So for that reason alone, we can not take this patch, NOR should you want us to.
And why wasn't this ever found or noticed before?
[Hardeep Sharma] ::
Odd quoting, please fix your email client :)
This issue likely remained unnoticed because the bounce buffering logic is only triggered under specific hardware and configuration conditions, primarily on 32-bit ARM systems with CONFIG_HIGHMEM enabled and devices requiring DMA from lowmem. Many platforms either do not use highmem or have hardware that does not require bounce buffering, so the bug did not manifest widely.
So no one has hit this on any 5.15 or newer devices? I find that really hard to believe given the number of those devices in the world. So what is unique about your platform that you are hitting this and no one else is?
Also, why can't we just remove all of the bounce buffering code in this older kernel tree? What is wrong with doing that instead?
It's too intrusive: I'd need to backport 40+ dependency patches, and I'm unsure about the instability this might introduce in the block layer on kernel 6.6. Plus, we don't know whether it will work reliably on 32-bit with 1GB+ DDR and highmem enabled. So I'd prefer to push just this single tested patch to kernel 6.6 and older affected versions.
Whenever we take one-off patches, 90% of the time it causes problems, both with the fact that the patch is usually buggy, AND the fact that it now will cause merge conflicts going forward. 40+ patches is nothing in stable patch acceptance, please try that first as you want us to be able to maintain these kernels well for your devices over time, right?
So please do that first. Only after proof that that would not work should you even consider a one-off patch.
Removing bounce buffering code from older kernel trees is not feasible for all use cases. Some legacy platforms and drivers still rely on bounce buffering to support DMA operations with highmem pages, especially on 32-bit systems.
Then how was it removed in newer kernels at all? Did we just drop support for that hardware? What happens when you move to a newer kernel on your hardware, does it stop working? Based on what I have seen with some Android devices, they seem to work just fine on Linus's tree today, so what is unique about your platform that is going to break and not work anymore?
And finally, how was this tested?
[Hardeep Sharma]:
The patch was tested on a 32-bit ARM platform with CONFIG_HIGHMEM enabled and a storage device requiring DMA from lowmem.
So this is for a 32bit ARM system only? Not 64bit? If so, why is this also being submitted to the Android kernel tree which does not support 32bit ARM at all?
And again, does your system not work properly on 6.16? If not, why not fix that first?
thanks,
greg k-h
This change to blk_queue_may_bounce() in block/blk.h will only affect systems with the following configuration:
1. 32-bit ARM architecture
2. Physical DDR memory greater than 1GB
3. CONFIG_HIGHMEM enabled
4. Virtual memory split of 1GB for kernel and 3GB for userspace
Under these conditions, the logic for buffer bouncing is relevant because the kernel may need to handle memory above the low memory threshold, which is typical for highmem-enabled 32-bit systems with large RAM. On other architectures or configurations, this code path will not be exercised.
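One quick way to confirm that a given board falls into this class is to look at the HighTotal line of /proc/meminfo (a user-space sketch; it assumes a 32-bit kernel, where this field is reported):

/* Report whether any pages live above lowmem on this machine. */
#include <stdio.h>

int main(void)
{
	char line[128];
	unsigned long kb;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "HighTotal: %lu kB", &kb) == 1) {
			printf("HighTotal = %lu kB -> highmem %s\n",
			       kb, kb ? "present" : "absent");
			break;
		}
	}
	fclose(f);
	return 0;
}

If HighTotal is non-zero, pages exist above lowmem and the corrected check in blk_queue_may_bounce() can evaluate true.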
A: http://en.wikipedia.org/wiki/Top_post
Q: Where do I find info about this thing called top-posting?
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

A: No.
Q: Should I include quotations after my reply?
http://daringfireball.net/2007/07/on_top
On Thu, Aug 14, 2025 at 06:36:29PM +0530, Hardeep Sharma wrote:
This change to blk_queue_may_bounce() in block/blk.h will only affect systems with the following configuration:
- 32-bit ARM architecture
- Physical DDR memory greater than 1GB
- CONFIG_HIGHMEM enabled
- Virtual memory split of 1GB for kernel and 3GB for userspace
Under these conditions, the logic for buffer bouncing is relevant because the kernel may need to handle memory above the low memory threshold, which is typical for highmem-enabled 32-bit systems with large RAM. On other architectures or configurations, this code path will not be exercised.
You did not answer most of the questions I asked for some reason :(
This change to blk_queue_may_bounce() in block/blk.h will only affect systems with the following configuration:
1. 32-bit ARM architecture
2. Physical DDR memory greater than or equal to 1GB (i.e., larger than the lowmem region)
3. CONFIG_HIGHMEM enabled
4. Virtual memory split of 1GB for kernel and 3GB for userspace, or any configuration where not all physical addresses can be mapped into the lowmem region
Under these conditions, the logic for buffer bouncing is relevant because the kernel may need to handle memory above the low memory threshold, which is typical for highmem-enabled 32-bit systems with large RAM. On other architectures or configurations, this code path will not be exercised.
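To spell out the "cannot map all physical addresses" case (illustrative arithmetic only; the exact lowmem size depends on vmalloc, fixmap and similar reservations):

/*
 * With PAGE_OFFSET = 0xC0000000 (3GB/1GB split) the kernel owns
 * 1GB of virtual space, of which only on the order of ~760MB can
 * belong to the linear map.  With 4KB pages and RAM starting at
 * phys_base:
 *
 *   max_low_pfn ~= (phys_base + lowmem_size) >> PAGE_SHIFT
 *   max_pfn     ~= (phys_base + ddr_size)    >> PAGE_SHIFT
 *
 * Any DDR beyond lowmem_size therefore yields max_low_pfn < max_pfn,
 * which is precisely the condition the corrected check tests.
 */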