In the parse_adv_monitor_pattern() function, the value of the 'length'
variable is currently limited to HCI_MAX_EXT_AD_LENGTH (251), while the
size of the 'value' array in the mgmt_adv_pattern structure is 31.
If 'patterns[i].length' is set from user space and exceeds 31, the
'patterns[i].value' array is accessed out of bounds when it is copied.
Increasing the size of the 'value' array in the 'mgmt_adv_pattern'
structure would break userspace.
Considering this, and to avoid the OOB access, revert the limits for
'offset' and 'length' back to HCI_MAX_AD_LENGTH.
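To illustrate, a minimal sketch of the copy path (simplified; the helper
below is assumed for illustration and is not the exact mgmt.c code):

/*
 * Illustrative sketch only.  'src' comes from userspace, where
 * 'value' is a fixed 31-byte array (see struct mgmt_adv_pattern).
 */
static void copy_pattern_value(struct adv_pattern *p,
			       const struct mgmt_adv_pattern *src)
{
	p->ad_type = src->ad_type;
	p->offset  = src->offset;
	p->length  = src->length;
	/*
	 * With 'length' bounded only by HCI_MAX_EXT_AD_LENGTH (251),
	 * this reads past the end of the 31-byte src->value array.
	 * Bounding it by HCI_MAX_AD_LENGTH (31) avoids the OOB access.
	 */
	memcpy(p->value, src->value, src->length);
}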
Found by InfoTeCS on behalf of Linux Verification Center
(linuxtesting.org) with SVACE.
Fixes: db08722fc7d4 ("Bluetooth: hci_core: Fix missing instances using HCI_MAX_AD_LENGTH")
Cc: stable(a)vger.kernel.org
Signed-off-by: Ilia Gavrilov <Ilia.Gavrilov(a)infotecs.ru>
---
include/net/bluetooth/mgmt.h | 2 +-
net/bluetooth/mgmt.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h
index 74edea06985b..4b07ce6dfd69 100644
--- a/include/net/bluetooth/mgmt.h
+++ b/include/net/bluetooth/mgmt.h
@@ -780,7 +780,7 @@ struct mgmt_adv_pattern {
__u8 ad_type;
__u8 offset;
__u8 length;
- __u8 value[31];
+ __u8 value[HCI_MAX_AD_LENGTH];
} __packed;
#define MGMT_OP_ADD_ADV_PATTERNS_MONITOR 0x0052
diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
index a3d16eece0d2..500033b70a96 100644
--- a/net/bluetooth/mgmt.c
+++ b/net/bluetooth/mgmt.c
@@ -5391,9 +5391,9 @@ static u8 parse_adv_monitor_pattern(struct adv_monitor *m, u8 pattern_count,
for (i = 0; i < pattern_count; i++) {
offset = patterns[i].offset;
length = patterns[i].length;
- if (offset >= HCI_MAX_EXT_AD_LENGTH ||
- length > HCI_MAX_EXT_AD_LENGTH ||
- (offset + length) > HCI_MAX_EXT_AD_LENGTH)
+ if (offset >= HCI_MAX_AD_LENGTH ||
+ length > HCI_MAX_AD_LENGTH ||
+ (offset + length) > HCI_MAX_AD_LENGTH)
return MGMT_STATUS_INVALID_PARAMS;
p = kmalloc(sizeof(*p), GFP_KERNEL);
--
2.39.5
Hi. This is supposed to be a patch, but I think it's worth discussing
how it should be backported to -stable, so I've labeled it as [DISCUSSION].
The bug described below was unintentionally fixed in v6.5 and not
backported to -stable. So technically I would need to use "Option 3" [A],
but since the original patch [B] did not intend to fix a bug (and it's also
part of a larger patch series), it looks quite different from the patch below,
and I'm not sure what the backport should look like.
I think there are probably two options:
1. Provide the description of the original patch along with a very long,
detailed explanation of why the patch deviates from the upstream version, or
2. Post the patch below with a clarification that it was fixed upstream
by commit 670ddd8cdcbd1.
Any thoughts?
[A] https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#opt…
[B] https://lkml.kernel.org/r/725a42a9-91e9-c868-925-e3a5fd40bb4f@google.com
(Upstream commit 670ddd8cdcbd1)
In any case, no matter how we backport this, it needs some review, and
feedback would be appreciated. The patch applies to v6.1, v5.15, and
v5.10, but not to v5.4.
From cf45867ab8e48b42160b7253390db7bdecef1455 Mon Sep 17 00:00:00 2001
From: Harry Yoo <harry.yoo(a)oracle.com>
Date: Thu, 11 Sep 2025 20:05:40 +0900
Subject: [PATCH] mm, numa: fix bad pmd by atomically checking is_swap_pmd() in
change_prot_numa()
A bad pmd is observed when automatic NUMA balancing is marking page
table entries as prot_numa:
[2437548.196018] mm/pgtable-generic.c:50: bad pmd 00000000af22fc02(dffffffe71fbfe02)
With some kernel modification, the call stack was dumped:
[2437548.235022] Call Trace:
[2437548.238234] <TASK>
[2437548.241060] dump_stack_lvl+0x46/0x61
[2437548.245689] panic+0x106/0x2e5
[2437548.249497] pmd_clear_bad+0x3c/0x3c
[2437548.253967] change_pmd_range.isra.0+0x34d/0x3a7
[2437548.259537] change_p4d_range+0x156/0x20e
[2437548.264392] change_protection_range+0x116/0x1a9
[2437548.269976] change_prot_numa+0x15/0x37
[2437548.274774] task_numa_work+0x1b8/0x302
[2437548.279512] task_work_run+0x62/0x95
[2437548.283882] exit_to_user_mode_loop+0x1a4/0x1a9
[2437548.289277] exit_to_user_mode_prepare+0xf4/0xfc
[2437548.294751] ? sysvec_apic_timer_interrupt+0x34/0x81
[2437548.300677] irqentry_exit_to_user_mode+0x5/0x25
[2437548.306153] asm_sysvec_apic_timer_interrupt+0x16/0x1b
This is due to a race condition between change_prot_numa() and
THP migration because the kernel doesn't check is_swap_pmd() and
pmd_trans_huge() atomically:
change_prot_numa()                       THP migration
======================================================================
- change_pmd_range()
  -> is_swap_pmd() returns false,
     meaning it's not a PMD
     migration entry.
                                         - do_huge_pmd_numa_page()
                                           -> migrate_misplaced_page()
                                              sets migration entries
                                              for the THP.
- change_pmd_range()
  -> pmd_none_or_clear_bad_unless_trans_huge()
     -> pmd_none() and pmd_trans_huge() return false
- pmd_none_or_clear_bad_unless_trans_huge()
  -> pmd_bad() returns true for the migration entry!
For the race condition described above to occur:
1) AutoNUMA must be unmapping a range of pages, with at least part of the
range already unmapped by AutoNUMA.
2) While AutoNUMA is in the process of unmapping, a NUMA hinting fault
occurs within that range, specifically when we are about to unmap
the PMD entry, between the is_swap_pmd() and pmd_trans_huge() checks.
So this is a really rare race condition, and it usually takes a few days
of autonuma-intensive testing to trigger.
A bit of history on a similar race condition in the past:
In fact, a similar race condition caused by not checking pmd_trans_huge()
atomically was reported [1] in 2017. However, instead of the patch [1],
another patch series [3] fixed the problem [2] by not clearing the pmd
entry but invalidating it instead (so that pmd_trans_huge() would still
return true).
Despite patch series [3], the bad pmd error continued to be reported
in mainline. As a result, [1] was resurrected [4] and landed in mainline
in 2020 in the hope that it would resolve the issue. However, it now
turns out that [3] was not sufficient.
Fix this race condition by checking is_swap_pmd() and pmd_trans_huge()
atomically. With that, the kernel should see either
pmd_trans_huge() == true, or is_swap_pmd() == true when another task is
migrating the page concurrently.
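For reference, an abridged sketch of the helper with this fix applied
(the pmd read helper name varies by kernel version; the stable trees
targeted here use pmd_read_atomic()):

static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
{
	/* One snapshot of the pmd is used for all of the checks below. */
	pmd_t pmdval = pmd_read_atomic(pmd);

	/* (barrier() for THP configs omitted here for brevity) */
	if (pmd_none(pmdval))
		return 1;
	/* A concurrently migrating THP is seen as one of these two. */
	if (is_swap_pmd(pmdval) || pmd_trans_huge(pmdval))
		return 0;
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}

	return 0;
}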
This bug was introduced when THP migration support was added. More
specifically, by commit 84c3fc4e9c56 ("mm: thp: check pmd migration entry
in common path").
It was unintentionally fixed in v6.5 by commit 670ddd8cdcbd1
("mm/mprotect: delete pmd_none_or_clear_bad_unless_trans_huge()"), which
removed the pmd_none_or_clear_bad_unless_trans_huge() function. It was
not backported to -stable because the fix was unintentional.
Link: https://lore.kernel.org/linux-mm/20170410094825.2yfo5zehn7pchg6a@techsingul… [1]
Link: https://lore.kernel.org/linux-mm/8A6309F4-DB76-48FA-BE7F-BF9536A4C4E5@cs.ru… [2]
Link: https://lore.kernel.org/linux-mm/20170302151034.27829-1-kirill.shutemov@lin… [3]
Link: https://lore.kernel.org/linux-mm/20200216191800.22423-1-aquini@redhat.com [4]
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Harry Yoo <harry.yoo(a)oracle.com>
---
mm/mprotect.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 668bfaa6ed2a..c0e796c0f9b0 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -303,7 +303,7 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
if (pmd_none(pmdval))
return 1;
- if (pmd_trans_huge(pmdval))
+ if (is_swap_pmd(pmdval) || pmd_trans_huge(pmdval))
return 0;
if (unlikely(pmd_bad(pmdval))) {
pmd_clear_bad(pmd);
@@ -373,7 +373,7 @@ static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
* Hence, it's necessary to atomically read the PMD value
* for all the checks.
*/
- if (!is_swap_pmd(*pmd) && !pmd_devmap(*pmd) &&
+ if (!pmd_devmap(*pmd) &&
pmd_none_or_clear_bad_unless_trans_huge(pmd))
goto next;
--
2.43.0
From: Aurabindo Pillai <aurabindo.pillai(a)amd.com>
Justification:
This fixes DisplayPort lockups on Polaris GPUs during DPMS transitions,
which have been a major headache on our POWER9 platforms. Backport for
the Debian stable kernel.
[ Upstream commit c210b757b400959577a5a17b783b5959b82baed8 ]
Accessing DC from amdgpu_dm is usually preceded by acquisition of the
dc_lock mutex. Most of the DC APIs that DM calls are under the DC lock;
however, a few are not. Some DC APIs called from interrupt context end
up sending DMUB commands while other threads are using DMUB. This was
apparent from a race between the calls that enable/disable idle
optimization and the DC API that sets vmin/vmax.
Offload the call to dc_stream_adjust_vmin_vmax() to a worker thread
instead of calling it directly from the interrupt handler, so that it
can wait for dc_lock.
[Timothy Pearson]
Modified header file patch to apply to 6.12
Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas(a)amd.com>
Signed-off-by: Aurabindo Pillai <aurabindo.pillai(a)amd.com>
Signed-off-by: Roman Li <roman.li(a)amd.com>
Tested-by: Daniel Wheeler <daniel.wheeler(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Signed-off-by: Timothy Pearson <tpearson(a)raptorengineering.com>
---
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 55 +++++++++++++++++--
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 14 +++++
2 files changed, 63 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index b02ff92bae0b..fd6d66832ccf 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -533,6 +533,50 @@ static void dm_pflip_high_irq(void *interrupt_params)
amdgpu_crtc->crtc_id, amdgpu_crtc, vrr_active, (int)!e);
}
+static void dm_handle_vmin_vmax_update(struct work_struct *offload_work)
+{
+ struct vupdate_offload_work *work = container_of(offload_work, struct vupdate_offload_work, work);
+ struct amdgpu_device *adev = work->adev;
+ struct dc_stream_state *stream = work->stream;
+ struct dc_crtc_timing_adjust *adjust = work->adjust;
+
+ mutex_lock(&adev->dm.dc_lock);
+ dc_stream_adjust_vmin_vmax(adev->dm.dc, stream, adjust);
+ mutex_unlock(&adev->dm.dc_lock);
+
+ dc_stream_release(stream);
+ kfree(work->adjust);
+ kfree(work);
+}
+
+static void schedule_dc_vmin_vmax(struct amdgpu_device *adev,
+ struct dc_stream_state *stream,
+ struct dc_crtc_timing_adjust *adjust)
+{
+ struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_KERNEL);
+ if (!offload_work) {
+ drm_dbg_driver(adev_to_drm(adev), "Failed to allocate vupdate_offload_work\n");
+ return;
+ }
+
+ struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_KERNEL);
+ if (!adjust_copy) {
+ drm_dbg_driver(adev_to_drm(adev), "Failed to allocate adjust_copy\n");
+ kfree(offload_work);
+ return;
+ }
+
+ dc_stream_retain(stream);
+ memcpy(adjust_copy, adjust, sizeof(*adjust_copy));
+
+ INIT_WORK(&offload_work->work, dm_handle_vmin_vmax_update);
+ offload_work->adev = adev;
+ offload_work->stream = stream;
+ offload_work->adjust = adjust_copy;
+
+ queue_work(system_wq, &offload_work->work);
+}
+
static void dm_vupdate_high_irq(void *interrupt_params)
{
struct common_irq_params *irq_params = interrupt_params;
@@ -582,10 +626,9 @@ static void dm_vupdate_high_irq(void *interrupt_params)
acrtc->dm_irq_params.stream,
&acrtc->dm_irq_params.vrr_params);
- dc_stream_adjust_vmin_vmax(
- adev->dm.dc,
- acrtc->dm_irq_params.stream,
- &acrtc->dm_irq_params.vrr_params.adjust);
+ schedule_dc_vmin_vmax(adev,
+ acrtc->dm_irq_params.stream,
+ &acrtc->dm_irq_params.vrr_params.adjust);
spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
}
}
@@ -675,8 +718,8 @@ static void dm_crtc_high_irq(void *interrupt_params)
acrtc->dm_irq_params.stream,
&acrtc->dm_irq_params.vrr_params);
- dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
- &acrtc->dm_irq_params.vrr_params.adjust);
+ schedule_dc_vmin_vmax(adev, acrtc->dm_irq_params.stream,
+ &acrtc->dm_irq_params.vrr_params.adjust);
}
/*
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index 9603352ee094..aa99e226a381 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -1012,4 +1012,18 @@ void dm_free_gpu_mem(struct amdgpu_device *adev,
bool amdgpu_dm_is_headless(struct amdgpu_device *adev);
+/**
+ * struct vupdate_offload_work - Work data for deferred vmin/vmax update
+ * @work: Kernel work data for the work event
+ * @adev: amdgpu_device back pointer
+ * @stream: DC stream associated with the crtc
+ * @adjust: DC CRTC timing adjust to be applied to the crtc
+ */
+struct vupdate_offload_work {
+ struct work_struct work;
+ struct amdgpu_device *adev;
+ struct dc_stream_state *stream;
+ struct dc_crtc_timing_adjust *adjust;
+};
+
#endif /* __AMDGPU_DM_H__ */
--
2.47.2
The migration of the IDIO-16 GPIO drivers to the regmap API resulted in
some regressions in the gpio-104-idio-16, gpio-pci-idio-16, and
gpio-idio-16 modules. Specifically, the 104-idio-16 and pci-idio-16 GPIO drivers
utilize regmap caching and thus must set max_register for their
regmap_config, while gpio-idio-16 requires fixed_direction_output to
represent the fixed direction of the IDIO-16 GPIO lines. Fixes for these
regressions are provided by this series.
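For context, the cache-related part of the fix amounts to declaring the
valid register range in the drivers' regmap_config, roughly as in the
sketch below (illustrative only; the register offset and the selection
of fields are assumed, see the individual patches for the real code):

/* Sketch: a regcache-backed regmap must know its register range. */
static const struct regmap_config idio_16_regmap_config = {
	.reg_bits	= 8,
	.reg_stride	= 1,
	.val_bits	= 8,
	.max_register	= 0x7,	/* assumed: highest valid register offset */
	/* ... access tables and cache_type as in the driver ... */
};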
Link: https://lore.kernel.org/r/cover.1680618405.git.william.gray@linaro.org
Link: https://lore.kernel.org/r/9b0375fd-235f-4ee1-a7fa-daca296ef6bf@nutanix.com
Signed-off-by: William Breathitt Gray <wbg(a)kernel.org>
---
William Breathitt Gray (3):
gpio: 104-idio-16: Define maximum valid register address offset
gpio: pci-idio-16: Define maximum valid register address offset
gpio: idio-16: Define fixed direction of the GPIO lines
drivers/gpio/gpio-104-idio-16.c | 1 +
drivers/gpio/gpio-idio-16.c | 5 +++++
drivers/gpio/gpio-pci-idio-16.c | 1 +
3 files changed, 7 insertions(+)
---
base-commit: eba11116f39533d2e38cc5898014f2c95f32d23a
change-id: 20251017-fix-gpio-idio-16-regmap-1282cdc56a19
Best regards,
--
William Breathitt Gray <wbg(a)kernel.org>
The patch below does not apply to the 5.10-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.10.y
git checkout FETCH_HEAD
git cherry-pick -x 466f7a2fef2a4e426f809f79845a1ec1aeb558f4
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2025101624-legacy-gab-3e94@gregkh' --subject-prefix 'PATCH 5.10.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 466f7a2fef2a4e426f809f79845a1ec1aeb558f4 Mon Sep 17 00:00:00 2001
From: Sean Nyekjaer <sean(a)geanix.com>
Date: Mon, 1 Sep 2025 09:49:15 +0200
Subject: [PATCH] iio: imu: inv_icm42600: Avoid configuring if already
pm_runtime suspended
As in suspend, skip the resume configuration steps if the device is already
pm_runtime suspended. This avoids reconfiguring a device that is already
in the correct low-power state and ensures that pm_runtime handles the
power state transitions properly.
Fixes: 31c24c1e93c3 ("iio: imu: inv_icm42600: add core of new inv_icm42600 driver")
Signed-off-by: Sean Nyekjaer <sean(a)geanix.com>
Link: https://patch.msgid.link/20250901-icm42pmreg-v3-3-ef1336246960@geanix.com
Cc: <Stable(a)vger.kernel.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
index 41b275ecc7e2..ee780f530dc8 100644
--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
@@ -837,17 +837,15 @@ static int inv_icm42600_suspend(struct device *dev)
struct device *accel_dev;
bool wakeup;
int accel_conf;
- int ret;
+ int ret = 0;
mutex_lock(&st->lock);
st->suspended.gyro = st->conf.gyro.mode;
st->suspended.accel = st->conf.accel.mode;
st->suspended.temp = st->conf.temp_en;
- if (pm_runtime_suspended(dev)) {
- ret = 0;
+ if (pm_runtime_suspended(dev))
goto out_unlock;
- }
/* disable FIFO data streaming */
if (st->fifo.on) {
@@ -900,10 +898,13 @@ static int inv_icm42600_resume(struct device *dev)
struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel);
struct device *accel_dev;
bool wakeup;
- int ret;
+ int ret = 0;
mutex_lock(&st->lock);
+ if (pm_runtime_suspended(dev))
+ goto out_unlock;
+
/* check wakeup capability */
accel_dev = &st->indio_accel->dev;
wakeup = st->apex.on && device_may_wakeup(accel_dev);
The patch below does not apply to the 5.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.15.y
git checkout FETCH_HEAD
git cherry-pick -x 466f7a2fef2a4e426f809f79845a1ec1aeb558f4
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2025101624-blazing-sharpener-111d@gregkh' --subject-prefix 'PATCH 5.15.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 466f7a2fef2a4e426f809f79845a1ec1aeb558f4 Mon Sep 17 00:00:00 2001
From: Sean Nyekjaer <sean(a)geanix.com>
Date: Mon, 1 Sep 2025 09:49:15 +0200
Subject: [PATCH] iio: imu: inv_icm42600: Avoid configuring if already
pm_runtime suspended
As in suspend, skip the resume configuration steps if the device is already
pm_runtime suspended. This avoids reconfiguring a device that is already
in the correct low-power state and ensures that pm_runtime handles the
power state transitions properly.
Fixes: 31c24c1e93c3 ("iio: imu: inv_icm42600: add core of new inv_icm42600 driver")
Signed-off-by: Sean Nyekjaer <sean(a)geanix.com>
Link: https://patch.msgid.link/20250901-icm42pmreg-v3-3-ef1336246960@geanix.com
Cc: <Stable(a)vger.kernel.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
index 41b275ecc7e2..ee780f530dc8 100644
--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
@@ -837,17 +837,15 @@ static int inv_icm42600_suspend(struct device *dev)
struct device *accel_dev;
bool wakeup;
int accel_conf;
- int ret;
+ int ret = 0;
mutex_lock(&st->lock);
st->suspended.gyro = st->conf.gyro.mode;
st->suspended.accel = st->conf.accel.mode;
st->suspended.temp = st->conf.temp_en;
- if (pm_runtime_suspended(dev)) {
- ret = 0;
+ if (pm_runtime_suspended(dev))
goto out_unlock;
- }
/* disable FIFO data streaming */
if (st->fifo.on) {
@@ -900,10 +898,13 @@ static int inv_icm42600_resume(struct device *dev)
struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel);
struct device *accel_dev;
bool wakeup;
- int ret;
+ int ret = 0;
mutex_lock(&st->lock);
+ if (pm_runtime_suspended(dev))
+ goto out_unlock;
+
/* check wakeup capability */
accel_dev = &st->indio_accel->dev;
wakeup = st->apex.on && device_may_wakeup(accel_dev);