From: Peter Zijlstra <peterz(a)infradead.org>
[ Upstream commit 517e6a301f34613bff24a8e35b5455884f2d83d8 ]
Per syzbot it is possible for perf_pending_task() to run after the
event is free()'d. There are two related but distinct cases:
- the task_work was already queued before destroying the event;
- destroying the event itself queues the task_work.
The first cannot be solved using task_work_cancel() since
perf_release() itself might be called from a task_work (____fput),
which means the current->task_works list is already empty and
task_work_cancel() won't be able to find the perf_pending_task()
entry.
The simplest alternative is extending the perf_event lifetime to cover
the task_work.
The second is just silly; queueing a task_work while you know the
event is going away makes no sense and is easily avoided by
re-arranging how the event is marked STATE_DEAD and ensuring it goes
through STATE_OFF on the way down.
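As a rough illustration of the lifetime fix (a minimal sketch with
simplified, hypothetical names; not the actual kernel/events/core.c
code): take a reference on the event before queueing the task_work and
drop it from the callback, so the event cannot be freed while the work
is still pending.

#include <linux/refcount.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/task_work.h>

struct example_event {
	refcount_t refcount;
	struct callback_head twork;
};

static void example_put(struct example_event *e)
{
	if (refcount_dec_and_test(&e->refcount))
		kfree(e);
}

static void example_pending_task(struct callback_head *head)
{
	struct example_event *e = container_of(head, struct example_event, twork);

	/* ... handle the deferred work ... */

	example_put(e);			/* drop the reference taken at queue time */
}

static void example_queue_pending(struct example_event *e)
{
	refcount_inc(&e->refcount);	/* pin the event across the task_work */
	init_task_work(&e->twork, example_pending_task);
	if (task_work_add(current, &e->twork, TWA_RESUME))
		example_put(e);		/* queueing failed, drop the reference */
}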
Reported-by: syzbot+9228d6098455bb209ec8(a)syzkaller.appspotmail.com
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Tested-by: Marco Elver <elver(a)google.com>
[ Discard the changes in event_sched_out() because 5.10 does not have
  commit 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
  or commit ca6c21327c6a ("perf: Fix missing SIGTRAPs") ]
Signed-off-by: Xiangyu Chen <xiangyu.chen(a)windriver.com>
Signed-off-by: He Zhe <zhe.he(a)windriver.com>
---
Verified with a build test.
---
kernel/events/core.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 8f19d6ab039e..798c839a00b3 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2419,6 +2419,7 @@ group_sched_out(struct perf_event *group_event,
}
#define DETACH_GROUP 0x01UL
+#define DETACH_DEAD 0x04UL
/*
* Cross CPU call to remove a performance event
@@ -2439,10 +2440,18 @@ __perf_remove_from_context(struct perf_event *event,
update_cgrp_time_from_cpuctx(cpuctx, false);
}
+ /*
+ * Ensure event_sched_out() switches to OFF, at the very least
+ * this avoids raising perf_pending_task() at this time.
+ */
+ if (flags & DETACH_DEAD)
+ event->pending_disable = 1;
event_sched_out(event, cpuctx, ctx);
if (flags & DETACH_GROUP)
perf_group_detach(event);
list_del_event(event, ctx);
+ if (flags & DETACH_DEAD)
+ event->state = PERF_EVENT_STATE_DEAD;
if (!ctx->nr_events && ctx->is_active) {
if (ctx == &cpuctx->ctx)
@@ -5111,9 +5120,7 @@ int perf_event_release_kernel(struct perf_event *event)
ctx = perf_event_ctx_lock(event);
WARN_ON_ONCE(ctx->parent_ctx);
- perf_remove_from_context(event, DETACH_GROUP);
- raw_spin_lock_irq(&ctx->lock);
/*
* Mark this event as STATE_DEAD, there is no external reference to it
* anymore.
@@ -5125,8 +5132,7 @@ int perf_event_release_kernel(struct perf_event *event)
* Thus this guarantees that we will in fact observe and kill _ALL_
* child events.
*/
- event->state = PERF_EVENT_STATE_DEAD;
- raw_spin_unlock_irq(&ctx->lock);
+ perf_remove_from_context(event, DETACH_GROUP|DETACH_DEAD);
perf_event_ctx_unlock(event, ctx);
@@ -6533,6 +6539,8 @@ static void perf_pending_event(struct irq_work *entry)
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
+
+ put_event(event);
}
/*
--
2.34.1
[Why]
There is no handling for an I2C-read-over-AUX transaction when the reply
is I2C_ACK|AUX_ACK but the total number of data bytes received is fewer
than LEN + 1.
[How]
Per DP v2.1 sections 2.11.7.1.6.3 and 2.11.7.1.6.4, repeat the identical
I2C-read-over-AUX transaction with an updated LEN value equal to the
original LEN minus the total number of data bytes received so far.
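As a sketch of that retry rule (illustrative only; read_chunk() is a
hypothetical stand-in for a single I2C-read-over-AUX transfer, not a
drm_dp_helper function), the read is re-issued with LEN reduced by the
bytes already delivered until the full request is satisfied:

#include <linux/errno.h>
#include <linux/types.h>

/*
 * read_chunk() stands in for one I2C-read-over-AUX transaction and
 * returns how many bytes the sink actually delivered (0..len), or a
 * negative error code.
 */
static int read_chunk(void *ctx, u8 *buf, size_t len);

static int example_i2c_read_over_aux(void *ctx, u8 *buf, size_t len)
{
	size_t received = 0;

	while (received < len) {
		/* Repeat the transaction with LEN shrunk by what we already got. */
		int ret = read_chunk(ctx, buf + received, len - received);

		if (ret < 0)
			return ret;
		if (ret == 0)
			return -EPROTO;	/* partial ack but no data: give up */

		received += ret;
	}

	return 0;
}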
Fixes: 68ec2a2a2481 ("drm/dp: Use I2C_WRITE_STATUS_UPDATE to drain partial I2C_WRITE requests")
Cc: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Cc: Jani Nikula <jani.nikula(a)intel.com>
Cc: Mario Limonciello <mario.limonciello(a)amd.com>
Cc: Harry Wentland <harry.wentland(a)amd.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Wayne Lin <Wayne.Lin(a)amd.com>
---
drivers/gpu/drm/display/drm_dp_helper.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
index 28f0708c3b27..938214a980a9 100644
--- a/drivers/gpu/drm/display/drm_dp_helper.c
+++ b/drivers/gpu/drm/display/drm_dp_helper.c
@@ -1812,10 +1812,11 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
drm_dbg_kms(aux->drm_dev,
"%s: I2C partially ack (result=%d, size=%zu)\n",
aux->name, ret, msg->size);
- if (!(msg->request & DP_AUX_I2C_READ)) {
- usleep_range(AUX_RETRY_INTERVAL, AUX_RETRY_INTERVAL + 100);
+ usleep_range(AUX_RETRY_INTERVAL, AUX_RETRY_INTERVAL + 100);
+ if (msg->request & DP_AUX_I2C_READ)
+ msg->size -= ret;
+ else
drm_dp_i2c_msg_write_status_update(msg);
- }
continue;
}
--
2.43.0
The idxd driver attaches the default domain to a PASID of the device to
perform kernel DMA using that PASID. The domain is attached to the
device's PASID through iommu_attach_device_pasid(), which checks if the
domain->owner matches the iommu_ops retrieved from the device. If they
do not match, it returns a failure.
if (ops != domain->owner || pasid == IOMMU_NO_PASID)
return -EINVAL;
The static identity domain implemented by the intel iommu driver doesn't
specify the domain owner. Therefore, kernel DMA with PASID doesn't work
for the idxd driver if the device translation mode is set to passthrough.
Generally, the owner field of a static domain is not set because static
domains are already part of the iommu ops. Add a helper domain_iommu_ops_compatible()
that checks if a domain is compatible with the device's iommu ops. This
helper explicitly allows the static blocked and identity domains associated
with the device's iommu_ops to be considered compatible.
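For context, the affected caller path looks roughly like the sketch
below (simplified and hypothetical, not the idxd driver's actual code;
the handle argument follows the recent iommu_attach_device_pasid()
signature): the device's default domain is attached to a PASID for
kernel DMA, and with a static identity (passthrough) default domain the
old "ops != domain->owner" check rejected it with -EINVAL.

#include <linux/device.h>
#include <linux/iommu.h>

static int example_enable_kernel_pasid_dma(struct device *dev, ioasid_t pasid)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	/*
	 * Attach the device's default domain to the PASID.  When that
	 * domain is the static identity domain, domain->owner is NULL,
	 * which the old check treated as incompatible with the device's
	 * iommu_ops.
	 */
	return iommu_attach_device_pasid(domain, dev, pasid, NULL);
}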
Fixes: 2031c469f816 ("iommu/vt-d: Add support for static identity domain")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220031
Cc: stable(a)vger.kernel.org
Suggested-by: Jason Gunthorpe <jgg(a)nvidia.com>
Link: https://lore.kernel.org/linux-iommu/20250422191554.GC1213339@ziepe.ca/
Signed-off-by: Lu Baolu <baolu.lu(a)linux.intel.com>
---
drivers/iommu/iommu.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
Change log:
-v2:
- Make the solution generic for all static domains as suggested by
Jason.
-v1: https://lore.kernel.org/linux-iommu/20250422075422.2084548-1-baolu.lu@linux…
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 4f91a740c15f..abda40ec377a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3402,6 +3402,19 @@ static void __iommu_remove_group_pasid(struct iommu_group *group,
iommu_remove_dev_pasid(device->dev, pasid, domain);
}
+static bool domain_iommu_ops_compatible(const struct iommu_ops *ops,
+ struct iommu_domain *domain)
+{
+ if (domain->owner == ops)
+ return true;
+
+ /* For static domains, owner isn't set. */
+ if (domain == ops->blocked_domain || domain == ops->identity_domain)
+ return true;
+
+ return false;
+}
+
/*
* iommu_attach_device_pasid() - Attach a domain to pasid of device
* @domain: the iommu domain.
@@ -3435,7 +3448,8 @@ int iommu_attach_device_pasid(struct iommu_domain *domain,
!ops->blocked_domain->ops->set_dev_pasid)
return -EOPNOTSUPP;
- if (ops != domain->owner || pasid == IOMMU_NO_PASID)
+ if (!domain_iommu_ops_compatible(ops, domain) ||
+ pasid == IOMMU_NO_PASID)
return -EINVAL;
mutex_lock(&group->mutex);
--
2.43.0