From: Nicolai Buchwitz <nb(a)tipi-net.de>
Before commit 25f51b76f90f ("xhci-pci: Make xhci-pci-renesas a proper
modular driver"), the xhci-pci driver handled the Renesas uPD72020x USB3
host controllers and only utilized features of xhci-pci-renesas when no
external firmware EEPROM was attached. This allowed devices with valid
firmware stored in EEPROM to function without requiring xhci-pci-renesas.
That commit changed the behavior, making xhci-pci-renesas responsible for
handling these devices entirely, even when firmware was already present
in EEPROM. As a result, unnecessary warnings about missing firmware files
appeared, and more critically, USB functionality broke when
CONFIG_USB_XHCI_PCI_RENESAS was not enabled, despite previously working
without it.
Fix this by ensuring that devices are only handed over to xhci-pci-renesas
if the config option is enabled. Otherwise, restore the original behavior
and handle them as standard xhci-pci devices.
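As a rough sketch of the resulting behavior (not the actual probe code;
example_probe() and do_common_probe() are hypothetical names, while
pci_match_id() is the stock PCI helper), the reject table lets the
common probe path defer listed devices to the other driver:

    /* Skip devices that xhci-pci-renesas should claim instead. */
    static int example_probe(struct pci_dev *dev,
                             const struct pci_device_id *id)
    {
            /* pci_match_id() returns a non-NULL entry on a match */
            if (pci_match_id(pci_ids_reject, dev))
                    return -ENODEV;

            return do_common_probe(dev, id); /* hypothetical helper */
    }

With the #if IS_ENABLED() guard, the Renesas entries only exist in the
table when the dedicated driver is actually available to bind.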
Signed-off-by: Nicolai Buchwitz <nb(a)tipi-net.de>
Fixes: 25f51b76f90f ("xhci-pci: Make xhci-pci-renesas a proper modular driver")
---
drivers/usb/host/xhci-pci.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 2d1e205c14c60..4ce80d8ac603e 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -654,9 +654,11 @@ int xhci_pci_common_probe(struct pci_dev *dev, const struct pci_device_id *id)
EXPORT_SYMBOL_NS_GPL(xhci_pci_common_probe, "xhci");
static const struct pci_device_id pci_ids_reject[] = {
+#if IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS)
/* handled by xhci-pci-renesas */
{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0014) },
{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0015) },
+#endif
{ /* end: all zeroes */ }
};
--
2.39.5
Cases rec->counter == {0, 1} are checked already.
While x * (x - 1) * 1000 = 0 has many solutions greater than 1 both
modulo 2^32 and modulo 2^64, that is not the case for x * (x - 1) = 0,
so split the division into two.
This is not scary in practice because the mod 2^64 solutions are huge
and the minimal mod 2^32 solution is a 30-bit number.
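A standalone userspace check (not kernel code) shows the minimal mod
2^32 solution: with x = 2^29, the product x * 1000 contains the factor
2^29 * 8 = 2^32, so x * (x - 1) * 1000 wraps to zero while x * (x - 1)
alone does not:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t x = 1U << 29; /* 536870912, a 30-bit number */

            printf("%u\n", x * (x - 1) * 1000U); /* prints 0 */
            printf("%u\n", x * (x - 1));         /* prints 3758096384 */
            return 0;
    }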
Cc: stable(a)vger.kernel.org
Fixes: e31f7939c1c27 ("ftrace: Avoid potential division by zero in function profiler")
Signed-off-by: Nikolay Kuratov <kniv(a)yandex-team.ru>
---
kernel/trace/ftrace.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 728ecda6e8d4..e1c05c4c29c2 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -570,12 +570,12 @@ static int function_stat_show(struct seq_file *m, void *v)
stddev = rec->counter * rec->time_squared -
rec->time * rec->time;
+ stddev = div64_ul(stddev, rec->counter * (rec->counter - 1));
/*
* Divide only 1000 for ns^2 -> us^2 conversion.
* trace_print_graph_duration will divide 1000 again.
*/
- stddev = div64_ul(stddev,
- rec->counter * (rec->counter - 1) * 1000);
+ stddev = div64_ul(stddev, 1000);
}
trace_seq_init(&s);
--
2.34.1
Commit 1c56e9a39833 ("drm/i915/dp: Get optimal link config to have best
compressed bpp") tries to find the best compressed bpp for the
link. However, it iterates from max to min bpp on display 13+, and from
min to max on other platforms. This presumably leads to minimum
compressed bpp always being chosen on display 11-12.
Iterate from high to low on all platforms to actually use the best
possible compressed bpp.
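In isolation, the scan direction over a sorted table determines which
candidate wins; a hypothetical pick_best_bpp() (not the i915 function,
but the same shape as the fixed loop) illustrates this:

    /* valid[] is sorted ascending; returns the largest in-range
     * entry, or -1 if none qualifies. */
    static int pick_best_bpp(const int *valid, int n, int min, int max)
    {
            int i;

            /* high to low: the first hit is the best compressed bpp */
            for (i = n - 1; i >= 0; i--) {
                    if (valid[i] < min || valid[i] > max)
                            continue;
                    return valid[i];
            }
            return -1;
    }

Iterating low to high, as display 11-12 previously did, would return the
first (smallest) in-range entry instead.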
Fixes: 1c56e9a39833 ("drm/i915/dp: Get optimal link config to have best compressed bpp")
Cc: Ankit Nautiyal <ankit.k.nautiyal(a)intel.com>
Cc: Imre Deak <imre.deak(a)intel.com>
Cc: <stable(a)vger.kernel.org> # v6.7+
Signed-off-by: Jani Nikula <jani.nikula(a)intel.com>
---
drivers/gpu/drm/i915/display/intel_dp.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index d1b4fd542a1f..ecf192262eb9 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2073,11 +2073,10 @@ icl_dsc_compute_link_config(struct intel_dp *intel_dp,
/* Compressed BPP should be less than the Input DSC bpp */
dsc_max_bpp = min(dsc_max_bpp, output_bpp - 1);
- for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
- if (valid_dsc_bpp[i] < dsc_min_bpp)
+ for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
+ if (valid_dsc_bpp[i] < dsc_min_bpp ||
+ valid_dsc_bpp[i] > dsc_max_bpp)
continue;
- if (valid_dsc_bpp[i] > dsc_max_bpp)
- break;
ret = dsc_compute_link_config(intel_dp,
pipe_config,
--
2.39.5
On Mon, Feb 03, 2025 at 11:40:11AM +0000, Shubham Pushpkar -X (spushpka - E INFOCHIPS PRIVATE LIMITED at Cisco) wrote:
> Hi Greg,
>
> Thank you for your valuable feedback on my recent patch submission regarding the CVE fix.
>
> I appreciate your point about the CVE reference in the commit message. I will revise the patch to remove the CVE identifier, as it is indeed managed in the assignment database.
>
> Regarding the cc list, I apologize for not including everyone involved in the original commit. I will ensure to cc all relevant parties when I resubmit the patch to facilitate better communication and feedback.
>
> Thank you once again for your guidance. Please let me know if there are any additional changes or considerations I should be aware of.
Yes, please always test your patches before sending them out.
Because of the lack of testing here, for any future stuff you submit
I'm going to have to ask you to get a signed-off-by from another one of
your coworkers, so that we know two people have verified that the change
is correct and actually works.
thanks,
greg k-h
On the following path, flush_tlb_range() can be used for zapping normal
PMD entries (PMD entries that point to page tables) together with the PTE
entries in the pointed-to page table:
collapse_pte_mapped_thp
pmdp_collapse_flush
flush_tlb_range
The arm64 version of flush_tlb_range() has a comment describing that it can
be used for page table removal, and does not use any last-level
invalidation optimizations. Fix the X86 version by making it behave the
same way.
Currently, X86 only uses this information for the following two purposes,
which I think means the issue doesn't have much impact:
- In native_flush_tlb_multi() for checking if lazy TLB CPUs need to be
IPI'd to avoid issues with speculative page table walks.
- In Hyper-V TLB paravirtualization, again for lazy TLB stuff.
The patch "x86/mm: only invalidate final translations with INVLPGB" which
is currently under review (see
<https://lore.kernel.org/all/20241230175550.4046587-13-riel@surriel.com/>)
would probably make the impact of this a lot worse.
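The semantics can be sketched with made-up helper names (an illustration
of what the freed_tables flag conveys, not the real implementation):
when page tables may have been freed, the flush must also invalidate
cached intermediate (paging-structure) entries, not only the final
translations:

    static void example_flush(unsigned long start, unsigned long end,
                              bool freed_tables)
    {
            if (freed_tables)
                    full_invalidate(start, end);      /* hypothetical */
            else
                    leaf_only_invalidate(start, end); /* hypothetical */
    }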
Cc: stable(a)vger.kernel.org
Fixes: 016c4d92cd16 ("x86/mm/tlb: Add freed_tables argument to flush_tlb_mm_range")
Signed-off-by: Jann Horn <jannh(a)google.com>
---
arch/x86/include/asm/tlbflush.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 02fc2aa06e9e0ecdba3fe948cafe5892b72e86c0..3da645139748538daac70166618d8ad95116eb74 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -242,7 +242,7 @@ void flush_tlb_multi(const struct cpumask *cpumask,
flush_tlb_mm_range((vma)->vm_mm, start, end, \
((vma)->vm_flags & VM_HUGETLB) \
? huge_page_shift(hstate_vma(vma)) \
- : PAGE_SHIFT, false)
+ : PAGE_SHIFT, true)
extern void flush_tlb_all(void);
extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
---
base-commit: aa135d1d0902c49ed45bec98c61c1b4022652b7e
change-id: 20250103-x86-collapse-flush-fix-fa87ac4d5834
--
Jann Horn <jannh(a)google.com>
OP-TEE supplicant is a user-space daemon, and it is possible for it to
hang, crash, or be killed in the middle of processing an OP-TEE RPC
call. Things get more complicated when the supplicant process and the
OP-TEE client application are shut down in the wrong order, which can
eventually leave the system hanging while it waits for the client
application to close.
Allow a client process waiting in the kernel for a supplicant response
to be killed rather than waiting indefinitely in an unkillable state.
This fixes issues observed during system reboot/shutdown when the
supplicant hung for some reason, or crashed or was killed, leaving the
client hung in an unkillable state. That in turn left the system hung,
requiring a hard power off/on to recover.
Fixes: 4fb0a5eb364d ("tee: add OP-TEE driver")
Suggested-by: Arnd Bergmann <arnd(a)arndb.de>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Sumit Garg <sumit.garg(a)linaro.org>
---
Changes in v2:
- Switch to a killable wait, as suggested by Arnd, instead of a
  supplicant timeout. It at least allows the client to wait for the
  supplicant in a killable state, which in turn allows the system to
  reboot or shut down gracefully.
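The core of the change boils down to the killable-wait pattern below (a
distilled sketch of the pattern, not the full driver code):
wait_for_completion_interruptible() returns non-zero on any signal,
whereas wait_for_completion_killable() does so only for fatal signals
such as SIGKILL:

    /* only a fatal signal can break the wait early */
    if (wait_for_completion_killable(&req->c)) {
            /* woken by a fatal signal, not by the supplicant */
            req->ret = TEEC_ERROR_COMMUNICATION;
    }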
drivers/tee/optee/supp.c | 32 +++++++-------------------------
1 file changed, 7 insertions(+), 25 deletions(-)
diff --git a/drivers/tee/optee/supp.c b/drivers/tee/optee/supp.c
index 322a543b8c27..3fbfa9751931 100644
--- a/drivers/tee/optee/supp.c
+++ b/drivers/tee/optee/supp.c
@@ -80,7 +80,6 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
struct optee *optee = tee_get_drvdata(ctx->teedev);
struct optee_supp *supp = &optee->supp;
struct optee_supp_req *req;
- bool interruptable;
u32 ret;
/*
@@ -111,36 +110,19 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
/*
* Wait for supplicant to process and return result, once we've
* returned from wait_for_completion(&req->c) successfully we have
- * exclusive access again.
+ * exclusive access again. Allow the wait to be killable such that
+ * the wait doesn't turn into an indefinite state if the supplicant
+ * gets hung for some reason.
*/
- while (wait_for_completion_interruptible(&req->c)) {
- mutex_lock(&supp->mutex);
- interruptable = !supp->ctx;
- if (interruptable) {
- /*
- * There's no supplicant available and since the
- * supp->mutex currently is held none can
- * become available until the mutex released
- * again.
- *
- * Interrupting an RPC to supplicant is only
- * allowed as a way of slightly improving the user
- * experience in case the supplicant hasn't been
- * started yet. During normal operation the supplicant
- * will serve all requests in a timely manner and
- * interrupting then wouldn't make sense.
- */
+ if (wait_for_completion_killable(&req->c)) {
+ if (!mutex_lock_killable(&supp->mutex)) {
if (req->in_queue) {
list_del(&req->link);
req->in_queue = false;
}
+ mutex_unlock(&supp->mutex);
}
- mutex_unlock(&supp->mutex);
-
- if (interruptable) {
- req->ret = TEEC_ERROR_COMMUNICATION;
- break;
- }
+ req->ret = TEEC_ERROR_COMMUNICATION;
}
ret = req->ret;
--
2.43.0