TCR2_ELx.E0POE is set during smp_init().
However, this bit is not reprogrammed when the CPU is suspended and later
resumes via cpu_resume(): __cpu_setup() does not re-enable E0POE, and
there is no save/restore logic for the TCR2_ELx system register.
As a result, the E0POE feature no longer works after cpu_resume().

To address this, save and restore TCR2_EL1 in the cpu_suspend()/cpu_resume()
path rather than adding related logic to __cpu_setup(), which also
accommodates possible future extensions of the TCR2_ELx register.
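For context, the suspend context saved by cpu_do_suspend() is an array of
NR_CTX_REGS 64-bit slots, so the new store at offset #104 in the diff below
is slot 13 (13 * 8 bytes). A minimal sketch of the C-side view, assuming the
struct in asm/suspend.h is otherwise unchanged:

    /* sketch: cpu_suspend_ctx after bumping NR_CTX_REGS to 14 */
    struct cpu_suspend_ctx {
            u64 ctx_regs[NR_CTX_REGS];  /* 14 slots; ctx_regs[13] holds TCR2_EL1 */
            u64 sp;
    } __aligned(16);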
Cc: stable@vger.kernel.org
Fixes: bf83dae90fbc ("arm64: enable the Permission Overlay Extension for EL0")
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
Patch History
=============

From v1 to v2:
- Followed Kevin Brodsky's suggestion.
- https://lore.kernel.org/all/20260105200707.2071169-1-yeoreum.yun@arm.com/

NOTE:
This patch is based on v6.19-rc4
---
arch/arm64/include/asm/suspend.h | 2 +-
arch/arm64/mm/proc.S | 8 ++++++++
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
index e65f33edf9d6..e9ce68d50ba4 100644
--- a/arch/arm64/include/asm/suspend.h
+++ b/arch/arm64/include/asm/suspend.h
@@ -2,7 +2,7 @@
#ifndef __ASM_SUSPEND_H
#define __ASM_SUSPEND_H
-#define NR_CTX_REGS 13
+#define NR_CTX_REGS 14
#define NR_CALLEE_SAVED_REGS 12
/*
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 01e868116448..5d907ce3b6d3 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -110,6 +110,10 @@ SYM_FUNC_START(cpu_do_suspend)
* call stack.
*/
str x18, [x0, #96]
+alternative_if ARM64_HAS_TCR2
+ mrs x2, REG_TCR2_EL1
+ str x2, [x0, #104]
+alternative_else_nop_endif
ret
SYM_FUNC_END(cpu_do_suspend)
@@ -144,6 +148,10 @@ SYM_FUNC_START(cpu_do_resume)
msr tcr_el1, x8
msr vbar_el1, x9
msr mdscr_el1, x10
+alternative_if ARM64_HAS_TCR2
+ ldr x2, [x0, #104]
+ msr REG_TCR2_EL1, x2
+alternative_else_nop_endif
msr sctlr_el1, x12
set_this_cpu_offset x13
--
Sparse inode cluster allocation sets min/max agbno values to avoid
allocating an inode cluster that might map to an invalid inode
chunk. For example, we can't have an inode record mapped to agbno 0
or one that extends past the end of a runt AG of misaligned size.

The initial calculation of max_agbno is unnecessarily conservative,
however. This has triggered a corner case allocation failure where a
small runt AG (e.g. 2063 blocks) is mostly full save for an extent
to the EOFS boundary: [2050,13]. max_agbno is set to 2048 in this
case, which happens to be the offset of the last possible valid
inode chunk in the AG. In practice, we should be able to allocate
the 4-block cluster at agbno 2052 to map to the parent inode record
at agbno 2048, but the max_agbno value precludes it.
Note that this can result in filesystem shutdown via dirty trans
cancel on stable kernels prior to commit 9eb775968b68 ("xfs: walk
all AGs if TRYLOCK passed to xfs_alloc_vextent_iterate_ags") because
the tail AG selection by the allocator sets t_highest_agno on the
transaction. If the inode allocator spins around and finds an inode
chunk with free inodes in an earlier AG, the subsequent dir name
creation path may still fail to allocate due to the AG restriction
and cancel the dirty transaction.
To avoid this problem, update the max_agbno calculation to the agbno
prior to the last chunk-aligned agbno in the AG. This is not
necessarily the last valid allocation target for a sparse chunk, but
since inode chunks (i.e. records) are chunk aligned and sparse
allocations are cluster sized and aligned, this allows the
sb_spino_align alignment restriction to take over and round down the
max effective agbno to within the last valid inode chunk in the AG.
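As a hedged illustration of the arithmetic (the geometry below is inferred
to match the numbers above, not taken from the report: ialloc_blks = 8,
sb_inoalignmt = 8, sb_spino_align = 4, 4-block sparse clusters):

    agsize  = 2063;                     /* runt AG */
    old_max = round_down(2063, 8) - 8;  /* 2056 - 8 = 2048 */
    new_max = round_down(2063, 8) - 1;  /* 2055 */

The 4-block sparse cluster at agbno 2052 spans 2052-2055 and maps to the
chunk at agbno 2048. The old max rejects it (2052 > 2048); the new max
permits it, and sb_spino_align keeps the allocation within the last valid
chunk.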
Note that even though the allocator improvements in the
aforementioned commit seem to avoid this particular dirty trans
cancel situation, the max_agbno logic improvement still applies as
we should be able to allocate from an AG that has been appropriately
selected. The more important target for this patch, however, is
older/stable kernels prior to this allocator rework/improvement.
Cc: <stable@vger.kernel.org> # v4.2
Fixes: 56d1115c9bc7 ("xfs: allocate sparse inode chunks on full chunk allocation failure")
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
v2:
- Added misc. commit log tags.
v1: https://lore.kernel.org/linux-xfs/20260108141129.7765-1-bfoster@redhat.com/
fs/xfs/libxfs/xfs_ialloc.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index d97295eaebe6..c19d6d713780 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -848,15 +848,16 @@ xfs_ialloc_ag_alloc(
* invalid inode records, such as records that start at agbno 0
* or extend beyond the AG.
*
- * Set min agbno to the first aligned, non-zero agbno and max to
- * the last aligned agbno that is at least one full chunk from
- * the end of the AG.
+ * Set min agbno to the first chunk aligned, non-zero agbno and
+ * max to one less than the last chunk aligned agbno from the
+ * end of the AG. We subtract 1 from max so that the cluster
+ * allocation alignment takes over and allows allocation within
+ * the last full inode chunk in the AG.
*/
args.min_agbno = args.mp->m_sb.sb_inoalignmt;
args.max_agbno = round_down(xfs_ag_block_count(args.mp,
pag_agno(pag)),
- args.mp->m_sb.sb_inoalignmt) -
- igeo->ialloc_blks;
+ args.mp->m_sb.sb_inoalignmt) - 1;
error = xfs_alloc_vextent_near_bno(&args,
xfs_agbno_to_fsb(pag,
--
2.52.0
Since commit c6e126de43e7 ("of: Keep track of populated platform
devices"), child devices will not be created by of_platform_populate()
if the devices have previously been deregistered individually, leaving
the OF_POPULATED flag set in the corresponding OF nodes.

Switch to using of_platform_depopulate() instead of open coding so that
the child devices are re-created if the driver is rebound.
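For reference, a minimal sketch of the pairing this relies on (illustrative,
not the driver code): of_platform_depopulate() clears OF_POPULATED so that a
later of_platform_populate() can create the children again.

    /* probe: create child platform devices, setting OF_POPULATED */
    ret = of_platform_populate(dev->of_node, NULL, NULL, dev);

    /* remove: destroy them and clear OF_POPULATED for the next probe */
    of_platform_depopulate(dev);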
Fixes: c6e126de43e7 ("of: Keep track of populated platform devices")
Cc: stable@vger.kernel.org # 3.16
Signed-off-by: Johan Hovold <johan@kernel.org>
---
drivers/mfd/omap-usb-host.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c
index a77b6fc790f2..4d29a6e2ed87 100644
--- a/drivers/mfd/omap-usb-host.c
+++ b/drivers/mfd/omap-usb-host.c
@@ -819,8 +819,10 @@ static void usbhs_omap_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);
- /* remove children */
- device_for_each_child(&pdev->dev, NULL, usbhs_omap_remove_child);
+ if (pdev->dev.of_node)
+ of_platform_depopulate(&pdev->dev);
+ else
+ device_for_each_child(&pdev->dev, NULL, usbhs_omap_remove_child);
}
static const struct dev_pm_ops usbhsomap_dev_pm_ops = {
--
2.51.2
From ade501a5ea27db18e827054d812ea6cc4679b65e Mon Sep 17 00:00:00 2001
From: Ionut Nechita <ionut.nechita@windriver.com>
Date: Tue, 23 Dec 2025 12:29:14 +0200
Subject: [PATCH] block/blk-mq: fix RT kernel regression with dedicated
quiesce_sync_lock
In the RT kernel (PREEMPT_RT), commit 679b1874eba7 ("block: fix ordering
between checking QUEUE_FLAG_QUIESCED request adding") causes a severe
performance regression on systems with multiple MSI-X interrupt vectors.

The commit added a spinlock_t queue_lock critical section to
blk_mq_run_hw_queue() to synchronize QUEUE_FLAG_QUIESCED checks with
blk_mq_unquiesce_queue(). While this works correctly in the standard
kernel, it causes catastrophic serialization in the RT kernel, where
spinlock_t is converted to a sleeping rt_mutex.
Problem in the RT kernel:
- blk_mq_run_hw_queue() is called from IRQ thread context (I/O completion)
- With 8 MSI-X vectors, all 8 IRQ threads contend on the same queue_lock
- queue_lock becomes an rt_mutex (a sleeping lock) in the RT kernel; see
  the sketch below
- IRQ threads serialize and enter D-state waiting for the lock
- Throughput drops from 640 MB/s to 153 MB/s
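A minimal sketch of the lock-type distinction (illustrative only, not code
from this patch):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(slock);      /* rt_mutex-based, may sleep on RT */
    static DEFINE_RAW_SPINLOCK(rlock);  /* true spinning lock on all configs */

    static void example(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&slock, flags);      /* can block an IRQ thread on RT */
            spin_unlock_irqrestore(&slock, flags);

            raw_spin_lock_irqsave(&rlock, flags);  /* never sleeps */
            raw_spin_unlock_irqrestore(&rlock, flags);
    }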
The original commit message noted that memory barriers were considered
but rejected because "memory barrier is not easy to be maintained" -
barriers would need to be added at multiple call sites throughout the
block layer where work is added before calling blk_mq_run_hw_queue().
Solution:
Instead of using the general-purpose queue_lock or attempting complex
memory barrier pairing across many call sites, introduce a dedicated
raw_spinlock_t quiesce_sync_lock specifically for synchronizing the
quiesce state between:
- blk_mq_quiesce_queue_nowait()
- blk_mq_unquiesce_queue()
- blk_mq_run_hw_queue()
Why raw_spinlock is safe:
- Critical section is provably short (only flag and counter checks)
- No sleeping operations under lock
- raw_spinlock does not convert to a sleeping rt_mutex in the RT kernel
- Provides same ordering guarantees as original queue_lock approach
This approach:
- Maintains correctness of original synchronization
- Avoids sleeping in the RT kernel's IRQ thread context
- Limits scope to only quiesce-related synchronization
- Simpler than auditing all call sites for memory barrier pairing
Additionally, change __blk_freeze_queue_start() to use async=true for
better performance in the RT kernel by avoiding synchronous queue runs
during freeze.
Test results on the RT kernel (megaraid_sas with 8 MSI-X vectors):
- Before: 153 MB/s, 6-8 IRQ threads in D-state
- After: 640 MB/s, 0 IRQ threads blocked
Fixes: 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding")
Cc: stable@vger.kernel.org
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
---
block/blk-core.c | 1 +
block/blk-mq.c | 30 +++++++++++++++++++-----------
include/linux/blkdev.h | 6 ++++++
3 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index c7b6c1f76359..33a954422415 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -434,6 +434,7 @@ struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
mutex_init(&q->limits_lock);
mutex_init(&q->rq_qos_mutex);
spin_lock_init(&q->queue_lock);
+ raw_spin_lock_init(&q->quiesce_sync_lock);
init_waitqueue_head(&q->mq_freeze_wq);
mutex_init(&q->mq_freeze_lock);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e1bca29dc358..c7ca2f485e8e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -178,7 +178,7 @@ bool __blk_freeze_queue_start(struct request_queue *q,
percpu_ref_kill(&q->q_usage_counter);
mutex_unlock(&q->mq_freeze_lock);
if (queue_is_mq(q))
- blk_mq_run_hw_queues(q, false);
+ blk_mq_run_hw_queues(q, true);
} else {
mutex_unlock(&q->mq_freeze_lock);
}
@@ -289,10 +289,10 @@ void blk_mq_quiesce_queue_nowait(struct request_queue *q)
{
unsigned long flags;
- spin_lock_irqsave(&q->queue_lock, flags);
+ raw_spin_lock_irqsave(&q->quiesce_sync_lock, flags);
if (!q->quiesce_depth++)
blk_queue_flag_set(QUEUE_FLAG_QUIESCED, q);
- spin_unlock_irqrestore(&q->queue_lock, flags);
+ raw_spin_unlock_irqrestore(&q->quiesce_sync_lock, flags);
}
EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue_nowait);
@@ -344,14 +344,14 @@ void blk_mq_unquiesce_queue(struct request_queue *q)
unsigned long flags;
bool run_queue = false;
- spin_lock_irqsave(&q->queue_lock, flags);
+ raw_spin_lock_irqsave(&q->quiesce_sync_lock, flags);
if (WARN_ON_ONCE(q->quiesce_depth <= 0)) {
;
} else if (!--q->quiesce_depth) {
blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
run_queue = true;
}
- spin_unlock_irqrestore(&q->queue_lock, flags);
+ raw_spin_unlock_irqrestore(&q->quiesce_sync_lock, flags);
/* dispatch requests which are inserted during quiescing */
if (run_queue)
@@ -2323,19 +2323,27 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
+ /*
+ * First lockless check to avoid unnecessary overhead.
+ */
need_run = blk_mq_hw_queue_need_run(hctx);
if (!need_run) {
unsigned long flags;
/*
- * Synchronize with blk_mq_unquiesce_queue(), because we check
- * if hw queue is quiesced locklessly above, we need the use
- * ->queue_lock to make sure we see the up-to-date status to
- * not miss rerunning the hw queue.
+ * Synchronize with blk_mq_unquiesce_queue(). We check if hw
+ * queue is quiesced locklessly above, so we need to use
+ * quiesce_sync_lock to ensure we see the up-to-date status
+ * and don't miss rerunning the hw queue.
+ *
+ * Uses raw_spinlock to avoid sleeping in RT kernel's IRQ
+ * thread context during I/O completion. Critical section is
+ * short (only flag and counter checks), making raw_spinlock
+ * safe.
*/
- spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+ raw_spin_lock_irqsave(&hctx->queue->quiesce_sync_lock, flags);
need_run = blk_mq_hw_queue_need_run(hctx);
- spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
+ raw_spin_unlock_irqrestore(&hctx->queue->quiesce_sync_lock, flags);
if (!need_run)
return;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cd9c97f6f948..0f651a4fae8d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -480,6 +480,12 @@ struct request_queue {
struct request *last_merge;
spinlock_t queue_lock;
+ /*
+ * Synchronizes quiesce state checks between blk_mq_run_hw_queue()
+ * and blk_mq_unquiesce_queue(). Uses raw_spinlock to avoid sleeping
+ * in RT kernel's IRQ thread context during I/O completion.
+ */
+ raw_spinlock_t quiesce_sync_lock;
int quiesce_depth;
--
2.43.0
Now that the upstream code has been getting broader test coverage by our
users, we occasionally see issues with USB2 devices plugged in during boot.
Before Linux is running, the USB2 PHY has usually been running in device
mode, and it turns out that sometimes host->device or device->host
transitions don't work.

The root cause: if the role inside the USB2 PHY is re-configured when it
has already been powered on, or when dwc3 has already enabled the ULPI
interface, the new configuration sometimes doesn't take effect until dwc3
is reset again. Fix this rare issue by configuring the role much earlier.
Note that the USB3 PHY does not suffer from this issue and actually
requires dwc3 to be up before the correct role can be configured there.
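A hedged sketch of the resulting bring-up order (simplified; the actual
code is in the diff below):

    /* 1. USB2 PHY: set the role while the PHY is still powered off */
    phy_set_mode(dwc->usb2_generic_phy[0], PHY_MODE_USB_HOST);

    /* 2. deassert reset and bring up the dwc3 core */

    /* 3. USB3 PHY: needs dwc3 up, with SUSPHY enabled, first */
    dwc3_enable_susphy(dwc, true);
    phy_set_mode(dwc->usb3_generic_phy[0], PHY_MODE_USB_HOST);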
Reported-by: James Calligeros <jcalligeros99@gmail.com>
Reported-by: Janne Grunau <j@jannau.net>
Fixes: 0ec946d32ef7 ("usb: dwc3: Add Apple Silicon DWC3 glue layer driver")
Cc: stable@vger.kernel.org
Tested-by: Janne Grunau <j@jannau.net>
Reviewed-by: Janne Grunau <j@jannau.net>
Acked-by: Thinh Nguyen <Thinh.Nguyen@synopsys.com>
Signed-off-by: Sven Peter <sven@kernel.org>
---
Changes in v2:
- Picked up tags, thanks!
- Fixed a typo in the commit message (dwc2 -> dwc3)
- Link to v1: https://patch.msgid.link/20260108-dwc3-apple-usb2phy-fix-v1-1-5dd7bc642040@…
---
drivers/usb/dwc3/dwc3-apple.c | 48 +++++++++++++++++++++++++++++--------------
1 file changed, 33 insertions(+), 15 deletions(-)
diff --git a/drivers/usb/dwc3/dwc3-apple.c b/drivers/usb/dwc3/dwc3-apple.c
index cc47cad232e397ac4498b09165dfdb5bd215ded7..c2ae8eb21d514e5e493d2927bc12908c308dfe19 100644
--- a/drivers/usb/dwc3/dwc3-apple.c
+++ b/drivers/usb/dwc3/dwc3-apple.c
@@ -218,25 +218,31 @@ static int dwc3_apple_core_init(struct dwc3_apple *appledwc)
return ret;
}
-static void dwc3_apple_phy_set_mode(struct dwc3_apple *appledwc, enum phy_mode mode)
-{
- lockdep_assert_held(&appledwc->lock);
-
- /*
- * This platform requires SUSPHY to be enabled here already in order to properly configure
- * the PHY and switch dwc3's PIPE interface to USB3 PHY.
- */
- dwc3_enable_susphy(&appledwc->dwc, true);
- phy_set_mode(appledwc->dwc.usb2_generic_phy[0], mode);
- phy_set_mode(appledwc->dwc.usb3_generic_phy[0], mode);
-}
-
static int dwc3_apple_init(struct dwc3_apple *appledwc, enum dwc3_apple_state state)
{
int ret, ret_reset;
lockdep_assert_held(&appledwc->lock);
+ /*
+ * The USB2 PHY on this platform must be configured for host or device mode while it is
+ * still powered off and before dwc3 tries to access it. Otherwise, the new configuration
+ * will sometimes only take effect after the *next* time dwc3 is brought up which causes
+ * the connected device to just not work.
+ * The USB3 PHY must be configured later after dwc3 has already been initialized.
+ */
+ switch (state) {
+ case DWC3_APPLE_HOST:
+ phy_set_mode(appledwc->dwc.usb2_generic_phy[0], PHY_MODE_USB_HOST);
+ break;
+ case DWC3_APPLE_DEVICE:
+ phy_set_mode(appledwc->dwc.usb2_generic_phy[0], PHY_MODE_USB_DEVICE);
+ break;
+ default:
+ /* Unreachable unless there's a bug in this driver */
+ return -EINVAL;
+ }
+
ret = reset_control_deassert(appledwc->reset);
if (ret) {
dev_err(appledwc->dev, "Failed to deassert reset, err=%d\n", ret);
@@ -257,7 +263,13 @@ static int dwc3_apple_init(struct dwc3_apple *appledwc, enum dwc3_apple_state st
case DWC3_APPLE_HOST:
appledwc->dwc.dr_mode = USB_DR_MODE_HOST;
dwc3_apple_set_ptrcap(appledwc, DWC3_GCTL_PRTCAP_HOST);
- dwc3_apple_phy_set_mode(appledwc, PHY_MODE_USB_HOST);
+ /*
+ * This platform requires SUSPHY to be enabled here already in order to properly
+ * configure the PHY and switch dwc3's PIPE interface to USB3 PHY. The USB2 PHY
+ * has already been configured to the correct mode earlier.
+ */
+ dwc3_enable_susphy(&appledwc->dwc, true);
+ phy_set_mode(appledwc->dwc.usb3_generic_phy[0], PHY_MODE_USB_HOST);
ret = dwc3_host_init(&appledwc->dwc);
if (ret) {
dev_err(appledwc->dev, "Failed to initialize host, ret=%d\n", ret);
@@ -268,7 +280,13 @@ static int dwc3_apple_init(struct dwc3_apple *appledwc, enum dwc3_apple_state st
case DWC3_APPLE_DEVICE:
appledwc->dwc.dr_mode = USB_DR_MODE_PERIPHERAL;
dwc3_apple_set_ptrcap(appledwc, DWC3_GCTL_PRTCAP_DEVICE);
- dwc3_apple_phy_set_mode(appledwc, PHY_MODE_USB_DEVICE);
+ /*
+ * This platform requires SUSPHY to be enabled here already in order to properly
+ * configure the PHY and switch dwc3's PIPE interface to USB3 PHY. The USB2 PHY
+ * has already been configured to the correct mode earlier.
+ */
+ dwc3_enable_susphy(&appledwc->dwc, true);
+ phy_set_mode(appledwc->dwc.usb3_generic_phy[0], PHY_MODE_USB_DEVICE);
ret = dwc3_gadget_init(&appledwc->dwc);
if (ret) {
dev_err(appledwc->dev, "Failed to initialize gadget, ret=%d\n", ret);
---
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
change-id: 20260108-dwc3-apple-usb2phy-fix-cf1d26018dd0
Best regards,
--
Sven Peter <sven@kernel.org>
The error path of xfs_attr_leaf_hasname() can leave a NULL
xfs_buf pointer. xfs_has_attr() checks for the NULL pointer but
the other callers do not.
We tripped over the NULL pointer in xfs_attr_leaf_get() but fix
the other callers too.
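A hedged sketch of the callee's shape (abridged from memory; exact
signatures vary by kernel version): on the error path the function can
return -ENOATTR without ever having set *bp, so a caller that
unconditionally releases the buffer passes NULL to xfs_trans_brelse().

    /* abridged sketch of xfs_attr_leaf_hasname() */
    error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
    if (error)
            return error;   /* bp may still be NULL here */

    error = xfs_attr3_leaf_lookup_int(bp, args);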
Fixes: 07120f1abdff ("xfs: Add xfs_has_attr and subroutines")
No reproducer.
Cc: stable@vger.kernel.org # v5.19+ with another port for v5.9 - v5.18
Signed-off-by: Mark Tinguely <mark.tinguely@oracle.com>
---
fs/xfs/libxfs/xfs_attr.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
index 8c04acd30d48..25e2ecf20d14 100644
--- a/fs/xfs/libxfs/xfs_attr.c
+++ b/fs/xfs/libxfs/xfs_attr.c
@@ -1266,7 +1266,8 @@ xfs_attr_leaf_removename(
error = xfs_attr_leaf_hasname(args, &bp);
if (error == -ENOATTR) {
- xfs_trans_brelse(args->trans, bp);
+ if (bp)
+ xfs_trans_brelse(args->trans, bp);
if (args->op_flags & XFS_DA_OP_RECOVERY)
return 0;
return error;
@@ -1305,7 +1306,8 @@ xfs_attr_leaf_get(xfs_da_args_t *args)
error = xfs_attr_leaf_hasname(args, &bp);
if (error == -ENOATTR) {
- xfs_trans_brelse(args->trans, bp);
+ if (bp)
+ xfs_trans_brelse(args->trans, bp);
return error;
} else if (error != -EEXIST)
return error;
--
2.50.1 (Apple Git-155)
The for_each_available_child_of_node() loop calls of_node_put() to
release child_np on each successful iteration, so after breaking out of
the loop child_np has already been released. However, if
devm_request_threaded_irq() then fails, the code jumps to the put_child
label and calls of_node_put() again. This causes a double free bug.

Fix this by returning directly, avoiding the duplicate of_node_put().
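A hedged sketch of the reference flow (simplified; init_port() is a
hypothetical stand-in for the per-port setup, and the bounded-ports break
path is assumed to release the node as described above):

    for_each_available_child_of_node(np, child_np) {
            ret = init_port(child_np);      /* hypothetical helper */
            if (ret)
                    goto put_child;         /* still holds a child_np ref */
            if (++index >= num_ports) {
                    of_node_put(child_np);  /* released before break */
                    break;
            }
    }

    /* child_np is NULL or already released at this point */
    ret = devm_request_threaded_irq(...);
    if (ret)
            return ret;     /* must not goto put_child: no reference held */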
Fixes: ed2b5a8e6b98 ("phy: phy-rockchip-inno-usb2: support muxed interrupts")
Cc: stable@vger.kernel.org
Signed-off-by: Wentao Liang <vulab@iscas.ac.cn>
---
Changes in v2:
- Drop error jumping label.
---
drivers/phy/rockchip/phy-rockchip-inno-usb2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
index b0f23690ec30..fe97a26297af 100644
--- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
@@ -1491,7 +1491,7 @@ static int rockchip_usb2phy_probe(struct platform_device *pdev)
rphy);
if (ret) {
dev_err_probe(rphy->dev, ret, "failed to request usb2phy irq handle\n");
- goto put_child;
+ return ret;
}
}
--
2.34.1