The function sdm845_slim_snd_hw_params() calls the function
snd_soc_dai_set_channel_map() but does not check its return
value. A proper implementation can be found in msm_snd_hw_params().

Add error handling for snd_soc_dai_set_channel_map(). If the
function fails and the error is not an "unsupported" error (-ENOTSUPP),
return the error code immediately.
Fixes: 5caf64c633a3 ("ASoC: qcom: sdm845: add support to DB845c and Lenovo Yoga")
Cc: stable(a)vger.kernel.org # v5.6
Signed-off-by: Wentao Liang <vulab(a)iscas.ac.cn>
---
sound/soc/qcom/sdm845.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/sound/soc/qcom/sdm845.c b/sound/soc/qcom/sdm845.c
index a479d7e5b7fb..314ff68506d9 100644
--- a/sound/soc/qcom/sdm845.c
+++ b/sound/soc/qcom/sdm845.c
@@ -91,6 +91,10 @@ static int sdm845_slim_snd_hw_params(struct snd_pcm_substream *substream,
else
ret = snd_soc_dai_set_channel_map(cpu_dai, tx_ch_cnt,
tx_ch, 0, NULL);
+ if (ret != 0 && ret != -ENOTSUPP) {
+ dev_err(rtd->dev, "failed to set cpu chan map, err:%d\n", ret);
+ return ret;
+ }
}
return 0;
--
2.42.0.windows.2
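
For readers who want the pattern outside of diff context, below is a minimal,
illustrative sketch of the error handling the hunk above applies. It is not
the actual sdm845.c code: the helper name example_set_cpu_channel_map() and
its parameters are made up, and it assumes only that
snd_soc_dai_set_channel_map() returns -ENOTSUPP when a DAI driver does not
implement channel maps.

#include <sound/soc.h>
#include <sound/soc-dai.h>

/* Illustrative helper: apply a TX channel map and treat "not supported"
 * as non-fatal, mirroring the hunk above; any other error is propagated.
 */
static int example_set_cpu_channel_map(struct snd_soc_pcm_runtime *rtd,
				       struct snd_soc_dai *cpu_dai,
				       unsigned int tx_ch_cnt,
				       unsigned int *tx_ch)
{
	int ret;

	ret = snd_soc_dai_set_channel_map(cpu_dai, tx_ch_cnt, tx_ch, 0, NULL);
	if (ret != 0 && ret != -ENOTSUPP) {
		dev_err(rtd->dev, "failed to set cpu chan map, err:%d\n", ret);
		return ret;
	}

	return 0;
}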
On 5/21/25 11:48, Maud Spierings wrote:
> On 5/21/25 11:29, Christian Heusel wrote:
>> On 25/05/21 10:53AM, Maud Spierings wrote:
>>> I've just experienced an issue that I think may be a regression.
>>>
>>> I'm enabling a device which incorporates a lis2dw12 accelerometer;
>>> currently I am running 6.12 LTS, so 6.12.29 as of typing this message.
>>
>> Could you check whether the latest mainline release (at the time this is
>> v6.15-rc7) is also affected? If that's not the case the bug might
>> already be fixed ^_^
>
> Unfortunately that doesn't seem to be the case; it still panics. I also
> tried 6.12(.0), but that also has the panic, so the regression is
> definitely older than this LTS.
>
>> Also as you said that this is a regression, what is the last revision
>> that the accelerometer worked with?
>
> That's a difficult one to pin down. I'm moving from the NXP vendor kernel
> to mainline; the last one that I know for sure works is 5.10.72 of that
> vendor kernel.
I did some more digging and the latest LTS it seems to work with is
6.1.139; 6.6.91 also crashes. So it seems to be a very old regression.
>>> This is where my ability to fix things fizzles out, and so here I am
>>> asking for assistance.
>>>
>>> Kind regards,
>>> Maud
>>
>> Cheers,
>> Chris
>
> From: Parav Pandit <parav(a)nvidia.com>
> Sent: Thursday, May 22, 2025 1:19 PM
> To: Max Gurtovoy <mgurtovoy(a)nvidia.com>; Israel Rukshin <israelr(a)nvidia.com>
> Cc: Parav Pandit <parav(a)nvidia.com>; stable(a)vger.kernel.org; NBU-Contact-Li Rongqing (EXTERNAL) <lirongqing(a)baidu.com>
> Subject: [PATCH v6] virtio_blk: Fix disk deletion hang on device surprise removal
>
> When the PCI device is surprise removed, requests may not be completed by
> the device, as the VQ is marked as broken. Due to this, the disk deletion
> hangs.
>
> Fix it by aborting the requests when the VQ is broken.
>
> With this fix, fio now completes swiftly.
> An alternative of relying on the I/O timeout was considered; however, when
> the driver knows the block device is unresponsive, clearing the requests
> swiftly enables users and upper layers to react quickly.
>
> Verified with multiple device unplug iterations, with pending requests in
> the virtio used ring and some pending with the device.
>
> Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> Cc: stable(a)vger.kernel.org
> Reported-by: Li RongQing <lirongqing(a)baidu.com>
> Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> Signed-off-by: Parav Pandit <parav(a)nvidia.com>
>
This is an internal patch, which got CCed to stable by mistake.
Please ignore this patch for stable kernels.
It is still under internal review.
I am sorry for the noise.
> ---
> v1->v2: (internal v5->v6):
> - Addressed comments from Stephan
> - fixed spelling to 'waiting'
> v1->v2: (internal v4->v5):
> - Addressed comments from MST
> - removed the vq broken check in queue_rq(s)
> ---
> drivers/block/virtio_blk.c | 85 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 85 insertions(+)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 7cffea01d868..04f24ec20405 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -1554,6 +1554,89 @@ static int virtblk_probe(struct virtio_device *vdev)
> return err;
> }
>
> +static bool virtblk_request_cancel(struct request *rq, void *data)
> +{
> + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> + struct virtio_blk *vblk = data;
> + struct virtio_blk_vq *vq;
> + unsigned long flags;
> +
> + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> +
> + spin_lock_irqsave(&vq->lock, flags);
> +
> + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> + blk_mq_complete_request(rq);
> +
> + spin_unlock_irqrestore(&vq->lock, flags);
> + return true;
> +}
> +
> +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
> +{
> + struct request_queue *q = vblk->disk->queue;
> +
> + if (!virtqueue_is_broken(vblk->vqs[0].vq))
> + return;
> +
> + /* Start freezing the queue, so that new requests keep waiting at the
> + * door of bio_queue_enter(). We cannot fully freeze the queue because
> + * a frozen queue is an empty queue and there are pending requests, so
> + * only start freezing it.
> + */
> + blk_freeze_queue_start(q);
> +
> + /* When quiescing completes, all ongoing dispatches have completed
> + * and no new dispatch will happen towards the driver.
> + * This ensures that later, when cancel is attempted, requests are not
> + * being processed by the queue_rq() or queue_rqs() handlers.
> + */
> + blk_mq_quiesce_queue(q);
> +
> + /*
> + * Synchronize with any ongoing VQ callbacks, effectively quiescing
> + * the device and preventing it from completing further requests
> + * to the block layer. Any outstanding, incomplete requests will be
> + * completed by virtblk_request_cancel().
> + */
> + virtio_synchronize_cbs(vblk->vdev);
> +
> + /* At this point, no new requests can enter queue_rq() and the
> + * completion routine will not complete any new requests either for the
> + * broken vq. Hence, it is safe to cancel all requests which are
> + * started.
> + */
> + blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
> + blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> +
> + /* All pending requests are cleaned up. Time to resume so that disk
> + * deletion can be smooth. Start the HW queues so that when the queue is
> + * unquiesced, requests can again enter the driver.
> + */
> + blk_mq_start_stopped_hw_queues(q, true);
> +
> + /* Unquiescing will trigger dispatching to the driver of any pending
> + * requests which have already crossed bio_queue_enter().
> + */
> + blk_mq_unquiesce_queue(q);
> +
> + /* Wait for all pending dispatches, which may have been initiated after
> + * unquiescing, to terminate.
> + */
> + blk_mq_freeze_queue_wait(q);
> +
> + /* Mark the disk dead so that once the queue is unfrozen, the requests
> + * waiting at the door of bio_queue_enter() can be aborted right away.
> + */
> + blk_mark_disk_dead(vblk->disk);
> +
> + /* Unfreeze the queue so that any waiting requests will be aborted. */
> + blk_mq_unfreeze_queue_nomemrestore(q);
> +}
> +
> static void virtblk_remove(struct virtio_device *vdev)
> {
> struct virtio_blk *vblk = vdev->priv;
> @@ -1561,6 +1644,8 @@ static void virtblk_remove(struct virtio_device *vdev)
> /* Make sure no work handler is accessing the device. */
> flush_work(&vblk->config_work);
>
> + virtblk_broken_device_cleanup(vblk);
> +
> del_gendisk(vblk->disk);
> blk_mq_free_tag_set(&vblk->tag_set);
>
> --
> 2.34.1
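
As background for the cancellation step in the patch above, here is a minimal,
illustrative sketch of the blk_mq_tagset_busy_iter() /
blk_mq_tagset_wait_completed_request() pairing it relies on. This is generic
driver code, not virtio_blk itself; the names mydrv_cancel_rq() and
mydrv_abort_inflight() are made up.

#include <linux/blk-mq.h>

/* Invoked once per started request by blk_mq_tagset_busy_iter().
 * Completing the request hands it to the driver's ->complete() handler,
 * which is expected to end it with an error status. Returning true tells
 * the iterator to keep going.
 */
static bool mydrv_cancel_rq(struct request *rq, void *data)
{
	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
		blk_mq_complete_request(rq);
	return true;
}

/* Abort everything still outstanding on an unresponsive device. */
static void mydrv_abort_inflight(struct blk_mq_tag_set *set)
{
	blk_mq_tagset_busy_iter(set, mydrv_cancel_rq, NULL);
	/* Block until the block layer has observed all completions above. */
	blk_mq_tagset_wait_completed_request(set);
}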
Hi,
I'd like to report a regression which seems related to the latest
ITS mitigations in Linux 6.1.x:
The server in question is a Supermicro SYS-120C-TN10R with an
"Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz" CPU, running Debian
Bookworm. The full output of /proc/cpuinfo is attached as cpuinfo.txt.
In addition to the kernel changes between 6.1.135 and 6.1.139,
there is also an additional variable, namely the Intel microcode
loaded at early boot:
On Linux 6.1.135 everything works fine with both the 20250211 and
20250512 microcode releases (kern.log is attached as
6.1.135-feb-microcode.log and 6.1.135-may-microcode.log).
With 6.1.139 and the February microcode, oopses appear related
to clear_bhb_loop() (which may be related to "x86/its: Align
RETs in BHB clear sequence to avoid thunking"?). This is
captured in 6.1.139-feb-microcode.log.
With 6.1.139 and the May microcode, the system mostly
crashes on bootup (in my tests it crashed in three out of
four attempts). I've captured both the crash
(6.1.139-may-microcode-crash.log) and a working boot
(6.1.139-may-microcode-noncrash.log).
If you need any additional information, please let me know!
Cheers,
Moritz