The patch below does not apply to the 4.16-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From f572a034d9e07157dd07d2b6be3a1459b5574b58 Mon Sep 17 00:00:00 2001
From: Greentime Hu <greentime(a)andestech.com>
Date: Thu, 16 Nov 2017 19:33:35 +0800
Subject: [PATCH] earlycon: add reg-offset to physical address before mapping
The early console gets the wrong virtual address because the reg-offset has
not yet been added to port->mapbase when the mapping is created. Apply the
offset before earlycon_map() is called.
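For illustration, the corrected ordering from the hunk below, annotated with
hypothetical example values:

    port->mapbase = addr;                           /* e.g. 0x90000000 from "reg" */
    val = of_get_flat_dt_prop(node, "reg-offset", NULL);
    if (val)
        port->mapbase += be32_to_cpu(*val);         /* e.g. +0x20 -> 0x90000020 */
    port->membase = earlycon_map(port->mapbase, SZ_4K);  /* maps the adjusted address */

With the old ordering, earlycon_map() ran before the offset was applied, so
membase corresponded to 0x90000000 (in this example) while the registers
actually live at 0x90000020.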
Signed-off-by: Greentime Hu <greentime(a)andestech.com>
Acked-by: Arnd Bergmann <arnd(a)arndb.de>
Acked-by: Rob Herring <robh(a)kernel.org>
Cc: Peter Hurley <peter(a)hurleysoftware.com>
Cc: stable(a)vger.kernel.org
Fixes: 088da2a17619 ("of: earlycon: Initialize port fields from DT properties")
diff --git a/drivers/tty/serial/earlycon.c b/drivers/tty/serial/earlycon.c
index 870e84fb6e39..a24278380fec 100644
--- a/drivers/tty/serial/earlycon.c
+++ b/drivers/tty/serial/earlycon.c
@@ -245,11 +245,12 @@ int __init of_setup_earlycon(const struct earlycon_id *match,
}
port->mapbase = addr;
port->uartclk = BASE_BAUD * 16;
- port->membase = earlycon_map(port->mapbase, SZ_4K);
val = of_get_flat_dt_prop(node, "reg-offset", NULL);
if (val)
port->mapbase += be32_to_cpu(*val);
+ port->membase = earlycon_map(port->mapbase, SZ_4K);
+
val = of_get_flat_dt_prop(node, "reg-shift", NULL);
if (val)
port->regshift = be32_to_cpu(*val);
Hi,
Building 4.9.94 in the same way we have been building previous 4.9 releases yields the following error:
DEBUG: tests/code-reading.c: In function 'read_object_code':
DEBUG: tests/code-reading.c:228:19: error: 'KMOD_DECOMP_LEN' undeclared (first use in this function)
DEBUG: char decomp_name[KMOD_DECOMP_LEN];
DEBUG: ^
DEBUG: tests/code-reading.c:228:19: note: each undeclared identifier is reported only once for each function it appears in
DEBUG: tests/code-reading.c:291:3: warning: implicit declaration of function 'dso__decompress_kmodule_path' [-Wimplicit-function-declaration]
DEBUG: if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
DEBUG: ^
DEBUG: tests/code-reading.c:291:3: warning: nested extern declaration of 'dso__decompress_kmodule_path' [-Wnested-externs]
DEBUG: tests/code-reading.c:228:7: warning: unused variable 'decomp_name' [-Wunused-variable]
DEBUG: char decomp_name[KMOD_DECOMP_LEN];
DEBUG: ^
DEBUG: CC tests/topology.o
DEBUG: CC tests/cpumap.o
DEBUG: CC tests/stat.o
DEBUG: CC tests/event_update.o
DEBUG: mv: cannot stat 'tests/.code-reading.o.tmp': No such file or directory
DEBUG: make[3]: *** [tests/code-reading.o] Error 1
DEBUG: make[3]: *** Waiting for unfinished jobs....
DEBUG: make[2]: *** [util] Error 2
DEBUG: make[1]: *** [libperf-in.o] Error 2
DEBUG: make[1]: *** Waiting for unfinished jobs....
DEBUG: LD bench/perf-in.o
DEBUG: make[2]: *** [tests] Error 2
DEBUG: make[1]: *** [perf-in.o] Error 2
As far as I can see, KMOD_DECOMP_LEN was introduced by 7525a238be8f ("perf tests: Decompress kernel
module before objdump"), but I have zero deep knowledge in this area so I may be very wrong here.
Cheers,
Pavlos Parissis
The normal request completion can happen before or while BLK_EH_RESET_TIMER
is being handled, and this race may cause the request to never be completed,
since the driver's .timeout() may keep returning BLK_EH_RESET_TIMER.
This issue can't be fixed completely by the driver, since the normal
completion can happen between .timeout() returning and BLK_EH_RESET_TIMER
being handled.
This patch fixes the race by introducing the rq state MQ_RQ_COMPLETE_IN_TIMEOUT
and by reading/writing the rq's state while holding the queue lock. The lock
could actually be per-request, but it is not worth introducing a dedicated
lock for such an unusual event.
Also handle the timed-out requests in two steps (condensed sketch below):
1) in the 1st step, call .timeout(), and reset the timer for BLK_EH_RESET_TIMER
2) in the 2nd step, sync with the normal completion path while holding the
queue lock, to avoid the race between BLK_EH_RESET_TIMER and normal completion.
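A condensed sketch of the reworked blk_mq_timeout_work() flow that implements
the two steps (simplified from the diff below; locking and bookkeeping omitted):

    /* mark timed-out requests by setting their aborted_gstate */
    blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &data);
    /* wait until all completion paths observe aborted_gstate */
    blk_mq_timeout_synchronize_rcu(q, false);
    /* step 1: call .timeout(); blk_add_timer() for BLK_EH_RESET_TIMER */
    blk_mq_queue_tag_busy_iter(q, blk_mq_prepare_expired, NULL);
    /* wait for normal completions that raced with step 1 */
    blk_mq_timeout_synchronize_rcu(q, true);
    /* step 2: complete the requests the timeout path won, under queue_lock */
    blk_mq_queue_tag_busy_iter(q, blk_mq_terminate_expired, NULL);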
Cc: "jianchao.wang" <jianchao.w.wang(a)oracle.com>
Cc: Bart Van Assche <bart.vanassche(a)wdc.com>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Ming Lei <ming.lei(a)redhat.com>
Cc: Sagi Grimberg <sagi(a)grimberg.me>
Cc: Israel Rukshin <israelr(a)mellanox.com>
Cc: Max Gurtovoy <maxg(a)mellanox.com>
Cc: stable(a)vger.kernel.org
Cc: Martin Steigerwald <Martin(a)Lichtvoll.de>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
---
block/blk-mq.c | 116 ++++++++++++++++++++++++++++++++++++++++---------
block/blk-mq.h | 1 +
include/linux/blkdev.h | 6 +++
3 files changed, 102 insertions(+), 21 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d6a21898933d..9415e65302a8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -630,10 +630,27 @@ void blk_mq_complete_request(struct request *rq)
* However, that would complicate paths which want to synchronize
* against us. Let stay in sync with the issue path so that
* hctx_lock() covers both issue and completion paths.
+ *
+ * Cover complete vs BLK_EH_RESET_TIMER race in slow path with
+ * holding queue lock.
*/
hctx_lock(hctx, &srcu_idx);
if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
__blk_mq_complete_request(rq);
+ else {
+ unsigned long flags;
+ bool need_complete = false;
+
+ spin_lock_irqsave(q->queue_lock, flags);
+ if (!blk_mq_rq_aborted_gstate(rq))
+ need_complete = true;
+ else
+ blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE_IN_TIMEOUT);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+
+ if (need_complete)
+ __blk_mq_complete_request(rq);
+ }
hctx_unlock(hctx, srcu_idx);
}
EXPORT_SYMBOL(blk_mq_complete_request);
@@ -810,7 +827,7 @@ struct blk_mq_timeout_data {
unsigned int nr_expired;
};
-static void blk_mq_rq_timed_out(struct request *req, bool reserved)
+static void blk_mq_rq_pre_timed_out(struct request *req, bool reserved)
{
const struct blk_mq_ops *ops = req->q->mq_ops;
enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
@@ -818,18 +835,44 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
if (ops->timeout)
ret = ops->timeout(req, reserved);
+ if (ret == BLK_EH_RESET_TIMER)
+ blk_add_timer(req);
+
+ req->timeout_ret = ret;
+}
+
+static void blk_mq_rq_timed_out(struct request *req, bool reserved)
+{
+ enum blk_eh_timer_return ret = req->timeout_ret;
+ unsigned long flags;
+
switch (ret) {
case BLK_EH_HANDLED:
+ spin_lock_irqsave(req->q->queue_lock, flags);
+ complete_rq:
+ if (blk_mq_rq_state(req) == MQ_RQ_COMPLETE_IN_TIMEOUT)
+ blk_mq_rq_update_state(req, MQ_RQ_IN_FLIGHT);
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
__blk_mq_complete_request(req);
break;
case BLK_EH_RESET_TIMER:
/*
- * As nothing prevents from completion happening while
- * ->aborted_gstate is set, this may lead to ignored
- * completions and further spurious timeouts.
+ * The normal completion may happen during handling the
+ * timeout, or even after returning from .timeout(), so
+ * once the request has been completed, we can't reset
+ * timer any more since this request may be handled as
+ * BLK_EH_RESET_TIMER in next timeout handling too, and
+ * it has to be completed in this situation.
+ *
+ * Holding the queue lock to cover read/write rq's
+ * aborted_gstate and normal state, so the race can be
+ * avoided completely.
*/
+ spin_lock_irqsave(req->q->queue_lock, flags);
blk_mq_rq_update_aborted_gstate(req, 0);
- blk_add_timer(req);
+ if (blk_mq_rq_state(req) == MQ_RQ_COMPLETE_IN_TIMEOUT)
+ goto complete_rq;
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
break;
case BLK_EH_NOT_HANDLED:
req->rq_flags |= RQF_MQ_TIMEOUT_EXPIRED;
@@ -875,7 +918,7 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
}
}
-static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
+static void blk_mq_prepare_expired(struct blk_mq_hw_ctx *hctx,
struct request *rq, void *priv, bool reserved)
{
/*
@@ -887,9 +930,40 @@ static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
*/
if (!(rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) &&
READ_ONCE(rq->gstate) == rq->aborted_gstate)
+ blk_mq_rq_pre_timed_out(rq, reserved);
+}
+
+static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
+ struct request *rq, void *priv, bool reserved)
+{
+ if (!(rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) &&
+ READ_ONCE(rq->gstate) == rq->aborted_gstate)
blk_mq_rq_timed_out(rq, reserved);
}
+static void blk_mq_timeout_synchronize_rcu(struct request_queue *q,
+ bool reset_expired)
+{
+ struct blk_mq_hw_ctx *hctx;
+ int i;
+ bool has_rcu = false;
+
+ queue_for_each_hw_ctx(q, hctx, i) {
+ if (!hctx->nr_expired)
+ continue;
+
+ if (!(hctx->flags & BLK_MQ_F_BLOCKING))
+ has_rcu = true;
+ else
+ synchronize_srcu(hctx->srcu);
+
+ if (reset_expired)
+ hctx->nr_expired = 0;
+ }
+ if (has_rcu)
+ synchronize_rcu();
+}
+
static void blk_mq_timeout_work(struct work_struct *work)
{
struct request_queue *q =
@@ -899,8 +973,6 @@ static void blk_mq_timeout_work(struct work_struct *work)
.next_set = 0,
.nr_expired = 0,
};
- struct blk_mq_hw_ctx *hctx;
- int i;
/* A deadlock might occur if a request is stuck requiring a
* timeout at the same time a queue freeze is waiting
@@ -922,27 +994,26 @@ static void blk_mq_timeout_work(struct work_struct *work)
blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &data);
if (data.nr_expired) {
- bool has_rcu = false;
-
/*
* Wait till everyone sees ->aborted_gstate. The
* sequential waits for SRCUs aren't ideal. If this ever
* becomes a problem, we can add per-hw_ctx rcu_head and
* wait in parallel.
*/
- queue_for_each_hw_ctx(q, hctx, i) {
- if (!hctx->nr_expired)
- continue;
+ blk_mq_timeout_synchronize_rcu(q, false);
- if (!(hctx->flags & BLK_MQ_F_BLOCKING))
- has_rcu = true;
- else
- synchronize_srcu(hctx->srcu);
+ /* call .timeout() for timed-out requests */
+ blk_mq_queue_tag_busy_iter(q, blk_mq_prepare_expired, NULL);
- hctx->nr_expired = 0;
- }
- if (has_rcu)
- synchronize_rcu();
+ /*
+ * If .timeout returns BLK_EH_HANDLED, wait till current
+ * completion is done, to avoid updating the state of a
+ * completed request.
+ *
+ * If .timeout returns BLK_EH_RESET_TIMER, wait till
+ * blk_add_timer() is committed before completing this rq.
+ */
+ blk_mq_timeout_synchronize_rcu(q, true);
/* terminate the ones we won */
blk_mq_queue_tag_busy_iter(q, blk_mq_terminate_expired, NULL);
@@ -952,6 +1023,9 @@ static void blk_mq_timeout_work(struct work_struct *work)
data.next = blk_rq_timeout(round_jiffies_up(data.next));
mod_timer(&q->timeout, data.next);
} else {
+ struct blk_mq_hw_ctx *hctx;
+ int i;
+
/*
* Request timeouts are handled as a forward rolling timer. If
* we end up here it means that no requests are pending and
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 88c558f71819..0426d048743d 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -35,6 +35,7 @@ enum mq_rq_state {
MQ_RQ_IDLE = 0,
MQ_RQ_IN_FLIGHT = 1,
MQ_RQ_COMPLETE = 2,
+ MQ_RQ_COMPLETE_IN_TIMEOUT = 3,
MQ_RQ_STATE_BITS = 2,
MQ_RQ_STATE_MASK = (1 << MQ_RQ_STATE_BITS) - 1,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 9af3e0f430bc..8278f67d39a6 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -252,8 +252,14 @@ struct request {
struct list_head timeout_list;
union {
+ /* used after completion */
struct __call_single_data csd;
+
+ /* used in io scheduler, before dispatch */
u64 fifo_time;
+
+ /* used after dispatch and before completion */
+ int timeout_ret;
};
/*
--
2.9.5
The normal request completion can happen before or while BLK_EH_RESET_TIMER
is being handled, and this race may cause the request to never be completed,
since the driver's .timeout() may keep returning BLK_EH_RESET_TIMER.
This issue can't be fixed completely by the driver, since the normal
completion can happen between .timeout() returning and BLK_EH_RESET_TIMER
being handled.
This patch fixes the race by introducing the rq state MQ_RQ_COMPLETE_IN_TIMEOUT
and by reading/writing the rq's state while holding the queue lock. The lock
could actually be per-request, but it is not worth introducing a dedicated
lock for such an unusual event.
Also, when .timeout() returns BLK_EH_HANDLED, sync with the normal completion
path before finally completing this timed-out rq, so that the normal
completion path can no longer touch this rq's state.
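For illustration, one way the race can play out without this patch (a sketch,
not a captured trace):

    CPU0 (normal completion)            CPU1 (timeout handling)
                                        aborted_gstate already set for rq
    blk_mq_complete_request()
      sees aborted_gstate == gstate,
      completion is dropped
                                        .timeout() returns BLK_EH_RESET_TIMER
                                        blk_mq_rq_update_aborted_gstate(rq, 0)
                                        blk_add_timer(rq)

The device will not complete the request again, so if .timeout() keeps
returning BLK_EH_RESET_TIMER on every later expiry, the request is never
completed.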
Cc: "jianchao.wang" <jianchao.w.wang(a)oracle.com>
Cc: Bart Van Assche <bart.vanassche(a)wdc.com>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Ming Lei <ming.lei(a)redhat.com>
Cc: Sagi Grimberg <sagi(a)grimberg.me>
Cc: Israel Rukshin <israelr(a)mellanox.com>
Cc: Max Gurtovoy <maxg(a)mellanox.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
---
V3:
- before completing an rq for BLK_EH_HANDLED, sync with the normal
completion path
- make sure the rq's state is updated to MQ_RQ_IN_FLIGHT before completing
V2:
- rename the new flag as MQ_RQ_COMPLETE_IN_TIMEOUT
- fix lock uses in blk_mq_rq_timed_out
- document the reordering between blk_add_timer() and
blk_mq_rq_update_aborted_gstate(req, 0)
block/blk-mq.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++-----------
block/blk-mq.h | 1 +
2 files changed, 70 insertions(+), 16 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0dc9e341c2a7..d70f69a32226 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -198,23 +198,12 @@ void blk_mq_quiesce_queue_nowait(struct request_queue *q)
}
EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue_nowait);
-/**
- * blk_mq_quiesce_queue() - wait until all ongoing dispatches have finished
- * @q: request queue.
- *
- * Note: this function does not prevent that the struct request end_io()
- * callback function is invoked. Once this function is returned, we make
- * sure no dispatch can happen until the queue is unquiesced via
- * blk_mq_unquiesce_queue().
- */
-void blk_mq_quiesce_queue(struct request_queue *q)
+static void blk_mq_queue_synchronize_rcu(struct request_queue *q)
{
struct blk_mq_hw_ctx *hctx;
unsigned int i;
bool rcu = false;
- blk_mq_quiesce_queue_nowait(q);
-
queue_for_each_hw_ctx(q, hctx, i) {
if (hctx->flags & BLK_MQ_F_BLOCKING)
synchronize_srcu(hctx->srcu);
@@ -224,6 +213,21 @@ void blk_mq_quiesce_queue(struct request_queue *q)
if (rcu)
synchronize_rcu();
}
+
+/**
+ * blk_mq_quiesce_queue() - wait until all ongoing dispatches have finished
+ * @q: request queue.
+ *
+ * Note: this function does not prevent that the struct request end_io()
+ * callback function is invoked. Once this function is returned, we make
+ * sure no dispatch can happen until the queue is unquiesced via
+ * blk_mq_unquiesce_queue().
+ */
+void blk_mq_quiesce_queue(struct request_queue *q)
+{
+ blk_mq_quiesce_queue_nowait(q);
+ blk_mq_queue_synchronize_rcu(q);
+}
EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue);
/*
@@ -630,10 +634,27 @@ void blk_mq_complete_request(struct request *rq)
* However, that would complicate paths which want to synchronize
* against us. Let stay in sync with the issue path so that
* hctx_lock() covers both issue and completion paths.
+ *
+ * Cover complete vs BLK_EH_RESET_TIMER race in slow path with
+ * holding queue lock.
*/
hctx_lock(hctx, &srcu_idx);
if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
__blk_mq_complete_request(rq);
+ else {
+ unsigned long flags;
+ bool need_complete = false;
+
+ spin_lock_irqsave(q->queue_lock, flags);
+ if (!blk_mq_rq_aborted_gstate(rq))
+ need_complete = true;
+ else
+ blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE_IN_TIMEOUT);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+
+ if (need_complete)
+ __blk_mq_complete_request(rq);
+ }
hctx_unlock(hctx, srcu_idx);
}
EXPORT_SYMBOL(blk_mq_complete_request);
@@ -814,6 +835,7 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
{
const struct blk_mq_ops *ops = req->q->mq_ops;
enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
+ unsigned long flags;
req->rq_flags |= RQF_MQ_TIMEOUT_EXPIRED;
@@ -822,16 +844,47 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
switch (ret) {
case BLK_EH_HANDLED:
+ /*
+ * If .timeout returns BLK_EH_HANDLED, this rq shouldn't
+ * be completed by normal irq context any more, but for
+ * the sake of safety, sync with normal completion path
+ * before completing this request finally because the
+ * normal completion path may touch this rq's state.
+ */
+ blk_mq_queue_synchronize_rcu(req->q);
+
+ spin_lock_irqsave(req->q->queue_lock, flags);
+ complete_rq:
+ if (blk_mq_rq_state(req) == MQ_RQ_COMPLETE_IN_TIMEOUT)
+ blk_mq_rq_update_state(req, MQ_RQ_IN_FLIGHT);
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
__blk_mq_complete_request(req);
break;
case BLK_EH_RESET_TIMER:
/*
- * As nothing prevents from completion happening while
- * ->aborted_gstate is set, this may lead to ignored
- * completions and further spurious timeouts.
+ * The normal completion may happen during handling the
+ * timeout, or even after returning from .timeout(), so
+ * once the request has been completed, we can't reset
+ * timer any more since this request may be handled as
+ * BLK_EH_RESET_TIMER in next timeout handling too, and
+ * it has to be completed in this situation.
+ *
+ * Holding the queue lock to cover read/write rq's
+ * aborted_gstate and normal state, so the race can be
+ * avoided completely.
+ *
+ * blk_add_timer() may be re-ordered with resetting
+ * aborted_gstate, and the only side-effect is that if this
+ * request is recycled after aborted_gstate is cleared, it
+ * may be timed out a bit late, which we can survive given
+ * that timeout events are so unusual.
*/
- blk_mq_rq_update_aborted_gstate(req, 0);
+ spin_lock_irqsave(req->q->queue_lock, flags);
+ if (blk_mq_rq_state(req) == MQ_RQ_COMPLETE_IN_TIMEOUT)
+ goto complete_rq;
blk_add_timer(req);
+ blk_mq_rq_update_aborted_gstate(req, 0);
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
break;
case BLK_EH_NOT_HANDLED:
break;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 88c558f71819..0426d048743d 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -35,6 +35,7 @@ enum mq_rq_state {
MQ_RQ_IDLE = 0,
MQ_RQ_IN_FLIGHT = 1,
MQ_RQ_COMPLETE = 2,
+ MQ_RQ_COMPLETE_IN_TIMEOUT = 3,
MQ_RQ_STATE_BITS = 2,
MQ_RQ_STATE_MASK = (1 << MQ_RQ_STATE_BITS) - 1,
--
2.9.5
From: Takashi Iwai <tiwai(a)suse.de>
[ Upstream commit d7f910bfedd863d13ea320030fe98e42d0938ed5 ]
For accessing the snd_timer_user queue indices, we take tu->qlock.
But it's forgotten in a couple of places.
The one in snd_timer_user_params() should be safe without the
spinlock as the timer is already stopped. But it's better for
consistency.
The one in poll is just a read-out, so it's not strictly needed, but
it'd be good to make the result consistent, too.
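As a rough sketch, the pattern applied in the hunks below is simply to bracket
the read-out of the queue state with the same lock the update paths take:

    spin_lock_irq(&tu->qlock);
    if (tu->qused)
        mask |= POLLIN | POLLRDNORM;
    if (tu->disconnected)
        mask |= POLLERR;
    spin_unlock_irq(&tu->qlock);

Using spin_lock_irq() rather than the irqsave variant matches the other
process-context users of tu->qlock in this file.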
Tested-by: Alexander Potapenko <glider(a)google.com>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Sasha Levin <alexander.levin(a)microsoft.com>
---
sound/core/timer.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/sound/core/timer.c b/sound/core/timer.c
index 48eaccba82a3..fd622aa0bb93 100644
--- a/sound/core/timer.c
+++ b/sound/core/timer.c
@@ -1771,6 +1771,7 @@ static int snd_timer_user_params(struct file *file,
}
}
}
+ spin_lock_irq(&tu->qlock);
tu->qhead = tu->qtail = tu->qused = 0;
if (tu->timeri->flags & SNDRV_TIMER_IFLG_EARLY_EVENT) {
if (tu->tread) {
@@ -1791,6 +1792,7 @@ static int snd_timer_user_params(struct file *file,
}
tu->filter = params.filter;
tu->ticks = params.ticks;
+ spin_unlock_irq(&tu->qlock);
err = 0;
_end:
if (copy_to_user(_params, &params, sizeof(params)))
@@ -2029,10 +2031,12 @@ static unsigned int snd_timer_user_poll(struct file *file, poll_table * wait)
poll_wait(file, &tu->qchange_sleep, wait);
mask = 0;
+ spin_lock_irq(&tu->qlock);
if (tu->qused)
mask |= POLLIN | POLLRDNORM;
if (tu->disconnected)
mask |= POLLERR;
+ spin_unlock_irq(&tu->qlock);
return mask;
}
--
2.15.1
From: Fabio Estevam <fabio.estevam(a)nxp.com>
imx6ul and imx7 report the following error:
caam_jr 2142000.jr1: 40000789: DECO: desc idx 7:
Protocol Size Error - A protocol has seen an error in size. When
running RSA, pdb size N < (size of F) when no formatting is used; or
pdb size N < (F + 11) when formatting is used.
------------[ cut here ]------------
WARNING: CPU: 0 PID: 1 at crypto/asymmetric_keys/public_key.c:148
public_key_verify_signature+0x27c/0x2b0
This error happens because the signature contains 257 bytes, including
a leading zero as the first element.
Fix the problem by stripping off the leading zero from input data
before feeding it to the CAAM accelerator.
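A minimal, standalone sketch of the stripping idea (hypothetical buffer; the
patch does this on a bounce buffer copied from req->src, as shown below):

    const u8 *p = sig;      /* e.g. sig_len == 257 and sig[0] == 0x00 */
    size_t n = sig_len;

    while (n && !*p) {      /* same idea as caam_rsa_drop_leading_zeros() */
        p++;
        n--;
    }
    /* now n == 256 and p points at the most significant non-zero byte,
     * so the input length matches the RSA key size again */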
Fixes: 8c419778ab57e497b5 ("crypto: caam - add support for RSA algorithm")
Cc: <stable(a)vger.kernel.org>
Reported-by: Martin Townsend <mtownsend1973(a)gmail.com>
Signed-off-by: Fabio Estevam <fabio.estevam(a)nxp.com>
---
Changes since v1:
- Use a temp pointer
- Assign len to req->src_len, so that more than one leading zero
can be taken into account
drivers/crypto/caam/caampkc.c | 45 +++++++++++++++++++++++++++++++++++--------
1 file changed, 37 insertions(+), 8 deletions(-)
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 7a897209..5f3e627 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -166,6 +166,14 @@ static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
akcipher_request_complete(req, err);
}
+static void caam_rsa_drop_leading_zeros(const u8 **ptr, size_t *nbytes)
+{
+ while (!**ptr && *nbytes) {
+ (*ptr)++;
+ (*nbytes)--;
+ }
+}
+
static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
size_t desclen)
{
@@ -178,7 +186,36 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
int sgc;
int sec4_sg_index, sec4_sg_len = 0, sec4_sg_bytes;
int src_nents, dst_nents;
+ const u8 *temp;
+ void *buffer;
+ size_t len;
+
+ buffer = kzalloc(req->src_len, GFP_ATOMIC);
+ if (!buffer)
+ return ERR_PTR(-ENOMEM);
+
+ sg_copy_to_buffer(req->src, sg_nents(req->src),
+ buffer, req->src_len);
+ temp = (u8 *)buffer;
+ len = req->src_len;
+ /*
+ * Check if the buffer contains leading zeros and if
+ * it does, drop the leading zeros
+ */
+ if (temp[0] == 0) {
+ caam_rsa_drop_leading_zeros(&temp, &len);
+ if (!temp) {
+ kfree(buffer);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ req->src_len = len;
+ sg_copy_from_buffer(req->src, sg_nents(req->src),
+ (void *)temp, req->src_len);
+ }
+
+ kfree(buffer);
src_nents = sg_nents_for_len(req->src, req->src_len);
dst_nents = sg_nents_for_len(req->dst, req->dst_len);
@@ -683,14 +720,6 @@ static void caam_rsa_free_key(struct caam_rsa_key *key)
memset(key, 0, sizeof(*key));
}
-static void caam_rsa_drop_leading_zeros(const u8 **ptr, size_t *nbytes)
-{
- while (!**ptr && *nbytes) {
- (*ptr)++;
- (*nbytes)--;
- }
-}
-
/**
* caam_read_rsa_crt - Used for reading dP, dQ, qInv CRT members.
* dP, dQ and qInv could decode to less than corresponding p, q length, as the
--
2.7.4
When blk_queue_enter() waits for a queue to unfreeze, or unset the
PREEMPT_ONLY flag, do not allow it to be interrupted by a signal.
The PREEMPT_ONLY flag was introduced later in commit 3a0a529971ec
("block, scsi: Make SCSI quiesce and resume work reliably"). Note the SCSI
device is resumed asynchronously, i.e. after un-freezing userspace tasks.
So that commit exposed the bug as a regression in v4.15. A mysterious
SIGBUS (or -EIO) sometimes happened during the time the device was being
resumed. Most frequently, there was no kernel log message, and we saw Xorg
or Xwayland killed by SIGBUS.[1]
[1] E.g. https://bugzilla.redhat.com/show_bug.cgi?id=1553979
Without this fix, I get an IO error in this test:
# dd if=/dev/sda of=/dev/null iflag=direct & \
while killall -SIGUSR1 dd; do sleep 0.1; done & \
echo mem > /sys/power/state ; \
sleep 5; killall dd # stop after 5 seconds
The interruptible wait was added to blk_queue_enter in
commit 3ef28e83ab15 ("block: generic request_queue reference counting").
Before then, the interruptible wait was only in blk-mq, but I don't think
it could ever have been correct.
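Condensed view of the resulting wait, taken from the hunk below: the task now
sleeps uninterruptibly until the queue is unfrozen (and, unless this is a
preempt request, PREEMPT_ONLY is cleared) or the queue is dying, so a signal
delivered while the device is resuming can no longer surface as an I/O error:

    wait_event(q->mq_freeze_wq,
               (atomic_read(&q->mq_freeze_depth) == 0 &&
                (preempt || !blk_queue_preempt_only(q))) ||
               blk_queue_dying(q));
    if (blk_queue_dying(q))
        return -ENODEV;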
Cc: stable(a)vger.kernel.org
Signed-off-by: Alan Jenkins <alan.christopher.jenkins(a)gmail.com>
---
block/blk-core.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index abcb8684ba67..5a6d20069364 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -915,7 +915,6 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
while (true) {
bool success = false;
- int ret;
rcu_read_lock();
if (percpu_ref_tryget_live(&q->q_usage_counter)) {
@@ -947,14 +946,12 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
*/
smp_rmb();
- ret = wait_event_interruptible(q->mq_freeze_wq,
+ wait_event(q->mq_freeze_wq,
(atomic_read(&q->mq_freeze_depth) == 0 &&
(preempt || !blk_queue_preempt_only(q))) ||
blk_queue_dying(q));
if (blk_queue_dying(q))
return -ENODEV;
- if (ret)
- return ret;
}
}
--
2.14.3