Hi Jan,
What do you think about the following patch on top of yours?
The new helper that I've added on top of your patch will also help future uses of rcu_dereference_protected(), e.g. the blktrace extension [1] support that I'm working on.
P.S. This is compile-tested only; if you're okay with it, I'll send it as a separate patch.
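To illustrate the kind of future use I mean, here is a hypothetical caller of the blk_trace_rcu_deref() helper added below (blk_trace_ext_enabled() is a made-up name, only meant to show the intended pattern of touching q->blk_trace under q->blk_trace_mutex):

/*
 * Sketch only: blk_trace_ext_enabled() is a hypothetical example,
 * not part of this patch or the extension series.
 */
static bool blk_trace_ext_enabled(struct request_queue *q)
{
	struct blk_trace *bt;
	bool running;

	mutex_lock(&q->blk_trace_mutex);
	/* Safe without rcu_read_lock(): holding the mutex pins the pointer. */
	bt = blk_trace_rcu_deref(q);
	running = bt && bt->trace_state == Blktrace_running;
	mutex_unlock(&q->blk_trace_mutex);

	return running;
}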
+
+/* Dereference q->blk_trace with q->blk_trace_mutex check only. */
+static inline struct blk_trace *blk_trace_rcu_deref(struct request_queue *q)
+{
+	return rcu_dereference_protected(q->blk_trace,
+					 lockdep_is_held(&q->blk_trace_mutex));
+}
 /*
  * Send out a notify message.
  */
@@ -632,8 +639,7 @@ static int __blk_trace_startstop(struct request_queue *q, int start)
 	int ret;
 	struct blk_trace *bt;
 
-	bt = rcu_dereference_protected(q->blk_trace,
-				       lockdep_is_held(&q->blk_trace_mutex));
+	bt = blk_trace_rcu_deref(q);
 	if (bt == NULL)
 		return -EINVAL;
 
@@ -743,8 +749,7 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
 void blk_trace_shutdown(struct request_queue *q)
 {
 	mutex_lock(&q->blk_trace_mutex);
-	if (rcu_dereference_protected(q->blk_trace,
-				      lockdep_is_held(&q->blk_trace_mutex))) {
+	if (blk_trace_rcu_deref(q)) {
 		__blk_trace_startstop(q, 0);
 		__blk_trace_remove(q);
 	}
 
@@ -1817,8 +1822,7 @@ static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 
 	mutex_lock(&q->blk_trace_mutex);
 
-	bt = rcu_dereference_protected(q->blk_trace,
-				       lockdep_is_held(&q->blk_trace_mutex));
+	bt = blk_trace_rcu_deref(q);
 	if (attr == &dev_attr_enable) {
 		ret = sprintf(buf, "%u\n", !!bt);
 		goto out_unlock_bdev;
 
@@ -1881,8 +1885,7 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 
 	mutex_lock(&q->blk_trace_mutex);
 
-	bt = rcu_dereference_protected(q->blk_trace,
-				       lockdep_is_held(&q->blk_trace_mutex));
+	bt = blk_trace_rcu_deref(q);
 	if (attr == &dev_attr_enable) {
 		if (!!value == !!bt) {
 			ret = 0;
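The reason the helper keeps rcu_dereference_protected() rather than switching to plain rcu_dereference() is that every caller above already holds q->blk_trace_mutex, so no read-side RCU protection is needed; the lockdep_is_held() condition documents that locking rule and lets lockdep complain if a future caller ever uses the helper without the mutex.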
On 02/06/2020 06:28 AM, Jan Kara wrote:
KASAN is reporting that __blk_add_trace() has a use-after-free issue when accessing q->blk_trace. Indeed, the switching of block tracing (and thus the eventual freeing of q->blk_trace) is completely unsynchronized with the currently running tracing, so it can happen that the blk_trace structure is freed just while __blk_add_trace() works on it. Protect accesses to q->blk_trace by RCU during tracing and make sure we wait for the end of the RCU grace period when shutting down tracing. Luckily that is a rare enough event that we can afford it. Note that postponing the freeing of blk_trace to an RCU callback is better avoided, as it could have unexpected user-visible side effects: the debugfs files would still exist for a short while after block tracing has been shut down.
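Side note from my end: the scheme described above boils down to roughly the following pattern (a simplified sketch, not the exact hunks from the patch; record_trace_event() is just a stand-in for the real tracing work):

	/* Reader side (tracing hot path): */
	rcu_read_lock();
	bt = rcu_dereference(q->blk_trace);
	if (bt)
		record_trace_event(bt);	/* stand-in, not a real function */
	rcu_read_unlock();

	/* Writer side (shutdown path, under q->blk_trace_mutex): */
	bt = rcu_dereference_protected(q->blk_trace,
				       lockdep_is_held(&q->blk_trace_mutex));
	RCU_INIT_POINTER(q->blk_trace, NULL);
	synchronize_rcu();	/* wait for in-flight readers to finish */
	blk_trace_free(bt);	/* tears down the debugfs files, then frees bt */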
Link: https://bugzilla.kernel.org/show_bug.cgi?id=205711
CC: stable@vger.kernel.org
Reported-by: Tristan <tristmd@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>