From: Peter Zijlstra <peterz(a)infradead.org>
Make the WW mutex code more readable by adding comments, splitting up
functions and pointing out that we're actually using the Wait-Die
algorithm.
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Jonathan Corbet <corbet(a)lwn.net>
Cc: Gustavo Padovan <gustavo(a)padovan.org>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Sean Paul <seanpaul(a)chromium.org>
Cc: David Airlie <airlied(a)linux.ie>
Cc: Davidlohr Bueso <dave(a)stgolabs.net>
Cc: "Paul E. McKenney" <paulmck(a)linux.vnet.ibm.com>
Cc: Josh Triplett <josh(a)joshtriplett.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: linux-doc(a)vger.kernel.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Co-authored-by: Thomas Hellstrom <thellstrom(a)vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom(a)vmware.com>
---
Documentation/locking/ww-mutex-design.txt | 12 +-
include/linux/ww_mutex.h | 28 ++---
kernel/locking/mutex.c | 202 ++++++++++++++++++------------
3 files changed, 145 insertions(+), 97 deletions(-)
diff --git a/Documentation/locking/ww-mutex-design.txt b/Documentation/locking/ww-mutex-design.txt
index 34c3a1b50b9a..2fd7f2a2af21 100644
--- a/Documentation/locking/ww-mutex-design.txt
+++ b/Documentation/locking/ww-mutex-design.txt
@@ -32,10 +32,10 @@ the oldest task) wins, and the one with the higher reservation id (i.e. the
younger task) unlocks all of the buffers that it has already locked, and then
tries again.
-In the RDBMS literature this deadlock handling approach is called wait/wound:
+In the RDBMS literature this deadlock handling approach is called wait/die:
The older tasks waits until it can acquire the contended lock. The younger tasks
needs to back off and drop all the locks it is currently holding, i.e. the
-younger task is wounded.
+younger task dies.
Concepts
--------
@@ -56,9 +56,9 @@ Furthermore there are three different class of w/w lock acquire functions:
* Normal lock acquisition with a context, using ww_mutex_lock.
-* Slowpath lock acquisition on the contending lock, used by the wounded task
- after having dropped all already acquired locks. These functions have the
- _slow postfix.
+* Slowpath lock acquisition on the contending lock, used by the task that just
+ killed its transaction after having dropped all already acquired locks.
+ These functions have the _slow postfix.
From a simple semantics point-of-view the _slow functions are not strictly
required, since simply calling the normal ww_mutex_lock functions on the
@@ -220,7 +220,7 @@ mutexes are a natural fit for such a case for two reasons:
Note that this approach differs in two important ways from the above methods:
- Since the list of objects is dynamically constructed (and might very well be
- different when retrying due to hitting the -EDEADLK wound condition) there's
+ different when retrying due to hitting the -EDEADLK die condition) there's
no need to keep any object on a persistent list when it's not locked. We can
therefore move the list_head into the object itself.
- On the other hand the dynamic object list construction also means that the -EALREADY return
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 39fda195bf78..f82fce2229c8 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -6,7 +6,7 @@
*
* Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo(a)redhat.com>
*
- * Wound/wait implementation:
+ * Wait/Die implementation:
* Copyright (C) 2013 Canonical Ltd.
*
* This file contains the main data structure and API definitions.
@@ -28,9 +28,9 @@ struct ww_class {
struct ww_acquire_ctx {
struct task_struct *task;
unsigned long stamp;
- unsigned acquired;
+ unsigned int acquired;
#ifdef CONFIG_DEBUG_MUTEXES
- unsigned done_acquire;
+ unsigned int done_acquire;
struct ww_class *ww_class;
struct ww_mutex *contending_lock;
#endif
@@ -38,8 +38,8 @@ struct ww_acquire_ctx {
struct lockdep_map dep_map;
#endif
#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
- unsigned deadlock_inject_interval;
- unsigned deadlock_inject_countdown;
+ unsigned int deadlock_inject_interval;
+ unsigned int deadlock_inject_countdown;
#endif
};
@@ -102,7 +102,7 @@ static inline void ww_mutex_init(struct ww_mutex *lock,
*
* Context-based w/w mutex acquiring can be done in any order whatsoever within
* a given lock class. Deadlocks will be detected and handled with the
- * wait/wound logic.
+ * wait/die logic.
*
* Mixing of context-based w/w mutex acquiring and single w/w mutex locking can
* result in undetected deadlocks and is so forbidden. Mixing different contexts
@@ -195,13 +195,13 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
* Lock the w/w mutex exclusively for this task.
*
* Deadlocks within a given w/w class of locks are detected and handled with the
- * wait/wound algorithm. If the lock isn't immediately avaiable this function
+ * wait/die algorithm. If the lock isn't immediately available this function
* will either sleep until it is (wait case). Or it selects the current context
- * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * for backing off by returning -EDEADLK (die case). Trying to acquire the
* same lock with the same context twice is also detected and signalled by
* returning -EALREADY. Returns 0 if the mutex was successfully acquired.
*
- * In the wound case the caller must release all currently held w/w mutexes for
+ * In the die case the caller must release all currently held w/w mutexes for
* the given context and then wait for this contending lock to be available by
* calling ww_mutex_lock_slow. Alternatively callers can opt to not acquire this
* lock and proceed with trying to acquire further w/w mutexes (e.g. when
@@ -226,14 +226,14 @@ extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acq
* Lock the w/w mutex exclusively for this task.
*
* Deadlocks within a given w/w class of locks are detected and handled with the
- * wait/wound algorithm. If the lock isn't immediately avaiable this function
+ * wait/die algorithm. If the lock isn't immediately available this function
* will either sleep until it is (wait case). Or it selects the current context
- * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * for backing off by returning -EDEADLK (die case). Trying to acquire the
* same lock with the same context twice is also detected and signalled by
* returning -EALREADY. Returns 0 if the mutex was successfully acquired. If a
* signal arrives while waiting for the lock then this function returns -EINTR.
*
- * In the wound case the caller must release all currently held w/w mutexes for
+ * In the die case the caller must release all currently held w/w mutexes for
* the given context and then wait for this contending lock to be available by
* calling ww_mutex_lock_slow_interruptible. Alternatively callers can opt to
* not acquire this lock and proceed with trying to acquire further w/w mutexes
@@ -256,7 +256,7 @@ extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
* @lock: the mutex to be acquired
* @ctx: w/w acquire context
*
- * Acquires a w/w mutex with the given context after a wound case. This function
+ * Acquires a w/w mutex with the given context after a die case. This function
* will sleep until the lock becomes available.
*
* The caller must have released all w/w mutexes already acquired with the
@@ -290,7 +290,7 @@ ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* @lock: the mutex to be acquired
* @ctx: w/w acquire context
*
- * Acquires a w/w mutex with the given context after a wound case. This function
+ * Acquires a w/w mutex with the given context after a die case. This function
* will sleep until the lock becomes available and returns 0 when the lock has
* been acquired. If a signal arrives while waiting for the lock then this
* function returns -EINTR.
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2048359f33d2..412b4fc08235 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -243,6 +243,17 @@ void __sched mutex_lock(struct mutex *lock)
EXPORT_SYMBOL(mutex_lock);
#endif
+/*
+ * Wait-Die:
+ * The newer transactions are killed when:
+ * It (the new transaction) makes a request for a lock being held
+ * by an older transaction.
+ */
+
+/*
+ * Associate the ww_mutex @ww with the context @ww_ctx under which we acquired
+ * it.
+ */
static __always_inline void
ww_mutex_lock_acquired(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)
{
@@ -281,26 +292,53 @@ ww_mutex_lock_acquired(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)
DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class);
#endif
ww_ctx->acquired++;
+ ww->ctx = ww_ctx;
}
+/*
+ * Determine if context @a is 'after' context @b. IOW, @a is a younger
+ * transaction than @b and depending on algorithm either needs to wait for
+ * @b or die.
+ */
static inline bool __sched
__ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
{
- return a->stamp - b->stamp <= LONG_MAX &&
- (a->stamp != b->stamp || a > b);
+
+ return (signed long)(a->stamp - b->stamp) > 0;
+}
+
+/*
+ * Wait-Die; wake a younger waiter context (when locks held) such that it can
+ * die.
+ *
+ * Among waiters with context, only the first one can have other locks acquired
+ * already (ctx->acquired > 0), because __ww_mutex_add_waiter() and
+ * __ww_mutex_check_kill() wake any but the earliest context.
+ */
+static bool __sched
+__ww_mutex_die(struct mutex *lock, struct mutex_waiter *waiter,
+ struct ww_acquire_ctx *ww_ctx)
+{
+ if (waiter->ww_ctx->acquired > 0 &&
+ __ww_ctx_stamp_after(waiter->ww_ctx, ww_ctx)) {
+ debug_mutex_wake_waiter(lock, waiter);
+ wake_up_process(waiter->task);
+ }
+
+ return true;
}
/*
- * Wake up any waiters that may have to back off when the lock is held by the
- * given context.
+ * We just acquired @lock under @ww_ctx, if there are later contexts waiting
+ * behind us on the wait-list, check if they need to die.
*
- * Due to the invariants on the wait list, this can only affect the first
- * waiter with a context.
+ * See __ww_mutex_add_waiter() for the list-order construction; basically the
+ * list is ordered by stamp, smallest (oldest) first.
*
* The current task must not be on the wait list.
*/
static void __sched
-__ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
+__ww_mutex_check_waiters(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
{
struct mutex_waiter *cur;
@@ -310,30 +348,23 @@ __ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
if (!cur->ww_ctx)
continue;
- if (cur->ww_ctx->acquired > 0 &&
- __ww_ctx_stamp_after(cur->ww_ctx, ww_ctx)) {
- debug_mutex_wake_waiter(lock, cur);
- wake_up_process(cur->task);
- }
-
- break;
+ if (__ww_mutex_die(lock, cur, ww_ctx))
+ break;
}
}
/*
- * After acquiring lock with fastpath or when we lost out in contested
- * slowpath, set ctx and wake up any waiters so they can recheck.
+ * After acquiring lock with fastpath, where we do not hold wait_lock, set ctx
+ * and wake up any waiters so they can recheck.
*/
static __always_inline void
ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
ww_mutex_lock_acquired(lock, ctx);
- lock->ctx = ctx;
-
/*
* The lock->ctx update should be visible on all cores before
- * the atomic read is done, otherwise contended waiters might be
+ * the WAITERS check is done, otherwise contended waiters might be
* missed. The contended waiters will either see ww_ctx == NULL
* and keep spinning, or it will acquire wait_lock, add itself
* to waiter list and sleep.
@@ -347,29 +378,14 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
return;
/*
- * Uh oh, we raced in fastpath, wake up everyone in this case,
- * so they can see the new lock->ctx.
+ * Uh oh, we raced in fastpath, check if any of the waiters need to
+ * die.
*/
spin_lock(&lock->base.wait_lock);
- __ww_mutex_wakeup_for_backoff(&lock->base, ctx);
+ __ww_mutex_check_waiters(&lock->base, ctx);
spin_unlock(&lock->base.wait_lock);
}
-/*
- * After acquiring lock in the slowpath set ctx.
- *
- * Unlike for the fast path, the caller ensures that waiters are woken up where
- * necessary.
- *
- * Callers must hold the mutex wait_lock.
- */
-static __always_inline void
-ww_mutex_set_context_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
-{
- ww_mutex_lock_acquired(lock, ctx);
- lock->ctx = ctx;
-}
-
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
static inline
@@ -645,37 +661,73 @@ void __sched ww_mutex_unlock(struct ww_mutex *lock)
}
EXPORT_SYMBOL(ww_mutex_unlock);
+
+static __always_inline int __sched
+__ww_mutex_kill(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
+{
+ if (ww_ctx->acquired > 0) {
+#ifdef CONFIG_DEBUG_MUTEXES
+ struct ww_mutex *ww;
+
+ ww = container_of(lock, struct ww_mutex, base);
+ DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock);
+ ww_ctx->contending_lock = ww;
+#endif
+ return -EDEADLK;
+ }
+
+ return 0;
+}
+
+
+/*
+ * Check whether we need to kill the transaction for the current lock acquire.
+ *
+ * Wait-Die: If we're trying to acquire a lock already held by an older
+ * context, kill ourselves.
+ *
+ * Since __ww_mutex_add_waiter() orders the wait-list on stamp, we only have to
+ * look at waiters before us in the wait-list.
+ */
static inline int __sched
-__ww_mutex_lock_check_stamp(struct mutex *lock, struct mutex_waiter *waiter,
- struct ww_acquire_ctx *ctx)
+__ww_mutex_check_kill(struct mutex *lock, struct mutex_waiter *waiter,
+ struct ww_acquire_ctx *ctx)
{
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
struct mutex_waiter *cur;
+ if (ctx->acquired == 0)
+ return 0;
+
if (hold_ctx && __ww_ctx_stamp_after(ctx, hold_ctx))
- goto deadlock;
+ return __ww_mutex_kill(lock, ctx);
/*
* If there is a waiter in front of us that has a context, then its
- * stamp is earlier than ours and we must back off.
+ * stamp is earlier than ours and we must kill ourself.
*/
cur = waiter;
list_for_each_entry_continue_reverse(cur, &lock->wait_list, list) {
- if (cur->ww_ctx)
- goto deadlock;
+ if (!cur->ww_ctx)
+ continue;
+
+ return __ww_mutex_kill(lock, ctx);
}
return 0;
-
-deadlock:
-#ifdef CONFIG_DEBUG_MUTEXES
- DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
- ctx->contending_lock = ww;
-#endif
- return -EDEADLK;
}
+/*
+ * Add @waiter to the wait-list, keep the wait-list ordered by stamp, smallest
+ * first. Such that older contexts are preferred to acquire the lock over
+ * younger contexts.
+ *
+ * Waiters without context are interspersed in FIFO order.
+ *
+ * Furthermore, for Wait-Die kill ourself immediately when possible (there are
+ * older contexts already waiting) to avoid unnecessary waiting.
+ */
static inline int __sched
__ww_mutex_add_waiter(struct mutex_waiter *waiter,
struct mutex *lock,
@@ -692,7 +744,7 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
/*
* Add the waiter before the first waiter with a higher stamp.
* Waiters without a context are skipped to avoid starving
- * them.
+ * them. Wait-Die waiters may die here.
*/
pos = &lock->wait_list;
list_for_each_entry_reverse(cur, &lock->wait_list, list) {
@@ -700,34 +752,27 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
continue;
if (__ww_ctx_stamp_after(ww_ctx, cur->ww_ctx)) {
- /* Back off immediately if necessary. */
- if (ww_ctx->acquired > 0) {
-#ifdef CONFIG_DEBUG_MUTEXES
- struct ww_mutex *ww;
+ /*
+ * Wait-Die: if we find an older context waiting, there
+ * is no point in queueing behind it, as we'd have to
+ * die the moment it would acquire the lock.
+ */
+ int ret = __ww_mutex_kill(lock, ww_ctx);
- ww = container_of(lock, struct ww_mutex, base);
- DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock);
- ww_ctx->contending_lock = ww;
-#endif
- return -EDEADLK;
- }
+ if (ret)
+ return ret;
break;
}
pos = &cur->list;
- /*
- * Wake up the waiter so that it gets a chance to back
- * off.
- */
- if (cur->ww_ctx->acquired > 0) {
- debug_mutex_wake_waiter(lock, cur);
- wake_up_process(cur->task);
- }
+ /* Wait-Die: ensure younger waiters die. */
+ __ww_mutex_die(lock, cur, ww_ctx);
}
list_add_tail(&waiter->list, pos);
+
return 0;
}
@@ -771,7 +816,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
*/
if (__mutex_trylock(lock)) {
if (use_ww_ctx && ww_ctx)
- __ww_mutex_wakeup_for_backoff(lock, ww_ctx);
+ __ww_mutex_check_waiters(lock, ww_ctx);
goto skip_wait;
}
@@ -789,10 +834,13 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
waiter.ww_ctx = MUTEX_POISON_WW_CTX;
#endif
} else {
- /* Add in stamp order, waking up waiters that must back off. */
+ /*
+ * Add in stamp order, waking up waiters that must kill
+ * themselves.
+ */
ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
if (ret)
- goto err_early_backoff;
+ goto err_early_kill;
waiter.ww_ctx = ww_ctx;
}
@@ -814,7 +862,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
goto acquired;
/*
- * Check for signals and wound conditions while holding
+ * Check for signals and kill conditions while holding
* wait_lock. This ensures the lock cancellation is ordered
* against mutex_unlock() and wake-ups do not go missing.
*/
@@ -823,8 +871,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
goto err;
}
- if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
- ret = __ww_mutex_lock_check_stamp(lock, &waiter, ww_ctx);
+ if (use_ww_ctx && ww_ctx) {
+ ret = __ww_mutex_check_kill(lock, &waiter, ww_ctx);
if (ret)
goto err;
}
@@ -869,7 +917,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
lock_acquired(&lock->dep_map, ip);
if (use_ww_ctx && ww_ctx)
- ww_mutex_set_context_slowpath(ww, ww_ctx);
+ ww_mutex_lock_acquired(ww, ww_ctx);
spin_unlock(&lock->wait_lock);
preempt_enable();
@@ -878,7 +926,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
err:
__set_current_state(TASK_RUNNING);
mutex_remove_waiter(lock, &waiter, current);
-err_early_backoff:
+err_early_kill:
spin_unlock(&lock->wait_lock);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, 1, ip);
--
2.14.3
The current Wound-Wait mutex algorithm is actually not Wound-Wait but
Wait-Die. Implement also Wound-Wait as a per-ww-class choice. Wound-Wait
is, contrary to Wait-Die, a preemptive algorithm and is known to generate
fewer backoffs. Testing reveals that this is true if the
number of simultaneous contending transactions is small.
As the number of simultaneous contending threads increases, Wound-Wait
becomes inferior to Wait-Die in terms of elapsed time,
possibly due to the larger number of locks held by sleeping transactions.
Update documentation and callers.
Timings using git://people.freedesktop.org/~thomash/ww_mutex_test
tag patch-18-06-04
Each thread runs 100000 batches of lock / unlock 800 ww mutexes randomly
chosen out of 100000. Four core Intel x86_64:
Algorithm    #threads   Rollbacks   time
Wound-Wait   4          ~100        ~17s.
Wait-Die     4          ~150000     ~19s.
Wound-Wait   16         ~360000     ~109s.
Wait-Die     16         ~450000     ~82s.
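For illustration only (not part of the posted diff): choosing the algorithm per
class with the DEFINE_WW_CLASS(classname, is_wait_die) form this patch
introduces. The class and mutex names below are made up.

/* true selects Wait-Die (the previous behaviour), false selects Wound-Wait. */
static DEFINE_WW_CLASS(example_modeset_class, false);	/* few contenders, costly rollback */
static DEFINE_WW_CLASS(example_resv_class, true);	/* many contenders, keep Wait-Die */

static DEFINE_WW_MUTEX(example_lock, &example_resv_class);

All acquire contexts used with a given mutex must come from the same class, so
the choice is made once per class rather than per lock or per context.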
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Jonathan Corbet <corbet(a)lwn.net>
Cc: Gustavo Padovan <gustavo(a)padovan.org>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Sean Paul <seanpaul(a)chromium.org>
Cc: David Airlie <airlied(a)linux.ie>
Cc: Davidlohr Bueso <dave(a)stgolabs.net>
Cc: "Paul E. McKenney" <paulmck(a)linux.vnet.ibm.com>
Cc: Josh Triplett <josh(a)joshtriplett.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: linux-doc(a)vger.kernel.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Signed-off-by: Thomas Hellstrom <thellstrom(a)vmware.com>
---
Documentation/locking/ww-mutex-design.txt | 57 ++++++++++++++----
drivers/dma-buf/reservation.c | 2 +-
drivers/gpu/drm/drm_modeset_lock.c | 2 +-
include/linux/ww_mutex.h | 19 ++++--
kernel/locking/locktorture.c | 2 +-
kernel/locking/mutex.c | 98 ++++++++++++++++++++++++++++---
kernel/locking/test-ww_mutex.c | 2 +-
lib/locking-selftest.c | 2 +-
8 files changed, 152 insertions(+), 32 deletions(-)
diff --git a/Documentation/locking/ww-mutex-design.txt b/Documentation/locking/ww-mutex-design.txt
index 34c3a1b50b9a..29c85623b551 100644
--- a/Documentation/locking/ww-mutex-design.txt
+++ b/Documentation/locking/ww-mutex-design.txt
@@ -1,4 +1,4 @@
-Wait/Wound Deadlock-Proof Mutex Design
+Wound/Wait Deadlock-Proof Mutex Design
======================================
Please read mutex-design.txt first, as it applies to wait/wound mutexes too.
@@ -32,10 +32,23 @@ the oldest task) wins, and the one with the higher reservation id (i.e. the
younger task) unlocks all of the buffers that it has already locked, and then
tries again.
-In the RDBMS literature this deadlock handling approach is called wait/wound:
-The older tasks waits until it can acquire the contended lock. The younger tasks
-needs to back off and drop all the locks it is currently holding, i.e. the
-younger task is wounded.
+In the RDBMS literature, a reservation ticket is associated with a transaction.
+and the deadlock handling approach is called Wait-Die. The name is based on
+the actions of a locking thread when it encounters an already locked mutex.
+If the transaction holding the lock is younger, the locking transaction waits.
+If the transaction holding the lock is older, the locking transaction backs off
+and dies. Hence Wait-Die.
+There is also another algorithm called Wound-Wait:
+If the transaction holding the lock is younger, the locking transaction
+preempts the transaction holding the lock, requiring it to back off. It
+Wounds the other transaction.
+If the transaction holding the lock is older, it waits for the other
+transaction. Hence Wound-Wait.
+The two algorithms are both fair in that a transaction will eventually succeed.
+However, the Wound-Wait algorithm is typically stated to generate fewer backoffs
+compared to Wait-Die, but is, on the other hand, associated with more work than
+Wait-Die when recovering from a backoff. Wound-Wait is also a preemptive
+algorithm which requires a reliable way to preempt another transaction.
Concepts
--------
@@ -47,10 +60,12 @@ Acquire context: To ensure eventual forward progress it is important the a task
trying to acquire locks doesn't grab a new reservation id, but keeps the one it
acquired when starting the lock acquisition. This ticket is stored in the
acquire context. Furthermore the acquire context keeps track of debugging state
-to catch w/w mutex interface abuse.
+to catch w/w mutex interface abuse. An acquire context is representing a
+transaction.
W/w class: In contrast to normal mutexes the lock class needs to be explicit for
-w/w mutexes, since it is required to initialize the acquire context.
+w/w mutexes, since it is required to initialize the acquire context. The lock
+class also specifies what algorithm to use, Wound-Wait or Wait-Die.
Furthermore there are three different class of w/w lock acquire functions:
@@ -90,10 +105,15 @@ provided.
Usage
-----
+The algorithm (Wait-Die vs Wound-Wait) is chosen using the _is_wait_die
+argument to DEFINE_WW_CLASS(). As a rough rule of thumb, use Wound-Wait iff you
+typically expect the number of simultaneous competing transactions to be small,
+and the rollback cost can be substantial.
+
Three different ways to acquire locks within the same w/w class. Common
definitions for methods #1 and #2:
-static DEFINE_WW_CLASS(ww_class);
+static DEFINE_WW_CLASS(ww_class, false);
struct obj {
struct ww_mutex lock;
@@ -243,7 +263,7 @@ struct obj {
struct list_head locked_list;
};
-static DEFINE_WW_CLASS(ww_class);
+static DEFINE_WW_CLASS(ww_class, false);
void __unlock_objs(struct list_head *list)
{
@@ -312,12 +332,23 @@ Design:
We maintain the following invariants for the wait list:
(1) Waiters with an acquire context are sorted by stamp order; waiters
without an acquire context are interspersed in FIFO order.
- (2) Among waiters with contexts, only the first one can have other locks
- acquired already (ctx->acquired > 0). Note that this waiter may come
- after other waiters without contexts in the list.
+ (2) For Wait-Die, among waiters with contexts, only the first one can have
+ other locks acquired already (ctx->acquired > 0). Note that this waiter
+ may come after other waiters without contexts in the list.
+
+ The Wound-Wait preemption is implemented with a lazy-preemption scheme:
+ The wounded status of the transaction is checked only when there is
+ contention for a new lock and hence a true chance of deadlock. In that
+ situation, if the transaction is wounded, it backs off, clears the
+ wounded status and retries. A great benefit of implementing preemption in
+ this way is that the wounded transaction can identify a contending lock to
+ wait for before restarting the transaction. Just blindly restarting the
+ transaction would likely make the transaction end up in a situation where
+ it would have to back off again.
In general, not much contention is expected. The locks are typically used to
- serialize access to resources for devices.
+ serialize access to resources for devices, and optimization focus should
+ therefore be directed towards the uncontended cases.
Lockdep:
Special care has been taken to warn for as many cases of api abuse
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 314eb1071cce..039571b9fea1 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -46,7 +46,7 @@
* write-side updates.
*/
-DEFINE_WW_CLASS(reservation_ww_class);
+DEFINE_WW_CLASS(reservation_ww_class, true);
EXPORT_SYMBOL(reservation_ww_class);
struct lock_class_key reservation_seqcount_class;
diff --git a/drivers/gpu/drm/drm_modeset_lock.c b/drivers/gpu/drm/drm_modeset_lock.c
index 8a5100685875..f22a7ef41de1 100644
--- a/drivers/gpu/drm/drm_modeset_lock.c
+++ b/drivers/gpu/drm/drm_modeset_lock.c
@@ -70,7 +70,7 @@
* lists and lookup data structures.
*/
-static DEFINE_WW_CLASS(crtc_ww_class);
+static DEFINE_WW_CLASS(crtc_ww_class, true);
/**
* drm_modeset_lock_all - take all modeset locks
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 39fda195bf78..6278077f288b 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -8,6 +8,8 @@
*
* Wound/wait implementation:
* Copyright (C) 2013 Canonical Ltd.
+ * Choice of algorithm:
+ * Copyright (C) 2018 WMWare Inc.
*
* This file contains the main data structure and API definitions.
*/
@@ -23,15 +25,17 @@ struct ww_class {
struct lock_class_key mutex_key;
const char *acquire_name;
const char *mutex_name;
+ bool is_wait_die;
};
struct ww_acquire_ctx {
struct task_struct *task;
unsigned long stamp;
unsigned acquired;
+ bool wounded;
+ struct ww_class *ww_class;
#ifdef CONFIG_DEBUG_MUTEXES
unsigned done_acquire;
- struct ww_class *ww_class;
struct ww_mutex *contending_lock;
#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -58,17 +62,19 @@ struct ww_mutex {
# define __WW_CLASS_MUTEX_INITIALIZER(lockname, class)
#endif
-#define __WW_CLASS_INITIALIZER(ww_class) \
+#define __WW_CLASS_INITIALIZER(ww_class, _is_wait_die) \
{ .stamp = ATOMIC_LONG_INIT(0) \
, .acquire_name = #ww_class "_acquire" \
- , .mutex_name = #ww_class "_mutex" }
+ , .mutex_name = #ww_class "_mutex" \
+ , .is_wait_die = _is_wait_die }
#define __WW_MUTEX_INITIALIZER(lockname, class) \
{ .base = __MUTEX_INITIALIZER(lockname.base) \
__WW_CLASS_MUTEX_INITIALIZER(lockname, class) }
-#define DEFINE_WW_CLASS(classname) \
- struct ww_class classname = __WW_CLASS_INITIALIZER(classname)
+#define DEFINE_WW_CLASS(classname, _is_wait_die) \
+ struct ww_class classname = __WW_CLASS_INITIALIZER(classname, \
+ _is_wait_die)
#define DEFINE_WW_MUTEX(mutexname, ww_class) \
struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class)
@@ -123,8 +129,9 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
ctx->task = current;
ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp);
ctx->acquired = 0;
-#ifdef CONFIG_DEBUG_MUTEXES
ctx->ww_class = ww_class;
+ ctx->wounded = false;
+#ifdef CONFIG_DEBUG_MUTEXES
ctx->done_acquire = 0;
ctx->contending_lock = NULL;
#endif
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 6850ffd69125..778ed026382f 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -365,7 +365,7 @@ static struct lock_torture_ops mutex_lock_ops = {
};
#include <linux/ww_mutex.h>
-static DEFINE_WW_CLASS(torture_ww_class);
+static DEFINE_WW_CLASS(torture_ww_class, true);
static DEFINE_WW_MUTEX(torture_ww_mutex_0, &torture_ww_class);
static DEFINE_WW_MUTEX(torture_ww_mutex_1, &torture_ww_class);
static DEFINE_WW_MUTEX(torture_ww_mutex_2, &torture_ww_class);
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2048359f33d2..b449a012c6f9 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -290,12 +290,47 @@ __ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
(a->stamp != b->stamp || a > b);
}
+/*
+ * Wound the lock holder transaction if it's younger than the contending
+ * transaction, and there is a possibility of a deadlock.
+ * Also if the lock holder transaction isn't the current transaction,
+ * Make sure it's woken up in case it's sleeping on another ww mutex.
+ */
+static bool __ww_mutex_wound(struct mutex *lock,
+ struct ww_acquire_ctx *ww_ctx,
+ struct ww_acquire_ctx *hold_ctx)
+{
+ struct task_struct *owner =
+ __owner_task(atomic_long_read(&lock->owner));
+
+ lockdep_assert_held(&lock->wait_lock);
+
+ if (owner && hold_ctx && __ww_ctx_stamp_after(hold_ctx, ww_ctx) &&
+ ww_ctx->acquired > 0) {
+ WRITE_ONCE(hold_ctx->wounded, true);
+ if (owner != current) {
+ /*
+ * wake_up_process() inserts a write memory barrier to
+ * make sure owner sees it is wounded before
+ * TASK_RUNNING in case it's sleeping on another
+ * ww_mutex. Note that owner points to a valid
+ * task_struct as long as we hold the wait_lock.
+ */
+ wake_up_process(owner);
+ }
+ return true;
+ }
+
+ return false;
+}
+
/*
* Wake up any waiters that may have to back off when the lock is held by the
* given context.
*
* Due to the invariants on the wait list, this can only affect the first
- * waiter with a context.
+ * waiter with a context, unless the Wound-Wait algorithm is used where
+ * also subsequent waiters with a context main wound the lock holder.
*
* The current task must not be on the wait list.
*/
@@ -303,6 +338,7 @@ static void __sched
__ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
{
struct mutex_waiter *cur;
+ bool is_wait_die = ww_ctx->ww_class->is_wait_die;
lockdep_assert_held(&lock->wait_lock);
@@ -310,13 +346,14 @@ __ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
if (!cur->ww_ctx)
continue;
- if (cur->ww_ctx->acquired > 0 &&
+ if (is_wait_die && cur->ww_ctx->acquired > 0 &&
__ww_ctx_stamp_after(cur->ww_ctx, ww_ctx)) {
debug_mutex_wake_waiter(lock, cur);
wake_up_process(cur->task);
}
- break;
+ if (is_wait_die || __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx))
+ break;
}
}
@@ -338,12 +375,17 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* and keep spinning, or it will acquire wait_lock, add itself
* to waiter list and sleep.
*/
- smp_mb(); /* ^^^ */
+ smp_mb(); /* See comments above and below. */
/*
- * Check if lock is contended, if not there is nobody to wake up
+ * Check if lock is contended, if not there is nobody to wake up.
+ * Checking MUTEX_FLAG_WAITERS is not enough here, since we need to
+ * order against the lock->ctx check in __ww_mutex_wound called from
+ * __ww_mutex_add_waiter. We can use list_empty without taking the
+ * wait_lock, given the memory barrier above and the list_empty
+ * documentation.
*/
- if (likely(!(atomic_long_read(&lock->base.owner) & MUTEX_FLAG_WAITERS)))
+ if (likely(list_empty(&lock->base.wait_list)))
return;
/*
@@ -653,6 +695,17 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct mutex_waiter *waiter,
struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
struct mutex_waiter *cur;
+ /*
+ * If we miss a wounded == true here, we will have a pending
+ * TASK_RUNNING and pick it up on the next schedule fall-through.
+ */
+ if (!ctx->ww_class->is_wait_die) {
+ if (READ_ONCE(ctx->wounded))
+ goto deadlock;
+ else
+ return 0;
+ }
+
if (hold_ctx && __ww_ctx_stamp_after(ctx, hold_ctx))
goto deadlock;
@@ -683,12 +736,15 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
{
struct mutex_waiter *cur;
struct list_head *pos;
+ bool is_wait_die;
if (!ww_ctx) {
list_add_tail(&waiter->list, &lock->wait_list);
return 0;
}
+ is_wait_die = ww_ctx->ww_class->is_wait_die;
+
/*
* Add the waiter before the first waiter with a higher stamp.
* Waiters without a context are skipped to avoid starving
@@ -701,7 +757,7 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
if (__ww_ctx_stamp_after(ww_ctx, cur->ww_ctx)) {
/* Back off immediately if necessary. */
- if (ww_ctx->acquired > 0) {
+ if (is_wait_die && ww_ctx->acquired > 0) {
#ifdef CONFIG_DEBUG_MUTEXES
struct ww_mutex *ww;
@@ -721,13 +777,26 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
* Wake up the waiter so that it gets a chance to back
* off.
*/
- if (cur->ww_ctx->acquired > 0) {
+ if (is_wait_die && cur->ww_ctx->acquired > 0) {
debug_mutex_wake_waiter(lock, cur);
wake_up_process(cur->task);
}
}
list_add_tail(&waiter->list, pos);
+ if (!is_wait_die) {
+ struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+
+ /*
+ * Make sure a racing lock taker sees a non-empty waiting list
+ * before we read ww->ctx, so that if we miss ww->ctx, the
+ * racing lock taker will call __ww_mutex_wake_up_for_backoff()
+ * and wound itself.
+ */
+ smp_mb();
+ __ww_mutex_wound(lock, ww_ctx, ww->ctx);
+ }
+
return 0;
}
@@ -750,6 +819,14 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
if (use_ww_ctx && ww_ctx) {
if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
return -EALREADY;
+
+ /*
+ * Reset the wounded flag after a backoff.
+ * No other process can race and wound us here since they
+ * can't have a valid owner pointer at this time
+ */
+ if (ww_ctx->acquired == 0)
+ ww_ctx->wounded = false;
}
preempt_disable();
@@ -858,6 +935,11 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
acquired:
__set_current_state(TASK_RUNNING);
+ /* We stole the lock. Need to check wounded status. */
+ if (use_ww_ctx && ww_ctx && !ww_ctx->ww_class->is_wait_die &&
+ !__mutex_waiter_is_first(lock, &waiter))
+ __ww_mutex_wakeup_for_backoff(lock, ww_ctx);
+
mutex_remove_waiter(lock, &waiter, current);
if (likely(list_empty(&lock->wait_list)))
__mutex_clear_flag(lock, MUTEX_FLAGS);
diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 0e4cd64ad2c0..c7fc112d691d 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -26,7 +26,7 @@
#include <linux/slab.h>
#include <linux/ww_mutex.h>
-static DEFINE_WW_CLASS(ww_class);
+static DEFINE_WW_CLASS(ww_class, true);
struct workqueue_struct *wq;
struct test_mutex {
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index b5c1293ce147..e52065f2acbf 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -29,7 +29,7 @@
*/
static unsigned int debug_locks_verbose;
-static DEFINE_WW_CLASS(ww_lockdep);
+static DEFINE_WW_CLASS(ww_lockdep, true);
static int __init setup_debug_locks_verbose(char *str)
{
--
2.14.3
On 06/14/2018 03:29 PM, Matthew Wilcox wrote:
> On Thu, Jun 14, 2018 at 01:54:15PM +0200, Thomas Hellstrom wrote:
>> On 06/14/2018 01:36 PM, Peter Zijlstra wrote:
>>> Currently you don't allow mixing WD and WW contexts (which is not
>>> immediately obvious from the above code), and the above hard relies on
>>> that. Are there sensible use cases for mixing them? IOW will your
>>> current restriction stand without hassle?
>> Contexts _must_ agree on the algorithm used to resolve deadlocks. With
>> Wait-Die, for example, older transactions will wait if a lock is held by a
>> younger transaction and with Wound-Wait, younger transactions will wait if a
>> lock is held by an older transaction so there is no way of mixing them.
> Maybe the compiler should be enforcing that; ie make it a different type?
It's intended to be enforced by storing the algorithm choice in the
ww_class, which must be common to an acquire context and the
ww_mutexes it acquires. However, I don't think there is a check that
that holds. I guess we could add one as a CONFIG_DEBUG_MUTEXES test in
ww_mutex_lock().
Thanks,
Thomas
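A rough sketch of the check hinted at above (not something from the thread):
it leans on the CONFIG_DEBUG_MUTEXES-only ww_class pointer in struct ww_mutex
and mirrors the DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class) that
ww_mutex_lock_acquired() already performs, just moved to lock time. The helper
name is hypothetical.

static inline void ww_mutex_check_class(struct ww_mutex *lock,
					struct ww_acquire_ctx *ctx)
{
#ifdef CONFIG_DEBUG_MUTEXES
	/* Context and mutex must come from the same ww_class and therefore
	 * agree on Wait-Die vs. Wound-Wait. */
	DEBUG_LOCKS_WARN_ON(ctx && ctx->ww_class != lock->ww_class);
#endif
}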
This is a small fallout from work to allow batching WW mutex locks and
unlocks.
Our Wound-Wait mutexes actually don't use the Wound-Wait algorithm but
the Wait-Die algorithm. One could perhaps rename those mutexes tree-wide to
"Wait-Die mutexes" or "Deadlock Avoidance mutexes". Another approach, suggested
here, is to also implement the "Wound-Wait" algorithm as a per-WW-class
choice, as it has advantages in some cases. See for example
http://www.mathcs.emory.edu/~cheung/Courses/554/Syllabus/8-recv+serial/dead…
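To make the naming concrete, here is a small illustration (not kernel code) of
what a requester blocked on a held lock does under each algorithm, with a
smaller stamp meaning an older transaction, as with the reservation tickets:

enum contention_action { WAIT, DIE, WOUND_HOLDER };

static enum contention_action on_contention(bool is_wait_die,
					     unsigned long requester_stamp,
					     unsigned long holder_stamp)
{
	bool requester_is_older = (long)(requester_stamp - holder_stamp) < 0;

	if (is_wait_die)
		return requester_is_older ? WAIT : DIE;

	/* Wound-Wait: an older requester preempts (wounds) the younger holder. */
	return requester_is_older ? WOUND_HOLDER : WAIT;
}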
Now Wound-Wait is a preemptive algorithm, and the preemption is implemented
using a lazy scheme: If a wounded transaction is about to go to sleep on
a contended WW mutex, we return -EDEADLK. That is sufficient for deadlock
prevention. Since with WW mutexes we also require the aborted transaction to
sleep waiting to lock the WW mutex it was aborted on, this choice also provides
a suitable WW mutex to sleep on. If we were to return -EDEADLK on the first
WW mutex lock after the transaction was wounded whether the WW mutex was
contended or not, the transaction might frequently be restarted without a wait,
which is far from optimal. Note also that with the lazy preemption scheme,
contrary to Wait-Die, there will be no rollbacks on lock contention for locks
held by a transaction that has completed its locking sequence.
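For reference, a hedged sketch of the caller-side back-off being relied on
here, following the pattern in Documentation/locking/ww-mutex-design.txt (the
names lock_pair, obj_a and obj_b are illustrative): whichever algorithm is
used, a transaction that sees -EDEADLK drops everything it holds and then
sleeps on the contended mutex with ww_mutex_lock_slow() before retrying,
rather than restarting blindly.

static void lock_pair(struct ww_mutex *obj_a, struct ww_mutex *obj_b,
		      struct ww_class *class)
{
	struct ww_acquire_ctx ctx;
	struct ww_mutex *contended = NULL;	/* the lock we backed off on, if any */
	int err;

	ww_acquire_init(&ctx, class);
retry:
	if (contended)
		ww_mutex_lock_slow(contended, &ctx);	/* sleep on the contended lock */

	if (contended != obj_a) {
		err = ww_mutex_lock(obj_a, &ctx);
		if (err == -EDEADLK) {
			if (contended)
				ww_mutex_unlock(contended);	/* drop everything we hold */
			contended = obj_a;
			goto retry;
		}
	}

	if (contended != obj_b) {
		err = ww_mutex_lock(obj_b, &ctx);
		if (err == -EDEADLK) {
			ww_mutex_unlock(obj_a);		/* drop everything we hold */
			contended = obj_b;
			goto retry;
		}
	}

	ww_acquire_done(&ctx);
	/* ... use both objects ... */
	ww_mutex_unlock(obj_a);
	ww_mutex_unlock(obj_b);
	ww_acquire_fini(&ctx);
}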
The modeset locks are then changed from Wait-Die to Wound-Wait, since the
typical locking pattern of those locks closely matches the criterion for
a substantial reduction in the number of rollbacks. For reservation objects,
the benefit is less clear at this point, so they remain using Wait-Die.
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Jonathan Corbet <corbet(a)lwn.net>
Cc: Gustavo Padovan <gustavo(a)padovan.org>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Sean Paul <seanpaul(a)chromium.org>
Cc: David Airlie <airlied(a)linux.ie>
Cc: Davidlohr Bueso <dave(a)stgolabs.net>
Cc: "Paul E. McKenney" <paulmck(a)linux.vnet.ibm.com>
Cc: Josh Triplett <josh(a)joshtriplett.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: linux-doc(a)vger.kernel.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
v2:
Updated the DEFINE_WW_CLASS() API, with minor code and comment fixes as
detailed in each patch.
Resending, hopefully better formatted.
On 06/14/2018 01:49 PM, Andrea Parri wrote:
> [...]
>
>>>> + /*
>>>> + * wake_up_process() paired with set_current_state() inserts
>>>> + * sufficient barriers to make sure @owner either sees it's
>>>> + * wounded or has a wakeup pending to re-read the wounded
>>>> + * state.
>>> IIUC, "sufficient barriers" = full memory barriers (here). (You may
>>> want to be more specific.)
>> Thanks for reviewing!
>> OK. What about if someone relaxes that in the future?
> This is actually one of my main concerns ;-) as, IIUC, those barriers are
> not only sufficient but also necessary: anything "less than a full barrier"
> (in either wake_up_process() or set_current_state()) would _not_ guarantee
> the "condition" above unless I'm misunderstanding it.
>
> But am I misunderstanding it? Which barriers/guarantee do you _need_ from
> the above mentioned pairing? (hence my comment...)
>
> Andrea
No, you are probably not misunderstanding me at all. My comment
originated from reading the kerneldoc of set_current_state():
/*
* set_current_state() includes a barrier so that the write of current->state
* is correctly serialised wrt the caller's subsequent test of whether to
* actually sleep:
*
* for (;;) {
* set_current_state(TASK_UNINTERRUPTIBLE);
* if (!need_sleep)
* break;
*
* schedule();
* }
* __set_current_state(TASK_RUNNING);
*
* If the caller does not need such serialisation (because, for instance, the
* condition test and condition change and wakeup are under the same lock) then
* use __set_current_state().
*
* The above is typically ordered against the wakeup, which does:
*
* need_sleep = false;
* wake_up_state(p, TASK_UNINTERRUPTIBLE);
*
* Where wake_up_state() (and all other wakeup primitives) imply enough
* barriers to order the store of the variable against wakeup.
*/
And with ctx->wounded := !need_sleep this exactly matches what's
happening in my code. So what I was trying to say in the comment was
that the above contract is sufficient to guarantee the "condition"
above, without me actually knowing exactly what barriers are required.
Thanks, Thomas
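To spell out the mapping described above (an illustration, not code from the
series; the helper names are only reminders of where the real code lives, and
ctx->wounded is the field added by these patches):

/* Wounding side, cf. __ww_mutex_wound(); called under lock->wait_lock: */
static void wound_holder(struct ww_acquire_ctx *hold_ctx, struct task_struct *owner)
{
	hold_ctx->wounded = 1;	/* "need_sleep = false" in the kerneldoc example */
	wake_up_process(owner);	/* implies enough barriers to order the store vs. the wakeup */
}

/* Sleeping side, cf. the wait loop in __mutex_lock_common(): */
static void wait_or_back_off(struct ww_acquire_ctx *ctx)
{
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);  /* barrier: state write vs. wounded read */
		if (READ_ONCE(ctx->wounded))              /* "if (!need_sleep) break;" */
			break;                            /* the real code backs off with -EDEADLK */
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}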
On 06/14/2018 12:38 PM, Andrea Parri wrote:
> Hi Thomas,
>
> On Thu, Jun 14, 2018 at 09:29:21AM +0200, Thomas Hellstrom wrote:
>> The current Wound-Wait mutex algorithm is actually not Wound-Wait but
>> Wait-Die. Implement also Wound-Wait as a per-ww-class choice. Wound-Wait
>> is, contrary to Wait-Die a preemptive algorithm and is known to generate
>> fewer backoffs. Testing reveals that this is true if the
>> number of simultaneous contending transactions is small.
>> As the number of simultaneous contending threads increases, Wait-Wound
>> becomes inferior to Wait-Die in terms of elapsed time.
>> Possibly due to the larger number of held locks of sleeping transactions.
>>
>> Update documentation and callers.
>>
>> Timings using git://people.freedesktop.org/~thomash/ww_mutex_test
>> tag patch-18-06-14
>>
>> Each thread runs 100000 batches of lock / unlock 800 ww mutexes randomly
>> chosen out of 100000. Four core Intel x86_64:
>>
>> Algorithm #threads Rollbacks time
>> Wound-Wait 4 ~100 ~17s.
>> Wait-Die 4 ~150000 ~19s.
>> Wound-Wait 16 ~360000 ~109s.
>> Wait-Die 16 ~450000 ~82s.
>>
>> Cc: Peter Zijlstra <peterz(a)infradead.org>
>> Cc: Ingo Molnar <mingo(a)redhat.com>
>> Cc: Jonathan Corbet <corbet(a)lwn.net>
>> Cc: Gustavo Padovan <gustavo(a)padovan.org>
>> Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
>> Cc: Sean Paul <seanpaul(a)chromium.org>
>> Cc: David Airlie <airlied(a)linux.ie>
>> Cc: Davidlohr Bueso <dave(a)stgolabs.net>
>> Cc: "Paul E. McKenney" <paulmck(a)linux.vnet.ibm.com>
>> Cc: Josh Triplett <josh(a)joshtriplett.org>
>> Cc: Thomas Gleixner <tglx(a)linutronix.de>
>> Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
>> Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
>> Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
>> Cc: linux-doc(a)vger.kernel.org
>> Cc: linux-media(a)vger.kernel.org
>> Cc: linaro-mm-sig(a)lists.linaro.org
>> Signed-off-by: Thomas Hellstrom <thellstrom(a)vmware.com>
>>
>> ---
>> v2:
>> * Update API according to review comment by Greg Kroah-Hartman.
>> * Address review comments by Peter Zijlstra:
>> - Avoid _Bool in composites
>> - Fix typo
>> - Use __mutex_owner() where applicable
>> - Rely on built-in barriers for the main loop exit condition,
>> struct ww_acquire_ctx::wounded. Update code comments.
>> - Explain unlocked use of list_empty().
>> ---
>> Documentation/locking/ww-mutex-design.txt | 54 ++++++++++++----
>> drivers/dma-buf/reservation.c | 2 +-
>> drivers/gpu/drm/drm_modeset_lock.c | 2 +-
>> include/linux/ww_mutex.h | 19 ++++--
>> kernel/locking/locktorture.c | 2 +-
>> kernel/locking/mutex.c | 103 +++++++++++++++++++++++++++---
>> kernel/locking/test-ww_mutex.c | 2 +-
>> lib/locking-selftest.c | 2 +-
>> 8 files changed, 156 insertions(+), 30 deletions(-)
>>
>> diff --git a/Documentation/locking/ww-mutex-design.txt b/Documentation/locking/ww-mutex-design.txt
>> index 34c3a1b50b9a..b9597def9581 100644
>> --- a/Documentation/locking/ww-mutex-design.txt
>> +++ b/Documentation/locking/ww-mutex-design.txt
>> @@ -1,4 +1,4 @@
>> -Wait/Wound Deadlock-Proof Mutex Design
>> +Wound/Wait Deadlock-Proof Mutex Design
>> ======================================
>>
>> Please read mutex-design.txt first, as it applies to wait/wound mutexes too.
>> @@ -32,10 +32,23 @@ the oldest task) wins, and the one with the higher reservation id (i.e. the
>> younger task) unlocks all of the buffers that it has already locked, and then
>> tries again.
>>
>> -In the RDBMS literature this deadlock handling approach is called wait/wound:
>> -The older tasks waits until it can acquire the contended lock. The younger tasks
>> -needs to back off and drop all the locks it is currently holding, i.e. the
>> -younger task is wounded.
>> +In the RDBMS literature, a reservation ticket is associated with a transaction.
>> +and the deadlock handling approach is called Wait-Die. The name is based on
>> +the actions of a locking thread when it encounters an already locked mutex.
>> +If the transaction holding the lock is younger, the locking transaction waits.
>> +If the transaction holding the lock is older, the locking transaction backs off
>> +and dies. Hence Wait-Die.
>> +There is also another algorithm called Wound-Wait:
>> +If the transaction holding the lock is younger, the locking transaction
>> +preempts the transaction holding the lock, requiring it to back off. It
>> +Wounds the other transaction.
>> +If the transaction holding the lock is older, it waits for the other
>> +transaction. Hence Wound-Wait.
>> +The two algorithms are both fair in that a transaction will eventually succeed.
>> +However, the Wound-Wait algorithm is typically stated to generate fewer backoffs
>> +compared to Wait-Die, but is, on the other hand, associated with more work than
>> +Wait-Die when recovering from a backoff. Wound-Wait is also a preemptive
>> +algorithm which requires a reliable way to preempt another transaction.
>>
>> Concepts
>> --------
>> @@ -47,10 +60,12 @@ Acquire context: To ensure eventual forward progress it is important the a task
>> trying to acquire locks doesn't grab a new reservation id, but keeps the one it
>> acquired when starting the lock acquisition. This ticket is stored in the
>> acquire context. Furthermore the acquire context keeps track of debugging state
>> -to catch w/w mutex interface abuse.
>> +to catch w/w mutex interface abuse. An acquire context is representing a
>> +transaction.
>>
>> W/w class: In contrast to normal mutexes the lock class needs to be explicit for
>> -w/w mutexes, since it is required to initialize the acquire context.
>> +w/w mutexes, since it is required to initialize the acquire context. The lock
>> +class also specifies what algorithm to use, Wound-Wait or Wait-Die.
>>
>> Furthermore there are three different class of w/w lock acquire functions:
>>
>> @@ -90,6 +105,12 @@ provided.
>> Usage
>> -----
>>
>> +The algorithm (Wait-Die vs Wound-Wait) is chosen by using either
>> +DEFINE_WW_CLASS_WDIE() for Wait-Die or DEFINE_WW_CLASS() for Wound-Wait.
>> +As a rough rule of thumb, use Wound-Wait iff you typically expect the number
>> +of simultaneous competing transactions to be small, and the rollback cost can
>> +be substantial.
>> +
>> Three different ways to acquire locks within the same w/w class. Common
>> definitions for methods #1 and #2:
>>
>> @@ -312,12 +333,23 @@ Design:
>> We maintain the following invariants for the wait list:
>> (1) Waiters with an acquire context are sorted by stamp order; waiters
>> without an acquire context are interspersed in FIFO order.
>> - (2) Among waiters with contexts, only the first one can have other locks
>> - acquired already (ctx->acquired > 0). Note that this waiter may come
>> - after other waiters without contexts in the list.
>> + (2) For Wait-Die, among waiters with contexts, only the first one can have
>> + other locks acquired already (ctx->acquired > 0). Note that this waiter
>> + may come after other waiters without contexts in the list.
>> +
>> + The Wound-Wait preemption is implemented with a lazy-preemption scheme:
>> + The wounded status of the transaction is checked only when there is
>> + contention for a new lock and hence a true chance of deadlock. In that
>> + situation, if the transaction is wounded, it backs off, clears the
>> + wounded status and retries. A great benefit of implementing preemption in
>> + this way is that the wounded transaction can identify a contending lock to
>> + wait for before restarting the transaction. Just blindly restarting the
>> + transaction would likely make the transaction end up in a situation where
>> + it would have to back off again.
>>
>> In general, not much contention is expected. The locks are typically used to
>> - serialize access to resources for devices.
>> + serialize access to resources for devices, and optimization focus should
>> + therefore be directed towards the uncontended cases.
>>
>> Lockdep:
>> Special care has been taken to warn for as many cases of api abuse
>> diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
>> index 314eb1071cce..b94a4bab2ecd 100644
>> --- a/drivers/dma-buf/reservation.c
>> +++ b/drivers/dma-buf/reservation.c
>> @@ -46,7 +46,7 @@
>> * write-side updates.
>> */
>>
>> -DEFINE_WW_CLASS(reservation_ww_class);
>> +DEFINE_WW_CLASS_WDIE(reservation_ww_class);
>> EXPORT_SYMBOL(reservation_ww_class);
>>
>> struct lock_class_key reservation_seqcount_class;
>> diff --git a/drivers/gpu/drm/drm_modeset_lock.c b/drivers/gpu/drm/drm_modeset_lock.c
>> index 8a5100685875..ff00a814f617 100644
>> --- a/drivers/gpu/drm/drm_modeset_lock.c
>> +++ b/drivers/gpu/drm/drm_modeset_lock.c
>> @@ -70,7 +70,7 @@
>> * lists and lookup data structures.
>> */
>>
>> -static DEFINE_WW_CLASS(crtc_ww_class);
>> +static DEFINE_WW_CLASS_WDIE(crtc_ww_class);
>>
>> /**
>> * drm_modeset_lock_all - take all modeset locks
>> diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
>> index 39fda195bf78..3880813b7db5 100644
>> --- a/include/linux/ww_mutex.h
>> +++ b/include/linux/ww_mutex.h
>> @@ -8,6 +8,8 @@
>> *
>> * Wound/wait implementation:
>> * Copyright (C) 2013 Canonical Ltd.
>> + * Choice of algorithm:
>> + * Copyright (C) 2018 WMWare Inc.
>> *
>> * This file contains the main data structure and API definitions.
>> */
>> @@ -23,15 +25,17 @@ struct ww_class {
>> struct lock_class_key mutex_key;
>> const char *acquire_name;
>> const char *mutex_name;
>> + unsigned int is_wait_die;
>> };
>>
>> struct ww_acquire_ctx {
>> struct task_struct *task;
>> unsigned long stamp;
>> unsigned acquired;
>> + unsigned int wounded;
>> + struct ww_class *ww_class;
>> #ifdef CONFIG_DEBUG_MUTEXES
>> unsigned done_acquire;
>> - struct ww_class *ww_class;
>> struct ww_mutex *contending_lock;
>> #endif
>> #ifdef CONFIG_DEBUG_LOCK_ALLOC
>> @@ -58,17 +62,21 @@ struct ww_mutex {
>> # define __WW_CLASS_MUTEX_INITIALIZER(lockname, class)
>> #endif
>>
>> -#define __WW_CLASS_INITIALIZER(ww_class) \
>> +#define __WW_CLASS_INITIALIZER(ww_class, _is_wait_die) \
>> { .stamp = ATOMIC_LONG_INIT(0) \
>> , .acquire_name = #ww_class "_acquire" \
>> - , .mutex_name = #ww_class "_mutex" }
>> + , .mutex_name = #ww_class "_mutex" \
>> + , .is_wait_die = _is_wait_die }
>>
>> #define __WW_MUTEX_INITIALIZER(lockname, class) \
>> { .base = __MUTEX_INITIALIZER(lockname.base) \
>> __WW_CLASS_MUTEX_INITIALIZER(lockname, class) }
>>
>> #define DEFINE_WW_CLASS(classname) \
>> - struct ww_class classname = __WW_CLASS_INITIALIZER(classname)
>> + struct ww_class classname = __WW_CLASS_INITIALIZER(classname, 0)
>> +
>> +#define DEFINE_WW_CLASS_WDIE(classname) \
>> + struct ww_class classname = __WW_CLASS_INITIALIZER(classname, 1)
>>
>> #define DEFINE_WW_MUTEX(mutexname, ww_class) \
>> struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class)
>> @@ -123,8 +131,9 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
>> ctx->task = current;
>> ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp);
>> ctx->acquired = 0;
>> -#ifdef CONFIG_DEBUG_MUTEXES
>> ctx->ww_class = ww_class;
>> + ctx->wounded = false;
>> +#ifdef CONFIG_DEBUG_MUTEXES
>> ctx->done_acquire = 0;
>> ctx->contending_lock = NULL;
>> #endif
>> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
>> index 6850ffd69125..e861c1bf0e1e 100644
>> --- a/kernel/locking/locktorture.c
>> +++ b/kernel/locking/locktorture.c
>> @@ -365,7 +365,7 @@ static struct lock_torture_ops mutex_lock_ops = {
>> };
>>
>> #include <linux/ww_mutex.h>
>> -static DEFINE_WW_CLASS(torture_ww_class);
>> +static DEFINE_WW_CLASS_WDIE(torture_ww_class);
>> static DEFINE_WW_MUTEX(torture_ww_mutex_0, &torture_ww_class);
>> static DEFINE_WW_MUTEX(torture_ww_mutex_1, &torture_ww_class);
>> static DEFINE_WW_MUTEX(torture_ww_mutex_2, &torture_ww_class);
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index 2048359f33d2..ffa00b5aaf03 100644
>> --- a/kernel/locking/mutex.c
>> +++ b/kernel/locking/mutex.c
>> @@ -290,12 +290,49 @@ __ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
>> (a->stamp != b->stamp || a > b);
>> }
>>
>> +/*
>> + * Wound the lock holder transaction if it's younger than the contending
>> + * transaction, and there is a possibility of a deadlock.
>> + * Also if the lock holder transaction isn't the current transaction,
>> + * make sure it's woken up in case it's sleeping on another ww mutex.
>> + */
>> +static bool __ww_mutex_wound(struct mutex *lock,
>> + struct ww_acquire_ctx *ww_ctx,
>> + struct ww_acquire_ctx *hold_ctx)
>> +{
>> + struct task_struct *owner = __mutex_owner(lock);
>> +
>> + lockdep_assert_held(&lock->wait_lock);
>> +
>> + if (owner && hold_ctx && __ww_ctx_stamp_after(hold_ctx, ww_ctx) &&
>> + ww_ctx->acquired > 0) {
>> + hold_ctx->wounded = 1;
>> +
>> + /*
>> + * wake_up_process() paired with set_current_state() inserts
>> + * sufficient barriers to make sure @owner either sees it's
>> + * wounded or has a wakeup pending to re-read the wounded
>> + * state.
> IIUC, "sufficient barriers" = full memory barriers (here). (You may
> want to be more specific.)
Thanks for reviewing!
OK. What if someone relaxes that in the future? I mean, what we
care about in this code is just that we have sufficient barriers for
that statement to be true, regardless of what type of barriers those really
are?
>
>> + *
>> + * The value of hold_ctx->wounded in
>> + * __ww_mutex_lock_check_stamp();
> Missing parts/incomplete sentence?
Oops. I'll fix in next version.
>
> Andrea
Thanks,
Thomas
The expected return value from ion_map_kernel is an ERR_PTR. The error
path for a vmalloc failure currently just returns NULL, triggering
a warning in ion_buffer_kmap_get. Encode the vmalloc failure as an ERR_PTR.
Reported-by: syzbot+55b1d9f811650de944c6(a)syzkaller.appspotmail.com
Signed-off-by: Laura Abbott <labbott(a)redhat.com>
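For context on the convention mentioned above (an illustrative sketch, not part
of the patch; example_kmap is a made-up caller): kernel callers separate errors
from valid pointers with IS_ERR()/PTR_ERR(), which is why returning a bare NULL
on vmalloc failure confuses ion_buffer_kmap_get(), while ERR_PTR(-ENOMEM)
propagates cleanly.

#include <linux/err.h>

static int example_kmap(struct ion_heap *heap, struct ion_buffer *buffer)
{
	void *vaddr = ion_heap_map_kernel(heap, buffer);

	if (IS_ERR(vaddr))		/* now also catches the ERR_PTR(-ENOMEM) case */
		return PTR_ERR(vaddr);

	/* ... use the mapping ... */
	return 0;
}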
---
drivers/staging/android/ion/ion_heap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
index 772dad65396e..f32c12439eee 100644
--- a/drivers/staging/android/ion/ion_heap.c
+++ b/drivers/staging/android/ion/ion_heap.c
@@ -29,7 +29,7 @@ void *ion_heap_map_kernel(struct ion_heap *heap,
struct page **tmp = pages;
if (!pages)
- return NULL;
+ return ERR_PTR(-ENOMEM);
if (buffer->flags & ION_FLAG_CACHED)
pgprot = PAGE_KERNEL;
--
2.17.0