Quoting Ulf Hansson (2013-02-28 01:54:34)
On 28 February 2013 05:49, Mike Turquette mturquette@linaro.org wrote:
@@ -703,10 +744,29 @@ int clk_enable(struct clk *clk)
	unsigned long flags;
	int ret;

	/* this call re-enters if it is from the same context */
	if (spin_is_locked(&enable_lock) || mutex_is_locked(&prepare_lock)) {
		if ((void *) atomic_read(&context) == get_current()) {
			ret = __clk_enable(clk);
			goto out;
		}
	}
I believe the clk_enable|disable code is racy. What do you think about this scenario:
1. Thread 1 calls clk_prepare -> clk is not reentrant -> mutex_lock -> set_context to thread 1.
2. Thread 2 calls clk_enable -> the "if" above fails (the stored context is thread 1's, not thread 2's), so clk_enable continues down the normal path -> spin_lock_irqsave -> set_context to thread 2.
3. Thread 1 continues and triggers a reentrancy for clk_prepare -> clk is not reentrant (since thread 2 has set a new context) -> mutex_lock, and we hang forever.
Do you think the above scenario could happen?
I think the solution would be to add another "static atomic_t context;" which is used only by the fast-path functions (clk_enable|disable). That should do the trick, I think.
Ulf,
You are correct. In fact I have a branch that has two separate context pointers, one for mutex-protected functions and one for spinlock-protected functions. Somehow I managed to discard that change before settling on the final version that was published.
I'll add the change back in.
Thanks for the review, Mike