Quoting Brendan Higgins (2019-06-26 16:00:40)
On Tue, Jun 25, 2019 at 8:41 PM Stephen Boyd <sboyd@kernel.org> wrote:
[...] scenario like below, but where it is a problem. There could be three CPUs, or even one CPU and three threads, if you want to describe the extra thread scenario.
Here's my scenario where it isn't needed:
CPU0                              CPU1
----                              ----
kunit_run_test(&test)             test_case_func()
                                    ....
                                  [mock hardirq]
                                    kunit_set_success(&test)
                                  [hardirq ends]
                                    ...
                                  complete(&test_done)
wait_for_completion(&test_done)
kunit_get_success(&test)
We don't need to care about locking here because success or failure is only written in one place, and that write is synchronized with the completion.
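To illustrate the point, here is a minimal sketch of that single-writer pattern, assuming a simplified struct kunit with a success flag and a test_done completion (the real structures are more involved, and error handling is omitted). The complete()/wait_for_completion() pair orders the store to the flag before the read, so no lock is needed:

#include <linux/completion.h>
#include <linux/kthread.h>
#include <linux/types.h>

/* Simplified stand-ins for the structures under discussion. */
struct kunit {
        bool success;                   /* written only before complete() */
        struct completion test_done;    /* signalled when the case finishes */
};

static int kunit_test_thread(void *data)
{
        struct kunit *test = data;

        /* test_case_func() would run here; a failing expectation would do: */
        test->success = false;

        /*
         * complete() pairs with wait_for_completion() in the caller, so the
         * store above is visible to the waiter; with a single writer there
         * is nothing for a lock to protect.
         */
        complete(&test->test_done);
        return 0;
}

static bool kunit_run_test(struct kunit *test)
{
        test->success = true;
        init_completion(&test->test_done);
        kthread_run(kunit_test_thread, test, "kunit-example");

        wait_for_completion(&test->test_done);
        return test->success;           /* ordered after the completion */
}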
Here is the scenario I am concerned about:
CPU0                        CPU1                        CPU2
----                        ----                        ----
kunit_run_test(&test)       test_case_func()
                              ....
                              schedule_work(foo_func)
                            [mock hardirq]              foo_func()
                              ...                         ...
                              kunit_set_success(false)    kunit_set_success(false)
                            [hardirq ends]                ...
                              ...
                              complete(&test_done)
wait_for_completion(...)
kunit_get_success(&test)
In my scenario, both CPU1 and CPU2 update the success status of the test simultaneously, even though they are setting it to the same value. If my understanding is correct, this could result in a write tear on some architectures in some circumstances. I suppose we could just make it an atomic boolean, but I figured locking is also fine, and generally preferred.
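For reference, here is a rough sketch of the two options mentioned above, a lock-protected setter and an atomic flag, using simplified structures and made-up names rather than the actual patch:

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Option 1: protect the flag with an IRQ-safe spinlock. */
struct kunit_locked {
        spinlock_t lock;        /* initialised elsewhere with spin_lock_init() */
        bool success;
};

static void kunit_set_success(struct kunit_locked *test, bool success)
{
        unsigned long flags;

        spin_lock_irqsave(&test->lock, flags);  /* usable from hardirq too */
        test->success = success;
        spin_unlock_irqrestore(&test->lock, flags);
}

/* Option 2: an atomic flag avoids the lock entirely. */
struct kunit_atomic {
        atomic_t failed;        /* 0 = success (default), 1 = failed */
};

static void kunit_mark_failed(struct kunit_atomic *test)
{
        atomic_set(&test->failed, 1);   /* concurrent setters are harmless */
}

Since the critical section is a single store, the lock buys little beyond the atomicity of the write itself.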
This is what we have WRITE_ONCE() and READ_ONCE() for. Maybe you could just use those in the getters and setters and remove the lock if it isn't used for anything else.
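A minimal sketch of what such accessors could look like, assuming the flag is a plain bool member called success (the actual field name and layout in the patch may differ):

#include <linux/compiler.h>
#include <linux/types.h>

/* Minimal stand-in; the real struct kunit has more members. */
struct kunit {
        bool success;
};

static inline void kunit_set_success(struct kunit *test, bool success)
{
        /* WRITE_ONCE() keeps the compiler from tearing or duplicating the store. */
        WRITE_ONCE(test->success, success);
}

static inline bool kunit_get_success(struct kunit *test)
{
        /* READ_ONCE() pairs with the WRITE_ONCE() in the setter. */
        return READ_ONCE(test->success);
}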
It may also be a good idea to have a kunit_fail_test() API that fails the test passed in with a WRITE_ONCE() of false. Otherwise, the test is assumed successful, and it isn't even possible for a test to change the state from failure back to success due to a logical error, because the API isn't available. Then we don't really need a generic kunit_set_success() function at all. We could also have a kunit_test_failed() function that replaces the kunit_get_success() function; that would read better in an if condition.
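A rough sketch of that API shape, using the suggested function names but an assumed structure layout and the WRITE_ONCE()/READ_ONCE() accesses from above:

#include <linux/compiler.h>
#include <linux/types.h>

/* Assumed layout: success defaults to true when the test starts. */
struct kunit {
        bool success;
};

/* A test can only move from success to failure, never back. */
static inline void kunit_fail_test(struct kunit *test)
{
        WRITE_ONCE(test->success, false);
}

/* Reads naturally in a condition: if (kunit_test_failed(test)) ... */
static inline bool kunit_test_failed(struct kunit *test)
{
        return !READ_ONCE(test->success);
}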
Also, to be clear, I am on board with dropping the IRQ stuff for now. I am fine moving to a mutex for the time being.
Ok.