On Tue, Oct 05, 2021 at 05:24:18PM +0800, Ming Lei wrote:
On Mon, Sep 27, 2021 at 09:38:02AM -0700, Luis Chamberlain wrote:
When driver sysfs attributes use a lock that is also used on module removal we can race to a deadlock. This happens when, for instance, a sysfs file of a driver is being used while module removal is triggered at the same time. The module removal code takes the lock, and the driver's sysfs file operation then waits for that same lock. While holding the lock the module removal code tries to remove the sysfs entries, but these cannot be removed yet, as one of them is still waiting for the lock. That wait never completes because the lock is already held. Likewise module removal cannot complete, and so we deadlock.
This is now easily reproducible with our sysfs selftest as follows:
./tools/testing/selftests/sysfs/sysfs.sh -t 0027
This test uses a local driver lock. Test 0028 can also be used; it uses the rtnl_lock():
./tools/testing/selftests/sysfs/sysfs.sh -t 0028
To fix this we extend struct kernfs_node with a module reference and use try_module_get() after kernfs_get_active() is called. As documented in the prior patch, we now know that once kernfs_get_active() is called the module is implicitly guaranteed to exist and cannot be removed. This is because the module is the one in charge of removing the same sysfs files it created, and removal of sysfs files on module exit waits until they no longer have any active references. By using try_module_get() after kernfs_get_active() we yield to let module removal trump calls to process a sysfs operation, while also preventing module removal if a sysfs operation is already in progress. This prevents the deadlock.
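A minimal sketch of that ordering (the kn->mod member name and the helper below are assumptions for illustration, not necessarily what the actual patch uses):

  /*
   * Sketch only, not the actual patch: kn->mod is an assumed name for
   * the module reference added to struct kernfs_node.
   */
  static int kernfs_get_active_and_module(struct kernfs_node *kn)
  {
          if (!kernfs_get_active(kn))
                  return -ENODEV;
          /*
           * From here on the module cannot finish removal: removing its
           * sysfs files has to wait for the active reference we now hold.
           * try_module_get() lets an already started removal win, while
           * otherwise pinning the module for the duration of the sysfs
           * operation.
           */
          if (!try_module_get(kn->mod)) {
                  kernfs_put_active(kn);
                  return -ENODEV;
          }
          return 0;
  }

  /* and symmetrically when the operation completes: */
  module_put(kn->mod);
  kernfs_put_active(kn);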
This deadlock was first reported with the zram driver, however the live
I don't see the lock pattern you mentioned in the zram driver; can you share the related zram code?
I recommend not looking at the zram driver; instead look at the test_sysfs driver, as it abstracts the issue more clearly and uses two different locks as an example. The point is that if module removal takes *any* lock which is *also* used in a sysfs file created by the module, you can deadlock.
And this can lead to this condition:
CPU A                           CPU B
foo_store()                     foo_exit()
mutex_lock(&foo)                mutex_lock(&foo)
                                del_gendisk(some_struct->disk);
                                  device_del()
                                    device_remove_groups()
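Spelled out as a minimal, hypothetical driver following the names in the diagram above (some_struct stands in for the driver's private state; this is not the actual test_sysfs code):

  static DEFINE_MUTEX(foo);

  static ssize_t foo_store(struct device *dev, struct device_attribute *attr,
                           const char *buf, size_t count)
  {
          mutex_lock(&foo);               /* blocks behind foo_exit() ... */
          /* ... update driver state ... */
          mutex_unlock(&foo);
          return count;
  }
  static DEVICE_ATTR_WO(foo);

  static void __exit foo_exit(void)
  {
          mutex_lock(&foo);
          /*
           * ... while removing the disk tears down the sysfs files and
           * waits for foo_store() above to finish, which it never can.
           */
          del_gendisk(some_struct->disk);
          mutex_unlock(&foo);
  }
  module_exit(foo_exit);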
I guess the deadlock exists wherever foo_exit() is called. If so, then the issue may not be directly related to removing the module, right?
No, the reason this can deadlock is that the module exit routine will patiently wait for the sysfs / kernfs files to stop being used, but clearly they cannot stop being used if the exit routine took the mutex also used by the sysfs ops. That is, the special condition here is the removal of the sysfs files, combined with the sysfs files using a lock that is also used on module exit.
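That wait happens in kernfs when the files are torn down; roughly, as a paraphrased sketch of the drain logic in fs/kernfs/dir.c (not an exact quote):

  static void kernfs_drain(struct kernfs_node *kn)
  {
          struct kernfs_root *root = kernfs_root(kn);
          ...
          /*
           * Removal of a kernfs node blocks here until every active
           * reference, i.e. every in-flight sysfs operation, is gone.
           * If one of those operations is itself blocked on a mutex the
           * module exit path already holds, neither side can make
           * progress.
           */
          wait_event(root->deactivate_waitq,
                     atomic_read(&kn->active) == KN_DEACTIVATED_BIAS);
          ...
  }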
Luis