Hi, Josef
I believe there is a bug in the following scenario, but I'm not sure whether it is the same bug syzbot reported. Do you have any thoughts?
umount thread:                        btrfs-cleaner thread:

                                      btrfs_run_delayed_iputs()
                                       ->run_delayed_iput_locked()
                                        ->iput(inode)
                                         // inode->i_count dec to 0
                                         ->spin_lock(inode)
                                         ->iput_final()
                                          // for some reason we get into
                                          // __inode_add_lru
                                          ->__inode_add_lru()
                                          // so iput_final() returns with
                                          // I_FREEING not set; note that the
                                          // inode is still on the sb list
btrfs_kill_super()
 ->generic_shutdown_super()
  ->evict_inodes()
   // passed the i_count == 0 test
   // and then scheduled out
                                      ->__btrfs_run_defrag_inode()
                                       ->btrfs_iget()
                                        ->find_inode()
                                         ->spin_lock(inode)
                                         ->__iget()  // i_count inc to 1
                                         ->spin_unlock(inode)
   // scheduled back
   spin_lock(inode)
   // I_FREEING was not set,
   // so we continue:
   set the I_FREEING flag
   spin_unlock(inode)
   // put the inode on the
   // dispose list
                                      ->iput()
                                       // i_count dec to 0
                                       ->spin_lock(inode)
                                       ->iput_final()
                                        ->spin_unlock()
                                        ->evict()
  ->dispose_list()
   ->evict()
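For reference, this is roughly the shape of the evict_inodes() loop as I read it in fs/inode.c (paraphrased from memory, so it may not match the exact tree you are looking at); the unlocked i_count check is what opens the window above:

	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		if (atomic_read(&inode->i_count))	/* unlocked check */
			continue;

		/* <-- window: find_inode() can __iget() the inode here */

		spin_lock(&inode->i_lock);
		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
			spin_unlock(&inode->i_lock);
			continue;
		}

		/* I_FREEING gets set even though i_count may be 1 by now */
		inode->i_state |= I_FREEING;
		inode_lru_list_del(inode);
		spin_unlock(&inode->i_lock);
		list_add(&inode->i_lru, &dispose);
	}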
Now we have two threads evicting the same inode at the same time, which leads to a bug. I think this can be addressed by doing the atomic_read(&inode->i_count) check in evict_inodes() under inode->i_lock.
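An untested sketch of what I mean, just to illustrate the idea (the exact placement may need adjusting against the current code):

	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		spin_lock(&inode->i_lock);
		/*
		 * With both checks under i_lock, a concurrent find_inode()
		 * (which takes i_lock before __iget()) either bumps i_count
		 * before we look, so we skip the inode, or sees I_FREEING
		 * afterwards and waits in __wait_on_freeing_inode().
		 */
		if (atomic_read(&inode->i_count) ||
		    (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE))) {
			spin_unlock(&inode->i_lock);
			continue;
		}

		inode->i_state |= I_FREEING;
		inode_lru_list_del(inode);
		spin_unlock(&inode->i_lock);
		list_add(&inode->i_lru, &dispose);
	}

It does mean taking i_lock on every inode in the sb list even when i_count is elevated, but at umount time that should not matter much.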