3.16.63-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Chris Mason <clm@fb.com>
commit 7703bdd8d23e6ef057af3253958a793ec6066b28 upstream.
During buffered writes, we follow this basic series of steps:
again:
	lock all the pages
	wait for writeback on all the pages
	Take the extent range lock
	wait for ordered extents on the whole range
	clean all the pages

	if (copy_from_user_in_atomic() hits a fault) {
		drop our locks
		goto again;
	}

	dirty all the pages
	release all the locks
The extra waiting, cleaning and locking are there to make sure we don't modify pages in flight to the drive, after they've been crc'd.
If some of the pages in the range were already dirty when the write began, and we need to goto again, we create a window where a dirty page has been cleaned and unlocked. It may be reclaimed before we're able to lock it again, which means we'll read the old contents off the drive and lose any modifications that had been pending writeback.
We don't actually need to clean the pages. All of the other locking in place makes sure we don't start IO on the pages, so we can just leave them dirty for the duration of the write.
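In the same shorthand, the flow after this change is roughly:

again:
	lock all the pages
	wait for writeback on all the pages
	Take the extent range lock
	wait for ordered extents on the whole range
	(leave any already-dirty pages dirty)

	if (copy_from_user_in_atomic() hits a fault) {
		drop our locks
		goto again;
	}

	clear the old dirty/delalloc accounting, then dirty all the pages
	release all the locks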
Fixes: 73d59314e6ed (the original btrfs merge)
Signed-off-by: Chris Mason <clm@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
[bwh: Backported to 3.16:
 - Keep passing additional argument of GFP_NOFS to clear_extent_bit()
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -506,6 +506,16 @@ int btrfs_dirty_pages(struct btrfs_root
 	num_bytes = ALIGN(write_bytes + pos - start_pos, root->sectorsize);
 
 	end_of_last_block = start_pos + num_bytes - 1;
+
+	/*
+	 * The pages may have already been dirty, clear out old accounting so
+	 * we can set things up properly
+	 */
+	clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos, end_of_last_block,
+			 EXTENT_DIRTY | EXTENT_DELALLOC |
+			 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, cached,
+			 GFP_NOFS);
+
 	err = btrfs_set_extent_delalloc(inode, start_pos, end_of_last_block,
 					cached);
 	if (err)
@@ -1408,18 +1418,26 @@ lock_and_cleanup_extent_if_need(struct i
 		if (ordered)
 			btrfs_put_ordered_extent(ordered);
 
-		clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos,
-				 last_pos, EXTENT_DIRTY | EXTENT_DELALLOC |
-				 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
-				 0, 0, cached_state, GFP_NOFS);
 		*lockstart = start_pos;
 		*lockend = last_pos;
 		ret = 1;
 	}
 
+	/*
+	 * It's possible the pages are dirty right now, but we don't want
+	 * to clean them yet because copy_from_user may catch a page fault
+	 * and we might have to fall back to one page at a time. If that
+	 * happens, we'll unlock these pages and we'd have a window where
+	 * reclaim could sneak in and drop the once-dirty page on the floor
+	 * without writing it.
+	 *
+	 * We have the pages locked and the extent range locked, so there's
+	 * no way someone can start IO on any dirty pages in this range.
+	 *
+	 * We'll call btrfs_dirty_pages() later on, and that will flip around
+	 * delalloc bits and dirty the pages as required.
+	 */
 	for (i = 0; i < num_pages; i++) {
-		if (clear_page_dirty_for_io(pages[i]))
-			account_page_redirty(pages[i]);
 		set_page_extent_mapped(pages[i]);
 		WARN_ON(!PageLocked(pages[i]));
 	}
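The counterpart is in the first hunk: since pages can now arrive at btrfs_dirty_pages() still dirty, the old extent-state accounting on the range is cleared out first so delalloc can be set up properly again. Condensed, as an illustrative fragment of the patched function rather than standalone code:

	/*
	 * the range may already carry dirty/delalloc accounting from a
	 * previous write; clear it so the range can be set up cleanly
	 */
	clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos, end_of_last_block,
			 EXTENT_DIRTY | EXTENT_DELALLOC |
			 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, cached,
			 GFP_NOFS);
	err = btrfs_set_extent_delalloc(inode, start_pos, end_of_last_block,
					cached);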