This is a note to let you know that I've just added the patch titled
ceph: only dirty ITER_IOVEC pages for direct read
to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is:
     ceph-only-dirty-iter_iovec-pages-for-direct-read.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
From 85784f9395987a422fa04263e7c0fb13da11eb5c Mon Sep 17 00:00:00 2001
From: "Yan, Zheng" zyan@redhat.com Date: Fri, 16 Mar 2018 11:22:29 +0800 Subject: ceph: only dirty ITER_IOVEC pages for direct read
From: Yan, Zheng zyan@redhat.com
commit 85784f9395987a422fa04263e7c0fb13da11eb5c upstream.
If a page is already locked, attempting to dirty it leads to a deadlock in lock_page(). This is what currently happens to ITER_BVEC pages when a dio-enabled loop device is backed by ceph:
  $ losetup --direct-io /dev/loop0 /mnt/cephfs/img
  $ xfs_io -c 'pread 0 4k' /dev/loop0
Follow other file systems and only dirty ITER_IOVEC pages.
Cc: stable@kernel.org
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 fs/ceph/file.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -598,7 +598,8 @@ static ssize_t ceph_sync_read(struct kio
 struct ceph_aio_request {
 	struct kiocb *iocb;
 	size_t total_len;
-	int write;
+	bool write;
+	bool should_dirty;
 	int error;
 	struct list_head osd_reqs;
 	unsigned num_reqs;
@@ -708,7 +709,7 @@ static void ceph_aio_complete_req(struct
 		}
 	}
 
-	ceph_put_page_vector(osd_data->pages, num_pages, !aio_req->write);
+	ceph_put_page_vector(osd_data->pages, num_pages, aio_req->should_dirty);
 	ceph_osdc_put_request(req);
 
 	if (rc < 0)
@@ -890,6 +891,7 @@ ceph_direct_read_write(struct kiocb *ioc
 	size_t count = iov_iter_count(iter);
 	loff_t pos = iocb->ki_pos;
 	bool write = iov_iter_rw(iter) == WRITE;
+	bool should_dirty = !write && iter_is_iovec(iter);
 
 	if (write && ceph_snap(file_inode(file)) != CEPH_NOSNAP)
 		return -EROFS;
@@ -954,6 +956,7 @@ ceph_direct_read_write(struct kiocb *ioc
 	if (aio_req) {
 		aio_req->iocb = iocb;
 		aio_req->write = write;
+		aio_req->should_dirty = should_dirty;
 		INIT_LIST_HEAD(&aio_req->osd_reqs);
 		if (write) {
 			aio_req->mtime = mtime;
@@ -1012,7 +1015,7 @@ ceph_direct_read_write(struct kiocb *ioc
 				len = ret;
 		}
 
-		ceph_put_page_vector(pages, num_pages, !write);
+		ceph_put_page_vector(pages, num_pages, should_dirty);
 		ceph_osdc_put_request(req);
 		if (ret < 0)
 			break;
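
As a reading aid (not part of the patch): the fix boils down to computing a should_dirty flag once, from the I/O direction and the iov_iter type, and passing it to ceph_put_page_vector() instead of !write. The stand-alone, user-space C sketch below only models that decision; the enum values and the decide_should_dirty() helper are illustrative names, while in the kernel the real inputs come from iov_iter_rw(iter) and iter_is_iovec(iter).

/* Illustrative only -- not kernel code and not part of this patch. */
#include <stdbool.h>
#include <stdio.h>

enum iter_type { ITER_IOVEC, ITER_BVEC };	/* simplified subset of iov_iter types */
enum io_dir { IO_READ, IO_WRITE };

/*
 * Pages are dirtied only for direct reads into user (ITER_IOVEC) buffers.
 * ITER_BVEC pages (e.g. from a dio-enabled loop device) may already be
 * locked by their owner, and dirtying them can deadlock in lock_page().
 */
static bool decide_should_dirty(enum io_dir dir, enum iter_type type)
{
	return dir == IO_READ && type == ITER_IOVEC;
}

int main(void)
{
	printf("read  + ITER_IOVEC -> %d (dirty pages)\n",
	       decide_should_dirty(IO_READ, ITER_IOVEC));
	printf("read  + ITER_BVEC  -> %d (leave pages alone)\n",
	       decide_should_dirty(IO_READ, ITER_BVEC));
	printf("write + ITER_IOVEC -> %d (never dirty on write)\n",
	       decide_should_dirty(IO_WRITE, ITER_IOVEC));
	return 0;
}

This mirrors what other file systems already do for direct reads, which is what the commit message above refers to.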
Patches currently in stable-queue which might be from zyan@redhat.com are
queue-4.9/ceph-only-dirty-iter_iovec-pages-for-direct-read.patch