The patch below does not apply to the 4.19-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b6391ac73400eff38377a4a7364bd3df5efb5178 Mon Sep 17 00:00:00 2001
From: Gao Xiang <gaoxiang25@huawei.com>
Date: Mon, 25 Mar 2019 11:40:07 +0800
Subject: [PATCH] staging: erofs: fix error handling when failed to read compresssed data
Complete read error handling paths for all three kinds of compressed pages:
1) For cache-managed pages, PG_uptodate is checked, since read_endio unlocks these pages and calls SetPageUptodate only on a successful read;
2) For inplaced pages, read_endio cannot SetPageUptodate directly, since that flag is reserved for marking the final decompressed data; on an I/O error, PG_error is set instead while the page stays locked;
3) For staging pages, PG_error is used as well, similar to what is done for inplaced pages. (A simplified, standalone sketch of these three checks follows after the sign-off tags below.)
Fixes: 3883a79abd02 ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
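
For readers following along, here is a minimal, self-contained sketch of the error-accumulation logic the commit message describes. It is not the kernel code: struct toy_page, its flags, and check_compressed_pages() are hypothetical stand-ins for struct page, its PG_uptodate/PG_error bits, and the per-cluster loop in z_erofs_vle_unzip().

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

enum toy_page_kind {
	TOY_CACHE_MANAGED,	/* owned by the managed cache mapping */
	TOY_INPLACE,		/* file page reused to hold compressed data */
	TOY_STAGING,		/* temporary page allocated just for this I/O */
};

struct toy_page {
	enum toy_page_kind kind;
	bool uptodate;		/* models PG_uptodate */
	bool error;		/* models PG_error */
};

/*
 * Walk every compressed page of a cluster and decide whether it is
 * safe to start decompression; mirrors the "err = 0; for (...)" loop
 * added by the patch below.
 */
static int check_compressed_pages(const struct toy_page *pages, int nr)
{
	int i, err = 0;

	for (i = 0; i < nr; i++) {
		const struct toy_page *page = &pages[i];

		if (page->kind == TOY_CACHE_MANAGED) {
			/*
			 * 1) cache-managed: read_endio marks it uptodate on a
			 *    successful read, so !uptodate means the read failed
			 */
			if (!page->uptodate)
				err = -EIO;
			continue;
		}

		/*
		 * 2) inplaced and 3) staging: these cannot be marked uptodate
		 *    by read_endio (they do not hold final data yet), so a
		 *    failed read is reported through the error flag instead
		 */
		if (page->error)
			err = -EIO;
	}

	/* non-zero means the caller bails out before decompressing */
	return err;
}

int main(void)
{
	struct toy_page cluster[] = {
		{ TOY_CACHE_MANAGED, true,  false },
		{ TOY_INPLACE,       false, true  },	/* simulated I/O error */
		{ TOY_STAGING,       false, false },
	};
	int err = check_compressed_pages(cluster, 3);

	printf("decompress? %s (err=%d)\n", err ? "no" : "yes", err);
	return 0;
}

As in the patch, the check keeps walking the remaining pages after the first failure and only gives up once, right before decompression would start.
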
diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index c7b3b21123c1..31eef8395774 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -972,6 +972,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	overlapped = false;
 	compressed_pages = grp->compressed_pages;
 
+	err = 0;
 	for (i = 0; i < clusterpages; ++i) {
 		unsigned int pagenr;
 
@@ -981,26 +982,39 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		DBG_BUGON(!page);
 		DBG_BUGON(!page->mapping);
 
-		if (z_erofs_is_stagingpage(page))
-			continue;
+		if (!z_erofs_is_stagingpage(page)) {
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (page->mapping == MNGD_MAPPING(sbi)) {
-			DBG_BUGON(!PageUptodate(page));
-			continue;
-		}
+			if (page->mapping == MNGD_MAPPING(sbi)) {
+				if (unlikely(!PageUptodate(page)))
+					err = -EIO;
+				continue;
+			}
 #endif
 
-		/* only non-head page could be reused as a compressed page */
-		pagenr = z_erofs_onlinepage_index(page);
+			/*
+			 * only if non-head page can be selected
+			 * for inplace decompression
+			 */
+			pagenr = z_erofs_onlinepage_index(page);
 
-		DBG_BUGON(pagenr >= nr_pages);
-		DBG_BUGON(pages[pagenr]);
-		++sparsemem_pages;
-		pages[pagenr] = page;
+			DBG_BUGON(pagenr >= nr_pages);
+			DBG_BUGON(pages[pagenr]);
+			++sparsemem_pages;
+			pages[pagenr] = page;
+
+			overlapped = true;
+		}
 
-		overlapped = true;
+		/* PG_error needs checking for inplaced and staging pages */
+		if (unlikely(PageError(page))) {
+			DBG_BUGON(PageUptodate(page));
+			err = -EIO;
+		}
 	}
 
+	if (unlikely(err))
+		goto out;
+
 	llen = (nr_pages << PAGE_SHIFT) - work->pageofs;
 
 	if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) {
@@ -1198,6 +1212,7 @@ pickup_page_for_submission(struct z_erofs_vle_workgroup *grp,
 	if (page->mapping == mc) {
 		WRITE_ONCE(grp->compressed_pages[nr], page);
 
+		ClearPageError(page);
 		if (!PagePrivate(page)) {
 			/*
 			 * impossible to be !PagePrivate(page) for