The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
Reported-by: zhangjun <openzhangj@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
---
zhangjun,

Please give this patch a try!

Thanks,
//richard
---
 fs/ubifs/file.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 1b78f2e09218..abe940d0767c 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -582,6 +582,7 @@ static int ubifs_write_end(struct file *file, struct address_space *mapping,
 	}
 
 	if (!PagePrivate(page)) {
+		get_page(page);
 		SetPagePrivate(page);
 		atomic_long_inc(&c->dirty_pg_cnt);
 		__set_page_dirty_nobuffers(page);
@@ -959,6 +960,7 @@ static int do_writepage(struct page *page, int len)
 	atomic_long_dec(&c->dirty_pg_cnt);
 	ClearPagePrivate(page);
 	ClearPageChecked(page);
+	put_page(page);
 
 	kunmap(page);
 	unlock_page(page);
@@ -1318,6 +1320,7 @@ static void ubifs_invalidatepage(struct page *page, unsigned int offset,
 	atomic_long_dec(&c->dirty_pg_cnt);
 	ClearPagePrivate(page);
 	ClearPageChecked(page);
+	put_page(page);
 }
 
 int ubifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
@@ -1487,6 +1490,8 @@ static int ubifs_migrate_page(struct address_space *mapping,
 	if (PagePrivate(page)) {
 		ClearPagePrivate(page);
+		put_page(page);
+		get_page(newpage);
 		SetPagePrivate(newpage);
 	}
@@ -1513,6 +1518,7 @@ static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags)
 	ubifs_assert(c, 0);
 	ClearPagePrivate(page);
 	ClearPageChecked(page);
+	put_page(page);
 	return 1;
 }
@@ -1582,6 +1588,7 @@ static vm_fault_t ubifs_vm_page_mkwrite(struct vm_fault *vmf)
 	else {
 		if (!PageChecked(page))
 			ubifs_convert_page_budget(c);
+		get_page(page);
 		SetPagePrivate(page);
 		atomic_long_inc(&c->dirty_pg_cnt);
 		__set_page_dirty_nobuffers(page);
On 2018/12/15 at 11:01 PM, Richard Weinberger wrote:
The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
Reported-by: zhangjun <openzhangj@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
zhangjun,
Please give this patch a try!
Thanks, //richard
Hello Richard,

After applying your patch, my test no longer fails. I think it is okay now.
thanks //zhangjun
On Sat, Dec 15, 2018 at 04:01:30PM +0100, Richard Weinberger wrote:
The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Looks good to me.
Acked-by: Kirill A. Shutemov kirill.shutemov@linux.intel.com
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
It is fair to reference the commit here. But I believe the bug itself predates the commit and is relevant not only to migration.
We might make it clear in the commit message.
On Monday, 17 December 2018 at 11:59:44 CET, Kirill A. Shutemov wrote:
On Sat, Dec 15, 2018 at 04:01:30PM +0100, Richard Weinberger wrote:
The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Looks good to me.
Acked-by: Kirill A. Shutemov kirill.shutemov@linux.intel.com
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
It is fair to reference the commit here. But I believe the bug itself predates the commit and is relevant not only to migration.
My intention was not to blame you. :) IMHO backporting the fix only makes sense back to that commit.
We might make it clear in the commit message.
Fair point, I'll rephrase.
Thanks, //richard
On Saturday, 15 December 2018 at 16:01:30 CET, Richard Weinberger wrote:
The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
Reported-by: zhangjun <openzhangj@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
FYI, on the XFS side a similar change caused a regression. https://marc.info/?l=linux-fsdevel&m=154530861202448&w=2
Until this regression is fully understood, including the implications for UBIFS, I will not merge this patch.
Thanks, //richard
On Fri, Dec 21, 2018 at 09:56:25AM +0100, Richard Weinberger wrote:
On Saturday, 15 December 2018 at 16:01:30 CET, Richard Weinberger wrote:
The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
Reported-by: zhangjun <openzhangj@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
FYI, on the XFS side a similar change caused a regression. https://marc.info/?l=linux-fsdevel&m=154530861202448&w=2
Until this regression is fully understood, including the implications for UBIFS, I will not merge this patch.
This looks like a reasonable resolution to me:
http://lkml.kernel.org/r/20181221093919.GA2337@lst.de
But let's wait for its inclusion (or an objection).
On 2018/12/21 at 4:56 PM, Richard Weinberger wrote:
Am Samstag, 15. Dezember 2018, 16:01:30 CET schrieb Richard Weinberger:
The page migration code assumes that a page with PG_private set has its page count elevated by 1. UBIFS never did this, and therefore the migration code was unable to migrate some pages owned by UBIFS. This led to situations where the CMA memory allocator failed to allocate memory.
Fix this by using get/put_page when changing PG_private.
Cc: stable@vger.kernel.org
Cc: zhangjun <openzhangj@gmail.com>
Fixes: 4ac1c17b2044 ("UBIFS: Implement ->migratepage()")
Reported-by: zhangjun <openzhangj@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
FYI, on the XFS side a similar change caused a regression. https://marc.info/?l=linux-fsdevel&m=154530861202448&w=2
Until this regression is fully understood, including the implications for UBIFS, I will not merge this patch.
Thanks, //richard
Hello, Richard,

Before this regression is fully understood, in order to fix the cma_alloc() bug, I submitted a conservative patch that modifies the page count in iomap_migrate_page(). Can you consider merging that one first?
https://marc.info/?l=linux-kernel&m=154473132332661&w=2
Thanks //zhangjun
On Tuesday, 25 December 2018 at 03:42:37 CET, zhangjun wrote:
Hello, Richard,

Before this regression is fully understood, in order to fix the cma_alloc() bug, I submitted a conservative patch that modifies the page count in iomap_migrate_page(). Can you consider merging that one first?
No, there is no need to rush. It looks like the iomap regression is trivial; Christoph pointed out that a case was forgotten. But Dave (like almost all of us) is currently on vacation, so we are waiting for him to confirm.
Thanks, //richard