On 10.11.20 16:14, Mike Rapoport wrote:
From: Mike Rapoport <rppt@linux.ibm.com>
It will be used by the upcoming secret memory implementation.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/internal.h | 3 +++
 mm/mmap.c     | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c43ccdddb0f6..ae146a260b14 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 extern void mlock_vma_page(struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+			      unsigned long len);
+
 /*
  * Clear the page's PageMlocked().  This can be useful in a situation where
  * we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 61f72b09d990..c481f088bd50 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
 	return hint;
 }
 
-static inline int mlock_future_check(struct mm_struct *mm,
-				     unsigned long flags,
-				     unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+		       unsigned long len)
 {
 	unsigned long locked, lock_limit;
 
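For reference (not shown in the hunk): the body of mlock_future_check() is unchanged by this patch and, at this point in the tree, roughly does the following RLIMIT_MEMLOCK accounting, which is what any new caller of the now-global helper opts into:

int mlock_future_check(struct mm_struct *mm, unsigned long flags,
		       unsigned long len)
{
	unsigned long locked, lock_limit;

	/* Only mappings created with VM_LOCKED are charged. */
	if (flags & VM_LOCKED) {
		/* Pages already locked by this mm plus the new range ... */
		locked = len >> PAGE_SHIFT;
		locked += mm->locked_vm;
		/* ... must stay within RLIMIT_MEMLOCK, unless CAP_IPC_LOCK. */
		lock_limit = rlimit(RLIMIT_MEMLOCK);
		lock_limit >>= PAGE_SHIFT;
		if (locked > lock_limit && !capable(CAP_IPC_LOCK))
			return -EAGAIN;
	}
	return 0;
}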
So, an interesting question is whether you actually want to charge secretmem pages against mlock now, or whether you want a dedicated secretmem cgroup controller instead?
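To make the "charge against mlock now" option concrete, here is a rough sketch (mine, not taken from the series) of how a secretmem ->mmap handler could use the now-global helper; the function name secretmem_mmap and the exact flag handling are assumptions:

/* Hypothetical caller outside mm/mmap.c, e.g. a secretmem ->mmap handler. */
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;

	/* Refuse the mapping if it would push the mm over RLIMIT_MEMLOCK. */
	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
		return -EAGAIN;

	/* Keep the pages unevictable, like an mlock()ed mapping. */
	vma->vm_flags |= VM_LOCKED;

	return 0;
}

The dedicated cgroup controller alternative would presumably drop the per-mm RLIMIT_MEMLOCK check above in favour of a per-cgroup charge when the secret pages are allocated.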