On 30.09.22 16:19, David Hildenbrand wrote:
FOLL_MIGRATION exists only for the purpose of break_ksm(), and, in fact, there is no need to wait for the migration to finish: we only want to know whether we're dealing with a KSM page.
Using follow_page() just to identify a KSM page overcomplicates GUP code. Let's use walk_page_range_vma() instead, because we don't actually care about the page itself: we only need to know a single property, so there is no need to even grab a reference on the page.
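The caller side then boils down to walking a single address. A minimal sketch of that wiring, assuming the mm_walk_ops hookup for the two new callbacks below (the helper name is made up for illustration; in the patch itself this logic lives inline in break_ksm()):

static const struct mm_walk_ops break_ksm_ops = {
	.pud_entry = break_ksm_pud_entry,
	.pmd_entry = break_ksm_pmd_entry,
};

/*
 * Illustrative helper (hypothetical name): is a KSM page mapped at addr?
 * Walking [addr, addr + 1) visits exactly one base page, and the callbacks
 * return 1 so the walk stops after that single lookup. The caller must
 * hold the mmap_lock, as for any page table walk.
 */
static bool ksm_page_mapped_at(struct vm_area_struct *vma, unsigned long addr)
{
	bool ksm_page = false;

	walk_page_range_vma(vma, addr, addr + 1, &break_ksm_ops, &ksm_page);
	return ksm_page;
}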
In my setup (AMD Ryzen 9 3900X), running the KSM selftest to measure unmerge performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048) results in a performance degradation of ~4% (old: ~5010 MiB/s, new: ~4800 MiB/s). I don't think we particularly care for now.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/ksm.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 62 insertions(+), 8 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 4d7bcf7da7c3..814c1a37c323 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -39,6 +39,7 @@
 #include <linux/freezer.h>
 #include <linux/oom.h>
 #include <linux/numa.h>
+#include <linux/pagewalk.h>
 #include <asm/tlbflush.h>
 #include "internal.h"
 
@@ -452,6 +453,60 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
+int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
+			struct mm_walk *walk)
+{
+	/* We only care about page tables to walk to a single base page. */
+	if (pud_leaf(*pud) || !pud_present(*pud))
+		return 1;
+	return 0;
+}
+
+int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+			struct mm_walk *walk)
+{
+	bool *ksm_page = walk->private;
+	struct page *page = NULL;
+	pte_t *pte, ptent;
+	spinlock_t *ptl;
+
+	/* We only care about page tables to walk to a single base page. */
+	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
+		return 1;
+
+	/*
+	 * We only lookup a single page: (a) no need to iterate; and (b)
+	 * always return 1 to exit immediately and not iterate in the caller.
+	 */
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	ptent = *pte;
+	if (pte_none(ptent))
+		return 1;
As reported by Janosch, we fail to drop the lock here.
t480s: ~/git/linux ksm_unshare $ git diff
diff --git a/mm/ksm.c b/mm/ksm.c
index 26aec41b127c..94f8e114c89f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -435,7 +435,7 @@ int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (pte_none(*pte))
-		return 1;
+		goto out_unlock;
 	if (!pte_present(*pte)) {
 		swp_entry_t entry = pte_to_swp_entry(*pte);
 
@@ -451,6 +451,7 @@ int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 	}
 	if (page && PageKsm(page))
 		*ksm_page = true;
+out_unlock:
 	pte_unmap_unlock(pte, ptl);
 	return 1;
 }
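For reference, with that fixup folded in, the new callback reads roughly as follows. This is reconstructed from the hunks above; the migration-entry handling in the middle is not fully quoted here and is assumed from the patch description, so treat it as a sketch rather than the literal v1 code:

int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
			struct mm_walk *walk)
{
	bool *ksm_page = walk->private;
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *pte;

	/* We only care about page tables to walk to a single base page. */
	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
		return 1;

	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (pte_none(*pte))
		goto out_unlock;	/* the fix: drop the PTL on this path, too */
	if (!pte_present(*pte)) {
		swp_entry_t entry = pte_to_swp_entry(*pte);

		/*
		 * Assumed from the description: a KSM page remains a KSM page
		 * while it is under migration, so spotting a migration entry
		 * is enough -- no need to wait for migration to finish.
		 */
		if (is_migration_entry(entry))
			page = pfn_swap_entry_to_page(entry);
	} else {
		page = vm_normal_page(walk->vma, addr, *pte);
	}
	if (page && PageKsm(page))
		*ksm_page = true;
out_unlock:
	pte_unmap_unlock(pte, ptl);
	return 1;
}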