Correctable memory errors are very common on servers with large amounts of memory, and are corrected by ECC, but they come with two pain points for users: 1. Correction usually happens on the fly and adds latency overhead. 2. A not-fully-proven theory states that excessive correctable memory errors can develop into uncorrectable memory errors.
Soft offline is the kernel's additional solution for memory pages having (excessive) corrected memory errors. The impacted page is migrated to a healthy page if it is in use; the original page is then discarded from any future use.
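For context, the kernel already exposes a one-shot trigger that lets userspace (e.g. a RAS policy daemon) request soft offline for a given physical address. A minimal sketch, assuming CONFIG_MEMORY_FAILURE and root; the address is a placeholder, and the write is printed rather than executed because it immediately affects the running kernel:

```shell
# /sys/devices/system/memory/soft_offline_page accepts a physical
# address and soft offlines the page backing it (CONFIG_MEMORY_FAILURE).
trigger=/sys/devices/system/memory/soft_offline_page
addr=0x1234000   # hypothetical physical address, not a real error location

if [ -e "$trigger" ]; then
    echo "would run: echo $addr > $trigger"
else
    echo "soft_offline_page interface not available"
fi
```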
The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in the case of HugeTLB hugepages. Soft-offline dissolves a hugepage, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when a later mmap of hugepages fails with MAP_FAILED due to lack of hugepages. In addition, discarding an entire 1G memory page only because of corrected memory errors is very costly, and the kernel had better not do it under the hood. But today there are at least 2 such cases: 1. The GHES driver sees both GHES_SEV_CORRECTED and CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER. 2. The RAS Correctable Errors Collector counts correctable errors per PFN, and the counter for a PFN reaches its threshold. In both cases, userspace has no control of the soft offline performed by the kernel's memory failure recovery.
This patch series gives userspace control of soft-offlining HugeTLB pages: the kernel only soft offlines a hugepage if userspace has opted in for that specific hugepage size. The control is exposed to userspace via a new sysfs entry called softoffline_corrected_errors under the /sys/kernel/mm/hugepages/hugepages-${size}kB directory: * When softoffline_corrected_errors=0, skip soft offlining for all hugepages of size ${size}kB. * When softoffline_corrected_errors=1, soft offline as before this patch series. By default softoffline_corrected_errors is 1.
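With the series applied, opting a hugepage size out of soft offline would be a single sysfs write. A hedged sketch; the knob only exists on kernels carrying this series, and 2048kB is just an example size:

```shell
size_kb=2048   # example hugepage size; match your pool under /sys/kernel/mm/hugepages/
knob=/sys/kernel/mm/hugepages/hugepages-${size_kb}kB/softoffline_corrected_errors

if [ -r "$knob" ]; then
    echo "current setting: $(cat "$knob")"
    # To skip soft offlining for all 2048kB hugepages (as root):
    #   echo 0 > "$knob"
else
    echo "knob not present (kernel without this series?)"
fi
```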
This patch series is based on commit a52b4f11a2e1 ("selftest mm/mseal read-only elf memory segment").
Jiaqi Yan (3): mm/memory-failure: userspace controls soft-offlining hugetlb pages selftest/mm: test softoffline_corrected_errors behaviors docs: hugetlbpage.rst: add softoffline_corrected_errors
Documentation/admin-guide/mm/hugetlbpage.rst | 15 +- include/linux/hugetlb.h | 17 ++ mm/hugetlb.c | 34 +++ mm/memory-failure.c | 7 + tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + .../selftests/mm/hugetlb-soft-offline.c | 262 ++++++++++++++++++ tools/testing/selftests/mm/run_vmtests.sh | 4 + 8 files changed, 340 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/mm/hugetlb-soft-offline.c
Correctable memory errors are very common on servers with large amounts of memory, and are corrected by ECC. Soft offline is the kernel's additional recovery handling for memory pages having (excessive) corrected memory errors. The impacted page is migrated to a healthy page if mapped/in use; the original page is discarded from any future use.
The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in the case of HugeTLB hugepages. Soft-offline dissolves a hugepage, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when a later mmap of hugepages fails with MAP_FAILED due to lack of hugepages. In addition, discarding an entire 1G memory page only because of corrected memory errors is very costly, and the kernel had better not do it under the hood. But today there are at least 2 such cases: 1. The GHES driver sees both GHES_SEV_CORRECTED and CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER. 2. The RAS Correctable Errors Collector counts correctable errors per PFN, and the counter for a PFN reaches its threshold. In both cases, userspace has no control of the soft offline performed by the kernel's memory failure recovery.
This commit gives userspace control of soft-offlining HugeTLB pages: the kernel only soft offlines a hugepage if userspace has opted in for that specific hugepage size. The interface to userspace is a new sysfs entry called softoffline_corrected_errors under the /sys/kernel/mm/hugepages/hugepages-${size}kB directory: * When softoffline_corrected_errors=0, skip soft offlining for all hugepages of size ${size}kB. * When softoffline_corrected_errors=1, soft offline as before this patch series.
So the granularity of the control is per hugepage size, and the setting is kept in the corresponding hstate. By default softoffline_corrected_errors is 1 to preserve the kernel's existing behavior.
Signed-off-by: Jiaqi Yan jiaqiyan@google.com --- include/linux/hugetlb.h | 17 +++++++++++++++++ mm/hugetlb.c | 34 ++++++++++++++++++++++++++++++++++ mm/memory-failure.c | 7 +++++++ 3 files changed, 58 insertions(+)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 2b3c3a404769..55f9e9593cce 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -685,6 +685,7 @@ struct hstate { int next_nid_to_free; unsigned int order; unsigned int demote_order; + unsigned int softoffline_corrected_errors; unsigned long mask; unsigned long max_huge_pages; unsigned long nr_huge_pages; @@ -1029,6 +1030,16 @@ void hugetlb_unregister_node(struct node *node); */ bool is_raw_hwpoison_page_in_hugepage(struct page *page);
+/* + * For certain hugepage size, when a hugepage has corrected memory error(s): + * - Return 0 if userspace wants to disable soft offlining the hugepage. + * - Return > 0 if userspace allows soft offlining the hugepage. + */ +static inline int hugetlb_softoffline_corrected_errors(struct folio *folio) +{ + return folio_hstate(folio)->softoffline_corrected_errors; +} + #else /* CONFIG_HUGETLB_PAGE */ struct hstate {};
@@ -1226,6 +1237,12 @@ static inline bool hugetlbfs_pagecache_present( { return false; } + +static inline int hugetlb_softoffline_corrected_errors(struct folio *folio) +{ + return 1; +} + #endif /* CONFIG_HUGETLB_PAGE */
static inline spinlock_t *huge_pte_lock(struct hstate *h, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6be78e7d4f6e..a184e28ce592 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4325,6 +4325,38 @@ static ssize_t demote_size_store(struct kobject *kobj, } HSTATE_ATTR(demote_size);
+static ssize_t softoffline_corrected_errors_show(struct kobject *kobj, + struct kobj_attribute *attr, + char *buf) +{ + struct hstate *h = kobj_to_hstate(kobj, NULL); + + return sysfs_emit(buf, "%d\n", h->softoffline_corrected_errors); +} + +static ssize_t softoffline_corrected_errors_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, + size_t count) +{ + int err; + unsigned long input; + struct hstate *h = kobj_to_hstate(kobj, NULL); + + err = kstrtoul(buf, 10, &input); + if (err) + return err; + + /* softoffline_corrected_errors is either 0 or 1. */ + if (input > 1) + return -EINVAL; + + h->softoffline_corrected_errors = input; + + return count; +} +HSTATE_ATTR(softoffline_corrected_errors); + static struct attribute *hstate_attrs[] = { &nr_hugepages_attr.attr, &nr_overcommit_hugepages_attr.attr, @@ -4334,6 +4366,7 @@ static struct attribute *hstate_attrs[] = { #ifdef CONFIG_NUMA &nr_hugepages_mempolicy_attr.attr, #endif + &softoffline_corrected_errors_attr.attr, NULL, };
@@ -4655,6 +4688,7 @@ void __init hugetlb_add_hstate(unsigned int order) h = &hstates[hugetlb_max_hstate++]; mutex_init(&h->resize_lock); h->order = order; + h->softoffline_corrected_errors = 1; h->mask = ~(huge_page_size(h) - 1); for (i = 0; i < MAX_NUMNODES; ++i) INIT_LIST_HEAD(&h->hugepage_freelists[i]); diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 16ada4fb02b7..7094fc4c62e2 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2776,6 +2776,13 @@ int soft_offline_page(unsigned long pfn, int flags) return -EIO; }
+ if (PageHuge(page) && + !hugetlb_softoffline_corrected_errors(page_folio(page))) { + pr_info("soft offline: %#lx: hugetlb page is ignored\n", pfn); + put_ref_page(pfn, flags); + return -EINVAL; + } + mutex_lock(&mf_mutex);
if (PageHWPoison(page)) {
+CC Jane.
On Fri, May 31, 2024 at 2:34 PM Jiaqi Yan jiaqiyan@google.com wrote:
Correctable memory errors are very common on servers with large amount of memory, and are corrected by ECC. Soft offline is kernel's additional recovery handling for memory pages having (excessive) corrected memory errors. Impacted page is migrated to a healthy page if mapped/inuse; the original page is discarded for any future use.
The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in case of HugeTLB hugepages. Soft-offline dissolves a hugepage, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when later mmap hugepages MAP_FAILED due to lack of hugepages. In addition, discarding the entire 1G memory page only because of corrected memory errors sounds very costly and kernel better not doing under the hood. But today there are at least 2 such cases:
- GHES driver sees both GHES_SEV_CORRECTED and CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER.
- RAS Correctable Errors Collector counts correctable errors per PFN and when the counter for a PFN reaches threshold
In both cases, userspace has no control of the soft offline performed by kernel's memory failure recovery.
This commit gives userspace the control of soft-offlining HugeTLB pages: kernel only soft offlines hugepage if userspace has opt-ed in for that specific hugepage size. The interface to userspace is a new sysfs entry called softoffline_corrected_errors under the /sys/kernel/mm/hugepages/hugepages-${size}kB directory:
- When softoffline_corrected_errors=0, skip soft offlining for all hugepages of size ${size}kB.
- When softoffline_corrected_errors=1, soft offline as before this patch series.
So the granularity of the control is per hugepage size, and is kept in corresponding hstate. By default softoffline_corrected_errors is 1 to preserve existing behavior in kernel.
Signed-off-by: Jiaqi Yan jiaqiyan@google.com
include/linux/hugetlb.h | 17 +++++++++++++++++ mm/hugetlb.c | 34 ++++++++++++++++++++++++++++++++++ mm/memory-failure.c | 7 +++++++ 3 files changed, 58 insertions(+)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 2b3c3a404769..55f9e9593cce 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -685,6 +685,7 @@ struct hstate { int next_nid_to_free; unsigned int order; unsigned int demote_order;
unsigned int softoffline_corrected_errors; unsigned long mask; unsigned long max_huge_pages; unsigned long nr_huge_pages;
@@ -1029,6 +1030,16 @@ void hugetlb_unregister_node(struct node *node); */ bool is_raw_hwpoison_page_in_hugepage(struct page *page);
+/*
- For certain hugepage size, when a hugepage has corrected memory error(s):
- Return 0 if userspace wants to disable soft offlining the hugepage.
- Return > 0 if userspace allows soft offlining the hugepage.
- */
+static inline int hugetlb_softoffline_corrected_errors(struct folio *folio) +{
return folio_hstate(folio)->softoffline_corrected_errors;
+}
#else /* CONFIG_HUGETLB_PAGE */ struct hstate {};
@@ -1226,6 +1237,12 @@ static inline bool hugetlbfs_pagecache_present( { return false; }
+static inline int hugetlb_softoffline_corrected_errors(struct folio *folio) +{
return 1;
+}
#endif /* CONFIG_HUGETLB_PAGE */
static inline spinlock_t *huge_pte_lock(struct hstate *h, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6be78e7d4f6e..a184e28ce592 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4325,6 +4325,38 @@ static ssize_t demote_size_store(struct kobject *kobj, } HSTATE_ATTR(demote_size);
+static ssize_t softoffline_corrected_errors_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
+{
struct hstate *h = kobj_to_hstate(kobj, NULL);
return sysfs_emit(buf, "%d\n", h->softoffline_corrected_errors);
+}
+static ssize_t softoffline_corrected_errors_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf,
size_t count)
+{
int err;
unsigned long input;
struct hstate *h = kobj_to_hstate(kobj, NULL);
err = kstrtoul(buf, 10, &input);
if (err)
return err;
/* softoffline_corrected_errors is either 0 or 1. */
if (input > 1)
return -EINVAL;
h->softoffline_corrected_errors = input;
return count;
+} +HSTATE_ATTR(softoffline_corrected_errors);
static struct attribute *hstate_attrs[] = { &nr_hugepages_attr.attr, &nr_overcommit_hugepages_attr.attr, @@ -4334,6 +4366,7 @@ static struct attribute *hstate_attrs[] = { #ifdef CONFIG_NUMA &nr_hugepages_mempolicy_attr.attr, #endif
&softoffline_corrected_errors_attr.attr, NULL,
};
@@ -4655,6 +4688,7 @@ void __init hugetlb_add_hstate(unsigned int order) h = &hstates[hugetlb_max_hstate++]; mutex_init(&h->resize_lock); h->order = order;
h->softoffline_corrected_errors = 1; h->mask = ~(huge_page_size(h) - 1); for (i = 0; i < MAX_NUMNODES; ++i) INIT_LIST_HEAD(&h->hugepage_freelists[i]);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 16ada4fb02b7..7094fc4c62e2 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2776,6 +2776,13 @@ int soft_offline_page(unsigned long pfn, int flags) return -EIO; }
if (PageHuge(page) &&
!hugetlb_softoffline_corrected_errors(page_folio(page))) {
pr_info("soft offline: %#lx: hugetlb page is ignored\n", pfn);
put_ref_page(pfn, flags);
return -EINVAL;
}
mutex_lock(&mf_mutex); if (PageHWPoison(page)) {
-- 2.45.1.288.g0e0cd299f1-goog
Add regression and new tests for when a hugepage has correctable memory errors, covering how userspace wants to deal with it: * if softoffline_corrected_errors=1, a mapped hugepage is soft offlined * if softoffline_corrected_errors=0, a mapped hugepage stays intact
The free hugepages case is not explicitly covered by the tests.

A hugepage having corrected memory errors is emulated with MADV_SOFT_OFFLINE.
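Running the test by hand needs a populated default-size pool, as the comment in the test source notes. A sketch of a preflight check; the setup commands are shown as comments since they require root, and the built binary is assumed to live in tools/testing/selftests/mm:

```shell
# Manual setup for hugetlb-soft-offline (as root):
#   echo 8 > /proc/sys/vm/nr_hugepages   # populate the default-size pool
#   ./hugetlb-soft-offline
#   echo <old value> > /proc/sys/vm/nr_hugepages   # restore the pool
# Here we only verify the pool is large enough for the test to run.
nr=/proc/sys/vm/nr_hugepages
if [ -r "$nr" ] && [ "$(cat "$nr")" -ge 2 ]; then
    echo "hugepage pool ok: $(cat "$nr") pages"
else
    echo "hugepage pool too small or absent; test would be skipped"
fi
```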
Signed-off-by: Jiaqi Yan jiaqiyan@google.com --- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + .../selftests/mm/hugetlb-soft-offline.c | 262 ++++++++++++++++++ tools/testing/selftests/mm/run_vmtests.sh | 4 + 4 files changed, 268 insertions(+) create mode 100644 tools/testing/selftests/mm/hugetlb-soft-offline.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index 0b9ab987601c..064e7b125643 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -6,6 +6,7 @@ hugepage-shm hugepage-vmemmap hugetlb-madvise hugetlb-read-hwpoison +hugetlb-soft-offline khugepaged map_hugetlb map_populate diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 3b49bc3d0a3b..d166067d75ef 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -42,6 +42,7 @@ TEST_GEN_FILES += gup_test TEST_GEN_FILES += hmm-tests TEST_GEN_FILES += hugetlb-madvise TEST_GEN_FILES += hugetlb-read-hwpoison +TEST_GEN_FILES += hugetlb-soft-offline TEST_GEN_FILES += hugepage-mmap TEST_GEN_FILES += hugepage-mremap TEST_GEN_FILES += hugepage-shm diff --git a/tools/testing/selftests/mm/hugetlb-soft-offline.c b/tools/testing/selftests/mm/hugetlb-soft-offline.c new file mode 100644 index 000000000000..8d1d7d4a84d8 --- /dev/null +++ b/tools/testing/selftests/mm/hugetlb-soft-offline.c @@ -0,0 +1,262 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test soft offline behavior for HugeTLB pages: + * - if softoffline_corrected_errors = 0, hugepages should stay intact and soft + * offlining failed with EINVAL. + * - if softoffline_corrected_errors > 0, a hugepage should be dissolved and + * nr_hugepages should be reduced by 1. + * + * Before running, make sure more than 2 hugepages of default_hugepagesz + * are allocated. 
For example, if /proc/meminfo/Hugepagesize is 2048kB: + * echo 8 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages + */ + +#define _GNU_SOURCE +#include <errno.h> +#include <stdlib.h> +#include <stdio.h> +#include <string.h> +#include <unistd.h> + +#include <linux/magic.h> +#include <linux/memfd.h> +#include <sys/mman.h> +#include <sys/statfs.h> +#include <sys/types.h> + +#ifndef MADV_SOFT_OFFLINE +#define MADV_SOFT_OFFLINE 101 +#endif + +#define PREFIX " ... " +#define EPREFIX " !!! " + +enum test_status { + TEST_PASS = 0, + TEST_FAILED = 1, + // From ${ksft_skip} in run_vmtests.sh. + TEST_SKIPPED = 4, +}; + +static enum test_status do_soft_offline(int fd, size_t len, int expect_ret) +{ + char *filemap = NULL; + char *hwp_addr = NULL; + const unsigned long pagesize = getpagesize(); + int ret = 0; + enum test_status status = TEST_SKIPPED; + + if (ftruncate(fd, len) < 0) { + perror(EPREFIX "ftruncate to len failed"); + return status; + } + + filemap = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED | MAP_POPULATE, fd, 0); + if (filemap == MAP_FAILED) { + perror(EPREFIX "mmap failed"); + goto untruncate; + } + + memset(filemap, 0xab, len); + printf(PREFIX "Allocated %#lx bytes of hugetlb pages\n", len); + + hwp_addr = filemap + len / 2; + ret = madvise(hwp_addr, pagesize, MADV_SOFT_OFFLINE); + printf(PREFIX "MADV_SOFT_OFFLINE %p ret=%d, errno=%d\n", + hwp_addr, ret, errno); + if (ret != 0) + perror(EPREFIX "madvise failed"); + + if (errno == expect_ret) + status = TEST_PASS; + else { + printf(EPREFIX "MADV_SOFT_OFFLINE should ret %d\n", expect_ret); + status = TEST_FAILED; + } + + munmap(filemap, len); +untruncate: + if (ftruncate(fd, 0) < 0) + perror(EPREFIX "ftruncate back to 0 failed"); + + return status; +} + +static int set_softoffline_corrected_errors(unsigned long hugepage_size, int value) +{ + char cmd[256] = {0}; + FILE *cmdfile = NULL; + + if (value != 0 && value != 1) + return -EINVAL; + + sprintf(cmd, + "echo %d > 
/sys/kernel/mm/hugepages/hugepages-%ldkB/softoffline_corrected_errors", + value, hugepage_size); + cmdfile = popen(cmd, "r"); + + if (cmdfile == NULL) + perror(EPREFIX "failed to set softoffline_corrected_errors"); + else + printf(PREFIX + "softoffline_corrected_errors=%d for %ldkB hugepages\n", + value, hugepage_size); + + pclose(cmdfile); + return 0; +} + +static int read_nr_hugepages(unsigned long hugepage_size, + unsigned long *nr_hugepages) +{ + char buffer[256] = {0}; + char cmd[256] = {0}; + + sprintf(cmd, "cat /sys/kernel/mm/hugepages/hugepages-%ldkB/nr_hugepages", + hugepage_size); + FILE *cmdfile = popen(cmd, "r"); + + if (!fgets(buffer, sizeof(buffer), cmdfile)) { + perror(EPREFIX "failed to read nr_hugepages"); + pclose(cmdfile); + return -1; + } + + *nr_hugepages = atoll(buffer); + pclose(cmdfile); + return 0; +} + +static int create_hugetlbfs_file(struct statfs *file_stat) +{ + int fd; + + fd = memfd_create("hugetlb_tmp", MFD_HUGETLB); + if (fd < 0) { + perror(EPREFIX "could not open hugetlbfs file"); + return -1; + } + + memset(file_stat, 0, sizeof(*file_stat)); + if (fstatfs(fd, file_stat)) { + perror(EPREFIX "fstatfs failed"); + goto close; + } + if (file_stat->f_type != HUGETLBFS_MAGIC) { + printf(EPREFIX "not hugetlbfs file\n"); + goto close; + } + + return fd; +close: + close(fd); + return -1; +} + +static enum test_status test_soft_offline(void) +{ + int fd; + struct statfs file_stat; + unsigned long hugepagesize_kb = 0; + unsigned long nr_hugepages_before = 0; + unsigned long nr_hugepages_after = 0; + enum test_status status = TEST_SKIPPED; + + printf("Test Soft Offline When softoffline_corrected_errors=1\n"); + + fd = create_hugetlbfs_file(&file_stat); + if (fd < 0) { + printf(EPREFIX "Failed to create hugetlbfs file\n"); + return status; + } + + hugepagesize_kb = file_stat.f_bsize / 1024; + printf(PREFIX "Hugepagesize is %ldkB\n", hugepagesize_kb); + + if (set_softoffline_corrected_errors(hugepagesize_kb, 1)) + return TEST_FAILED; + + if 
(read_nr_hugepages(hugepagesize_kb, &nr_hugepages_before) != 0) + return TEST_FAILED; + + printf(PREFIX "Before MADV_SOFT_OFFLINE nr_hugepages=%ld\n", + nr_hugepages_before); + + status = do_soft_offline(fd, 2 * file_stat.f_bsize, /*expect_ret=*/0); + + if (read_nr_hugepages(hugepagesize_kb, &nr_hugepages_after) != 0) + return TEST_FAILED; + + printf(PREFIX "After MADV_SOFT_OFFLINE nr_hugepages=%ld\n", + nr_hugepages_after); + + if (nr_hugepages_before != nr_hugepages_after + 1) { + printf(EPREFIX "MADV_SOFT_OFFLINE should reduced 1 hugepage\n"); + return TEST_FAILED; + } + + return status; +} + +static enum test_status test_disable_soft_offline(void) +{ + int fd; + struct statfs file_stat; + unsigned long hugepagesize_kb = 0; + unsigned long nr_hugepages_before = 0; + unsigned long nr_hugepages_after = 0; + enum test_status status = TEST_SKIPPED; + + printf("Test Soft Offline When softoffline_corrected_errors=0\n"); + + fd = create_hugetlbfs_file(&file_stat); + if (fd < 0) { + printf(EPREFIX "Failed to create hugetlbfs file\n"); + return status; + } + + hugepagesize_kb = file_stat.f_bsize / 1024; + printf(PREFIX "Hugepagesize is %ldkB\n", hugepagesize_kb); + + if (set_softoffline_corrected_errors(hugepagesize_kb, 0)) + return TEST_FAILED; + + if (read_nr_hugepages(hugepagesize_kb, &nr_hugepages_before) != 0) + return TEST_FAILED; + + printf(PREFIX "Before MADV_SOFT_OFFLINE nr_hugepages=%ld\n", + nr_hugepages_before); + + status = do_soft_offline(fd, 2 * file_stat.f_bsize, /*expect_ret=*/EINVAL); + + if (read_nr_hugepages(hugepagesize_kb, &nr_hugepages_after) != 0) + return TEST_FAILED; + + printf(PREFIX "After MADV_SOFT_OFFLINE nr_hugepages=%ld\n", + nr_hugepages_after); + + if (nr_hugepages_before != nr_hugepages_after) { + printf(EPREFIX "MADV_SOFT_OFFLINE reduced %lu hugepages\n", + nr_hugepages_before - nr_hugepages_after); + return TEST_FAILED; + } + + return status; +} + +int main(void) +{ + enum test_status status; + + status = test_soft_offline(); + if 
(status != TEST_PASS) + return status; + + status = test_disable_soft_offline(); + if (status != TEST_PASS) + return status; + + printf("Test Soft Offline All Good!\n"); + return TEST_PASS; +} diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh index 3157204b9047..91db9971ba69 100755 --- a/tools/testing/selftests/mm/run_vmtests.sh +++ b/tools/testing/selftests/mm/run_vmtests.sh @@ -332,6 +332,10 @@ CATEGORY="hugetlb" run_test ./charge_reserved_hugetlb.sh -cgroup-v2 CATEGORY="hugetlb" run_test ./hugetlb_reparenting_test.sh -cgroup-v2 if $RUN_DESTRUCTIVE; then CATEGORY="hugetlb" run_test ./hugetlb-read-hwpoison +nr_hugepages_tmp=$(cat /proc/sys/vm/nr_hugepages) +echo 8 > /proc/sys/vm/nr_hugepages +CATEGORY="hugetlb" run_test ./hugetlb-soft-offline +echo "$nr_hugepages_tmp" > /proc/sys/vm/nr_hugepages fi
if [ $VADDR64 -ne 0 ]; then
Add documentation for the softoffline_corrected_errors sysfs interface.
Signed-off-by: Jiaqi Yan jiaqiyan@google.com --- Documentation/admin-guide/mm/hugetlbpage.rst | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst index f34a0d798d5b..7969ae47f5f1 100644 --- a/Documentation/admin-guide/mm/hugetlbpage.rst +++ b/Documentation/admin-guide/mm/hugetlbpage.rst @@ -244,7 +244,8 @@ will exist, of the form::
Inside each of these directories, the set of files contained in ``/proc`` will exist. In addition, two additional interfaces for demoting huge -pages may exist:: +pages, and one additional interface for handling corrected memory errors, +may exist::
demote demote_size @@ -254,6 +255,7 @@ pages may exist:: free_hugepages resv_hugepages surplus_hugepages + softoffline_corrected_errors
The demote interfaces provide the ability to split a huge page into smaller huge pages. For example, the x86 architecture supports both @@ -276,6 +278,17 @@ demote actually demoted, compare the value of nr_hugepages before and after writing to the demote interface. demote is a write only interface.
+The interface for handling corrected memory errors is + +softoffline_corrected_errors + allows userspace to control how to deal with hugepages that have + corrected memory errors. When set to 1, the kernel attempts to soft + offline the hugepage whenever it thinks needed. If soft offlining a + huge page succeeds, for an in-use hugepage the page content is migrated + to a new hugepage; however, regardless of in-use or free, the capacity + of hugepages will be reduced by 1. When set to 0, the kernel won't + attempt to soft offline hugepages of that specific size. Its default + value is 1. + The interfaces which are the same as in ``/proc`` (all except demote and demote_size) function as described above for the default huge page-sized case.
On 2024/6/1 5:34, Jiaqi Yan wrote:
Correctable memory errors are very common on servers with large amount of memory, and are corrected by ECC, but with two pain points to users:
- Correction usually happens on the fly and adds latency overhead
- Not-fully-proved theory states excessive correctable memory errors can develop into uncorrectable memory error.
Thanks for your patch.
Soft offline is kernel's additional solution for memory pages having (excessive) corrected memory errors. Impacted page is migrated to healthy page if it is in use, then the original page is discarded for any future use.
The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in case of HugeTLB hugepages. Soft-offline dissolves a hugepage, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when later mmap hugepages MAP_FAILED due to lack of hugepages.
For in use hugetlb folio case, migrate_pages() is called. The hugetlb pool capacity won't be modified in that case. So I assume you're referring to the free hugetlb folio case? The Hugetlb pool capacity is reduced in that case. But if we don't do that, we might encounter uncorrectable memory error later which will be more severe? Will it be better to add a way to compensate the capacity?
In addition, discarding the entire 1G memory page only because of corrected memory errors sounds very costly and kernel better not doing under the hood. But today there are at least 2 such cases:
- GHES driver sees both GHES_SEV_CORRECTED and CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER.
- RAS Correctable Errors Collector counts correctable errors per PFN and when the counter for a PFN reaches threshold
In both cases, userspace has no control of the soft offline performed by kernel's memory failure recovery.
Userspace can figure out the hugetlb folio pfn range by using `page-types -b huge -rlN` and then decide whether to soft offline the page accordingly. But for the GHES driver, I think it has to be done in the kernel. So adding a control in /sys/ seems like a good idea.
This patch series give userspace the control of soft-offlining HugeTLB pages: kernel only soft offlines hugepage if userspace has opt-ed in for that specific hugepage size, and exposed to userspace by a new sysfs entry called softoffline_corrected_errors under /sys/kernel/mm/hugepages/hugepages-${size}kB directory:
- When softoffline_corrected_errors=0, skip soft offlining for all hugepages of size ${size}kB.
- When softoffline_corrected_errors=1, soft offline as before this
Would it be better to call it "soft_offline_corrected_errors" or simply "soft_offline_enabled"?
Thanks.
On Tue, Jun 4, 2024 at 12:19 AM Miaohe Lin linmiaohe@huawei.com wrote:
On 2024/6/1 5:34, Jiaqi Yan wrote:
Correctable memory errors are very common on servers with large amount of memory, and are corrected by ECC, but with two pain points to users:
- Correction usually happens on the fly and adds latency overhead
- Not-fully-proved theory states excessive correctable memory errors can develop into uncorrectable memory error.
Thanks for your patch.
Thanks Miaohe, sorry I missed your message (Gmail mistakenly put it in my spam folder).
Soft offline is kernel's additional solution for memory pages having (excessive) corrected memory errors. Impacted page is migrated to healthy page if it is in use, then the original page is discarded for any future use.
The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in case of HugeTLB hugepages. Soft-offline dissolves a hugepage, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when later mmap hugepages MAP_FAILED due to lack of hugepages.
For in use hugetlb folio case, migrate_pages() is called. The hugetlb pool capacity won't be modified in that case. So I assume you're referring to the
I don't think so.
For the in-use hugetlb folio case, after migrate_pages, the kernel will dissolve_free_hugetlb_folio() the src hugetlb folio. At this point the refcount of the src hugetlb folio should already be zero, and remove_hugetlb_folio will reduce the hugetlb pool capacity (both nr_hugepages and free_hugepages) accordingly.
For the free hugetlb folio case, dissolving also happens. But CE on free pages should be very rare (since no one is accessing except patrol scrubber).
One of my test cases in patch 2/3 validates my point: the test case MADV_SOFT_OFFLINEs a mapped page, and at the point soft offline succeeds, both nr_hugepages and free_hugepages are reduced by 1.
free hugetlb folio case? The Hugetlb pool capacity is reduced in that case. But if we don't do that, we might encounter uncorrectable memory error later
If your concern is that more correctable errors will develop into more severe uncorrectable ones, your concern is absolutely valid. There is a tradeoff between reliability and performance (availability of hugetlb pages), but IMO it should be decided by userspace.
which will be more severe? Will it be better to add a way to compensate the capacity?
Corner cases: What if finding physically contiguous memory takes too long? What if we can't find any physically contiguous memory to compensate? (then hugetlb pool will still need to be reduced).
If we treat "compensate" as an improvement to the overall soft offline process, it is something we can do in future and it is something orthogonal to this control API, right? I think if userspace explicitly tells kernel to soft offline, then they are also well-prepared for the corner cases above.
In addition, discarding the entire 1G memory page only because of corrected memory errors sounds very costly and kernel better not doing under the hood. But today there are at least 2 such cases:
- GHES driver sees both GHES_SEV_CORRECTED and CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER.
- RAS Correctable Errors Collector counts correctable errors per PFN and when the counter for a PFN reaches threshold
In both cases, userspace has no control of the soft offline performed by kernel's memory failure recovery.
Userspace can figure out the hugetlb folio pfn range by using `page-types -b huge -rlN` and then decide whether to soft offline the page accordingly. But for the GHES driver, I think it has to be done in the kernel. So adding a control in /sys/ seems like a good idea.
Thanks.
This patch series give userspace the control of soft-offlining HugeTLB pages: kernel only soft offlines hugepage if userspace has opt-ed in for that specific hugepage size, and exposed to userspace by a new sysfs entry called softoffline_corrected_errors under /sys/kernel/mm/hugepages/hugepages-${size}kB directory:
- When softoffline_corrected_errors=0, skip soft offlining for all hugepages of size ${size}kB.
- When softoffline_corrected_errors=1, soft offline as before this
Would it be better to call it "soft_offline_corrected_errors" or simply "soft_offline_enabled"?
"soft_offline_enabled" is less optimal as it can't be extended to support something like "soft offline this PFN if something repeatedly requested soft offline this exact PFN x times". (although I don't think we need it).
softoffline_corrected_errors is one char less, but if you insist, soft_offline_corrected_errors also works for me.
Thanks. .
On 6/7/2024 3:22 PM, Jiaqi Yan wrote:
On Tue, Jun 4, 2024 at 12:19 AM Miaohe Lin linmiaohe@huawei.com wrote:
On 2024/6/1 5:34, Jiaqi Yan wrote:
Thanks for your patch.
Thanks Miaohe, sorry I missed your message (Gmail mistakenly put it in my spam folder).
The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in the case of HugeTLB hugepages. Soft offline dissolves a hugepage, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when a later mmap of hugepages fails with MAP_FAILED due to lack of hugepages.
For the in-use hugetlb folio case, migrate_pages() is called. The hugetlb pool capacity won't be modified in that case. So I assume you're referring to the
I don't think so.
For the in-use hugetlb folio case, after migrate_pages, the kernel will call dissolve_free_hugetlb_folio() on the src hugetlb folio. At this point the refcount of the src hugetlb folio should already be zero, and remove_hugetlb_folio() will reduce the hugetlb pool capacity (both nr_hugepages and free_hugepages) accordingly.
For the free hugetlb folio case, dissolving also happens. But CEs on free pages should be very rare (since no one is accessing them except the patrol scrubber).
One of my test cases in patch 2/3 validates my point: the test case MADV_SOFT_OFFLINEs a mapped page, and at the point soft offline succeeds, both nr_hugepages and free_hugepages are reduced by 1.
free hugetlb folio case? The hugetlb pool capacity is reduced in that case. But if we don't do that, we might encounter an uncorrectable memory error later,
If your concern is that more correctable errors will develop into more severe uncorrectable ones, it is absolutely valid. There is a tradeoff between reliability and performance (availability of hugetlb pages), but IMO it should be decided by userspace.
which will be more severe? Will it be better to add a way to compensate for the lost capacity?
Corner cases: what if finding physically contiguous memory takes too long? What if we can't find any physically contiguous memory to compensate with? (Then the hugetlb pool will still need to be reduced.)
Will it be better to call it "soft_offline_corrected_errors" or simply "soft_offline_enabled"?
"soft_offline_enabled" is less optimal as it can't be extended to support something like "soft offline this PFN only if soft offline has been requested for this exact PFN x times" (although I don't think we need that).
The "x times" thing is a threshold thing, and if your typical application needs to have a say about performance (and maintaining physically contiguous memory) over RAS, shouldn't that be baked into the driver rather than hugetlbfs?
Also, I am not comfortable with this being hugetlbfs specific. What is the objection to creating a "soft_offline_enabled" switch that is applicable to any user page size?
thanks,
-jane
Thanks for your feedback, Jane!
On Mon, Jun 10, 2024 at 12:41 PM Jane Chu jane.chu@oracle.com wrote:
The "x times" thing is a threshold thing, and if your typical application needs to have a say about performance (and maintaining physically contiguous memory) over RAS, shouldn't that be baked into the driver rather than hugetlbfs?
I mostly agree; I only want to point out that the threshold is already maintained by some firmware. For example, CPER has the following defined in UEFI Spec Table N.5, Section Descriptor:
Bit 3 - Error threshold exceeded: If set, OS may choose to discontinue use of this resource.
In this case, I think "enable_soft_offline" is a better name for "OS chooses to discontinue use of this page" (enable_soft_offline=1) or not (enable_soft_offline=0). WDYT?
Also, I am not comfortable with this being hugetlbfs specific. What is the objection to creating a "soft_offline_enabled" switch that is applicable to any user page size?
I have no objection to making the "soft_offline_enabled" switch apply to anything (hugetlb, transparent hugepage, raw page, etc). The only reason my current patch is hugetlb specific is that soft offline behavior is very disruptive in the hugetlb 1G page case, and I want to start with a limited scope in my first attempt.
If Miaohe, you, and other people are fine with making it applicable to any user pages, maybe a better interface for this could be something like /sys/devices/system/memory/enable_soft_offline (location-wise close to /sys/devices/system/memory/soft_offline_page)?
On 6/10/2024 3:55 PM, Jiaqi Yan wrote:
In this case, I think "enable_soft_offline" is a better name for "OS chooses to discontinue use of this page" (enable_soft_offline=1) or not (enable_soft_offline=0). WDYT?
Yes, as long as enable_soft_offline=1 is the default. Just a thought: I suppose the CE count and threshold can be retrieved by the GHES driver? I haven't checked. If so, maybe another way is to implement a per-task CE threshold: add a new field .ce_threshold to the task struct, add a function to prctl(2) for a user thread to specify a CE threshold, also a function to retrieve the firmware-defined default CE threshold, and let soft_offline_page() check against task->ce_threshold to decide whether to offline the page. If you want to apply the CE threshold to patrol-scrub-triggered soft offline, then you could define a global/system-wide CE threshold. That said, this might be overblown for what you need; I'm just putting it out there for the sake of brainstorming.
If Miaohe, you, and other people are fine with making it applicable to any user pages, maybe a better interface for this could be something like /sys/devices/system/memory/enable_soft_offline (location-wise close to /sys/devices/system/memory/soft_offline_page)?
Or, you could use /proc/sys/vm/enable_soft_offline, side by side with the existing 'memory_failure_early_kill' and 'memory_failure_recovery' switches.
You could also make 'enable_soft_offline' a per-process option, similar to 'PR_MCE_KILL_EARLY' in prctl(2).
thanks,
-jane
On Tue, Jun 11, 2024 at 10:55 AM Jane Chu jane.chu@oracle.com wrote:
Yes, as long as enable_soft_offline=1 is the default. Just a thought: I
For sure; like this patch series, I will ensure enable_soft_offline keeps the "default on" behavior.
suppose the CE count and threshold can be retrieved by the GHES driver?
Unfortunately GHES doesn't have visibility into the CE count and threshold value (the RAS Correctable Errors Collector does, by itself). The GHES driver only knows about it from the CPER_SEC_ERROR_THRESHOLD_EXCEEDED bit in the CPER reported by some firmware.
I haven't checked. If so, maybe another way is to implement a per-task CE threshold: add a new field .ce_threshold to the task struct, add a function to prctl(2) for a user thread to specify a CE threshold, also a function to retrieve the firmware-defined default CE threshold, and let soft_offline_page() check against task->ce_threshold to decide whether to offline the page. If you want to apply the CE threshold to patrol-scrub-triggered soft offline, then you could define a global/system-wide CE threshold. That said, this might be overblown for what you need; I'm just putting it out there for the sake of brainstorming.
Thanks for your great idea! But yeah, it sounds like overkill for the current problem. I think starting with an OS-wide control of whether to soft offline any page already gives much better flexibility to userspace.
Or, you could use /proc/sys/vm/enable_soft_offline, side by side with the existing 'memory_failure_early_kill' and 'memory_failure_recovery' switches.
I was actually looking into this better option, but your reply beat me to it ;)