Thanks for your questions, David!
On Tue, Jun 11, 2024 at 5:25 PM David Rientjes <rientjes@google.com> wrote:
On Tue, 11 Jun 2024, Jiaqi Yan wrote:
@@ -267,6 +268,20 @@ used::
 These are informational only. They do not mean that anything is
 wrong with your system. To disable them, echo 4 (bit 2) into
 drop_caches.
+enable_soft_offline
+===================
+Control whether to soft offline memory pages that have (excessive) correctable
+memory errors. It is your call to choose between reliability (stay away from
+fragile physical memory) vs performance (brought by HugeTLB or transparent
+hugepages).
Could you expand upon the relevance of HugeTLB or THP in this documentation? I understand the need in some cases to soft offline memory after a number of correctable memory errors, but it's not clear how the performance implications play into this. The paragraph below goes into a
To be accurate, I should say soft offlining a transparent hugepage impacts performance, and soft offlining a HugeTLB hugepage impacts capacity. It may be clearer to first explain soft-offline's behaviors and implications, so that the user knows the cost of soft-offline, and then describe the behavior of enable_soft_offline:
Correctable memory errors are very common on servers. Soft-offline is the kernel's handling for memory pages that have (excessive) corrected memory errors.
For different types of page, soft-offline has different behaviors / costs:
- For a raw error page, soft-offline migrates the in-use page's content to a new raw page.
- For a page that is part of a transparent hugepage, soft-offline splits the transparent hugepage into raw pages, then migrates only the raw error page. As a result, the user is transparently backed by 1 less hugepage, impacting memory access performance.
- For a page that is part of a HugeTLB hugepage, soft-offline first migrates the entire HugeTLB hugepage, during which a free hugepage is consumed as the migration target. Then the original hugepage is dissolved into raw pages without compensation, reducing the capacity of the HugeTLB pool by 1.
It is the user's call to choose between reliability (staying away from fragile physical memory) and the performance / capacity implications in the transparent hugepage and HugeTLB cases.
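The HugeTLB capacity impact described above can be observed from the standard sysfs hugepage counters. A read-only sketch (no error injection; the 2 MB pool path is the common default but may be absent or empty on a given host):

```shell
#!/bin/sh
# Observe the HugeTLB pool counters that soft-offline of a HugeTLB page
# would shrink: nr_hugepages drops by 1 when a hugepage is dissolved.
pool=/sys/kernel/mm/hugepages/hugepages-2048kB
if [ -d "$pool" ]; then
    echo "2MB hugepages total: $(cat "$pool"/nr_hugepages)"
    echo "2MB hugepages free:  $(cat "$pool"/free_hugepages)"
else
    echo "no 2MB HugeTLB pool on this system"
fi
```

Running this before and after a soft-offline event on a HugeTLB page should show `nr_hugepages` reduced by 1.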
difference in the splitting behavior, are hugepage users the only ones that should be concerned with this?
If the cost of migrating a raw page is negligible, then yes, only hugepage users should be concerned and think about whether they should disable soft offline.
+When setting to 1, kernel attempts to soft offline the page when it thinks
+needed. For in-use page, page content will be migrated to a new page. If
+the oringinal hugepage is a HugeTLB hugepage, regardless of in-use or free,
s/oringinal/original/
To fix in v3.
+it will be dissolved into raw pages, and the capacity of the HugeTLB pool
+will reduce by 1. If the original hugepage is a transparent hugepage, it
+will be split into raw pages. When setting to 0, kernel won't attempt to
+soft offline the page. Its default value is 1.
This behavior is the same for all architectures?
Yes, enable_soft_offline has the same behavior for all architectures, and default=1.
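For completeness, assuming a kernel that carries this patch, the knob would be exercised like any other vm sysctl. A defensive sketch (the file does not exist on kernels without the patch):

```shell
#!/bin/sh
# Check, and optionally change, vm.enable_soft_offline.
f=/proc/sys/vm/enable_soft_offline
if [ -f "$f" ]; then
    echo "current value: $(cat "$f")"   # 1 = soft offline enabled (default)
    # To ignore soft-offline requests (e.g. from RAS CEC or the GHES driver):
    #   sysctl -w vm.enable_soft_offline=0
else
    echo "vm.enable_soft_offline not available on this kernel"
fi
```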
It may be worth mentioning that setting enable_soft_offline to 0 means:
- If RAS Correctable Errors Collector is running, its request to soft offline pages will be ignored.
- On ARM, the request to soft offline pages from the GHES driver will be ignored.
- On PARISC, the request to soft offline pages from the Page Deallocation Table will be ignored.
I can add these clarifications in v3 if they are valuable.