There are two major types of uncorrected error (UC):
- Action Required: The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
- Action Optional: The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this uncorrectable error.
On x86 platforms, these two types can easily be distinguished based on the MCA bank. On arm64, however, the memory failure flags for all UCs whose severity is GHES_SEV_RECOVERABLE are currently set to 0, i.e. Action Optional.
If a UC is detected by a background scrubber, it is clearly an Action Optional error. Other errors should conservatively be regarded as Action Required.
cper_sec_mem_err::error_type identifies the type of error that occurred when CPER_MEM_VALID_ERROR_TYPE is set. So, set the memory failure flags to 0 for Scrub Uncorrected Error (type 14), and to MF_ACTION_REQUIRED otherwise.
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
---
 drivers/acpi/apei/ghes.c | 10 ++++++++--
 include/linux/cper.h     |  3 +++
 2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 80ad530583c9..6c03059cbfc6 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -474,8 +474,14 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	if (sec_sev == GHES_SEV_CORRECTED &&
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
-	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE) {
+		if (mem_err->validation_bits & CPER_MEM_VALID_ERROR_TYPE)
+			flags = mem_err->error_type == CPER_MEM_SCRUB_UC ?
+					0 :
+					MF_ACTION_REQUIRED;
+		else
+			flags = MF_ACTION_REQUIRED;
+	}
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);

diff --git a/include/linux/cper.h b/include/linux/cper.h
index eacb7dd7b3af..b77ab7636614 100644
--- a/include/linux/cper.h
+++ b/include/linux/cper.h
@@ -235,6 +235,9 @@ enum {
 #define CPER_MEM_VALID_BANK_ADDRESS		0x100000
 #define CPER_MEM_VALID_CHIP_ID			0x200000
 
+#define CPER_MEM_SCRUB_CE			13
+#define CPER_MEM_SCRUB_UC			14
+
 #define CPER_MEM_EXT_ROW_MASK			0x3
 #define CPER_MEM_EXT_ROW_SHIFT			16
Hi,
Thanks for your patch.
FYI: the kernel test robot noticed that the stable kernel rule is not satisfied.
Rule: 'Cc: stable@vger.kernel.org' or 'commit <sha1> upstream.'
Subject: [PATCH] ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on action required events
Link: https://lore.kernel.org/stable/20221027042445.60108-1-xueshuai%40linux.aliba...
The check is based on https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
On Thu, Oct 27, 2022 at 6:25 AM Shuai Xue xueshuai@linux.alibaba.com wrote:
There are two major types of uncorrected error (UC):
- Action Required: The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
- Action Optional: The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this uncorrectable error.
On x86 platforms, these two types can easily be distinguished based on the MCA bank. On arm64, however, the memory failure flags for all UCs whose severity is GHES_SEV_RECOVERABLE are currently set to 0, i.e. Action Optional.
If a UC is detected by a background scrubber, it is clearly an Action Optional error. Other errors should conservatively be regarded as Action Required.
cper_sec_mem_err::error_type identifies the type of error that occurred when CPER_MEM_VALID_ERROR_TYPE is set. So, set the memory failure flags to 0 for Scrub Uncorrected Error (type 14), and to MF_ACTION_REQUIRED otherwise.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
I need input from the APEI reviewers on this.
Thanks!
 drivers/acpi/apei/ghes.c | 10 ++++++++--
 include/linux/cper.h     |  3 +++
 2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 80ad530583c9..6c03059cbfc6 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -474,8 +474,14 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	if (sec_sev == GHES_SEV_CORRECTED &&
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
-	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE) {
+		if (mem_err->validation_bits & CPER_MEM_VALID_ERROR_TYPE)
+			flags = mem_err->error_type == CPER_MEM_SCRUB_UC ?
+					0 :
+					MF_ACTION_REQUIRED;
+		else
+			flags = MF_ACTION_REQUIRED;
+	}
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);

diff --git a/include/linux/cper.h b/include/linux/cper.h
index eacb7dd7b3af..b77ab7636614 100644
--- a/include/linux/cper.h
+++ b/include/linux/cper.h
@@ -235,6 +235,9 @@ enum {
 #define CPER_MEM_VALID_BANK_ADDRESS		0x100000
 #define CPER_MEM_VALID_CHIP_ID			0x200000
 
+#define CPER_MEM_SCRUB_CE			13
+#define CPER_MEM_SCRUB_UC			14
+
 #define CPER_MEM_EXT_ROW_MASK			0x3
 #define CPER_MEM_EXT_ROW_SHIFT			16
-- 2.20.1.9.gb50a0d7
cper_sec_mem_err::error_type identifies the type of error that occurred when CPER_MEM_VALID_ERROR_TYPE is set. So, set the memory failure flags to 0 for Scrub Uncorrected Error (type 14), and to MF_ACTION_REQUIRED otherwise.
On x86 the "action required" cases are signaled by a synchronous machine check that is delivered before the instruction that is attempting to consume the uncorrected data retires. I.e., it is guaranteed that the uncorrected error has not been propagated because it is not visible in any architectural state.
APEI signaled errors don't fall into that category on x86 ... the uncorrected data could have been consumed and propagated long before the signaling used for APEI can alert the OS.
Does ARM deliver APEI signals synchronously?
If not, then this patch might deliver a false sense of security to applications about the state of uncorrected data in the system.
-Tony
On 2022/10/29 1:08 AM, Rafael J. Wysocki wrote:
On Thu, Oct 27, 2022 at 6:25 AM Shuai Xue xueshuai@linux.alibaba.com wrote:
There are two major types of uncorrected error (UC):
- Action Required: The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
- Action Optional: The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this uncorrectable error.
On x86 platforms, these two types can easily be distinguished based on the MCA bank. On arm64, however, the memory failure flags for all UCs whose severity is GHES_SEV_RECOVERABLE are currently set to 0, i.e. Action Optional.
If a UC is detected by a background scrubber, it is clearly an Action Optional error. Other errors should conservatively be regarded as Action Required.
cper_sec_mem_err::error_type identifies the type of error that occurred when CPER_MEM_VALID_ERROR_TYPE is set. So, set the memory failure flags to 0 for Scrub Uncorrected Error (type 14), and to MF_ACTION_REQUIRED otherwise.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
I need input from the APEI reviewers on this.
Thanks!
Hi, Rafael,
Sorry, I missed this email. Thank you for your quick reply. Let's discuss it with the reviewers.
Thank you.
Cheers, Shuai
 drivers/acpi/apei/ghes.c | 10 ++++++++--
 include/linux/cper.h     |  3 +++
 2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 80ad530583c9..6c03059cbfc6 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -474,8 +474,14 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	if (sec_sev == GHES_SEV_CORRECTED &&
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
-	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE) {
+		if (mem_err->validation_bits & CPER_MEM_VALID_ERROR_TYPE)
+			flags = mem_err->error_type == CPER_MEM_SCRUB_UC ?
+					0 :
+					MF_ACTION_REQUIRED;
+		else
+			flags = MF_ACTION_REQUIRED;
+	}
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);

diff --git a/include/linux/cper.h b/include/linux/cper.h
index eacb7dd7b3af..b77ab7636614 100644
--- a/include/linux/cper.h
+++ b/include/linux/cper.h
@@ -235,6 +235,9 @@ enum {
 #define CPER_MEM_VALID_BANK_ADDRESS		0x100000
 #define CPER_MEM_VALID_CHIP_ID			0x200000
 
+#define CPER_MEM_SCRUB_CE			13
+#define CPER_MEM_SCRUB_UC			14
+
 #define CPER_MEM_EXT_ROW_MASK			0x3
 #define CPER_MEM_EXT_ROW_SHIFT			16
-- 2.20.1.9.gb50a0d7
On 2022/10/29 1:25 AM, Luck, Tony wrote:
cper_sec_mem_err::error_type identifies the type of error that occurred when CPER_MEM_VALID_ERROR_TYPE is set. So, set the memory failure flags to 0 for Scrub Uncorrected Error (type 14), and to MF_ACTION_REQUIRED otherwise.
On x86 the "action required" cases are signaled by a synchronous machine check that is delivered before the instruction that is attempting to consume the uncorrected data retires. I.e., it is guaranteed that the uncorrected error has not been propagated because it is not visible in any architectural state.
On arm, if a 2-bit (uncorrectable) error is detected and the memory access has been architecturally executed, that error is considered "consumed". The CPU will take a synchronous error exception, signaled as a synchronous external abort (SEA), which is analogous to an MCE.
APEI signaled errors don't fall into that category on x86 ... the uncorrected data could have been consumed and propagated long before the signaling used for APEI can alert the OS.
Does ARM deliver APEI signals synchronously?
If not, then this patch might deliver a false sense of security to applications about the state of uncorrected data in the system.
Well, not always. There are many APEI notifications, such as SCI, GSIV, GPIO, SDEI, SEA, etc. Not all APEI notifications are synchronous; it depends on the hardware signal. As far as I know, if a UE is detected and consumed, a synchronous external abort is signaled to firmware, which then performs first-level triage and synchronously notifies the OS via an SDEI or SEA notification. On the other hand, if a CE is detected, an asynchronous interrupt is signaled and firmware can notify the OS via GPIO or GSIV.
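For illustration only (not part of any patch in this thread), a minimal sketch of how the notification type could be mapped to "synchronous delivery". Treating SDEI as synchronous is an assumption here and depends on how the platform firmware routes the error:

static bool ghes_notify_is_sync(u8 notify_type)
{
	/*
	 * SEA is raised in the context of the access that consumed the
	 * poison. Whether SDEI should also be treated as synchronous is
	 * platform dependent (an assumption in this sketch). The other
	 * types, e.g. SCI, GSIV and GPIO, arrive asynchronously via
	 * interrupt.
	 */
	switch (notify_type) {
	case ACPI_HEST_NOTIFY_SEA:
	case ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED:	/* SDEI */
		return true;
	default:
		return false;
	}
}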
Best Regards, Shuai
On 2022/11/2 7:53 PM, Shuai Xue wrote:
On 2022/10/29 1:25 AM, Luck, Tony wrote:
cper_sec_mem_err::error_type identifies the type of error that occurred when CPER_MEM_VALID_ERROR_TYPE is set. So, set the memory failure flags to 0 for Scrub Uncorrected Error (type 14), and to MF_ACTION_REQUIRED otherwise.
On x86 the "action required" cases are signaled by a synchronous machine check that is delivered before the instruction that is attempting to consume the uncorrected data retires. I.e., it is guaranteed that the uncorrected error has not been propagated because it is not visible in any architectural state.
On arm, if a 2-bit (uncorrectable) error is detected and the memory access has been architecturally executed, that error is considered "consumed". The CPU will take a synchronous error exception, signaled as a synchronous external abort (SEA), which is analogous to an MCE.
APEI signaled errors don't fall into that category on x86 ... the uncorrected data could have been consumed and propagated long before the signaling used for APEI can alert the OS.
Does ARM deliver APEI signals synchronously?
If not, then this patch might deliver a false sense of security to applications about the state of uncorrected data in the system.
Well, not always. There are many APEI notifications, such as SCI, GSIV, GPIO, SDEI, SEA, etc. Not all APEI notifications are synchronous; it depends on the hardware signal. As far as I know, if a UE is detected and consumed, a synchronous external abort is signaled to firmware, which then performs first-level triage and synchronously notifies the OS via an SDEI or SEA notification. On the other hand, if a CE is detected, an asynchronous interrupt is signaled and firmware can notify the OS via GPIO or GSIV.
Best Regards, Shuai
Hi, Tony,
A prefetch of data with a UE triggers an asynchronous interrupt on both x86 and arm64 platforms (CMCI on x86 and SPI on arm64), yet it does not belong to the scrub UE category. I have to admit that cper_sec_mem_err::error_type is not an appropriate basis for distinguishing the "action required" cases.
acpi_hest_generic_data::flags (UEFI spec section N.2.2) could be used to indicate Action Optional (Scrub/Prefetch).
Bit 5 – Latent error: If set, this flag indicates that action has been taken to ensure error containment (such as poisoning data), but the error has not been fully corrected and the data has not been consumed. System software may choose to take further corrective action before the data is consumed.
Our hardware team has submitted a proposal to UEFI community to add a new bit:
Bit 8 – sync flag: if set, this flag indicates that this event record is synchronous (e.g. a CPU core consumes poison data and then causes an instruction/data abort); if not set, this event record is asynchronous.
With bit 8, we will know it is "Action Required".
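For illustration, a minimal sketch of how ghes_handle_memory_failure() could set the flags using the existing bit 5 (CPER_SEC_LATENT_ERROR) as an Action Optional hint. This is only a sketch under that assumption, not what this thread ultimately settles on; once the proposed sync flag is accepted it would be checked in the same place:

	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE) {
		/*
		 * Bit 5 (latent error, CPER_SEC_LATENT_ERROR) means the error
		 * is contained and the data has not been consumed, i.e.
		 * Action Optional. Anything else is conservatively treated
		 * as Action Required in this sketch.
		 */
		if (gdata->flags & CPER_SEC_LATENT_ERROR)
			flags = 0;
		else
			flags = MF_ACTION_REQUIRED;
	}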
I will send a new patch set to rework GHES error handling after the proposal is accepted.
Thank you.
Best Regards Shuai
Hi, ALL,
I have rewritten the cover letter with the hope that the maintainers will truly understand the necessity of this patch set. Both Alibaba and Huawei have hit the same issue in production, and we hope it can be fixed ASAP.
changes since v7:
- rebase to Linux v6.6-rc2 (no code changed)
- rewrite the cover letter to explain the motivation of this patch set

changes since v6:
- add a more explicit error message, as suggested by Xiaofei
- pick up Reviewed-by tag from Xiaofei
- pick up internal Reviewed-by tag from Baolin

changes since v5, addressing comments from Kefeng:
- document the return value of memory_failure()
- drop redundant comments in the call site of memory_failure()
- make ghes_do_proc() void and handle the abnormal case within it
- pick up Reviewed-by tag from Kefeng Wang

changes since v4, addressing comments from Xiaofei:
- do a force kill only for abnormal sync errors

changes since v3, addressing comments from Xiaofei:
- do a force kill for abnormal memory failure errors such as invalid PA, unexpected severity, OOM, etc.
- pick up Tested-by tag from Ma Wupeng

changes since v2, addressing comments from Naoya:
- rename mce_task_work to sync_task_work
- drop the ACPI_HEST_NOTIFY_MCE case in is_hest_sync_notify()
- add steps to reproduce this problem in the cover letter

changes since v1:
- distinguish synchronous events by notify type
- Link: https://lore.kernel.org/lkml/20221206153354.92394-3-xueshuai@linux.alibaba.c...
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this error.
- Action Optional (AO): The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this error.
The main difference between AR and AO errors is that AR errors are synchronous events, while AO errors are asynchronous events. Synchronous exceptions, such as Machine Check Exception (MCE) on X86 and Synchronous External Abort (SEA) on Arm64, are signaled by the hardware when an error is detected and the memory access has architecturally been executed.
Currently, both synchronous and asynchronous errors are queued as AO errors and handled by a dedicated kernel thread in a work queue on the ARM64 platform. For synchronous errors, memory_failure() is synced using a cancel_work_sync trick to ensure that the corrupted page is unmapped and poisoned. Upon returning to user-space, the process resumes at the current instruction, triggering a page fault. As a result, the kernel sends a SIGBUS signal to the current process due to VM_FAULT_HWPOISON.
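For reference, the "cancel_work_sync trick" mentioned above is the existing memory_failure_queue_kick() helper (removed again by PATCH 2 below), which flushes the queued work from the task_work path before the task returns to user-space:

/*
 * Process memory_failure work queued on the specified CPU.
 * Used to avoid return-to-userspace racing with the memory_failure workqueue.
 */
void memory_failure_queue_kick(int cpu)
{
	struct memory_failure_cpu *mf_cpu;

	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
	cancel_work_sync(&mf_cpu->work);
	memory_failure_work_func(&mf_cpu->work);
}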
However, this trick is not always effective. This patch set improves the recovery process in three specific aspects:
1. Handle synchronous exceptions with proper si_code
ghes_handle_memory_failure() queues both synchronous and asynchronous errors with flags=0. The kernel then notifies the process by sending a SIGBUS signal in memory_failure() with the wrong si_code: BUS_MCEERR_AO is delivered to the affected user-space process instead of BUS_MCEERR_AR. User-space processes rely on the si_code to decide how to handle a memory failure.
For example, hwpoison-aware user-space processes use the si_code: BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR for 'action required' synchronous/late notifications. Specifically, when a SIGBUS with si_code BUS_MCEERR_AR is delivered to QEMU, it injects a vSEA into the guest kernel. In contrast, a SIGBUS with BUS_MCEERR_AO is ignored by QEMU. [1]
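To illustrate the user-space side (not part of this patch set), a minimal sketch of a hwpoison-aware SIGBUS handler distinguishing the two si_code values; the helper names and the reactions taken are assumptions for illustration only:

#define _GNU_SOURCE
#include <signal.h>
#include <unistd.h>

/* Minimal hwpoison-aware SIGBUS handler: react differently to AR and AO. */
static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
	static const char ar[] = "SIGBUS: BUS_MCEERR_AR (action required)\n";
	static const char ao[] = "SIGBUS: BUS_MCEERR_AO (action optional)\n";

	(void)sig;
	(void)ctx;

	if (si->si_code == BUS_MCEERR_AR) {
		/* Synchronous: the poison at si->si_addr was just consumed;
		 * e.g. QEMU injects a vSEA into the guest at this point. */
		write(STDERR_FILENO, ar, sizeof(ar) - 1);
	} else if (si->si_code == BUS_MCEERR_AO) {
		/* Asynchronous early notification: the data has not been
		 * consumed; recovery is optional and may be deferred. */
		write(STDERR_FILENO, ao, sizeof(ao) - 1);
	}
}

static void install_sigbus_handler(void)
{
	struct sigaction sa = { 0 };

	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;	/* populate si_code/si_addr */
	sigaction(SIGBUS, &sa, NULL);
}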
Fix it by setting the memory failure flags to MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
2. Handle abnormal memory_failure() failures to avoid an unnecessary reboot
If a process has the faulting page mapped but memory_failure() returns abnormally before try_to_unmap() (for example, when the faulting page is a KSM page), arm64 cannot rely on the page fault path to terminate the synchronous exception loop. [4]
This loop can potentially exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot. However, the kernel has the capability to recover from this error.
Fix it by performing a force kill when memory_failure() fails abnormally or when other abnormal synchronous errors occur, such as an invalid PA, unexpected severity, missing memory failure config support, an invalid GUID section, OOM, etc. (PATCH 2)
3. Handle memory_failure() in the context of the process that consumed the poison
When synchronous errors occur, memory_failure() assumes that the current process context is the one that consumed the poison.
For example, kill_accessing_process() takes the mmap lock of current->mm, walks the page tables to find the faulting virtual address, and sends SIGBUS to the current process with the error info. However, a kworker has no valid mm, resulting in a null-pointer dereference. I have fixed this in [3].
commit 77677cdbc2aa ("mm,hwpoison: check mm when killing accessing process")
Another example is that collect_procs()/kill_procs() walk the task list and only collect and send SIGBUS to the task that consumed the poison. But on the arm64 platform, memory_failure() is queued and handled by a dedicated kernel thread.
Fix it by queuing memory_failure() as task work that runs in the current execution context, so that SIGBUS is sent synchronously before ret_to_user. (PATCH 2)
** In summary, this patch set handles synchronous errors in task work with the proper si_code so that hwpoison-aware processes can recover from errors, and fixes (potentially) abnormal cases. **
Lv Ying and XiuQi from Huawei also proposed to address a similar problem [2][4]. Thanks to them for the discussion.
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single   vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 5 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 5) reported by einj_mem_uc indicates a BUS_MCEERR_AO error, which is not the fact.
After this patch set:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single   vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 4 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 4) reported by einj_mem_uc indicates a BUS_MCEERR_AR error, as expected.
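For reference, the si_code values printed by einj_mem_uc correspond to the SIGBUS si_code constants defined in include/uapi/asm-generic/siginfo.h:

#define BUS_MCEERR_AR	4	/* hardware memory error consumed on a machine check: action required */
#define BUS_MCEERR_AO	5	/* hardware memory error detected in process but not consumed: action optional */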
[1] Add ARMv8 RAS virtualization support in QEMU
    https://patchew.org/QEMU/20200512030609.19593-1-gengdongjiu@huawei.com/
[2] https://lore.kernel.org/lkml/20221205115111.131568-3-lvying6@huawei.com/
[3] https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com
[4] https://lore.kernel.org/lkml/20221209095407.383211-1-lvying6@huawei.com/
Shuai Xue (2):
  ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events
  ACPI: APEI: handle synchronous exceptions in task work

 arch/x86/kernel/cpu/mce/core.c |   9 +--
 drivers/acpi/apei/ghes.c       | 113 ++++++++++++++++++++++-----------
 include/acpi/ghes.h            |   3 -
 mm/memory-failure.c            |  17 +----
 4 files changed, 79 insertions(+), 63 deletions(-)
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
- Action Optional (AO): The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this uncorrectable error.
The essential difference between AR and AO errors is that AR is a synchronous event, while AO is an asynchronous event. The hardware will signal a synchronous exception (Machine Check Exception on X86 and Synchronous External Abort on Arm64) when an error is detected and the memory access has been architecturally executed.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification) and another for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, synchronous errors can be distinguished by their APEI notification type. For AR errors, the kernel kills the current process that is accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For AO errors, in early kill mode, the kernel notifies the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO. However, the GHES driver always sets mf_flags to 0, so all UCR errors are handled as AO errors in memory_failure().
To this end, set the memory failure flags to MF_ACTION_REQUIRED on synchronous events.
Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ef59d6ea16da..88178aa6222d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
 /*
  * This driver isn't really modular, however for the time being,
  * continuing to use module_param is the easiest way to remain
@@ -475,7 +489,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, bool sync)
 {
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
@@ -489,7 +503,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = sync ? MF_ACTION_REQUIRED : 0;
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);
@@ -497,9 +511,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	return false;
 }
 
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
+static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
+				     int sev, bool sync)
 {
 	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+	int flags = sync ? MF_ACTION_REQUIRED : 0;
 	bool queued = false;
 	int sec_sev, i;
 	char *p;
@@ -524,7 +540,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s
 		 * and don't filter out 'corrected' error here.
 		 */
 		if (is_cache && has_pa) {
-			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
+			queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
 			p += err_info->length;
 			continue;
 		}
@@ -645,6 +661,7 @@ static bool ghes_do_proc(struct ghes *ghes,
 	const guid_t *fru_id = &guid_null;
 	char *fru_text = "";
 	bool queued = false;
+	bool sync = is_hest_sync_notify(ghes);
 
 	sev = ghes_severity(estatus->error_severity);
 	apei_estatus_for_each_section(estatus, gdata) {
@@ -662,13 +679,13 @@ static bool ghes_do_proc(struct ghes *ghes,
 			atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err);
 
 			arch_apei_report_mem_error(sev, mem_err);
-			queued = ghes_handle_memory_failure(gdata, sev);
+			queued = ghes_handle_memory_failure(gdata, sev, sync);
 		} else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
 			ghes_handle_aer(gdata);
 		} else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-			queued = ghes_handle_arm_hw_error(gdata, sev);
+			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
 		} else {
 			void *err = acpi_hest_get_payload(gdata);
Hardware errors can be signaled by an asynchronous interrupt (e.g. when an error is detected by a background scrubber) or by a synchronous exception (e.g. when an uncorrected error is consumed). Both synchronous and asynchronous errors are queued and handled by a dedicated kthread in a workqueue.
Commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keeps track of whether memory_failure() work was queued, and makes task_work pending to flush out the workqueue so that the work for a synchronous error is processed before returning to user-space. The trick ensures that the corrupted page is unmapped and poisoned. After returning to user-space, the task restarts at the current instruction, which triggers a page fault in which the kernel sends SIGBUS to the current process due to VM_FAULT_HWPOISON.
However, memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by setting PF_MCE_EARLY at initialization. The kernel will then notify the process directly by sending a SIGBUS signal in memory failure, but with the wrong si_code: the actual user-space process is accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to that process instead of BUS_MCEERR_AR.
To this end, separate synchronous and asynchronous error handling into different paths, as the x86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into the workqueue to asynchronously handle memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.: do a force kill.
Then, for valid synchronous errors, the current context in memory failure belongs exactly to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
Fixes: 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 arch/x86/kernel/cpu/mce/core.c |  9 +---
 drivers/acpi/apei/ghes.c       | 84 +++++++++++++++++++++-------------
 include/acpi/ghes.h            |  3 --
 mm/memory-failure.c            | 17 ++-----
 4 files changed, 56 insertions(+), 57 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6f35f724cc14..1675ff77033d 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1334,17 +1334,10 @@ static void kill_me_maybe(struct callback_head *cb)
 		return;
 	}
 
-	/*
-	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
-	 * to the current process with the proper error info,
-	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
-	 *
-	 * In both cases, no further processing is required.
-	 */
 	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
 		return;
 
-	pr_err("Memory error not recovered");
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
 	kill_me_now(cb);
 }
 
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 88178aa6222d..014401a65ed5 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -450,28 +450,41 @@ static void ghes_clear_estatus(struct ghes *ghes,
 }
 
 /*
- * Called as task_work before returning to user-space.
- * Ensure any queued work has been done before we return to the context that
- * triggered the notification.
+ * struct sync_task_work - for synchronous RAS event
+ *
+ * @twork:                callback_head for task work
+ * @pfn:                  page frame number of corrupted page
+ * @flags:                fine tune action taken
+ *
+ * Structure to pass task work to be handled before
+ * ret_to_user via task_work_add().
  */
-static void ghes_kick_task_work(struct callback_head *head)
+struct sync_task_work {
+	struct callback_head twork;
+	u64 pfn;
+	int flags;
+};
+
+static void memory_failure_cb(struct callback_head *twork)
 {
-	struct acpi_hest_generic_status *estatus;
-	struct ghes_estatus_node *estatus_node;
-	u32 node_len;
+	int ret;
+	struct sync_task_work *twcb =
+		container_of(twork, struct sync_task_work, twork);
 
-	estatus_node = container_of(head, struct ghes_estatus_node, task_work);
-	if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
-		memory_failure_queue_kick(estatus_node->task_work_cpu);
+	ret = memory_failure(twcb->pfn, twcb->flags);
+	kfree(twcb);
 
-	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
-	node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus));
-	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
+	if (!ret || ret == -EHWPOISON || ret == -EOPNOTSUPP)
+		return;
+
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
+	force_sig(SIGBUS);
 }
 
 static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 {
 	unsigned long pfn;
+	struct sync_task_work *twcb;
 
 	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
 		return false;
@@ -484,6 +497,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 		return false;
 	}
 
+	if (flags == MF_ACTION_REQUIRED && current->mm) {
+		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
+		if (!twcb)
+			return false;
+
+		twcb->pfn = pfn;
+		twcb->flags = flags;
+		init_task_work(&twcb->twork, memory_failure_cb);
+		task_work_add(current, &twcb->twork, TWA_RESUME);
+		return true;
+	}
+
 	memory_failure_queue(pfn, flags);
 	return true;
 }
@@ -652,7 +677,7 @@ static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata,
 	schedule_work(&entry->work);
 }
 
-static bool ghes_do_proc(struct ghes *ghes,
+static void ghes_do_proc(struct ghes *ghes,
 			 const struct acpi_hest_generic_status *estatus)
 {
 	int sev, sec_sev;
@@ -696,7 +721,14 @@ static bool ghes_do_proc(struct ghes *ghes,
 		}
 	}
 
-	return queued;
+	/*
+	 * If no memory failure work is queued for abnormal synchronous
+	 * errors, do a force kill.
+	 */
+	if (sync && !queued) {
+		pr_err("Sending SIGBUS to current task due to memory error not recovered");
+		force_sig(SIGBUS);
+	}
 }
 
 static void __ghes_print_estatus(const char *pfx,
@@ -998,9 +1030,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
 	struct ghes_estatus_node *estatus_node;
 	struct acpi_hest_generic *generic;
 	struct acpi_hest_generic_status *estatus;
-	bool task_work_pending;
 	u32 len, node_len;
-	int ret;
 
 	llnode = llist_del_all(&ghes_estatus_llist);
 	/*
@@ -1015,25 +1045,16 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
 		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 		len = cper_estatus_len(estatus);
 		node_len = GHES_ESTATUS_NODE_LEN(len);
-		task_work_pending = ghes_do_proc(estatus_node->ghes, estatus);
+
+		ghes_do_proc(estatus_node->ghes, estatus);
+
 		if (!ghes_estatus_cached(estatus)) {
 			generic = estatus_node->generic;
 			if (ghes_print_estatus(NULL, generic, estatus))
 				ghes_estatus_cache_add(generic, estatus);
 		}
-
-		if (task_work_pending && current->mm) {
-			estatus_node->task_work.func = ghes_kick_task_work;
-			estatus_node->task_work_cpu = smp_processor_id();
-			ret = task_work_add(current, &estatus_node->task_work,
-					    TWA_RESUME);
-			if (ret)
-				estatus_node->task_work.func = NULL;
-		}
-
-		if (!estatus_node->task_work.func)
-			gen_pool_free(ghes_estatus_pool,
-				      (unsigned long)estatus_node, node_len);
+		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
+			      node_len);
 
 		llnode = next;
 	}
@@ -1094,7 +1115,6 @@ static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
 
 	estatus_node->ghes = ghes;
 	estatus_node->generic = ghes->generic;
-	estatus_node->task_work.func = NULL;
 	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 
 	if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) {
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index 3c8bba9f1114..e5e0c308d27f 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -35,9 +35,6 @@ struct ghes_estatus_node {
 	struct llist_node llnode;
 	struct acpi_hest_generic *generic;
 	struct ghes *ghes;
-
-	int task_work_cpu;
-	struct callback_head task_work;
 };
 
 struct ghes_estatus_cache {
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4d6e43c88489..80e1ea1cc56d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2163,7 +2163,9 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
  *
  * Return: 0 for successfully handled the memory error,
  *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
- *         < 0(except -EOPNOTSUPP) on failure.
+ *         -EHWPOISON for already sent SIGBUS to the current process with
+ *         the proper error info,
+ *         other negative error code on failure.
  */
 int memory_failure(unsigned long pfn, int flags)
 {
@@ -2445,19 +2447,6 @@ static void memory_failure_work_func(struct work_struct *work)
 	}
 }
 
-/*
- * Process memory_failure work queued on the specified CPU.
- * Used to avoid return-to-userspace racing with the memory_failure workqueue.
- */
-void memory_failure_queue_kick(int cpu)
-{
-	struct memory_failure_cpu *mf_cpu;
-
-	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
-	cancel_work_sync(&mf_cpu->work);
-	memory_failure_work_func(&mf_cpu->work);
-}
-
 static int __init memory_failure_init(void)
 {
 	struct memory_failure_cpu *mf_cpu;
On Tue Sep 19, 2023 at 5:21 AM EEST, Shuai Xue wrote:
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
- Action Optional (AO): The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this uncorrectable error.
The essential difference between AR and AO errors is that AR is a synchronous event, while AO is an asynchronous event. The hardware will signal a synchronous exception (Machine Check Exception on X86 and Synchronous External Abort on Arm64) when an error is detected and the memory access has been architecturally executed.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification) and another for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, synchronous errors can be distinguished by their APEI notification type. For AR errors, the kernel kills the current process that is accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For AO errors, in early kill mode, the kernel notifies the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO. However, the GHES driver always sets mf_flags to 0, so all UCR errors are handled as AO errors in memory_failure().
To this end, set the memory failure flags to MF_ACTION_REQUIRED on synchronous events.
Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

 drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ef59d6ea16da..88178aa6222d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
 /*
  * This driver isn't really modular, however for the time being,
  * continuing to use module_param is the easiest way to remain
@@ -475,7 +489,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, bool sync)
 {
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
@@ -489,7 +503,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = sync ? MF_ACTION_REQUIRED : 0;
Not my territory, but this branching looks a bit weird to my eyes, so I'm putting a comment here just in case.
What *if* the previous condition sets MF_SOFT_OFFLINE and this condition overwrites the value?
I know that earlier it could have been overwritten by zero.
Nor does the function comment have any explanation of why it is OK to overwrite like this.
Or, if these cannot happen simultaneously, why is there no immediate return after setting MF_SOFT_OFFLINE?
For someone like me, the function's logic is tediously hard to understand, tbh.
BR, Jarkko
On Tue Sep 19, 2023 at 5:21 AM EEST, Shuai Xue wrote:
Hardware errors can be signaled by an asynchronous interrupt (e.g. when an error is detected by a background scrubber) or by a synchronous exception (e.g. when an uncorrected error is consumed). Both synchronous and asynchronous errors are queued and handled by a dedicated kthread in a workqueue.
Commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keeps track of whether memory_failure() work was queued, and makes task_work pending to flush out the workqueue so that the work for a synchronous error is processed before returning to user-space. The trick ensures that the corrupted page is unmapped and poisoned. After returning to user-space, the task restarts at the current instruction, which triggers a page fault in which the kernel sends SIGBUS to the current process due to VM_FAULT_HWPOISON.
However, memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by setting PF_MCE_EARLY at initialization. The kernel will then notify the process directly by sending a SIGBUS signal in memory failure, but with the wrong si_code: the actual user-space process is accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to that process instead of BUS_MCEERR_AR.
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously handle memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.
Then, for valid synchronous errors, the current context in memory failure belongs exactly to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
Fixes: 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Did 7f17b4a121d0 actually break something that was not broken before?
If not, this is (afaik) not a bug fix.
BR, Jarkko
On 2023/9/25 22:43, Jarkko Sakkinen wrote:
On Tue Sep 19, 2023 at 5:21 AM EEST, Shuai Xue wrote:
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error has been detected and the processor has already consumed the memory. The OS must take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
- Action Optional (AO): The error was detected outside of processor execution context. Some data in memory is corrupted, but it has not yet been consumed. The OS may optionally take action to recover from this uncorrectable error.
The essential difference between AR and AO errors is that AR is a synchronous event, while AO is an asynchronous event. The hardware will signal a synchronous exception (Machine Check Exception on X86 and Synchronous External Abort on Arm64) when an error is detected and the memory access has been architecturally executed.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification) and another for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, synchronous errors can be distinguished by their APEI notification type. For AR errors, the kernel kills the current process that is accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For AO errors, in early kill mode, the kernel notifies the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO. However, the GHES driver always sets mf_flags to 0, so all UCR errors are handled as AO errors in memory_failure().
To this end, set the memory failure flags to MF_ACTION_REQUIRED on synchronous events.
Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

 drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ef59d6ea16da..88178aa6222d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
 /*
  * This driver isn't really modular, however for the time being,
  * continuing to use module_param is the easiest way to remain
@@ -475,7 +489,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, bool sync)
 {
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
@@ -489,7 +503,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = sync ? MF_ACTION_REQUIRED : 0;
Not my territory, but this branching looks a bit weird to my eyes, so I'm putting a comment here just in case.
What *if* the previous condition sets MF_SOFT_OFFLINE and this condition overwrites the value?
I know that earlier it could have been overwritten by zero.
Nor does the function comment have any explanation of why it is OK to overwrite like this.
Or, if these cannot happen simultaneously, why is there no immediate return after setting MF_SOFT_OFFLINE?
For someone like me, the function's logic is tediously hard to understand, tbh.
BR, Jarkko
Hi, Jarkko,
I hope the original source code can help to understand:
	/* iff following two events can be handled properly by now */
	if (sec_sev == GHES_SEV_CORRECTED &&
	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
		flags = MF_SOFT_OFFLINE;
	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
		flags = 0;

	if (flags != -1)
		return ghes_do_memory_failure(mem_err->physical_addr, flags);
The sec_sev of gdata is either GHES_SEV_CORRECTED or GHES_SEV_RECOVERABLE, so the two if-conditions are independent of each other and cannot be true simultaneously. ghes_do_memory_failure() then handles the two events with properly set flags.
Thanks.
Best Regards, Shuai
On 2023/9/25 23:00, Jarkko Sakkinen wrote:
On Tue Sep 19, 2023 at 5:21 AM EEST, Shuai Xue wrote:
Hardware errors can be signaled by an asynchronous interrupt (e.g. when an error is detected by a background scrubber) or by a synchronous exception (e.g. when an uncorrected error is consumed). Both synchronous and asynchronous errors are queued and handled by a dedicated kthread in a workqueue.
Commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keeps track of whether memory_failure() work was queued, and makes task_work pending to flush out the workqueue so that the work for a synchronous error is processed before returning to user-space. The trick ensures that the corrupted page is unmapped and poisoned. After returning to user-space, the task restarts at the current instruction, which triggers a page fault in which the kernel sends SIGBUS to the current process due to VM_FAULT_HWPOISON.
However, memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by setting PF_MCE_EARLY at initialization. The kernel will then notify the process directly by sending a SIGBUS signal in memory failure, but with the wrong si_code: the actual user-space process is accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to that process instead of BUS_MCEERR_AR.
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously handle memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.
Then, for valid synchronous errors, the current context in memory failure belongs exactly to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
Fixes: 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Did 7f17b4a121d0 actually break something that was not broken before?
If not, this is (afaik) not a bug fix.
Hi, Jarkko,
It did not. It keeps track of whether memory_failure() work was queued, and makes task_work pending to flush out the queue. But if no work is queued for a synchronous error due to the abnormal branches, it does not do a force kill of the current process, resulting in a hard lockup due to the exception loop.
It is fine with me to remove the bug fix tag if you insist on removing it.
Best Regards, Shuai
On Tue, Sep 19, 2023 at 10:21:27AM +0800, Shuai Xue wrote:
Hardware errors can be signaled by an asynchronous interrupt (e.g. when an error is detected by a background scrubber) or by a synchronous exception (e.g. when an uncorrected error is consumed). Both synchronous and asynchronous errors are queued and handled by a dedicated kthread in a workqueue.
Commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keeps track of whether memory_failure() work was queued, and makes task_work pending to flush out the workqueue so that the work for a synchronous error is processed before returning to user-space. The trick ensures that the corrupted page is unmapped and poisoned. After returning to user-space, the task restarts at the current instruction, which triggers a page fault in which the kernel sends SIGBUS to the current process due to VM_FAULT_HWPOISON.
However, memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by setting PF_MCE_EARLY at initialization. The kernel will then notify the process directly by sending a SIGBUS signal in memory failure, but with the wrong si_code: the actual user-space process is accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to that process instead of BUS_MCEERR_AR.
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously handle memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.
Then, for valid synchronous errors, the current context in memory failure belongs exactly to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
Fixes: 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
 arch/x86/kernel/cpu/mce/core.c |  9 +---
 drivers/acpi/apei/ghes.c       | 84 +++++++++++++++++++++-------------
 include/acpi/ghes.h            |  3 --
 mm/memory-failure.c            | 17 ++-----
 4 files changed, 56 insertions(+), 57 deletions(-)
...
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4d6e43c88489..80e1ea1cc56d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2163,7 +2163,9 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
  * Return: 0 for successfully handled the memory error,
  *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
- *         < 0(except -EOPNOTSUPP) on failure.
+ *         -EHWPOISON for already sent SIGBUS to the current process with
+ *         the proper error info,
The meaning of this comment is understood, but the sentence seems to be a little too long. Could you sort this out with bullet points (like below)?
 * Return values:
 *   0 - success
 *   -EOPNOTSUPP - hwpoison_filter() filtered the error event.
 *   -EHWPOISON - sent SIGBUS to the current process with the proper
 *                error info by kill_accessing_process().
 *   other negative values - failure
+ *         other negative error code on failure.
  */
 int memory_failure(unsigned long pfn, int flags)
 {
@@ -2445,19 +2447,6 @@ static void memory_failure_work_func(struct work_struct *work)
 	}
 }
 
-/*
- * Process memory_failure work queued on the specified CPU.
- * Used to avoid return-to-userspace racing with the memory_failure workqueue.
- */
-void memory_failure_queue_kick(int cpu)
-{
-	struct memory_failure_cpu *mf_cpu;
-
-	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
-	cancel_work_sync(&mf_cpu->work);
-	memory_failure_work_func(&mf_cpu->work);
-}
The declaration of memory_failure_queue_kick() still remains in include/linux/mm.h, so you can remove it together.
Thanks, Naoya Horiguchi
On 2023/10/3 16:28, Naoya Horiguchi wrote:
On Tue, Sep 19, 2023 at 10:21:27AM +0800, Shuai Xue wrote:
Hardware errors can be signaled by an asynchronous interrupt (e.g. when an error is detected by a background scrubber) or by a synchronous exception (e.g. when an uncorrected error is consumed). Both synchronous and asynchronous errors are queued and handled by a dedicated kthread in a workqueue.
Commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keeps track of whether memory_failure() work was queued, and makes task_work pending to flush out the workqueue so that the work for a synchronous error is processed before returning to user-space. The trick ensures that the corrupted page is unmapped and poisoned. After returning to user-space, the task restarts at the current instruction, which triggers a page fault in which the kernel sends SIGBUS to the current process due to VM_FAULT_HWPOISON.
However, memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by setting PF_MCE_EARLY at initialization. The kernel will then notify the process directly by sending a SIGBUS signal in memory failure, but with the wrong si_code: the actual user-space process is accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to that process instead of BUS_MCEERR_AR.
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously handle memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.
Then, for valid synchronous errors, the current context in memory failure belongs exactly to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
Fixes: 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
arch/x86/kernel/cpu/mce/core.c | 9 +--- drivers/acpi/apei/ghes.c | 84 +++++++++++++++++++++------------- include/acpi/ghes.h | 3 -- mm/memory-failure.c | 17 ++----- 4 files changed, 56 insertions(+), 57 deletions(-)
...
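(The drivers/acpi/apei/ghes.c hunks are elided above. In outline, the synchronous path they add uses the task_work API roughly as in the simplified sketch below; it mirrors the full diff quoted later in this thread rather than introducing anything new.)

/* Simplified sketch of the synchronous path added to drivers/acpi/apei/ghes.c. */
struct sync_task_work {
	struct callback_head twork;
	u64 pfn;
	int flags;
};

static void memory_failure_cb(struct callback_head *twork)
{
	struct sync_task_work *twcb =
		container_of(twork, struct sync_task_work, twork);
	int ret = memory_failure(twcb->pfn, twcb->flags);

	kfree(twcb);
	if (ret && ret != -EHWPOISON && ret != -EOPNOTSUPP)
		force_sig(SIGBUS);	/* recovery failed: kill the consumer */
}

/* In ghes_do_memory_failure(), for a synchronous (action required) error,
 * instead of calling memory_failure_queue(): */
	twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
	if (!twcb)
		return false;
	twcb->pfn = pfn;
	twcb->flags = flags;
	init_task_work(&twcb->twork, memory_failure_cb);
	task_work_add(current, &twcb->twork, TWA_RESUME);	/* runs before ret_to_user */
	return true;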
diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 4d6e43c88489..80e1ea1cc56d 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2163,7 +2163,9 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
- Return: 0 for successfully handled the memory error,
-EOPNOTSUPP for hwpoison_filter() filtered the error event,
< 0(except -EOPNOTSUPP) on failure.
-EHWPOISON for already sent SIGBUS to the current process with
the proper error info,
The meaning of this comment is understood, but the sentence seems to be a little too long. Could you sort this out with bullet points (like below)?
- Return values:
- 0 - success
- -EOPNOTSUPP - hwpoison_filter() filtered the error event.
- -EHWPOISON - sent SIGBUS to the current process with the proper
error info by kill_accessing_process().
- other negative values - failure
Of course, will do it.
 * other negative error code on failure.
 */
int memory_failure(unsigned long pfn, int flags) { @@ -2445,19 +2447,6 @@ static void memory_failure_work_func(struct work_struct *work) } } -/*
- Process memory_failure work queued on the specified CPU.
- Used to avoid return-to-userspace racing with the memory_failure workqueue.
- */
-void memory_failure_queue_kick(int cpu) -{
- struct memory_failure_cpu *mf_cpu;
- mf_cpu = &per_cpu(memory_failure_cpu, cpu);
- cancel_work_sync(&mf_cpu->work);
- memory_failure_work_func(&mf_cpu->work);
-}
The declaration of memory_failure_queue_kick() still remains in include/linux/mm.h, so you can remove it together.
Good catch, will remove it too.
Thanks, Naoya Horiguchi
Thank you for valuable comments.
Best Regards, Shuai
Hi, ALL,
I have rewritten the cover letter with the hope that the maintainer will truly understand the necessity of this patch set. Both Alibaba and Huawei have met the same issue in our products, and we hope it can be fixed ASAP.
## Changes Log
changes since v8:
- remove the bug fix tag of patch 2 (per Jarkko Sakkinen)
- remove the declaration of memory_failure_queue_kick (per Naoya Horiguchi)
- rewrite the return value comments of memory_failure (per Naoya Horiguchi)

changes since v7:
- rebase to Linux v6.6-rc2 (no code changed)
- rewritten the cover letter to explain the motivation of this patchset

changes since v6:
- add more explicit error message suggested by Xiaofei
- pick up reviewed-by tag from Xiaofei
- pick up internal reviewed-by tag from Baolin

changes since v5 by addressing comments from Kefeng:
- document return value of memory_failure()
- drop redundant comments in call site of memory_failure()
- make ghes_do_proc void and handle abnormal case within it
- pick up reviewed-by tag from Kefeng Wang

changes since v4 by addressing comments from Xiaofei:
- do a force kill only for abnormal sync errors

changes since v3 by addressing comments from Xiaofei:
- do a force kill for abnormal memory failure error such as invalid PA, unexpected severity, OOM, etc
- pick up tested-by tag from Ma Wupeng

changes since v2 by addressing comments from Naoya:
- rename mce_task_work to sync_task_work
- drop ACPI_HEST_NOTIFY_MCE case in is_hest_sync_notify()
- add steps to reproduce this problem in cover letter

changes since v1:
- synchronous events by notify type
- Link: https://lore.kernel.org/lkml/20221206153354.92394-3-xueshuai@linux.alibaba.c...
## Cover Letter
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error is detected and the processor has already consumed the memory. The OS is required to take action (for example, offline the failure page/kill the failure thread) to recover this error.
- Action Optional (AO): The error is detected out of processor execution context. Some data in the memory is corrupted, but the data has not been consumed. The OS may optionally take action to recover this error.
The main difference between AR and AO errors is that AR errors are synchronous events, while AO errors are asynchronous events. Synchronous exceptions, such as Machine Check Exception (MCE) on X86 and Synchronous External Abort (SEA) on Arm64, are signaled by the hardware when an error is detected and the memory access has architecturally been executed.
Currently, both synchronous and asynchronous errors are queued as AO errors and handled by a dedicated kernel thread in a work queue on the ARM64 platform. For synchronous errors, memory_failure() is synced using a cancel_work_sync trick to ensure that the corrupted page is unmapped and poisoned. Upon returning to user-space, the process resumes at the current instruction, triggering a page fault. As a result, the kernel sends a SIGBUS signal to the current process due to VM_FAULT_HWPOISON.
However, this trick is not always effective, so this patch set improves the recovery process in three specific aspects:
1. Handle synchronous exceptions with proper si_code
ghes_handle_memory_failure() queues both synchronous and asynchronous errors with flags=0. The kernel then notifies the process by sending a SIGBUS signal in memory_failure() with the wrong si_code: BUS_MCEERR_AO is delivered to the user-space process that actually consumed the poison, instead of BUS_MCEERR_AR. User-space processes rely on the si_code to decide how to handle the memory failure.
For example, hwpoison-aware user-space processes use the si_code: BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR for 'action required' synchronous/late notifications. Specifically, when a SIGBUS with BUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA into the guest kernel. In contrast, a SIGBUS with BUS_MCEERR_AO will be ignored by QEMU.[1]
Fix it by setting the memory failure flags as MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
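For illustration only (this is not QEMU's code; the program and handler below are made up for this cover letter, while BUS_MCEERR_AR/BUS_MCEERR_AO and the PR_MCE_KILL prctl are the real interfaces), a hwpoison-aware process typically opts in to early notifications and then tells the two cases apart by si_code:

#define _GNU_SOURCE
#include <signal.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *si, void *ucontext)
{
	if (si->si_code == BUS_MCEERR_AR) {
		/* Poison was consumed by this task: stop using si->si_addr
		 * immediately (QEMU would inject a vSEA into the guest here). */
		_exit(EXIT_FAILURE);
	} else if (si->si_code == BUS_MCEERR_AO) {
		/* Poison detected but not yet consumed: may be handled
		 * lazily or ignored. */
	}
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = sigbus_handler,
				.sa_flags = SA_SIGINFO };

	sigaction(SIGBUS, &sa, NULL);
	/* Equivalent of setting PF_MCE_EARLY on this task. */
	prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0);
	pause();
	return 0;
}

With the current GHES behavior on arm64, a handler like this only ever sees BUS_MCEERR_AO, even when the task itself consumed the poison.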
2. Handle abnormal memory_failure() failures to avoid an unnecessary reboot
If a process has the faulting page mapped, but memory_failure() returns abnormally before try_to_unmap() (for example, because the faulting page is mapped as a KSM page), arm64 cannot use the page-fault path to terminate the synchronous exception loop.[4]
This loop can potentially exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot. However, the kernel has the capability to recover from this error.
Fix it by performing a force kill when memory_failure() fails abnormally or when other abnormal synchronous errors occur. These errors can include situations such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc. (PATCH 2)
3. Handle memory_failure() in the context of the process that is consuming the poison
When a synchronous error occurs, memory_failure() assumes that the current process context is exactly the one consuming the poison.
For example, kill_accessing_process() holds the mmap lock of current->mm, does a pagetable walk to find the faulting virtual address, and sends SIGBUS to the current process with the error info. However, the mm of a kworker is not valid, resulting in a null-pointer dereference. I have fixed this in [3].
commit 77677cdbc2aa ("mm,hwpoison: check mm when killing accessing process")
Another example: collect_procs()/kill_procs() walk the task list and only collect and send SIGBUS to the task consuming the poison. But memory_failure() is queued and handled by a dedicated kernel thread on the arm64 platform.
Fix it by queuing memory_failure() as a task_work that runs in the current execution context to synchronously send SIGBUS before ret_to_user. (PATCH 2)
** In summary, this patch set handles synchronous errors in task_work with the proper si_code so that hwpoison-aware processes can recover from errors, and fixes (potentially) abnormal cases. **
Lv Ying and XiuQi from Huawei also proposed to address a similar problem [2][4]. Thanks for the discussions with them.
## Steps to Reproduce This Problem
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single  vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 5 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 5) from einj_mem_uc indicates that it is a BUS_MCEERR_AO error, which is not the case.
After this patch set:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single  vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 4 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 4) from einj_mem_uc indicates that it is a BUS_MCEERR_AR error, as we expected.
[1] Add ARMv8 RAS virtualization support in QEMU https://patchew.org/QEMU/20200512030609.19593-1-gengdongjiu@huawei.com/ [2] https://lore.kernel.org/lkml/20221205115111.131568-3-lvying6@huawei.com/ [3] https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com [4] https://lore.kernel.org/lkml/20221209095407.383211-1-lvying6@huawei.com/
Shuai Xue (2): ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events ACPI: APEI: handle synchronous exceptions in task work
arch/x86/kernel/cpu/mce/core.c | 9 +-- drivers/acpi/apei/ghes.c | 113 ++++++++++++++++++++++----------- include/acpi/ghes.h | 3 - include/linux/mm.h | 1 - mm/memory-failure.c | 22 ++----- 5 files changed, 82 insertions(+), 66 deletions(-)
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error is detected and the processor has already consumed the memory. The OS is required to take action (for example, offline the failure page/kill the failure thread) to recover this uncorrectable error.
- Action Optional (AO): The error is detected out of processor execution context. Some data in the memory is corrupted, but the data has not been consumed. The OS may optionally take action to recover this uncorrectable error.
The essential difference between AR and AO errors is that AR is a synchronous event, while AO is an asynchronous event. The hardware will signal a synchronous exception (Machine Check Exception on X86 and Synchronous External Abort on Arm64) when an error is detected and the memory access has been architecturally executed.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification), or one for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, we can distinguish synchronous errors by the APEI notification type. For AR errors, the kernel will kill the current process accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For AO errors, the kernel will notify the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0, so all UCR errors are handled as AO errors in memory_failure().
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support") Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com --- drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++------ 1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c index ef59d6ea16da..88178aa6222d 100644 --- a/drivers/acpi/apei/ghes.c +++ b/drivers/acpi/apei/ghes.c @@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes) return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2; }
+/* + * A platform may describe one error source for the handling of synchronous + * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI + * or External Interrupt). On x86, the HEST notifications are always + * asynchronous, so only SEA on ARM is delivered as a synchronous + * notification. + */ +static inline bool is_hest_sync_notify(struct ghes *ghes) +{ + u8 notify_type = ghes->generic->notify.type; + + return notify_type == ACPI_HEST_NOTIFY_SEA; +} + /* * This driver isn't really modular, however for the time being, * continuing to use module_param is the easiest way to remain @@ -475,7 +489,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags) }
static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, - int sev) + int sev, bool sync) { int flags = -1; int sec_sev = ghes_severity(gdata->error_severity); @@ -489,7 +503,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED)) flags = MF_SOFT_OFFLINE; if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE) - flags = 0; + flags = sync ? MF_ACTION_REQUIRED : 0;
if (flags != -1) return ghes_do_memory_failure(mem_err->physical_addr, flags); @@ -497,9 +511,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, return false; }
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev) +static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, + int sev, bool sync) { struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata); + int flags = sync ? MF_ACTION_REQUIRED : 0; bool queued = false; int sec_sev, i; char *p; @@ -524,7 +540,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s * and don't filter out 'corrected' error here. */ if (is_cache && has_pa) { - queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0); + queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags); p += err_info->length; continue; } @@ -645,6 +661,7 @@ static bool ghes_do_proc(struct ghes *ghes, const guid_t *fru_id = &guid_null; char *fru_text = ""; bool queued = false; + bool sync = is_hest_sync_notify(ghes);
sev = ghes_severity(estatus->error_severity); apei_estatus_for_each_section(estatus, gdata) { @@ -662,13 +679,13 @@ static bool ghes_do_proc(struct ghes *ghes, atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err);
arch_apei_report_mem_error(sev, mem_err); - queued = ghes_handle_memory_failure(gdata, sev); + queued = ghes_handle_memory_failure(gdata, sev, sync); } else if (guid_equal(sec_type, &CPER_SEC_PCIE)) { ghes_handle_aer(gdata); } else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) { - queued = ghes_handle_arm_hw_error(gdata, sev); + queued = ghes_handle_arm_hw_error(gdata, sev, sync); } else { void *err = acpi_hest_get_payload(gdata);
Hardware errors can be signaled by an asynchronous interrupt, e.g. when an error is detected by a background scrubber, or by a synchronous exception, e.g. when an uncorrected error is consumed. Both synchronous and asynchronous errors are queued and handled by a dedicated kthread in a workqueue.
commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keeps track of whether memory_failure() work was queued, and makes a task_work pending to flush out the workqueue so that the work for a synchronous error is processed before returning to user-space. The trick ensures that the corrupted page is unmapped and poisoned. After returning to user-space, the task restarts at the current instruction, which triggers a page fault, and the kernel then sends SIGBUS to the current process due to VM_FAULT_HWPOISON.
However, memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by setting PF_MCE_EARLY at initialization. The kernel will then directly notify the process by sending a SIGBUS signal in memory failure with the wrong si_code: the error is actually consumed by the user-space process accessing the corrupt memory location, but because its memory failure work is handled in a kthread context, kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to that process instead of BUS_MCEERR_AR.
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into the workqueue to asynchronously handle the memory failure.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.: do a force kill.
Then, for valid synchronous errors, the context in which memory_failure() runs belongs exactly to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com --- arch/x86/kernel/cpu/mce/core.c | 9 +--- drivers/acpi/apei/ghes.c | 84 +++++++++++++++++++++------------- include/acpi/ghes.h | 3 -- include/linux/mm.h | 1 - mm/memory-failure.c | 22 +++------ 5 files changed, 59 insertions(+), 60 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 6f35f724cc14..1675ff77033d 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1334,17 +1334,10 @@ static void kill_me_maybe(struct callback_head *cb) return; }
- /* - * -EHWPOISON from memory_failure() means that it already sent SIGBUS - * to the current process with the proper error info, - * -EOPNOTSUPP means hwpoison_filter() filtered the error event, - * - * In both cases, no further processing is required. - */ if (ret == -EHWPOISON || ret == -EOPNOTSUPP) return;
- pr_err("Memory error not recovered"); + pr_err("Sending SIGBUS to current task due to memory error not recovered"); kill_me_now(cb); }
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c index 88178aa6222d..014401a65ed5 100644 --- a/drivers/acpi/apei/ghes.c +++ b/drivers/acpi/apei/ghes.c @@ -450,28 +450,41 @@ static void ghes_clear_estatus(struct ghes *ghes, }
/* - * Called as task_work before returning to user-space. - * Ensure any queued work has been done before we return to the context that - * triggered the notification. + * struct sync_task_work - for synchronous RAS event + * + * @twork: callback_head for task work + * @pfn: page frame number of corrupted page + * @flags: fine tune action taken + * + * Structure to pass task work to be handled before + * ret_to_user via task_work_add(). */ -static void ghes_kick_task_work(struct callback_head *head) +struct sync_task_work { + struct callback_head twork; + u64 pfn; + int flags; +}; + +static void memory_failure_cb(struct callback_head *twork) { - struct acpi_hest_generic_status *estatus; - struct ghes_estatus_node *estatus_node; - u32 node_len; + int ret; + struct sync_task_work *twcb = + container_of(twork, struct sync_task_work, twork);
- estatus_node = container_of(head, struct ghes_estatus_node, task_work); - if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE)) - memory_failure_queue_kick(estatus_node->task_work_cpu); + ret = memory_failure(twcb->pfn, twcb->flags); + kfree(twcb);
- estatus = GHES_ESTATUS_FROM_NODE(estatus_node); - node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus)); - gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len); + if (!ret || ret == -EHWPOISON || ret == -EOPNOTSUPP) + return; + + pr_err("Sending SIGBUS to current task due to memory error not recovered"); + force_sig(SIGBUS); }
static bool ghes_do_memory_failure(u64 physical_addr, int flags) { unsigned long pfn; + struct sync_task_work *twcb;
if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE)) return false; @@ -484,6 +497,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags) return false; }
+ if (flags == MF_ACTION_REQUIRED && current->mm) { + twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC); + if (!twcb) + return false; + + twcb->pfn = pfn; + twcb->flags = flags; + init_task_work(&twcb->twork, memory_failure_cb); + task_work_add(current, &twcb->twork, TWA_RESUME); + return true; + } + memory_failure_queue(pfn, flags); return true; } @@ -652,7 +677,7 @@ static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata, schedule_work(&entry->work); }
-static bool ghes_do_proc(struct ghes *ghes, +static void ghes_do_proc(struct ghes *ghes, const struct acpi_hest_generic_status *estatus) { int sev, sec_sev; @@ -696,7 +721,14 @@ static bool ghes_do_proc(struct ghes *ghes, } }
- return queued; + /* + * If no memory failure work is queued for abnormal synchronous + * errors, do a force kill. + */ + if (sync && !queued) { + pr_err("Sending SIGBUS to current task due to memory error not recovered"); + force_sig(SIGBUS); + } }
static void __ghes_print_estatus(const char *pfx, @@ -998,9 +1030,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work) struct ghes_estatus_node *estatus_node; struct acpi_hest_generic *generic; struct acpi_hest_generic_status *estatus; - bool task_work_pending; u32 len, node_len; - int ret;
llnode = llist_del_all(&ghes_estatus_llist); /* @@ -1015,25 +1045,16 @@ static void ghes_proc_in_irq(struct irq_work *irq_work) estatus = GHES_ESTATUS_FROM_NODE(estatus_node); len = cper_estatus_len(estatus); node_len = GHES_ESTATUS_NODE_LEN(len); - task_work_pending = ghes_do_proc(estatus_node->ghes, estatus); + + ghes_do_proc(estatus_node->ghes, estatus); + if (!ghes_estatus_cached(estatus)) { generic = estatus_node->generic; if (ghes_print_estatus(NULL, generic, estatus)) ghes_estatus_cache_add(generic, estatus); } - - if (task_work_pending && current->mm) { - estatus_node->task_work.func = ghes_kick_task_work; - estatus_node->task_work_cpu = smp_processor_id(); - ret = task_work_add(current, &estatus_node->task_work, - TWA_RESUME); - if (ret) - estatus_node->task_work.func = NULL; - } - - if (!estatus_node->task_work.func) - gen_pool_free(ghes_estatus_pool, - (unsigned long)estatus_node, node_len); + gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, + node_len);
llnode = next; } @@ -1094,7 +1115,6 @@ static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
estatus_node->ghes = ghes; estatus_node->generic = ghes->generic; - estatus_node->task_work.func = NULL; estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) { diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h index 3c8bba9f1114..e5e0c308d27f 100644 --- a/include/acpi/ghes.h +++ b/include/acpi/ghes.h @@ -35,9 +35,6 @@ struct ghes_estatus_node { struct llist_node llnode; struct acpi_hest_generic *generic; struct ghes *ghes; - - int task_work_cpu; - struct callback_head task_work; };
struct ghes_estatus_cache { diff --git a/include/linux/mm.h b/include/linux/mm.h index bf5d0b1b16f4..3ce9e4371659 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3835,7 +3835,6 @@ enum mf_flags { int mf_dax_kill_procs(struct address_space *mapping, pgoff_t index, unsigned long count, int mf_flags); extern int memory_failure(unsigned long pfn, int flags); -extern void memory_failure_queue_kick(int cpu); extern int unpoison_memory(unsigned long pfn); extern void shake_page(struct page *p); extern atomic_long_t num_poisoned_pages __read_mostly; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 4d6e43c88489..0d02f8a0b556 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2161,9 +2161,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags, * Must run in process context (e.g. a work queue) with interrupts * enabled and no spinlocks held. * - * Return: 0 for successfully handled the memory error, - * -EOPNOTSUPP for hwpoison_filter() filtered the error event, - * < 0(except -EOPNOTSUPP) on failure. + * Return values: + * 0 - success + * -EOPNOTSUPP - hwpoison_filter() filtered the error event. + * -EHWPOISON - sent SIGBUS to the current process with the proper + * error info by kill_accessing_process(). + * other negative values - failure */ int memory_failure(unsigned long pfn, int flags) { @@ -2445,19 +2448,6 @@ static void memory_failure_work_func(struct work_struct *work) } }
-/* - * Process memory_failure work queued on the specified CPU. - * Used to avoid return-to-userspace racing with the memory_failure workqueue. - */ -void memory_failure_queue_kick(int cpu) -{ - struct memory_failure_cpu *mf_cpu; - - mf_cpu = &per_cpu(memory_failure_cpu, cpu); - cancel_work_sync(&mf_cpu->work); - memory_failure_work_func(&mf_cpu->work); -} - static int __init memory_failure_init(void) { struct memory_failure_cpu *mf_cpu;
Hi, ALL,
Gentle ping.
Best Regards, Shuai
On 2023/10/7 15:28, Shuai Xue wrote:
Hi, ALL,
I have rewritten the cover letter with the hope that the maintainer will truly understand the necessity of this patch. Both Alibaba and Huawei met the same issue in products, and we hope it could be fixed ASAP.
## Changes Log
changes since v8:
- remove the bug fix tag of patch 2 (per Jarkko Sakkinen)
- remove the declaration of memory_failure_queue_kick (per Naoya Horiguchi)
- rewrite the return value comments of memory_failure (per Naoya Horiguchi)
changes since v7:
- rebase to Linux v6.6-rc2 (no code changed)
- rewritten the cover letter to explain the motivation of this patchset
changes since v6:
- add more explicit error message suggested by Xiaofei
- pick up reviewed-by tag from Xiaofei
- pick up internal reviewed-by tag from Baolin
changes since v5 by addressing comments from Kefeng:
- document return value of memory_failure()
- drop redundant comments in call site of memory_failure()
- make ghes_do_proc void and handle abnormal case within it
- pick up reviewed-by tag from Kefeng Wang
changes since v4 by addressing comments from Xiaofei:
- do a force kill only for abnormal sync errors
changes since v3 by addressing comments from Xiaofei:
- do a force kill for abnormal memory failure error such as invalid PA,
unexpected severity, OOM, etc
- pick up tested-by tag from Ma Wupeng
changes since v2 by addressing comments from Naoya:
- rename mce_task_work to sync_task_work
- drop ACPI_HEST_NOTIFY_MCE case in is_hest_sync_notify()
- add steps to reproduce this problem in cover letter
changes since v1:
- synchronous events by notify type
- Link: https://lore.kernel.org/lkml/20221206153354.92394-3-xueshuai@linux.alibaba.c...
## Cover Letter
There are two major types of uncorrected recoverable (UCR) errors :
- Action Required (AR): The error is detected and the processor has already consumed the memory. The OS is required to take action (for example, offline the failure page/kill the failure thread) to recover this error.
- Action Optional (AO): The error is detected out of processor execution context. Some data in the memory is corrupted, but the data has not been consumed. The OS may optionally take action to recover this error.
The main difference between AR and AO errors is that AR errors are synchronous events, while AO errors are asynchronous events. Synchronous exceptions, such as Machine Check Exception (MCE) on X86 and Synchronous External Abort (SEA) on Arm64, are signaled by the hardware when an error is detected and the memory access has architecturally been executed.
Currently, both synchronous and asynchronous errors are queued as AO errors and handled by a dedicated kernel thread in a work queue on the ARM64 platform. For synchronous errors, memory_failure() is synced using a cancel_work_sync trick to ensure that the corrupted page is unmapped and poisoned. Upon returning to user-space, the process resumes at the current instruction, triggering a page fault. As a result, the kernel sends a SIGBUS signal to the current process due to VM_FAULT_HWPOISON.
However, this trick is not always effective, so this patch set improves the recovery process in three specific aspects:
- Handle synchronous exceptions with proper si_code
ghes_handle_memory_failure() queue both synchronous and asynchronous errors with flag=0. Then the kernel will notify the process by sending a SIGBUS signal in memory_failure() with wrong si_code: BUS_MCEERR_AO to the actual user-space process instead of BUS_MCEERR_AR. The user-space processes rely on the si_code to distinguish to handle memory failure.
For example, hwpoison-aware user-space processes use the si_code: BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR for 'action required' synchronous/late notifications. Specifically, when a signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA to Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored by QEMU.[1]
Fix it by setting the memory failure flags as MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
- Handle memory_failure() abnormal fails to avoid a unnecessary reboot
If process mapping fault page, but memory_failure() abnormal return before try_to_unmap(), for example, the fault page process mapping is KSM page. In this case, arm64 cannot use the page fault process to terminate the synchronous exception loop.[4]
This loop can potentially exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot. However, kernel has the capability to recover from this error.
Fix it by performing a force kill when memory_failure() abnormal fails or when other abnormal synchronous errors occur. These errors can include situations such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc. (PATCH 2)
- Handle memory_failure() in current process context which consuming poison
When a synchronous error occurs, memory_failure() assumes that the current process context is exactly the one consuming the poison.
For example, kill_accessing_process() holds mmap locking of current->mm, does pagetable walk to find the error virtual address, and sends SIGBUS to the current process with error info. However, the mm of kworker is not valid, resulting in a null-pointer dereference. I have fixed this in[3].
commit 77677cdbc2aa mm,hwpoison: check mm when killing accessing process
Another example is that collect_procs()/kill_procs() walk the task list, only collect and send sigbus to task which consuming poison. But memory_failure() is queued and handled by a dedicated kernel thread on arm64 platform.
Fix it by queuing memory_failure() as a task work which runs in current execution context to synchronously send SIGBUS before ret_to_user. (PATCH 2)
** In summary, this patch set handles synchronous errors in task work with proper si_code so that hwpoison-aware process can recover from errors, and fixes (potentially) abnormal cases. **
Lv Ying and XiuQi from Huawei also proposed to address similar problem[2][4]. Acknowledge to discussion with them.
## Steps to Reproduce This Problem
To reproduce this problem:
# STEP1: enable early kill mode #sysctl -w vm.memory_failure_early_kill=1 vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error #einj_mem_uc single 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400 injecting ... triggering ... signal 7 code 5 addr 0xffffb0d75000 page not present Test passed
The si_code (code 5) from einj_mem_uc indicates that it is a BUS_MCEERR_AO error, which is not the case.
After this patch set:
# STEP1: enable early kill mode #sysctl -w vm.memory_failure_early_kill=1 vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error #einj_mem_uc single 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400 injecting ... triggering ... signal 7 code 4 addr 0xffffb0d75000 page not present Test passed
The si_code (code 4) from einj_mem_uc indicates that it is BUS_MCEERR_AR error as we expected.
[1] Add ARMv8 RAS virtualization support in QEMU https://patchew.org/QEMU/20200512030609.19593-1-gengdongjiu@huawei.com/ [2] https://lore.kernel.org/lkml/20221205115111.131568-3-lvying6@huawei.com/ [3] https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com [4] https://lore.kernel.org/lkml/20221209095407.383211-1-lvying6@huawei.com/
Shuai Xue (2): ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events ACPI: APEI: handle synchronous exceptions in task work
arch/x86/kernel/cpu/mce/core.c | 9 +-- drivers/acpi/apei/ghes.c | 113 ++++++++++++++++++++++----------- include/acpi/ghes.h | 3 - include/linux/mm.h | 1 - mm/memory-failure.c | 22 ++----- 5 files changed, 82 insertions(+), 66 deletions(-)
On Sat, Oct 07, 2023 at 03:28:16PM +0800, Shuai Xue wrote:
However, this trick is not always be effective
So far so good.
What's missing here is why "this trick" is not always effective.
Basically to explain what exactly the problem is.
For example, hwpoison-aware user-space processes use the si_code: BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR for 'action required' synchronous/late notifications. Specifically, when a signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA to Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored by QEMU.[1]
Fix it by seting memory failure flags as MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
So you're fixing qemu by "fixing" the kernel?
This doesn't make any sense.
Make errors which are ACPI_HEST_NOTIFY_SEA type return MF_ACTION_REQUIRED so that it *happens* to fix your use case.
Sounds like a lot of nonsense to me.
What is the issue here you're trying to solve?
- Handle memory_failure() abnormal fails to avoid a unnecessary reboot
If process mapping fault page, but memory_failure() abnormal return before try_to_unmap(), for example, the fault page process mapping is KSM page. In this case, arm64 cannot use the page fault process to terminate the synchronous exception loop.[4]
This loop can potentially exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot. However, kernel has the capability to recover from this error.
Fix it by performing a force kill when memory_failure() abnormal fails or when other abnormal synchronous errors occur.
Just like that?
Without giving the process the opportunity to even save its other data?
So this all is still very confusing, patches definitely need splitting and this whole thing needs restraint.
You go and do this: you split *each* issue you're addressing into a separate patch and explain it like this:
--- 1. Prepare the context for the explanation briefly.
2. Explain the problem at hand.
3. "It happens because of <...>"
4. "Fix it by doing X"
5. "(Potentially do Y)." ---
and each patch explains *exactly* *one* issue, what happens, why it happens and just the fix for it and *why* it is needed.
Otherwise, this is unreviewable.
Thx.
On 2023/11/23 23:07, Borislav Petkov wrote:
Hi, Borislav,
Thank you for your reply and advice.
On Sat, Oct 07, 2023 at 03:28:16PM +0800, Shuai Xue wrote:
However, this trick is not always be effective
So far so good.
What's missing here is why "this trick" is not always effective.
Basically to explain what exactly the problem is.
I think the main point is that this trick is not effective for AR errors, because:
- an AR error consumed by the current process is deferred to be handled in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
- another page fault is unnecessary; we can send SIGBUS to the current process in the first Synchronous External Abort (SEA) on arm64 (analogous to a Machine Check Exception on x86)
For example, hwpoison-aware user-space processes use the si_code: BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR for 'action required' synchronous/late notifications. Specifically, when a signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA to Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored by QEMU.[1]
Fix it by seting memory failure flags as MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
So you're fixing qemu by "fixing" the kernel?
This doesn't make any sense.
I just give an example that the user-space process *really* relies on the si_code of the signal to handle hardware errors.
Make errors which are ACPI_HEST_NOTIFY_SEA type return MF_ACTION_REQUIRED so that it *happens* to fix your use case.
Sounds like a lot of nonsense to me.
What is the issue here you're trying to solve?
The SIGBUS si_codes defined in include/uapi/asm-generic/siginfo.h say:

/* hardware memory error consumed on a machine check: action required */
#define BUS_MCEERR_AR 4
/* hardware memory error detected in process but not consumed: action optional */
#define BUS_MCEERR_AO 5
When a synchronous error is consumed by the guest, the kernel should send a signal with BUS_MCEERR_AR instead of BUS_MCEERR_AO.
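For reference, the si_code choice is made in kill_proc() in mm/memory-failure.c; roughly (an abridged sketch, not a verbatim copy of the upstream code), only an MF_ACTION_REQUIRED failure handled in the consuming task's own context yields BUS_MCEERR_AR:

static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
{
	struct task_struct *t = tk->tsk;
	short addr_lsb = tk->size_shift;
	int ret = 0;

	if ((flags & MF_ACTION_REQUIRED) && (t == current))
		ret = force_sig_mceerr(BUS_MCEERR_AR,
				       (void __user *)tk->addr, addr_lsb);
	else
		/* Other tasks sharing the page get the "action optional"
		 * code (early kill mode / PF_MCE_EARLY). */
		ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)tk->addr,
				      addr_lsb, t);
	return ret;
}

Because the GHES path hands the work to a kworker, the consuming task is never "current" there, so it always ends up in the BUS_MCEERR_AO branch.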
- Handle memory_failure() abnormal fails to avoid a unnecessary reboot
If process mapping fault page, but memory_failure() abnormal return before try_to_unmap(), for example, the fault page process mapping is KSM page. In this case, arm64 cannot use the page fault process to terminate the synchronous exception loop.[4]
This loop can potentially exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot. However, kernel has the capability to recover from this error.
Fix it by performing a force kill when memory_failure() abnormal fails or when other abnormal synchronous errors occur.
Just like that?
Without giving the process the opportunity to even save its other data?
Exactly.
So this all is still very confusing, patches definitely need splitting and this whole thing needs restraint.
You go and do this: you split *each* issue you're addressing into a separate patch and explain it like this:
Prepare the context for the explanation briefly.
Explain the problem at hand.
"It happens because of <...>"
"Fix it by doing X"
"(Potentially do Y)."
and each patch explains *exactly* *one* issue, what happens, why it happens and just the fix for it and *why* it is needed.
Otherwise, this is unreviewable.
Thank you for your valuable suggestion. I will split the patches and resubmit a new patch set.
Thx.
Best Regards, Shuai
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
- an AR error consumed by current process is deferred to handle in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
On x86? ARM?
Please point to the exact code flow.
- another page fault is not unnecessary, we can send sigbus to current process in the first Synchronous External Abort SEA on arm64 (analogy Machine Check Exception on x86)
I have no clue what that means. What page fault?
I just give an example that the user space process *really* relys on the si_code of signal to handle hardware errors
No, don't give examples.
Explain what the exact problem is you're seeing, in your use case, point to the code and then state how you think it should be fixed and why.
Right now your text is "all over the place" and I have no clue what you even want.
The SIGBUS si_codes defined in include/uapi/asm-generic/siginfo.h says:
/* hardware memory error consumed on a machine check: action required */
#define BUS_MCEERR_AR 4
/* hardware memory error detected in process but not consumed: action optional */
#define BUS_MCEERR_AO 5
When a synchronous error is consumed by Guest, the kernel should send a signal with BUS_MCEERR_AR instead of BUS_MCEERR_AO.
Can you drop this "synchronous" bla and concentrate on the error *severity*?
I think you want to say that there are some types of errors for which error handling needs to happen immediately and for some reason that doesn't happen.
Which errors are those? Types?
Why do you need them to be handled immediately?
Exactly.
No, not exactly. Why is it ok to do that? What are the implications of this?
Is immediate killing the right decision?
Is this ok for *every* possible kernel running out there - not only for your use case?
And so on and so on...
On 2023/11/25 20:10, Borislav Petkov wrote:
Hi, Borislav,
Thank you for your reply, and sorry for the confusion I made. Please see my reply inline.
Best Regards, Shuai
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
- an AR error consumed by current process is deferred to handle in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
On x86? ARM?
Please point to the exact code flow.
An AR error consumed by the current process is deferred to be handled in a dedicated kernel thread on the ARM platform. The AR error is handled in the flow below:
-----------------------------------------------------------------------------
[usr space task einj_mem_uc consumed data poison, CPU 3]               STEP 0
-----------------------------------------------------------------------------
[ghes_sdei_critical_callback: current einj_mem_uc, CPU 3]              STEP 1
ghes_sdei_critical_callback
  => __ghes_sdei_callback
    => ghes_in_nmi_queue_one_entry                // peek and read estatus
  => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
[ghes_sdei_critical_callback: return]
-----------------------------------------------------------------------------
[ghes_proc_in_irq: current einj_mem_uc, CPU 3]                         STEP 2
=> ghes_do_proc
  => ghes_handle_memory_failure
    => ghes_do_memory_failure
      => memory_failure_queue                     // put work task on current CPU
        => if (kfifo_put(&mf_cpu->fifo, entry))
               schedule_work_on(smp_processor_id(), &mf_cpu->work);
=> task_work_add(current, &estatus_node->task_work, TWA_RESUME);
[ghes_proc_in_irq: return]
-----------------------------------------------------------------------------
// kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag           STEP 3
[memory_failure_work_func: current kworker, CPU 3]
=> memory_failure_work_func(&mf_cpu->work)
  => while kfifo_get(&mf_cpu->fifo, &entry);      // until get no work
    => memory_failure(entry.pfn, entry.flags);
-----------------------------------------------------------------------------
[ghes_kick_task_work: current einj_mem_uc, other cpu]                  STEP 4
=> memory_failure_queue_kick
  => cancel_work_sync                // wait for memory_failure_work_func to finish
    => memory_failure_work_func(&mf_cpu->work)
      => kfifo_get(&mf_cpu->fifo, &entry);        // no work
-----------------------------------------------------------------------------
[einj_mem_uc resumes at the same PC, triggers a page fault]            STEP 5
-----------------------------------------------------------------------------
STEP 0: A user-space task, named einj_mem_uc, consumes a poison. The firmware notifies the hardware error to the kernel through SDEI (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() raises an irq_work to handle hardware errors in IRQ context.
STEP 2: In IRQ context, ghes_proc_in_irq() queues the memory failure work on the current CPU in a workqueue and adds a task_work to sync with the workqueue.
STEP 3: The kworker preempts the currently running thread and gets CPU 3. Then memory_failure() is processed in the kworker.
STEP 4: ghes_kick_task_work() is called as task_work to ensure any queued work has been done before returning to user-space.
STEP 5: Upon returning to user-space, the task einj_mem_uc resumes at the current instruction; because the poisoned page was unmapped by memory_failure() in step 3, a page fault will be triggered.
memory_failure() assumes that it runs in the current context on both the x86 and ARM platforms.
For example, in memory_failure() in mm/memory-failure.c:
if (flags & MF_ACTION_REQUIRED) {
	folio = page_folio(p);
	res = kill_accessing_process(current, folio_pfn(folio), flags);
}
- another page fault is not unnecessary, we can send sigbus to current process in the first Synchronous External Abort SEA on arm64 (analogy Machine Check Exception on x86)
I have no clue what that means. What page fault?
I mean the page fault in step 5. We can simplify the above flow by queuing memory_failure() as a task_work for AR errors in step 3 directly.
I just give an example that the user space process *really* relys on the si_code of signal to handle hardware errors
No, don't give examples.
Explain what the exact problem is you're seeing, in your use case, point to the code and then state how you think it should be fixed and why.
Right now your text is "all over the place" and I have no clue what you even want.
Ok, got it. Thank you.
The SIGBUS si_codes defined in include/uapi/asm-generic/siginfo.h says:
/* hardware memory error consumed on a machine check: action required */
#define BUS_MCEERR_AR 4
/* hardware memory error detected in process but not consumed: action optional */
#define BUS_MCEERR_AO 5
When a synchronous error is consumed by Guest, the kernel should send a signal with BUS_MCEERR_AR instead of BUS_MCEERR_AO.
Can you drop this "synchronous" bla and concentrate on the error *severity*?
I think you want to say that there are some types of errors for which error handling needs to happen immediately and for some reason that doesn't happen.
Which errors are those? Types?
Why do you need them to be handled immediately?
Well, the severities defined on the x86 and ARM platforms are quite different. I guess you mean the taxonomy of producer error types.
- X86: Software recoverable action required (SRAR)
A UCR error that *requires* system software to take a recovery action on this processor *before scheduling another stream of execution on this processor*. (15.6.3 UCR Error Classification in Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3)
- ARM: Recoverable state (UER)
The PE determines that software *must* take action to locate and repair the error to successfully recover execution. This might be because the exception was taken before the error was architecturally consumed by the PE, at the point when the PE was not able to make correct progress without either consuming the error or *otherwise making the state of the PE unrecoverable*. (2.3.2 PE error state classification in Arm RAS Supplement https://documentation-service.arm.com/static/63185614f72fad1903828eda)
I think above two types of error need to be handled immediately.
Exactly.
No, not exactly. Why is it ok to do that? What are the implications of this?
Is immediate killing the right decision?
Is this ok for *every* possible kernel running out there - not only for your use case?
And so on and so on...
I don't have a clear answer here. I guess the poison data only affects the user-space task that triggers the exception. A panic is not necessary.
On the x86 platform, the current error handling of memory_failure() in kill_me_maybe() is to just forcibly send a SIGBUS.
kill_me_maybe():
	ret = memory_failure(pfn, flags);
	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
		return;

	pr_err("Memory error not recovered");
	kill_me_now(cb);
Do you have any comments or suggestions about this? I don't change the x86 behavior.
For the arm64 platform, in step 3 of the above flow, memory_failure_work_func(), the call site of memory_failure(), does not handle the return code of memory_failure(). I just add the same behavior.
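For reference, memory_failure_work_func() currently looks roughly like this (abridged from mm/memory-failure.c, approximate rather than verbatim); note that the return value of memory_failure() is simply dropped:

static void memory_failure_work_func(struct work_struct *work)
{
	struct memory_failure_cpu *mf_cpu;
	struct memory_failure_entry entry = { 0, };
	unsigned long proc_flags;
	int gotten;

	mf_cpu = container_of(work, struct memory_failure_cpu, work);
	for (;;) {
		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
		gotten = kfifo_get(&mf_cpu->fifo, &entry);
		spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
		if (!gotten)
			break;
		if (entry.flags & MF_SOFT_OFFLINE)
			soft_offline_page(entry.pfn, entry.flags);
		else
			memory_failure(entry.pfn, entry.flags); /* return value ignored */
	}
}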
Moving James to To:
On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
- an AR error consumed by current process is deferred to handle in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
On x86? ARM?
Please point to the exact code flow.
An AR error consumed by current process is deferred to handle in a dedicated kernel thread on ARM platform. The AR error is handled in bellow flow:
[usr space task einj_mem_uc consumd data poison, CPU 3] STEP 0
[ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1 ghes_sdei_critical_callback => __ghes_sdei_callback => ghes_in_nmi_queue_one_entry // peak and read estatus => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work [ghes_sdei_critical_callback: return]
[ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2 => ghes_do_proc => ghes_handle_memory_failure => ghes_do_memory_failure => memory_failure_queue // put work task on current CPU => if (kfifo_put(&mf_cpu->fifo, entry)) schedule_work_on(smp_processor_id(), &mf_cpu->work); => task_work_add(current, &estatus_node->task_work, TWA_RESUME); [ghes_proc_in_irq: return]
// kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3 [memory_failure_work_func: current kworker, CPU 3] => memory_failure_work_func(&mf_cpu->work) => while kfifo_get(&mf_cpu->fifo, &entry); // until get no work => memory_failure(entry.pfn, entry.flags);
From the comment above that function:
 * The function is primarily of use for corruptions that
 * happen outside the current execution context (e.g. when
 * detected by a background scrubber)
 *
 * Must run in process context (e.g. a work queue) with interrupts
 * enabled and no spinlocks held.
[ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4 => memory_failure_queue_kick => cancel_work_sync - waiting memory_failure_work_func finish => memory_failure_work_func(&mf_cpu->work) => kfifo_get(&mf_cpu->fifo, &entry); // no work
[einj_mem_uc resume at the same PC, trigger a page fault STEP 5
STEP 0: A user space task, named einj_mem_uc consume a poison. The firmware notifies hardware error to kernel through is SDEI (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() rasie a irq_work to handle hardware errors in IRQ context
STEP2: In IRQ context, ghes_proc_in_irq() queues memory failure work on current CPU in workqueue and add task work to sync with the workqueue.
STEP3: The kworker preempts the current running thread and get CPU 3. Then memory_failure() is processed in kworker.
See above.
STEP4: ghes_kick_task_work() is called as task_work to ensure any queued workqueue has been done before returning to user-space.
STEP5: Upon returning to user-space, the task einj_mem_uc resumes at the current instruction, because the poison page is unmapped by memory_failure() in step 3, so a page fault will be triggered.
memory_failure() assumes that it runs in the current context on both x86 and ARM platform.
for example: memory_failure() in mm/memory-failure.c:
if (flags & MF_ACTION_REQUIRED) { folio = page_folio(p); res = kill_accessing_process(current, folio_pfn(folio), flags); }
And?
Do you see the check above it?
if (TestSetPageHWPoison(p)) {
test_and_set_bit() returns true only when the page was poisoned already.
 * This function is intended to handle "Action Required" MCEs on already
 * hardware poisoned pages. They could happen, for example, when
 * memory_failure() failed to unmap the error page at the first call, or
 * when multiple local machine checks happened on different CPUs.
And that's kill_accessing_process().
So AFAIU, the kworker running memory_failure() would only mark the page as poison.
The killing happens when memory_failure() runs again and the process touches the page again.
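Putting the two pieces together, the early check in memory_failure() reads roughly like this (an abridged sketch, not a verbatim copy of the upstream code):

	if (TestSetPageHWPoison(p)) {
		pr_err("%#lx: already hardware poisoned\n", pfn);
		res = -EHWPOISON;
		if (flags & MF_ACTION_REQUIRED)
			res = kill_accessing_process(current, pfn, flags);
		goto unlock_mutex;
	}

which is the "runs again" case described above.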
But I'd let James confirm here.
I still don't know what you're fixing here.
Is this something you're encountering on some machine or you simply stared at code?
What does that
"Both Alibaba and Huawei met the same issue in products, and we hope it could be fixed ASAP."
mean?
What did you meet?
What was the problem?
I still note that you're avoiding answering the question what the issue is and if you keep avoiding it, I'll ignore this whole thread.
On 2023/11/30 02:54, Borislav Petkov wrote:
Moving James to To:
On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
- an AR error consumed by current process is deferred to handle in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
On x86? ARM?
Please point to the exact code flow.
An AR error consumed by current process is deferred to handle in a dedicated kernel thread on ARM platform. The AR error is handled in bellow flow:
[usr space task einj_mem_uc consumd data poison, CPU 3] STEP 0
[ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1 ghes_sdei_critical_callback => __ghes_sdei_callback => ghes_in_nmi_queue_one_entry // peak and read estatus => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work [ghes_sdei_critical_callback: return]
[ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2 => ghes_do_proc => ghes_handle_memory_failure => ghes_do_memory_failure => memory_failure_queue // put work task on current CPU => if (kfifo_put(&mf_cpu->fifo, entry)) schedule_work_on(smp_processor_id(), &mf_cpu->work); => task_work_add(current, &estatus_node->task_work, TWA_RESUME); [ghes_proc_in_irq: return]
// kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3 [memory_failure_work_func: current kworker, CPU 3] => memory_failure_work_func(&mf_cpu->work) => while kfifo_get(&mf_cpu->fifo, &entry); // until get no work => memory_failure(entry.pfn, entry.flags);
From the comment above that function:
- The function is primarily of use for corruptions that
- happen outside the current execution context (e.g. when
- detected by a background scrubber)
- Must run in process context (e.g. a work queue) with interrupts
- enabled and no spinlocks held.
Hi, Borislav,
Thank you for your comments.
But we are talking about an Action Required error; it does happen *inside the current execution context*. The Action Required error does not match that function comment.
[ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4 => memory_failure_queue_kick => cancel_work_sync - waiting memory_failure_work_func finish => memory_failure_work_func(&mf_cpu->work) => kfifo_get(&mf_cpu->fifo, &entry); // no work
[einj_mem_uc resume at the same PC, trigger a page fault STEP 5
STEP 0: A user space task, named einj_mem_uc consume a poison. The firmware notifies hardware error to kernel through is SDEI (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() rasie a irq_work to handle hardware errors in IRQ context
STEP2: In IRQ context, ghes_proc_in_irq() queues memory failure work on current CPU in workqueue and add task work to sync with the workqueue.
STEP3: The kworker preempts the current running thread and get CPU 3. Then memory_failure() is processed in kworker.
See above.
STEP4: ghes_kick_task_work() is called as task_work to ensure any queued workqueue has been done before returning to user-space.
STEP5: Upon returning to user-space, the task einj_mem_uc resumes at the current instruction, because the poison page is unmapped by memory_failure() in step 3, so a page fault will be triggered.
memory_failure() assumes that it runs in the current context on both x86 and ARM platform.
for example: memory_failure() in mm/memory-failure.c:
if (flags & MF_ACTION_REQUIRED) { folio = page_folio(p); res = kill_accessing_process(current, folio_pfn(folio), flags); }
And?
Do you see the check above it?
if (TestSetPageHWPoison(p)) {
test_and_set_bit() returns true only when the page was poisoned already.
- This function is intended to handle "Action Required" MCEs on already
- hardware poisoned pages. They could happen, for example, when
- memory_failure() failed to unmap the error page at the first call, or
- when multiple local machine checks happened on different CPUs.
And that's kill_accessing_process().
So AFAIU, the kworker running memory_failure() would only mark the page as poison.
The killing happens when memory_failure() runs again and the process touches the page again.
When an Action Required error occurs, it triggers an MCE-like exception (SEA). In the first call of memory_failure(), it will poison the page. If it fails to unmap the error page, the user-space task resumes at the current PC and triggers another SEA exception; the second call of memory_failure() then runs into kill_accessing_process(), which does nothing and just returns -EFAULT. As a result, a third SEA exception will be triggered. Finally, an exception loop happens, resulting in a hard-lockup panic.
But I'd let James confirm here.
I still don't know what you're fixing here.
On the ARM64 platform, when an Action Required error occurs, the kernel should send SIGBUS with si_code BUS_MCEERR_AR instead of BUS_MCEERR_AO. (This is also the subject of this thread.)
Is this something you're encountering on some machine or you simply stared at code?
I met the wrong si_code problem on a Yitian 710 machine, which is based on the ARM64 platform. And I think it is general across the ARM64 platform.
To reproduce this problem:
# STEP1: enable early kill mode #sysctl -w vm.memory_failure_early_kill=1 vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error #einj_mem_uc single 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400 injecting ... triggering ... signal 7 code 5 addr 0xffffb0d75000 page not present Test passed
The si_code (code 5) from einj_mem_uc indicates that it is a BUS_MCEERR_AO error, which is not what actually happened.
After this patch set:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single   vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 4 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 4) from einj_mem_uc indicates that it is a BUS_MCEERR_AR error, as we expected.
What does that
"Both Alibaba and Huawei met the same issue in products, and we hope it could be fixed ASAP."
mean?
What did you meet?
What was the problem?
We both got the wrong si_code in the SIGBUS delivered by the kernel on the ARM64 platform.
The VMM in our product relies on the si_code of SIGBUS to handle memory failure in userspace.
- For BUS_MCEERR_AO, we regard the corruption as happening *outside the current execution context*, e.g. detected by a background scrubber; the VMM will ignore the error and the VM will not be killed immediately.
- For BUS_MCEERR_AR, we regard the corruption as happening *inside the current execution context*, e.g. when a data poison is consumed; the VMM will kill the VM immediately to avoid any further potential data propagation.
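For illustration only, a minimal user-space sketch (not our VMM's actual code) of a hwpoison-aware SIGBUS handler that makes this distinction via si_code; the prctl() call is what "setting PF_MCE_EARLY" refers to, and depending on the libc the BUS_MCEERR_* constants may need _GNU_SOURCE:

#include <signal.h>
#include <sys/prctl.h>

static volatile sig_atomic_t stop_vm;

/*
 * Sketch: BUS_MCEERR_AR = poison consumed in our execution context,
 * BUS_MCEERR_AO = out-of-band corruption that has not been consumed yet.
 */
static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
	if (si->si_code == BUS_MCEERR_AR)
		stop_vm = 1;	/* kill/reset the guest immediately */
	/* for BUS_MCEERR_AO the page at si->si_addr can be replaced lazily */
}

static void install_sigbus_handler(void)
{
	struct sigaction sa = { 0 };

	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);

	/* opt in to early (AO) notifications, i.e. set PF_MCE_EARLY */
	prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0);
}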
I still note that you're avoiding answering the question what the issue is and if you keep avoiding it, I'll ignore this whole thread.
Sorry, Borislav, thank you for your patience and time. I really appreciate that you are involved in reviewing this patchset. But I have to say that is not true; I am not avoiding anything. I tried my best to answer every comment you raised and to give the details of the ARM RAS specifics and code flow.
Best Regards, Shuai
FTR, this is starting to make sense, thanks for explaining.
Replying only to this one for now:
On Thu, Nov 30, 2023 at 10:58:53AM +0800, Shuai Xue wrote:
To reproduce this problem:
# STEP1: enable early kill mode #sysctl -w vm.memory_failure_early_kill=1 vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error
So this is for ARM folks to deal with, BUT:
A consumed uncorrectable error on x86 means panic. On some hw like on AMD, that error doesn't even get seen by the OS but the hw does something called syncflood to prevent further error propagation. So there's no any action required - the hw does that.
But I'd like to hear from ARM folks whether consuming an uncorrectable error even lets software run. Dunno.
Thx.
Hi Boris, Shuai,
On 29/11/2023 18:54, Borislav Petkov wrote:
On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
- an AR error consumed by current process is deferred to handle in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
On x86? ARM?
Please point to the exact code flow.
An AR error consumed by the current process is deferred to be handled in a dedicated kernel thread on the ARM platform. The AR error is handled in the flow below:
Please don't think of errors as "action required" - that's a user-space signal code. If the page could be fixed by memory-failure(), you may never get a signal. (all this was the fix for always sending an action-required signal)
I assume you mean the CPU accessed a poisoned location and took a synchronous error.
[usr space task einj_mem_uc consumed data poison, CPU 3]            STEP 0

[ghes_sdei_critical_callback: current einj_mem_uc, CPU 3]           STEP 1
  ghes_sdei_critical_callback
  => __ghes_sdei_callback
     => ghes_in_nmi_queue_one_entry  // peek and read estatus
  => irq_work_queue(&ghes_proc_irq_work)  <=> ghes_proc_in_irq  // irq_work
[ghes_sdei_critical_callback: return]

[ghes_proc_in_irq: current einj_mem_uc, CPU 3]                      STEP 2
  => ghes_do_proc
     => ghes_handle_memory_failure
        => ghes_do_memory_failure
           => memory_failure_queue  // put work task on current CPU
              => if (kfifo_put(&mf_cpu->fifo, entry))
                     schedule_work_on(smp_processor_id(), &mf_cpu->work);
     => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
[ghes_proc_in_irq: return]

// kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag        STEP 3
[memory_failure_work_func: current kworker, CPU 3]
  => memory_failure_work_func(&mf_cpu->work)
     => while kfifo_get(&mf_cpu->fifo, &entry);  // until get no work
        => memory_failure(entry.pfn, entry.flags);
From the comment above that function:
- The function is primarily of use for corruptions that
- happen outside the current execution context (e.g. when
- detected by a background scrubber)
- Must run in process context (e.g. a work queue) with interrupts
- enabled and no spinlocks held.
[ghes_kick_task_work: current einj_mem_uc, other cpu]               STEP 4
  => memory_failure_queue_kick
     => cancel_work_sync  // waiting for memory_failure_work_func to finish
        => memory_failure_work_func(&mf_cpu->work)
           => kfifo_get(&mf_cpu->fifo, &entry);  // no work

[einj_mem_uc resumes at the same PC, triggers a page fault]         STEP 5
STEP 0: A user space task named einj_mem_uc consumes a poison. The firmware notifies the hardware error to the kernel through SDEI (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
STEP 1: The einj_mem_uc task running on CPU 3 is interrupted. irq_work_queue() raises an irq_work to handle the hardware error in IRQ context.
STEP 2: In IRQ context, ghes_proc_in_irq() queues memory failure work on the current CPU in the workqueue and adds task work to sync with the workqueue.
STEP 3: The kworker preempts the currently running thread and gets CPU 3. Then memory_failure() is processed in the kworker.
See above.
STEP 4: ghes_kick_task_work() is called as task_work to ensure any queued memory_failure() work has been done before returning to user-space.
STEP 5: Upon returning to user-space, the task einj_mem_uc resumes at the current instruction; because the poisoned page was unmapped by memory_failure() in step 3, a page fault will be triggered.
memory_failure() assumes that it runs in the current context on both the x86 and ARM platforms.
For example, memory_failure() in mm/memory-failure.c:
	if (flags & MF_ACTION_REQUIRED) {
		folio = page_folio(p);
		res = kill_accessing_process(current, folio_pfn(folio), flags);
	}
And?
Do you see the check above it?
if (TestSetPageHWPoison(p)) {
test_and_set_bit() returns true only when the page was poisoned already.
- This function is intended to handle "Action Required" MCEs on already
- hardware poisoned pages. They could happen, for example, when
- memory_failure() failed to unmap the error page at the first call, or
- when multiple local machine checks happened on different CPUs.
And that's kill_accessing_process().
So AFAIU, the kworker running memory_failure() would only mark the page as poison.
The killing happens when memory_failure() runs again and the process touches the page again.
But I'd let James confirm here.
Yes, this is what is expected to happen with the existing code.
The first pass will remove the pages from all processes that have it mapped before this user-space task can restart. Restarting the task will make it access a poisoned page, kicking off the second path which delivers the signal.
The reason for two passes is send_sig_mceerr() likes to clear_siginfo(), so even if you queued action-required before leaving GHES, memory-failure() would stomp on it.
I still don't know what you're fixing here.
The problem is if the user-space process registered for early messages, it gets a signal on the first pass. If it returns from that signal, it will access the poisoned page and get the action-required signal.
How is this making Qemu go wrong?
As to how this works for you given Boris' comments above: kill_procs() is also called from hwpoison_user_mappings(), which takes the flags given to memory-failure(). This is where the action-optional signals come from.
Thanks,
James
Hi Shuai,
On 07/10/2023 08:28, Shuai Xue wrote:
There are two major types of uncorrected recoverable (UCR) errors :
Is UCR a well known x86 acronym? It's best to just spell this out each time, there is enough jargon in this area already.
Action Required (AR): The error is detected and the processor already consumes the memory. OS requires to take action (for example, offline failure page/kill failure thread) to recover this uncorrectable error.
Action Optional (AO): The error is detected out of processor execution context. Some data in the memory are corrupted. But the data have not been consumed. OS is optional to take action to recover this uncorrectable error.
As elsewhere, please don't think of errors as 'action required', this is how things get reported to user-space. Action-required for one thread may be action-optional for another that has the same page mapped - its really not a property of the error. It would be better to describe this as synchronous and asynchronous, or in-band and out-of-band.
The essential difference between AR and AO errors is that AR is a synchronous event, while AO is an asynchronous event. The hardware will signal a synchronous exception (Machine Check Exception on X86 and Synchronous External Abort on Arm64) when an error is detected and the memory access has been architecturally executed.
When APEI firmware first is enabled, a platform may describe one error source for the handling of synchronous errors (e.g. MCE or SEA notification ), or for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, we can distinguish synchronous errors by APEI notification. For AR errors, kernel will kill current process accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. In addition, for AO errors, kernel will notify the process who owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0 so that all UCR errors are handled as AO errors in memory failure.
To make this easier to read: UCR and AR -> synchronous AO -> asynchronous
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")'
Erm, this predates arm64 support, and what you have here doesn't change the behaviour on x86.
You can blame 7f17b4a121d0d50 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors"), which should have covered this.
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ef59d6ea16da..88178aa6222d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
and as you had in earlier versions, sometimes SDEI. SDEI can report both synchronous and asynchronous errors; I wouldn't be too surprised if the hardware NMI can be used for the same. It would be good to chase up having a hint of this in the CPER records and pass that in here as a hint.
Unfortunately, it's not safe to assume either way for SDEI.
Reviewed-by: James Morse james.morse@arm.com
Thanks,
James
Hi Shuai,
On 07/10/2023 08:28, Shuai Xue wrote:
Hardware errors could be signaled by synchronous interrupt,
I'm struggling with 'synchronous interrupt'. Do you mean arm64's 'precise' (all instructions before the exception were executed, and none after). Otherwise, surely any interrupt from a background scrubber is inherently asynchronous!
e.g. when an error is detected by a background scrubber, or signaled by synchronous exception, e.g. when an uncorrected error is consumed. Both synchronous and asynchronous error are queued and handled by a dedicated kthread in workqueue.
commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keep track of whether memory_failure() work was queued, and make task_work pending to flush out the workqueue so that the work for synchronous error is processed before returning to user-space.
It does it regardless, if user-space was interrupted by APEI any work queued as a result of that should be completed before we go back to user-space. Otherwise we can bounce between user-space and firmware, with the kernel only running the APEI code, and never making progress.
The trick ensures that the corrupted page is unmapped and poisoned. And after returning to user-space, the task starts at current instruction which triggering a page fault in which kernel will send SIGBUS to current process due to VM_FAULT_HWPOISON.
However, the memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by seting PF_MCE_EARLY at initialization. Then the kernel will directy notify
(setting, directly)
the process by sending a SIGBUS signal in memory failure with wrong
si_code: the actual user-space process accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so it will send SIGBUS with BUS_MCEERR_AO si_code to the actual user-space process instead of BUS_MCEERR_AR in kill_proc().
This is hard to parse, "the user-space process is accessing"? (dropping 'actual' and adding 'is')
Wasn't this behaviour fixed by the previous patch?
What problem are you fixing here?
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously handle memory failure.
Why? The signal issue was fixed by the previous patch. Why delay the handling of a poisoned memory location further?
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.
... do what?
Then for valid synchronous errors, the current context in memory failure exactly belongs to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6f35f724cc14..1675ff77033d 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1334,17 +1334,10 @@ static void kill_me_maybe(struct callback_head *cb)
 		return;
 	}
 
-	/*
-	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
-	 * to the current process with the proper error info,
-	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
-	 *
-	 * In both cases, no further processing is required.
-	 */
 	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
 		return;
 
-	pr_err("Memory error not recovered");
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
 	kill_me_now(cb);
 }
I'm not sure how this hunk is relevant to the commit message.
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 88178aa6222d..014401a65ed5 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -484,6 +497,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 		return false;
 	}
 
+	if (flags == MF_ACTION_REQUIRED && current->mm) {
+		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
+		if (!twcb)
+			return false;
Yuck - New failure modes! This is why the existing code always has this memory allocated in struct ghes_estatus_node.
+		twcb->pfn = pfn;
+		twcb->flags = flags;
+		init_task_work(&twcb->twork, memory_failure_cb);
+		task_work_add(current, &twcb->twork, TWA_RESUME);
+		return true;
+	}
+
 	memory_failure_queue(pfn, flags);
 	return true;
 }
[..]
@@ -696,7 +721,14 @@ static bool ghes_do_proc(struct ghes *ghes,
 		}
 	}
 
-	return queued;
+	/*
+	 * If no memory failure work is queued for abnormal synchronous
+	 * errors, do a force kill.
+	 */
+	if (sync && !queued) {
+		pr_err("Sending SIGBUS to current task due to memory error not recovered");
+		force_sig(SIGBUS);
+	}
 }
I think this is a lot of churn, and this hunk is the only meaningful change in behaviour. Can you explain how this happens?
Wouldn't it be simpler to split ghes_kick_task_work() to have a sync/async version. The synchronous version can unconditionally force_sig_mceerr(BUS_MCEERR_AR, ...) after memory_failure_queue_kick() - but that still means memory_failure() is unable to disappear errors that it fixed - see MF_RECOVERED.
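Roughly what that split might look like (a sketch of the suggestion only, not merged code): the fault address and granularity are assumed to have been stashed in the estatus node, which today's struct does not carry, and freeing of the node is omitted:

/*
 * Sketch: synchronous flavour of ghes_kick_task_work(). The fields
 * fault_addr/fault_lsb on ghes_estatus_node are hypothetical here.
 */
static void ghes_kick_task_work_sync(struct callback_head *head)
{
	struct ghes_estatus_node *estatus_node =
		container_of(head, struct ghes_estatus_node, task_work);

	if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
		memory_failure_queue_kick(estatus_node->task_work_cpu);

	/*
	 * Unconditionally signal the interrupted task, even if
	 * memory_failure() managed to recover the error (MF_RECOVERED).
	 */
	force_sig_mceerr(BUS_MCEERR_AR,
			 (void __user *)estatus_node->fault_addr,
			 estatus_node->fault_lsb);
}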
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4d6e43c88489..0d02f8a0b556 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2161,9 +2161,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
  * Must run in process context (e.g. a work queue) with interrupts
  * enabled and no spinlocks held.
  *
- * Return: 0 for successfully handled the memory error,
- *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
- *         < 0(except -EOPNOTSUPP) on failure.
+ * Return values:
+ *   0             - success
+ *   -EOPNOTSUPP   - hwpoison_filter() filtered the error event.
+ *   -EHWPOISON    - sent SIGBUS to the current process with the proper
+ *                   error info by kill_accessing_process().
+ *   other negative values - failure
  */
 int memory_failure(unsigned long pfn, int flags)
 {
I'm not sure how this hunk is relevant to the commit message.
Thanks,
James
Hi Boris,
On 30/11/2023 14:40, Borislav Petkov wrote:
FTR, this is starting to make sense, thanks for explaining.
Replying only to this one for now:
On Thu, Nov 30, 2023 at 10:58:53AM +0800, Shuai Xue wrote:
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error
So this is for ARM folks to deal with, BUT:
A consumed uncorrectable error on x86 means panic. On some hw like on AMD, that error doesn't even get seen by the OS but the hw does something called syncflood to prevent further error propagation. So there's no any action required - the hw does that.
But I'd like to hear from ARM folks whether consuming an uncorrectable error even lets software run. Dunno.
I think we mean different things by 'consume' here.
I'd assume Shuai's test is poisoning a cache-line. When the CPU tries to access that cache-line it will get an 'external abort' signal back from the memory system. Shuai - is this what you mean by 'consume' - the CPU received external abort from the poisoned cache line?
It's then up to the CPU whether it can put the world back in order to take this as a synchronous-external-abort or an asynchronous-external-abort, which for arm64 are two different interrupt/exception types. The synchronous exceptions can't be masked, but the asynchronous one can. If by the time the asynchronous-external-abort interrupt/exception has been unmasked, the CPU has used the poisoned value in some calculation (which is what we usually mean by consume) which has resulted in a memory access - it will report the error as 'uncontained' because the error has been silently propagated. APEI should always report those as 'fatal', and there is little point getting the OS involved at this point. Also in this category are things like 'tag ram corruption', where you can no longer trust anything about memory.
Everything in this thread is about synchronous errors where this can't happen. The CPU stops and takes an interrupt/exception instead.
Thanks,
James
On 2023/12/1 01:43, James Morse wrote:
Hi Boris,
On 30/11/2023 14:40, Borislav Petkov wrote:
FTR, this is starting to make sense, thanks for explaining.
Replying only to this one for now:
On Thu, Nov 30, 2023 at 10:58:53AM +0800, Shuai Xue wrote:
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error
So this is for ARM folks to deal with, BUT:
A consumed uncorrectable error on x86 means panic. On some hw like on AMD, that error doesn't even get seen by the OS but the hw does something called syncflood to prevent further error propagation. So there's no any action required - the hw does that.
The "consume" is at the application point of view, e.g. a memory read. If poison is enable, then a SRAR error will be detected and a MCE raised at the point of the consumption in the execution flow.
A generic Intel x86 hw behaves like below:
1. UE Error Inject at a known Physical Address (by einj_mem_uc through the EINJ interface).
2. Core issues a Memory Read to the same Physical Address (a single memory read).
3. iMC detects the error.
4. HA logs the UCA error and signals CMCI if enabled.
5. HA forwards data with the poison indication bit set.
6. CBo detects the poison data. Does not log any error.
7. MLC detects the poison data.
8. DCU detects the poison data, logs an SRAR error and triggers MCERR if recoverable.
9. OS/VMM takes the corresponding recovery action based on the affected state.
In our example:
- step 2 is triggered by a single memory read.
- step 8: UCR error detected on data load, MCACOD 134H, triggering MCERR.
- step 9: the kernel is expected to send SIGBUS with si_code BUS_MCEERR_AR (code 4).
I also ran the same test on AMD EPYC platforms, e.g. Milan and Genoa, which behave the same as Intel Xeon platforms, e.g. Icelake and SPR.
The ARMv8.2 RAS extension supports a similar data poison mechanism: a Synchronous External Abort on arm64 (the analogue of a Machine Check Exception on x86) will be triggered in step 8. See James' comments for details. But the kernel sends SIGBUS with si_code BUS_MCEERR_AO (code 5), tested on Alibaba Yitian 710 and Huawei Kunpeng 920.
But I'd like to hear from ARM folks whether consuming an uncorrectable error even lets software run. Dunno.
I think we mean different things by 'consume' here.
I'd assume Shuai's test is poisoning a cache-line. When the CPU tries to access that cache-line it will get an 'external abort' signal back from the memory system. Shuai - is this what you mean by 'consume' - the CPU received external abort from the poisoned cache line?
Yes, exactly. Thank you for pointing it out. We are talking about synchronous errors.
It's then up to the CPU whether it can put the world back in order to take this as a synchronous-external-abort or an asynchronous-external-abort, which for arm64 are two different interrupt/exception types. The synchronous exceptions can't be masked, but the asynchronous one can. If by the time the asynchronous-external-abort interrupt/exception has been unmasked, the CPU has used the poisoned value in some calculation (which is what we usually mean by consume) which has resulted in a memory access - it will report the error as 'uncontained' because the error has been silently propagated. APEI should always report those as 'fatal', and there is little point getting the OS involved at this point. Also in this category are things like 'tag ram corruption', where you can no longer trust anything about memory.
Everything in this thread is about synchronous errors where this can't happen. The CPU stops and takes an interrupt/exception instead.
Thank you for explaining.
Best Regards, Shuai
On 2023/12/1 01:39, James Morse wrote:
Hi Boris, Shuai,
On 29/11/2023 18:54, Borislav Petkov wrote:
On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
- an AR error consumed by current process is deferred to handle in a dedicated kernel thread, but memory_failure() assumes that it runs in the current context
On x86? ARM?
Please point to the exact code flow.
An AR error consumed by the current process is deferred to be handled in a dedicated kernel thread on the ARM platform. The AR error is handled in the flow below:
Please don't think of errors as "action required" - that's a user-space signal code. If the page could be fixed by memory-failure(), you may never get a signal. (all this was the fix for always sending an action-required signal)
I assume you mean the CPU accessed a poisoned location and took a synchronous error.
Yes, I mean that CPU accessed a poisoned location and took a synchronous error.
[usr space task einj_mem_uc consumed data poison, CPU 3]            STEP 0

[ghes_sdei_critical_callback: current einj_mem_uc, CPU 3]           STEP 1
  ghes_sdei_critical_callback
  => __ghes_sdei_callback
     => ghes_in_nmi_queue_one_entry  // peek and read estatus
  => irq_work_queue(&ghes_proc_irq_work)  <=> ghes_proc_in_irq  // irq_work
[ghes_sdei_critical_callback: return]

[ghes_proc_in_irq: current einj_mem_uc, CPU 3]                      STEP 2
  => ghes_do_proc
     => ghes_handle_memory_failure
        => ghes_do_memory_failure
           => memory_failure_queue  // put work task on current CPU
              => if (kfifo_put(&mf_cpu->fifo, entry))
                     schedule_work_on(smp_processor_id(), &mf_cpu->work);
     => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
[ghes_proc_in_irq: return]

// kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag        STEP 3
[memory_failure_work_func: current kworker, CPU 3]
  => memory_failure_work_func(&mf_cpu->work)
     => while kfifo_get(&mf_cpu->fifo, &entry);  // until get no work
        => memory_failure(entry.pfn, entry.flags);
From the comment above that function:
- The function is primarily of use for corruptions that
- happen outside the current execution context (e.g. when
- detected by a background scrubber)
- Must run in process context (e.g. a work queue) with interrupts
- enabled and no spinlocks held.
[ghes_kick_task_work: current einj_mem_uc, other cpu]               STEP 4
  => memory_failure_queue_kick
     => cancel_work_sync  // waiting for memory_failure_work_func to finish
        => memory_failure_work_func(&mf_cpu->work)
           => kfifo_get(&mf_cpu->fifo, &entry);  // no work

[einj_mem_uc resumes at the same PC, triggers a page fault]         STEP 5
STEP 0: A user space task named einj_mem_uc consumes a poison. The firmware notifies the hardware error to the kernel through SDEI (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
STEP 1: The einj_mem_uc task running on CPU 3 is interrupted. irq_work_queue() raises an irq_work to handle the hardware error in IRQ context.
STEP 2: In IRQ context, ghes_proc_in_irq() queues memory failure work on the current CPU in the workqueue and adds task work to sync with the workqueue.
STEP 3: The kworker preempts the currently running thread and gets CPU 3. Then memory_failure() is processed in the kworker.
See above.
STEP 4: ghes_kick_task_work() is called as task_work to ensure any queued memory_failure() work has been done before returning to user-space.
STEP 5: Upon returning to user-space, the task einj_mem_uc resumes at the current instruction; because the poisoned page was unmapped by memory_failure() in step 3, a page fault will be triggered.
memory_failure() assumes that it runs in the current context on both the x86 and ARM platforms.
For example, memory_failure() in mm/memory-failure.c:
	if (flags & MF_ACTION_REQUIRED) {
		folio = page_folio(p);
		res = kill_accessing_process(current, folio_pfn(folio), flags);
	}
And?
Do you see the check above it?
if (TestSetPageHWPoison(p)) {
test_and_set_bit() returns true only when the page was poisoned already.
- This function is intended to handle "Action Required" MCEs on already
- hardware poisoned pages. They could happen, for example, when
- memory_failure() failed to unmap the error page at the first call, or
- when multiple local machine checks happened on different CPUs.
And that's kill_accessing_process().
So AFAIU, the kworker running memory_failure() would only mark the page as poison.
The killing happens when memory_failure() runs again and the process touches the page again.
But I'd let James confirm here.
Yes, this is what is expected to happen with the existing code.
The first pass will remove the pages from all processes that have it mapped before this user-space task can restart. Restarting the task will make it access a poisoned page, kicking off the second path which delivers the signal.
The reason for two passes is send_sig_mceerr() likes to clear_siginfo(), so even if you queued action-required before leaving GHES, memory-failure() would stomp on it.
I still don't know what you're fixing here.
The problem is if the user-space process registered for early messages, it gets a signal on the first pass. If it returns from that signal, it will access the poisoned page and get the action-required signal.
How is this making Qemu go wrong?
The problem here is that we need to assume the first pass of memory_failure() handles and unmaps the poisoned page successfully.
- If so, it may work via the second-pass action-required signal, because the task accesses an unmapped page. But IMHO, we can improve this by sending the signal in a single pass, so that the Guest will vmexit only once, right?
- If not, there is no second-pass signal. The existing code does not handle the error code from memory_failure(), so an exception loop happens, resulting in a hard lockup panic.
Besides, in a production environment, a second access to an already-known poisoned page introduces more risk of error propagation.
As to how this works for you given Boris' comments above: kill_procs() is also called from hwpoison_user_mappings(), which takes the flags given to memory-failure(). This is where the action-optional signals come from.
Thank you very much for involving to review and comment.
Best Regards, Shuai
On 2023/12/1 01:39, James Morse wrote:
Hi Shuai,
On 07/10/2023 08:28, Shuai Xue wrote:
There are two major types of uncorrected recoverable (UCR) errors :
Is UCR a well known x86 acronym? It's best to just spell this out each time, there is enough jargon in this area already.
Quite agreed, will replace the commit log with "uncorrected recoverable error".
Action Required (AR): The error is detected and the processor already consumes the memory. OS requires to take action (for example, offline failure page/kill failure thread) to recover this uncorrectable error.
Action Optional (AO): The error is detected out of processor execution context. Some data in the memory are corrupted. But the data have not been consumed. OS is optional to take action to recover this uncorrectable error.
As elsewhere, please don't think of errors as 'action required', this is how things get reported to user-space. Action-required for one thread may be action-optional for another that has the same page mapped - its really not a property of the error. It would be better to describe this as synchronous and asynchronous, or in-band and out-of-band.
Thank you for explanation. I will change to "synchronous and asynchronous".
The essential difference between AR and AO errors is that AR is a synchronous event, while AO is an asynchronous event. The hardware will signal a synchronous exception (Machine Check Exception on X86 and Synchronous External Abort on Arm64) when an error is detected and the memory access has been architecturally executed.
When APEI firmware first is enabled, a platform may describe one error source for the handling of synchronous errors (e.g. MCE or SEA notification ), or for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, we can distinguish synchronous errors by APEI notification. For AR errors, kernel will kill current process accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. In addition, for AO errors, kernel will notify the process who owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0 so that all UCR errors are handled as AO errors in memory failure.
To make this easier to read: UCR and AR -> synchronous AO -> asynchronous
Will do that.
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")'
Erm, this predates arm64 support, and what you have here doesn't change the behaviour on x86.
You can blame 7f17b4a121d0d50 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors"), which should have covered this.
Do you mean just drop the "Fixes" tags?
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ef59d6ea16da..88178aa6222d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
and as you had in earlier versions, sometimes SDEI. SDEI can report both synchronous and asynchronous errors; I wouldn't be too surprised if the hardware NMI can be used for the same. It would be good to chase up having a hint of this in the CPER records and pass that in here as a hint.

Unfortunately, it's not safe to assume either way for SDEI.
For SDEI notification, only x0-x17 are preserved by firmware. As the SDEI TRM [1] describes, "the dispatcher can simulate an exception-like entry into the client, **with the client providing an additional asynchronous entry point similar to an interrupt entry point**". The client (kernel) lacks the complete synchronous context, e.g. system registers (ELR, ESR, etc.). So I think SDEI notification should not be used for asynchronous errors; can you help to confirm this?
For NMI notification, as far as I know, AArch64 (aka arm64 in the Linux tree) does not provide architected NMIs.
Reviewed-by: James Morse james.morse@arm.com
Thank you for valuable comments.
Best Regards, Shuai
On 2023/12/1 01:39, James Morse wrote:
Hi Shuai,
On 07/10/2023 08:28, Shuai Xue wrote:
Hardware errors could be signaled by synchronous interrupt,
I'm struggling with 'synchronous interrupt'. Do you mean arm64's 'precise' (all instructions before the exception were executed, and none after). Otherwise, surely any interrupt from a background scrubber is inherently asynchronous!
I am sorry, this is a typo. I mean asynchronous interrupt.
e.g. when an error is detected by a background scrubber, or signaled by synchronous exception, e.g. when an uncorrected error is consumed. Both synchronous and asynchronous error are queued and handled by a dedicated kthread in workqueue.
commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors") keep track of whether memory_failure() work was queued, and make task_work pending to flush out the workqueue so that the work for synchronous error is processed before returning to user-space.
It does it regardless, if user-space was interrupted by APEI any work queued as a result of that should be completed before we go back to user-space. Otherwise we can bounce between user-space and firmware, with the kernel only running the APEI code, and never making progress.
Agreed.
The trick ensures that the corrupted page is unmapped and poisoned. And after returning to user-space, the task starts at current instruction which triggering a page fault in which kernel will send SIGBUS to current process due to VM_FAULT_HWPOISON.
However, the memory failure recovery for hwpoison-aware mechanisms does not work as expected. For example, hwpoison-aware user-space processes like QEMU register their customized SIGBUS handler and enable early kill mode by seting PF_MCE_EARLY at initialization. Then the kernel will directly notify
(setting, directly)
Thank you. Will fix it.
the process by sending a SIGBUS signal in memory failure with wrong
si_code: the actual user-space process accessing the corrupt memory location, but its memory failure work is handled in a kthread context, so it will send SIGBUS with BUS_MCEERR_AO si_code to the actual user-space process instead of BUS_MCEERR_AR in kill_proc().
This is hard to parse, "the user-space process is accessing"? (dropping 'actual' and adding 'is')
Will fix it.
Wasn't this behaviour fixed by the previous patch?
What problem are you fixing here?
Nope. memory_failure() runs in a kthread context, not in the context of the user-space process that is consuming the poison data.
// kill_proc() in memory-failure.c
	if ((flags & MF_ACTION_REQUIRED) && (t == current))
		ret = force_sig_mceerr(BUS_MCEERR_AR,
				       (void __user *)tk->addr, addr_lsb);
	else
		ret = send_sig_mceerr(BUS_MCEERR_AO,
				      (void __user *)tk->addr, addr_lsb, t);
So, even if we queue memory_failure() with the MF_ACTION_REQUIRED flag as in the previous patch, it will still send a SIGBUS with BUS_MCEERR_AO from the else branch of kill_proc().
To this end, separate synchronous and asynchronous error handling into different paths like X86 platform does:
- valid synchronous errors: queue a task_work to synchronously send SIGBUS before ret_to_user.
- valid asynchronous errors: queue a work into workqueue to asynchronously handle memory failure.
Why? The signal issue was fixed by the previous patch. Why delay the handling of a poisoned memory location further?
The signal issue is not fixed completely. See my reply above.
- abnormal branches such as invalid PA, unexpected severity, no memory failure config support, invalid GUID section, OOM, etc.
... do what?
If no memory failure work is queued for abnormal errors, do a force kill. Will also add this comment to commit log.
Then for valid synchronous errors, the current context in memory failure exactly belongs to the task consuming the poison data, and it will send SIGBUS with the proper si_code.
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6f35f724cc14..1675ff77033d 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1334,17 +1334,10 @@ static void kill_me_maybe(struct callback_head *cb)
 		return;
 	}
 
-	/*
-	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
-	 * to the current process with the proper error info,
-	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
-	 *
-	 * In both cases, no further processing is required.
-	 */
 	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
 		return;
 
-	pr_err("Memory error not recovered");
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
 	kill_me_now(cb);
 }
I'm not sure how this hunk is relevant to the commit message.
I handled the memory_failure() error code in its arm64 call site memory_failure_cb() with some comments, similar to the x86 call site kill_me_maybe(). I moved these two comments to the function declaration, following review comments from Kefeng.
I should split this into a separate patch. Will do it in next version.
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 88178aa6222d..014401a65ed5 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -484,6 +497,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 		return false;
 	}
 
+	if (flags == MF_ACTION_REQUIRED && current->mm) {
+		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
+		if (!twcb)
+			return false;
Yuck - New failure modes! This is why the existing code always has this memory allocated in struct ghes_estatus_node.
Are you suggesting moving the fields of struct sync_task_work into struct ghes_estatus_node and using ghes_estatus_node here? Or we could just allocate struct sync_task_work with gen_pool_alloc() from ghes_estatus_pool.
+		twcb->pfn = pfn;
+		twcb->flags = flags;
+		init_task_work(&twcb->twork, memory_failure_cb);
+		task_work_add(current, &twcb->twork, TWA_RESUME);
+		return true;
+	}
+
 	memory_failure_queue(pfn, flags);
 	return true;
 }
[..]
@@ -696,7 +721,14 @@ static bool ghes_do_proc(struct ghes *ghes,
 		}
 	}
 
-	return queued;
+	/*
+	 * If no memory failure work is queued for abnormal synchronous
+	 * errors, do a force kill.
+	 */
+	if (sync && !queued) {
+		pr_err("Sending SIGBUS to current task due to memory error not recovered");
+		force_sig(SIGBUS);
+	}
 }
I think this is a lot of churn, and this hunk is the only meaningful change in behaviour. Can you explain how this happens?
For example:
- an invalid GUID section in ghes_do_proc()
- CPER_MEM_VALID_PA not set, or an unexpected severity, in ghes_handle_memory_failure()
- CONFIG_ACPI_APEI_MEMORY_FAILURE not enabled, or !pfn_valid(pfn), in ghes_do_memory_failure()
Wouldn't it be simpler to split ghes_kick_task_work() to have a sync/async version. The synchronous version can unconditionally force_sig_mceerr(BUS_MCEERR_AR, ...) after memory_failure_queue_kick() - but that still means memory_failure() is unable to disappear errors that it fixed - see MF_RECOVERED.
Sorry, I don't think so. Unconditionally sending a SIGBUS is not a good choice. For example, if a synchronous memory error is detected on instruction memory, the kernel should fix it transparently and no signal should be sent.
./einj_mem_uc instr
[168522.751671] Memory failure: 0x89dedd: corrupted page was clean: dropped without side effects
[168522.751679] Memory failure: 0x89dedd: recovery action for clean LRU page: Recovered
With this patch set, the instr case behaves consistently on both the arm64 and x86 platforms.
The complex page error_states are handled in memory_failure(). IMHO, we should leave that part to it.
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4d6e43c88489..0d02f8a0b556 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2161,9 +2161,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
  * Must run in process context (e.g. a work queue) with interrupts
  * enabled and no spinlocks held.
  *
- * Return: 0 for successfully handled the memory error,
- *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
- *         < 0(except -EOPNOTSUPP) on failure.
+ * Return values:
+ *   0             - success
+ *   -EOPNOTSUPP   - hwpoison_filter() filtered the error event.
+ *   -EHWPOISON    - sent SIGBUS to the current process with the proper
+ *                   error info by kill_accessing_process().
+ *   other negative values - failure
  */
 int memory_failure(unsigned long pfn, int flags)
 {
I'm not sure how this hunk is relevant to the commit message.
As mentioned, I will split this into a separate patch.
Thanks,
James
Thank you for valuable comments. Best Regards, Shuai
## Changes Log
changes since v9:
- split patch 2 to address exactly one issue in one patch (per Borislav)
- rewrite commit log according to template (per Borislav)
- pick up reviewed-by tag of patch 1 from James Morse
- alloc and free twcb through gen_pool_{alloc,free} (per James)
- rewrite cover letter

changes since v8:
- remove the bug fix tag of patch 2 (per Jarkko Sakkinen)
- remove the declaration of memory_failure_queue_kick (per Naoya Horiguchi)
- rewrite the return value comments of memory_failure (per Naoya Horiguchi)

changes since v7:
- rebase to Linux v6.6-rc2 (no code changed)
- rewrite the cover letter to explain the motivation of this patchset

changes since v6:
- add more explicit error messages, suggested by Xiaofei
- pick up reviewed-by tag from Xiaofei
- pick up internal reviewed-by tag from Baolin

changes since v5, addressing comments from Kefeng:
- document the return value of memory_failure()
- drop redundant comments at the call site of memory_failure()
- make ghes_do_proc void and handle the abnormal case within it
- pick up reviewed-by tag from Kefeng Wang

changes since v4, addressing comments from Xiaofei:
- do a force kill only for abnormal sync errors

changes since v3, addressing comments from Xiaofei:
- do a force kill for abnormal memory failure errors such as invalid PA, unexpected severity, OOM, etc.
- pick up tested-by tag from Ma Wupeng

changes since v2, addressing comments from Naoya:
- rename mce_task_work to sync_task_work
- drop the ACPI_HEST_NOTIFY_MCE case in is_hest_sync_notify()
- add steps to reproduce this problem in the cover letter

changes since v1:
- synchronous events by notify type
- Link: https://lore.kernel.org/lkml/20221206153354.92394-3-xueshuai@linux.alibaba.c...
## Cover Letter
There are two major types of uncorrected recoverable (UCR) errors :
- Synchronous error: The error is detected and raised at the point of consumption in the execution flow, e.g. when a CPU tries to access a poisoned cache line. The CPU will take a synchronous error exception such as a Synchronous External Abort (SEA) on Arm64 or a Machine Check Exception (MCE) on X86. The OS is required to take action (for example, offline the failure page / kill the failure thread) to recover from this uncorrectable error.

- Asynchronous error: The error is detected out of the processor execution context, e.g. when an error is detected by a background scrubber. Some data in memory is corrupted, but the data has not been consumed. The OS may optionally take action to recover from this uncorrectable error.
Currently, both synchronous and asynchronous errors are queued by ghes_handle_memory_failure() with flag 0 and handled by a dedicated kernel thread in a workqueue on the ARM64 platform. As a result, memory failure recovery sends SIGBUS with the wrong BUS_MCEERR_AO si_code for synchronous errors in early kill mode. The main problem is that the memory_failure() work is handled in kthread context, not in the context of the user-space process that is accessing the corrupt memory location, so kill_proc() will send SIGBUS with si_code BUS_MCEERR_AO to the user-space process instead of BUS_MCEERR_AR.
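The deciding branch is in kill_proc() (mm/memory-failure.c), quoted earlier in this thread: because the work runs in a kworker, t == current is false and the BUS_MCEERR_AO branch is taken even when MF_ACTION_REQUIRED is set:

	if ((flags & MF_ACTION_REQUIRED) && (t == current))
		ret = force_sig_mceerr(BUS_MCEERR_AR,
				       (void __user *)tk->addr, addr_lsb);
	else
		ret = send_sig_mceerr(BUS_MCEERR_AO,
				      (void __user *)tk->addr, addr_lsb, t);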
Fix the problem by:
- Patch 1: setting the memory_failure() flags to MF_ACTION_REQUIRED on synchronous errors.
- Patch 2: performing a force kill if no memory_failure() work is queued for synchronous errors.
- Patch 3: a minor comment improvement.
- Patch 4: queueing memory_failure() as a task_work so that the current context in memory_failure() exactly belongs to the process consuming the poison data.
Lv Ying and XiuQi from Huawei also proposed to address a similar problem [2][4]. Thanks to them for the discussion.
## Steps to Reproduce This Problem
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single   vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 5 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 5) from einj_mem_uc indicates that it is a BUS_MCEERR_AO error, which is not what actually happened.
After this patch set:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1

# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single   vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 4 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 4) from einj_mem_uc indicates that it is a BUS_MCEERR_AR error, as we expected.
[1] Add ARMv8 RAS virtualization support in QEMU
    https://patchew.org/QEMU/20200512030609.19593-1-gengdongjiu@huawei.com/
[2] https://lore.kernel.org/lkml/20221205115111.131568-3-lvying6@huawei.com/
[3] https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com
[4] https://lore.kernel.org/lkml/20221209095407.383211-1-lvying6@huawei.com/
Shuai Xue (4):
  ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events
  ACPI: APEI: send SIGBUS to current task if synchronous memory error not recovered
  mm: memory-failure: move memory_failure() return value documentation to function declaration
  ACPI: APEI: handle synchronous exceptions in task work
 arch/x86/kernel/cpu/mce/core.c |   9 +--
 drivers/acpi/apei/ghes.c       | 113 ++++++++++++++++++++++-----------
 include/acpi/ghes.h            |   3 -
 mm/memory-failure.c            |  22 ++-----
 4 files changed, 82 insertions(+), 65 deletions(-)
There are two major types of uncorrected recoverable (UCR) errors :
- Synchronous error: The error is detected and raised at the point of consumption in the execution flow, e.g. when a CPU tries to access a poisoned cache line. The CPU will take a synchronous error exception such as a Synchronous External Abort (SEA) on Arm64 or a Machine Check Exception (MCE) on X86. The OS is required to take action (for example, offline the failure page / kill the failure thread) to recover from this uncorrectable error.

- Asynchronous error: The error is detected out of the processor execution context, e.g. when an error is detected by a background scrubber. Some data in memory is corrupted, but the data has not been consumed. The OS may optionally take action to recover from this uncorrectable error.
When APEI firmware-first is enabled, a platform may describe one error source for the handling of synchronous errors (e.g. MCE or SEA notification), or for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, we can distinguish synchronous errors by the APEI notification type. For synchronous errors, the kernel will kill the current process accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. In addition, for asynchronous errors, the kernel will notify the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0, so that all synchronous errors are handled as asynchronous errors in memory failure.
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
Tested-by: Ma Wupeng mawupeng1@huawei.com
Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com
Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com
Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com
Reviewed-by: James Morse james.morse@arm.com
---
 drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 63ad0541db38..ab2a82cb1b0b 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
 /*
  * This driver isn't really modular, however for the time being,
  * continuing to use module_param is the easiest way to remain
@@ -489,7 +503,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, bool sync)
 {
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
@@ -503,7 +517,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = sync ? MF_ACTION_REQUIRED : 0;
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);
@@ -511,9 +525,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	return false;
 }
 
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
+static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
+				     int sev, bool sync)
 {
 	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+	int flags = sync ? MF_ACTION_REQUIRED : 0;
 	bool queued = false;
 	int sec_sev, i;
 	char *p;
@@ -538,7 +554,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s
 		 * and don't filter out 'corrected' error here.
 		 */
 		if (is_cache && has_pa) {
-			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
+			queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
 			p += err_info->length;
 			continue;
 		}
@@ -666,6 +682,7 @@ static bool ghes_do_proc(struct ghes *ghes,
 	const guid_t *fru_id = &guid_null;
 	char *fru_text = "";
 	bool queued = false;
+	bool sync = is_hest_sync_notify(ghes);
 
 	sev = ghes_severity(estatus->error_severity);
 	apei_estatus_for_each_section(estatus, gdata) {
@@ -683,13 +700,13 @@ static bool ghes_do_proc(struct ghes *ghes,
 			atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err);
 
 			arch_apei_report_mem_error(sev, mem_err);
-			queued = ghes_handle_memory_failure(gdata, sev);
+			queued = ghes_handle_memory_failure(gdata, sev, sync);
 		} else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
 			ghes_handle_aer(gdata);
 		} else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-			queued = ghes_handle_arm_hw_error(gdata, sev);
+			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
 		} else {
 			void *err = acpi_hest_get_payload(gdata);
A synchronous error is detected as a result of a user-space process accessing a 2-bit uncorrected error. The CPU will take a synchronous error exception such as a Synchronous External Abort (SEA) on Arm64. The kernel will queue a memory_failure() work which poisons the related page, unmaps the page, and then sends a SIGBUS to the process, so that a system-wide panic can be avoided.

However, no memory_failure() work will be queued when abnormal synchronous errors occur. These can include situations such as an invalid PA, an unexpected severity, no memory failure config support, an invalid GUID section, etc. In such cases, the user-space process will trigger the SEA again. This loop can potentially exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot.
Fix it by performing a force kill if no memory_failure() work is queued for synchronous errors.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
---
 drivers/acpi/apei/ghes.c | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ab2a82cb1b0b..f832ffc5a88d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -717,6 +717,15 @@ static bool ghes_do_proc(struct ghes *ghes,
 		}
 	}
 
+	/*
+	 * If no memory failure work is queued for abnormal synchronous
+	 * errors, do a force kill.
+	 */
+	if (sync && !queued) {
+		pr_err("Sending SIGBUS to current task due to memory error not recovered");
+		force_sig(SIGBUS);
+	}
+
 	return queued;
 }
Part of return value comments for memory_failure() were originally documented at the call site. Move those comments to the function declaration to improve code readability and to provide developers with immediate access to function usage and return information.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
---
 arch/x86/kernel/cpu/mce/core.c | 9 +--------
 mm/memory-failure.c            | 9 ++++++---
 2 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 7b397370b4d6..43e542f06ad5 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1324,17 +1324,10 @@ static void kill_me_maybe(struct callback_head *cb)
 		return;
 	}
 
-	/*
-	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
-	 * to the current process with the proper error info,
-	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
-	 *
-	 * In both cases, no further processing is required.
-	 */
 	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
 		return;
 
-	pr_err("Memory error not recovered");
+	pr_err("Sending SIGBUS to current task due to memory error not recovered");
 	kill_me_now(cb);
 }
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 660c21859118..bd3dcafdfa4a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2164,9 +2164,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
  * Must run in process context (e.g. a work queue) with interrupts
  * enabled and no spinlocks held.
  *
- * Return: 0 for successfully handled the memory error,
- *         -EOPNOTSUPP for hwpoison_filter() filtered the error event,
- *         < 0(except -EOPNOTSUPP) on failure.
+ * Return values:
+ *   0             - success
+ *   -EOPNOTSUPP   - hwpoison_filter() filtered the error event.
+ *   -EHWPOISON    - sent SIGBUS to the current process with the proper
+ *                   error info by kill_accessing_process().
+ *   other negative values - failure
  */
 int memory_failure(unsigned long pfn, int flags)
 {
Hardware errors could be signaled by asynchronous interrupt, e.g. when an error is detected by a background scrubber, or signaled by synchronous exception, e.g. when a CPU tries to access a poisoned cache line. Both synchronous and asynchronous error are queued as a memory_failure() work and handled by a dedicated kthread in workqueue.
However, the memory failure recovery sends SIBUS with wrong BUS_MCEERR_AO si_code for synchronous errors in early kill mode, even MF_ACTION_REQUIRED is set. The main problem is that the memory failure work is handled in kthread context but not the user-space process which is accessing the corrupt memory location, so it will send SIGBUS with BUS_MCEERR_AO si_code to the user-space process instead of BUS_MCEERR_AR in kill_proc().
To this end, queue memory_failure() as task_work so that the current context in memory_failure() belongs to the process consuming the poisoned data, and SIGBUS is sent with the proper si_code.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com --- drivers/acpi/apei/ghes.c | 77 +++++++++++++++++++++++----------------- include/acpi/ghes.h | 3 -- mm/memory-failure.c | 13 ------- 3 files changed, 44 insertions(+), 49 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c index f832ffc5a88d..a6b4907cfe47 100644 --- a/drivers/acpi/apei/ghes.c +++ b/drivers/acpi/apei/ghes.c @@ -464,28 +464,41 @@ static void ghes_clear_estatus(struct ghes *ghes, }
/* - * Called as task_work before returning to user-space. - * Ensure any queued work has been done before we return to the context that - * triggered the notification. + * struct sync_task_work - for synchronous RAS event + * + * @twork: callback_head for task work + * @pfn: page frame number of corrupted page + * @flags: fine tune action taken + * + * Structure to pass task work to be handled before + * ret_to_user via task_work_add(). */ -static void ghes_kick_task_work(struct callback_head *head) +struct sync_task_work { + struct callback_head twork; + u64 pfn; + int flags; +}; + +static void memory_failure_cb(struct callback_head *twork) { - struct acpi_hest_generic_status *estatus; - struct ghes_estatus_node *estatus_node; - u32 node_len; + int ret; + struct sync_task_work *twcb = + container_of(twork, struct sync_task_work, twork);
- estatus_node = container_of(head, struct ghes_estatus_node, task_work); - if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE)) - memory_failure_queue_kick(estatus_node->task_work_cpu); + ret = memory_failure(twcb->pfn, twcb->flags); + gen_pool_free(ghes_estatus_pool, (unsigned long)twcb, sizeof(*twcb));
- estatus = GHES_ESTATUS_FROM_NODE(estatus_node); - node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus)); - gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len); + if (!ret || ret == -EHWPOISON || ret == -EOPNOTSUPP) + return; + + pr_err("Sending SIGBUS to current task due to memory error not recovered"); + force_sig(SIGBUS); }
static bool ghes_do_memory_failure(u64 physical_addr, int flags) { unsigned long pfn; + struct sync_task_work *twcb;
if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE)) return false; @@ -498,6 +511,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags) return false; }
+ if (flags == MF_ACTION_REQUIRED && current->mm) { + twcb = (void *)gen_pool_alloc(ghes_estatus_pool, sizeof(*twcb)); + if (!twcb) + return false; + + twcb->pfn = pfn; + twcb->flags = flags; + init_task_work(&twcb->twork, memory_failure_cb); + task_work_add(current, &twcb->twork, TWA_RESUME); + return true; + } + memory_failure_queue(pfn, flags); return true; } @@ -673,7 +698,7 @@ static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata, schedule_work(&entry->work); }
-static bool ghes_do_proc(struct ghes *ghes, +static void ghes_do_proc(struct ghes *ghes, const struct acpi_hest_generic_status *estatus) { int sev, sec_sev; @@ -725,8 +750,6 @@ static bool ghes_do_proc(struct ghes *ghes, pr_err("Sending SIGBUS to current task due to memory error not recovered"); force_sig(SIGBUS); } - - return queued; }
static void __ghes_print_estatus(const char *pfx, @@ -1028,9 +1051,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work) struct ghes_estatus_node *estatus_node; struct acpi_hest_generic *generic; struct acpi_hest_generic_status *estatus; - bool task_work_pending; u32 len, node_len; - int ret;
llnode = llist_del_all(&ghes_estatus_llist); /* @@ -1045,25 +1066,16 @@ static void ghes_proc_in_irq(struct irq_work *irq_work) estatus = GHES_ESTATUS_FROM_NODE(estatus_node); len = cper_estatus_len(estatus); node_len = GHES_ESTATUS_NODE_LEN(len); - task_work_pending = ghes_do_proc(estatus_node->ghes, estatus); + + ghes_do_proc(estatus_node->ghes, estatus); + if (!ghes_estatus_cached(estatus)) { generic = estatus_node->generic; if (ghes_print_estatus(NULL, generic, estatus)) ghes_estatus_cache_add(generic, estatus); } - - if (task_work_pending && current->mm) { - estatus_node->task_work.func = ghes_kick_task_work; - estatus_node->task_work_cpu = smp_processor_id(); - ret = task_work_add(current, &estatus_node->task_work, - TWA_RESUME); - if (ret) - estatus_node->task_work.func = NULL; - } - - if (!estatus_node->task_work.func) - gen_pool_free(ghes_estatus_pool, - (unsigned long)estatus_node, node_len); + gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, + node_len);
llnode = next; } @@ -1124,7 +1136,6 @@ static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
estatus_node->ghes = ghes; estatus_node->generic = ghes->generic; - estatus_node->task_work.func = NULL; estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) { diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h index be1dd4c1a917..ebd21b05fe6e 100644 --- a/include/acpi/ghes.h +++ b/include/acpi/ghes.h @@ -35,9 +35,6 @@ struct ghes_estatus_node { struct llist_node llnode; struct acpi_hest_generic *generic; struct ghes *ghes; - - int task_work_cpu; - struct callback_head task_work; };
struct ghes_estatus_cache { diff --git a/mm/memory-failure.c b/mm/memory-failure.c index bd3dcafdfa4a..6bff57444928 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2451,19 +2451,6 @@ static void memory_failure_work_func(struct work_struct *work) } }
-/* - * Process memory_failure work queued on the specified CPU. - * Used to avoid return-to-userspace racing with the memory_failure workqueue. - */ -void memory_failure_queue_kick(int cpu) -{ - struct memory_failure_cpu *mf_cpu; - - mf_cpu = &per_cpu(memory_failure_cpu, cpu); - cancel_work_sync(&mf_cpu->work); - memory_failure_work_func(&mf_cpu->work); -} - static int __init memory_failure_init(void) { struct memory_failure_cpu *mf_cpu;
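For readers unfamiliar with the task_work mechanism this patch switches to, the following is a minimal, self-contained sketch of the general pattern, with illustrative names (my_deferred_work, my_queue_deferred_work) that are not from the patch: work queued against current with TWA_RESUME runs in that task's context just before it returns to user space, which is what lets memory_failure() attribute the error to the faulting process.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/task_work.h>

/* Illustrative payload carried alongside the callback head. */
struct my_deferred_work {
	struct callback_head twork;
	unsigned long pfn;
	int flags;
};

/* Runs in the context of the task that queued it, before return to user space. */
static void my_deferred_cb(struct callback_head *twork)
{
	struct my_deferred_work *w =
		container_of(twork, struct my_deferred_work, twork);

	/* ... act on w->pfn / w->flags in process context ... */
	kfree(w);
}

/* Called from an error-handling path while running in the affected task. */
static int my_queue_deferred_work(unsigned long pfn, int flags)
{
	struct my_deferred_work *w = kmalloc(sizeof(*w), GFP_ATOMIC);

	if (!w)
		return -ENOMEM;

	w->pfn = pfn;
	w->flags = flags;
	init_task_work(&w->twork, my_deferred_cb);

	/* TWA_RESUME: run the callback when 'current' resumes user space. */
	return task_work_add(current, &w->twork, TWA_RESUME);
}

The GHES patch allocates its struct from ghes_estatus_pool rather than with kmalloc(), presumably because its error paths cannot rely on the regular allocator; otherwise the shape is the same.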
On Mon, Dec 18, 2023 at 02:45:18PM +0800, Shuai Xue wrote:
There are two major types of uncorrected recoverable (UCR) errors:
Synchronous error: The error is detected and raised at the point of consumption in the execution flow, e.g. when a CPU tries to access a poisoned cache line. The CPU takes a synchronous error exception such as a Synchronous External Abort (SEA) on arm64 or a Machine Check Exception (MCE) on x86. The OS is required to take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
Asynchronous error: The error is detected outside the processor's execution context, e.g. when an error is detected by a background scrubber. Some data in memory are corrupted, but the data have not been consumed. The OS may optionally take action to recover from this uncorrectable error.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification) or for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, synchronous errors can be distinguished by the APEI notification type. For synchronous errors, the kernel kills the current process that is accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For asynchronous errors, the kernel notifies the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0, so all synchronous errors are handled as asynchronous errors by memory_failure().
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: James Morse james.morse@arm.com
drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++++------ 1 file changed, 23 insertions(+), 6 deletions(-)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
On Mon, Dec 18, 2023 at 02:45:19PM +0800, Shuai Xue wrote:
A synchronous error is detected as a result of a user-space process accessing a 2-bit uncorrected error. The CPU will take a synchronous error exception such as a Synchronous External Abort (SEA) on arm64. The kernel will queue memory_failure() work, which poisons the related page, unmaps it, and then sends a SIGBUS to the process, so that a system-wide panic can be avoided.
However, no memory_failure() work will be queued when abnormal synchronous errors occur, for example an invalid physical address, an unexpected severity, memory-failure handling not configured, or an invalid GUID section. In such cases, the user-space process will trigger the SEA again, and this loop can exceed the platform firmware threshold or even trigger a kernel hard lockup, leading to a system reboot.
Fix it by performing a force kill if no memory_failure() work is queued for synchronous errors.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
drivers/acpi/apei/ghes.c | 9 +++++++++ 1 file changed, 9 insertions(+)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
On Mon, Dec 18, 2023 at 02:45:20PM +0800, Shuai Xue wrote:
Part of the return value documentation for memory_failure() originally lived at a call site. Move those comments to the function's kernel-doc comment to improve code readability and to give developers immediate access to usage and return-value information.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com
arch/x86/kernel/cpu/mce/core.c | 9 +-------- mm/memory-failure.c | 9 ++++++--- 2 files changed, 7 insertions(+), 11 deletions(-)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
On Mon, Dec 18, 2023 at 02:45:21PM +0800, Shuai Xue wrote:
Hardware errors can be signaled by an asynchronous interrupt, e.g. when an error is detected by a background scrubber, or by a synchronous exception, e.g. when a CPU tries to access a poisoned cache line. Both synchronous and asynchronous errors are queued as memory_failure() work and handled by a dedicated kthread in a workqueue.
However, memory failure recovery sends SIGBUS with the wrong BUS_MCEERR_AO si_code for synchronous errors in early kill mode, even when MF_ACTION_REQUIRED is set. The root cause is that the memory failure work is handled in kthread context rather than in the user-space process that is accessing the corrupted memory location, so kill_proc() sends SIGBUS with the BUS_MCEERR_AO si_code to the user-space process instead of BUS_MCEERR_AR.
To this end, queue memory_failure() as task_work so that the current context in memory_failure() belongs to the process consuming the poisoned data, and SIGBUS is sent with the proper si_code.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com
drivers/acpi/apei/ghes.c | 77 +++++++++++++++++++++++----------------- include/acpi/ghes.h | 3 -- mm/memory-failure.c | 13 ------- 3 files changed, 44 insertions(+), 49 deletions(-)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
On Mon, Dec 18, 2023 at 7:45 AM Shuai Xue xueshuai@linux.alibaba.com wrote:
There are two major types of uncorrected recoverable (UCR) errors:
Synchronous error: The error is detected and raised at the point of consumption in the execution flow, e.g. when a CPU tries to access a poisoned cache line. The CPU takes a synchronous error exception such as a Synchronous External Abort (SEA) on arm64 or a Machine Check Exception (MCE) on x86. The OS is required to take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
Asynchronous error: The error is detected outside the processor's execution context, e.g. when an error is detected by a background scrubber. Some data in memory are corrupted, but the data have not been consumed. The OS may optionally take action to recover from this uncorrectable error.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification) or for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, synchronous errors can be distinguished by the APEI notification type. For synchronous errors, the kernel kills the current process that is accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For asynchronous errors, the kernel notifies the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0, so all synchronous errors are handled as asynchronous errors by memory_failure().
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: James Morse james.morse@arm.com
Applied as 6.8 material.
The other patches in the series still need to receive tags from the APEI designated reviewers (as per MAINTAINERS).
Thanks!
drivers/acpi/apei/ghes.c | 29 +++++++++++++++++++++++------ 1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c index 63ad0541db38..ab2a82cb1b0b 100644 --- a/drivers/acpi/apei/ghes.c +++ b/drivers/acpi/apei/ghes.c @@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes) return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2; }
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
/*
 * This driver isn't really modular, however for the time being,
 * continuing to use module_param is the easiest way to remain
@@ -489,7 +503,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags) }
static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, bool sync)
{ int flags = -1; int sec_sev = ghes_severity(gdata->error_severity); @@ -503,7 +517,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED)) flags = MF_SOFT_OFFLINE; if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = sync ? MF_ACTION_REQUIRED : 0;
if (flags != -1) return ghes_do_memory_failure(mem_err->physical_addr, flags);
@@ -511,9 +525,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, return false; }
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
+static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
+				     int sev, bool sync)
{ struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+	int flags = sync ? MF_ACTION_REQUIRED : 0;
bool queued = false; int sec_sev, i; char *p;
@@ -538,7 +554,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s * and don't filter out 'corrected' error here. */ if (is_cache && has_pa) {
-		queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
+		queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
p += err_info->length; continue; }
@@ -666,6 +682,7 @@ static bool ghes_do_proc(struct ghes *ghes, const guid_t *fru_id = &guid_null; char *fru_text = ""; bool queued = false;
+	bool sync = is_hest_sync_notify(ghes);
sev = ghes_severity(estatus->error_severity); apei_estatus_for_each_section(estatus, gdata) {
@@ -683,13 +700,13 @@ static bool ghes_do_proc(struct ghes *ghes, atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err); arch_apei_report_mem_error(sev, mem_err);
-			queued = ghes_handle_memory_failure(gdata, sev);
+			queued = ghes_handle_memory_failure(gdata, sev, sync);
} else if (guid_equal(sec_type, &CPER_SEC_PCIE)) { ghes_handle_aer(gdata); } else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-			queued = ghes_handle_arm_hw_error(gdata, sev);
+			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
} else { void *err = acpi_hest_get_payload(gdata);
-- 2.39.3
On 2023/12/21 21:55, Rafael J. Wysocki wrote:
On Mon, Dec 18, 2023 at 7:45 AM Shuai Xue xueshuai@linux.alibaba.com wrote:
There are two major types of uncorrected recoverable (UCR) errors:
Synchronous error: The error is detected and raised at the point of consumption in the execution flow, e.g. when a CPU tries to access a poisoned cache line. The CPU takes a synchronous error exception such as a Synchronous External Abort (SEA) on arm64 or a Machine Check Exception (MCE) on x86. The OS is required to take action (for example, offline the failing page or kill the failing thread) to recover from this uncorrectable error.
Asynchronous error: The error is detected outside the processor's execution context, e.g. when an error is detected by a background scrubber. Some data in memory are corrupted, but the data have not been consumed. The OS may optionally take action to recover from this uncorrectable error.
When APEI firmware-first is enabled, a platform may describe one error source for handling synchronous errors (e.g. MCE or SEA notification) or for handling asynchronous errors (e.g. SCI or External Interrupt notification). In other words, synchronous errors can be distinguished by the APEI notification type. For synchronous errors, the kernel kills the current process that is accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. For asynchronous errors, the kernel notifies the process that owns the poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode. However, the GHES driver always sets mf_flags to 0, so all synchronous errors are handled as asynchronous errors by memory_failure().
To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous events.
Signed-off-by: Shuai Xue xueshuai@linux.alibaba.com Tested-by: Ma Wupeng mawupeng1@huawei.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Reviewed-by: Xiaofei Tan tanxiaofei@huawei.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: James Morse james.morse@arm.com
Applied as 6.8 material.
The other patches in the series still need to receive tags from the APEI designated reviewers (as per MAINTAINERS).
Thanks!
Thank you :)
I will wait more feedback of other patches from MAINTAINERS.
Cheers, Shuai
linux-stable-mirror@lists.linaro.org