On Thu, 21 Apr 2022 15:57:32 PDT (-0700), Palmer Dabbelt wrote:
On Wed, 06 Apr 2022 07:16:49 PDT (-0700), guoren@kernel.org wrote:
From: Guo Ren <guoren@linux.alibaba.com>
These patch_text implementations use the stop_machine_cpuslocked infrastructure with an atomic cpu_count. The original idea: while the master CPU runs patch_text, the other CPUs should be waiting for it. But the current implementation picks the first CPU to enter the callback as the master, which cannot guarantee that the remaining CPUs are already waiting. This patch makes the last CPU the master instead, closing that window.
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: stable@vger.kernel.org
---
 arch/riscv/kernel/patch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 0b552873a577..765004b60513 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
 	struct patch_insn *patch = data;
 	int ret = 0;
 
-	if (atomic_inc_return(&patch->cpu_count) == 1) {
+	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
 		ret = patch_text_nosync(patch->addr, &patch->insn,
 					GET_INSN_LENGTH(patch->insn));
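For anyone who wants to see the pattern in isolation, here is a standalone userspace sketch (not kernel code) of the scheme after this change: the thread whose atomic increment reaches the participant count acts as the master, so by construction it only starts "patching" once every other participant is already spinning in the wait loop. All names below (do_patch, patch_cb, NR_CPUS, patch_done) are illustrative stand-ins, not identifiers from arch/riscv/kernel/patch.c.

/*
 * Userspace model of the "last CPU is the master" synchronization used by
 * patch_text_cb(). Threads stand in for CPUs under stop_machine.
 * Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_int cpu_count;
static atomic_int patch_done;

static void do_patch(void)
{
	/* Stand-in for patch_text_nosync(); safe to run here because every
	 * other participant is guaranteed to be spinning in the wait loop. */
	printf("master: patching text\n");
}

static void *patch_cb(void *arg)
{
	long id = (long)arg;

	if (atomic_fetch_add(&cpu_count, 1) + 1 == NR_CPUS) {
		/* Last arrival becomes the master: all others are waiting. */
		do_patch();
		atomic_store(&patch_done, 1);
	} else {
		/* Everyone else spins until the master signals completion. */
		while (!atomic_load(&patch_done))
			;
	}

	printf("cpu %ld: done\n", id);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_CPUS];

	for (long i = 0; i < NR_CPUS; i++)
		pthread_create(&threads[i], NULL, patch_cb, (void *)i);
	for (int i = 0; i < NR_CPUS; i++)
		pthread_join(threads[i], NULL);

	return 0;
}

With the old "first arrival is the master" rule, the thread that increments the counter to 1 could start patching while the other threads have not yet entered the callback at all; making the last arrival the master removes that possibility.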
Thanks, this is on fixes.
Sorry, I forgot to add the Fixes and stable tags. I just fixed that up, but I'm going to hold off on this one until next week's PR to make sure it has time to go through linux-next.