From: Will Deacon <will.deacon@arm.com>
When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register during exception entry from native tasks and subsequently zero it in the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0 in the context-switch path for native tasks using the entry trampoline.
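For clarity, here is a sketch of tls_thread_switch() as it reads after this patch (mirroring the hunk below); the comments are explanatory only and are not part of the applied diff:

	static void tls_thread_switch(struct task_struct *next)
	{
		unsigned long tpidr;

		/* Preserve the outgoing task's TLS pointer. */
		tpidr = read_sysreg(tpidr_el0);
		*task_user_tls(current) = tpidr;

		if (is_compat_thread(task_thread_info(next)))
			/* Compat tasks expose their TLS pointer via tpidrro_el0. */
			write_sysreg(next->thread.tp_value, tpidrro_el0);
		else if (!arm64_kernel_unmapped_at_el0())
			/*
			 * Native tasks only need tpidrro_el0 zeroed here when the
			 * entry trampoline is not in use; otherwise kernel_ventry
			 * zeroes it on every exception entry after using it as a
			 * scratch register.
			 */
			write_sysreg(0, tpidrro_el0);

		/* Install the incoming task's TLS pointer. */
		write_sysreg(*task_user_tls(next), tpidr_el0);
	}

The compat path is unchanged: 32-bit tasks still have their TLS value written to tpidrro_el0 on every switch.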
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 18011eac28c7cb31c87b86b7d0e5b01894405c7f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
	tls_preserve_current_state() does not exist in this tree, so its
	logic is kept folded into tls_thread_switch() in
	arch/arm64/kernel/process.c
---
 arch/arm64/kernel/process.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0e73949..0972ce5 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -306,17 +306,17 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 
 static void tls_thread_switch(struct task_struct *next)
 {
-	unsigned long tpidr, tpidrro;
+	unsigned long tpidr;
 
 	tpidr = read_sysreg(tpidr_el0);
 	*task_user_tls(current) = tpidr;
 
-	tpidr = *task_user_tls(next);
-	tpidrro = is_compat_thread(task_thread_info(next)) ?
-		  next->thread.tp_value : 0;
+	if (is_compat_thread(task_thread_info(next)))
+		write_sysreg(next->thread.tp_value, tpidrro_el0);
+	else if (!arm64_kernel_unmapped_at_el0())
+		write_sysreg(0, tpidrro_el0);
 
-	write_sysreg(tpidr, tpidr_el0);
-	write_sysreg(tpidrro, tpidrro_el0);
+	write_sysreg(*task_user_tls(next), tpidr_el0);
 }
 
 /* Restore the UAO state depending on next's addr_limit */