Implement and enable context tracking for arm64 (which is a prerequisite for NO_HZ_FULL support). This patchset builds upon earlier work by Kevin Hilman and is based on Will Deacon's tree.
Changes v3 to v4:
* Rename parameter of ct_user_exit from save to restore
* Rebased patch to Will Deacon's tree (branch remotes/origin/aarch64 of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git)
Changes v2 to v3:
* Save/restore necessary registers in ct_user_enter and ct_user_exit
* Annotate "error paths" out of el0_sync with ct_user_exit
Changes v1 to v2:
* Save far_el1 in x26 temporarily
Larry Bassel (2):
  arm64: adjust el0_sync so that a function can be called
  arm64: enable context tracking
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/thread_info.h |  1 +
 arch/arm64/kernel/entry.S            | 72 ++++++++++++++++++++++++++++++++----
 3 files changed, 67 insertions(+), 7 deletions(-)
To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place.
For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).
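To see why the ordering matters, a minimal sketch (illustrative only, not from the patch) of what would go wrong if lr were still set up before the call:

	adr	lr, ret_to_user			// lr points at ret_to_user
	bl	context_tracking_user_exit	// bl overwrites lr with its own return address
	b	do_mem_abort			// do_mem_abort's final ret now jumps back to
						// this b instruction instead of ret_to_user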
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
---
 arch/arm64/kernel/entry.S | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e8b23a3..20b336e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -354,7 +354,6 @@ el0_sync:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
 	b.eq	el0_svc
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -383,7 +382,6 @@ el0_sync_compat:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
 	b.eq	el0_svc_compat
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -426,22 +424,26 @@ el0_da:
 	/*
 	 * Data abort handling
 	 */
-	mrs	x0, far_el1
-	bic	x0, x0, #(0xff << 56)
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
+	bic	x0, x0, #(0xff << 56)
 	mov	x1, x25
 	mov	x2, sp
+	adr	lr, ret_to_user
 	b	do_mem_abort
 el0_ia:
 	/*
 	 * Instruction abort handling
 	 */
-	mrs	x0, far_el1
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
 	orr	x1, x25, #1 << 24		// use reserved ISS bit for instruction aborts
 	mov	x2, sp
+	adr	lr, ret_to_user
 	b	do_mem_abort
 el0_fpsimd_acc:
 	/*
@@ -450,6 +452,7 @@ el0_fpsimd_acc:
 	enable_dbg
 	mov	x0, x25
 	mov	x1, sp
+	adr	lr, ret_to_user
 	b	do_fpsimd_acc
 el0_fpsimd_exc:
 	/*
@@ -458,16 +461,19 @@ el0_fpsimd_exc:
 	enable_dbg
 	mov	x0, x25
 	mov	x1, sp
+	adr	lr, ret_to_user
 	b	do_fpsimd_exc
 el0_sp_pc:
 	/*
 	 * Stack or PC alignment exception handling
 	 */
-	mrs	x0, far_el1
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
 	mov	x1, x25
 	mov	x2, sp
+	adr	lr, ret_to_user
 	b	do_sp_pc_abort
 el0_undef:
 	/*
@@ -476,23 +482,27 @@ el0_undef:
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
 	mov	x0, sp
+	adr	lr, ret_to_user
 	b	do_undefinstr
 el0_dbg:
 	/*
 	 * Debug exception handling
 	 */
 	tbnz	x24, #0, el0_inv		// EL0 only
-	mrs	x0, far_el1
+	mrs	x26, far_el1
+	mov	x0, x26
 	mov	x1, x25
 	mov	x2, sp
 	bl	do_debug_exception
 	enable_dbg
+	mov	x0, x26
 	b	ret_to_user
 el0_inv:
 	enable_dbg
 	mov	x0, sp
 	mov	x1, #BAD_SYNC
 	mrs	x2, esr_el1
+	adr	lr, ret_to_user
 	b	bad_mode
 ENDPROC(el0_sync)
Hi Larry,
On 05/22/2014 03:27 PM, Larry Bassel wrote:
To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place.
For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>

 arch/arm64/kernel/entry.S | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e8b23a3..20b336e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -354,7 +354,6 @@ el0_sync:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
 	b.eq	el0_svc
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -383,7 +382,6 @@ el0_sync_compat:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
 	b.eq	el0_svc_compat
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -426,22 +424,26 @@ el0_da:
 	/*
 	 * Data abort handling
 	 */
-	mrs	x0, far_el1
-	bic	x0, x0, #(0xff << 56)
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
+	bic	x0, x0, #(0xff << 56)
Nit: I believe you can bit clear with x26 as the source register and omit the move instruction.
Regards, Christopher
On 22 May 14 16:23, Christopher Covington wrote:
Hi Larry,
On 05/22/2014 03:27 PM, Larry Bassel wrote:
To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place.
For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>

 arch/arm64/kernel/entry.S | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e8b23a3..20b336e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -354,7 +354,6 @@ el0_sync:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
 	b.eq	el0_svc
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -383,7 +382,6 @@ el0_sync_compat:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
 	b.eq	el0_svc_compat
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -426,22 +424,26 @@ el0_da:
 	/*
 	 * Data abort handling
 	 */
-	mrs	x0, far_el1
-	bic	x0, x0, #(0xff << 56)
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
+	bic	x0, x0, #(0xff << 56)
Nit: I believe you can bit clear with x26 as the source register and omit the move instruction.
Is that really an improvement (assuming it works)? Are we saving any cycles here? If so, does it matter? It is easy to see what the move instruction is doing.
Regards, Christopher
-- Employee of Qualcomm Innovation Center, Inc. Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by the Linux Foundation.
Larry
On Thu, May 22, 2014 at 11:35:20PM +0100, Larry Bassel wrote:
On 05/22/2014 03:27 PM, Larry Bassel wrote:
To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place.
For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>

 arch/arm64/kernel/entry.S | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e8b23a3..20b336e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -354,7 +354,6 @@ el0_sync:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
 	b.eq	el0_svc
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -383,7 +382,6 @@ el0_sync_compat:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
 	b.eq	el0_svc_compat
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -426,22 +424,26 @@ el0_da:
 	/*
 	 * Data abort handling
 	 */
-	mrs	x0, far_el1
-	bic	x0, x0, #(0xff << 56)
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
+	bic	x0, x0, #(0xff << 56)
Nit: I believe you can bit clear with x26 as the source register and omit the move instruction.
Is that really an improvement (assuming it works)? Are we saving any cycles here? If so, does it matter? It is easy to see what the move instruction is doing.
Even if it's not noticeable, I would still reduce the number of lines by one. BIC with immediate is just an alias for AND and it supports different source and destination.
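For illustration, the collapsed form in el0_da would look like this (the inverted mask ~(0xff << 56) is a contiguous run of ones, so it should encode as a valid AND bitmask immediate):

	mrs	x26, far_el1
	// enable interrupts before calling the main handler
	enable_dbg_and_irq
	bic	x0, x26, #(0xff << 56)	// source x26, destination x0: no separate mov needed
	mov	x1, x25
	mov	x2, sp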
On 23 May 14 15:44, Catalin Marinas wrote:
On Thu, May 22, 2014 at 11:35:20PM +0100, Larry Bassel wrote:
On 05/22/2014 03:27 PM, Larry Bassel wrote:
To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place.
For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>

 arch/arm64/kernel/entry.S | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e8b23a3..20b336e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -354,7 +354,6 @@ el0_sync:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
 	b.eq	el0_svc
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -383,7 +382,6 @@ el0_sync_compat:
 	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
 	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
 	b.eq	el0_svc_compat
-	adr	lr, ret_to_user
 	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
 	b.eq	el0_da
 	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
@@ -426,22 +424,26 @@ el0_da:
 	/*
 	 * Data abort handling
 	 */
-	mrs	x0, far_el1
-	bic	x0, x0, #(0xff << 56)
+	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	mov	x0, x26
+	bic	x0, x0, #(0xff << 56)
Nit: I believe you can bit clear with x26 as the source register and omit the move instruction.
Is that really an improvement (assuming it works)? Are we saving any cycles here? If so, does it matter? It is easy to see what the move instruction is doing.
Even if it's not noticeable, I would still reduce the number of lines by one. BIC with immediate is just an alias for AND and it supports different source and destination.
Ack.
-- Catalin
Larry
Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the kernel_exit macro.
This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.
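Pieced together from this patch and the previous one, a typical EL0 entry point (el0_da) ends up shaped like this:

el0_da:
	mrs	x26, far_el1			// stash FAR while x0 is still free
	// enable interrupts before calling the main handler
	enable_dbg_and_irq
	ct_user_exit				// function call is safe: lr is not yet set
	mov	x0, x26
	bic	x0, x0, #(0xff << 56)
	mov	x1, x25
	mov	x2, sp
	adr	lr, ret_to_user			// set the return path only after all bl calls
	b	do_mem_abort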
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/thread_info.h |  1 +
 arch/arm64/kernel/entry.S            | 48 ++++++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e759af5..ef18ae5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -55,6 +55,7 @@ config ARM64
 	select RTC_LIB
 	select SPARSE_IRQ
 	select SYSCTL_EXCEPTION_TRACE
+	select HAVE_CONTEXT_TRACKING
 	help
 	  ARM 64-bit (AArch64) Linux support.

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 720e70b..301ea6a 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -108,6 +108,7 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_SINGLESTEP		21
 #define TIF_32BIT		22	/* 32bit process */
 #define TIF_SWITCH_MM		23	/* deferred switch_mm */
+#define TIF_NOHZ		24

 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 20b336e..520da4c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>

 /*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+	.macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	bl	context_tracking_user_exit
+	.if \restore == 1
+	/*
+	 * Save/restore needed during syscalls. Restore syscall arguments from
+	 * the values already saved on stack during kernel_entry.
+	 */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	.endif
+#endif
+	.endm
+
+	.macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	.if \save == 1
+	/*
+	 * Save/restore only needed on syscall fastpath, which uses
+	 * x0-x2.
+	 */
+	push	x2, x3
+	push	x0, x1
+	.endif
+	bl	context_tracking_user_enter
+	.if \save == 1
+	pop	x0, x1
+	pop	x2, x3
+	.endif
+#endif
+	.endm
+
+/*
  * Bad Abort numbers
  *-----------------
  */
@@ -91,6 +129,7 @@
 	.macro	kernel_exit, el, ret = 0
 	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
 	.if	\el == 0
+	ct_user_enter \ret
 	ldr	x23, [sp, #S_SP]		// load return stack pointer
 	.endif
 	.if	\ret
@@ -318,6 +357,7 @@ el1_irq:
 	bl	trace_hardirqs_off
 #endif

+	ct_user_exit
 	irq_handler

 #ifdef CONFIG_PREEMPT
@@ -427,6 +467,7 @@ el0_da:
 	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	ct_user_exit
 	mov	x0, x26
 	bic	x0, x0, #(0xff << 56)
 	mov	x1, x25
@@ -440,6 +481,7 @@ el0_ia:
 	mrs	x26, far_el1
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	ct_user_exit
 	mov	x0, x26
 	orr	x1, x25, #1 << 24		// use reserved ISS bit for instruction aborts
 	mov	x2, sp
@@ -450,6 +492,7 @@ el0_fpsimd_acc:
 	 * Floating Point or Advanced SIMD access
 	 */
 	enable_dbg
+	ct_user_exit
 	mov	x0, x25
 	mov	x1, sp
 	adr	lr, ret_to_user
@@ -459,6 +502,7 @@ el0_fpsimd_exc:
 	 * Floating Point or Advanced SIMD exception
 	 */
 	enable_dbg
+	ct_user_exit
 	mov	x0, x25
 	mov	x1, sp
 	adr	lr, ret_to_user
@@ -481,6 +525,7 @@ el0_undef:
 	 */
 	// enable interrupts before calling the main handler
 	enable_dbg_and_irq
+	ct_user_exit
 	mov	x0, sp
 	adr	lr, ret_to_user
 	b	do_undefinstr
@@ -495,10 +540,12 @@ el0_dbg:
 	mov	x2, sp
 	bl	do_debug_exception
 	enable_dbg
+	ct_user_exit
 	mov	x0, x26
 	b	ret_to_user
 el0_inv:
 	enable_dbg
+	ct_user_exit
 	mov	x0, sp
 	mov	x1, #BAD_SYNC
 	mrs	x2, esr_el1
@@ -619,6 +666,7 @@ el0_svc:
 el0_svc_naked:					// compat entry point
 	stp	x0, scno, [sp, #S_ORIG_X0]	// save the original x0 and syscall number
 	enable_dbg_and_irq
+	ct_user_exit 1

 	ldr	x16, [tsk, #TI_FLAGS]		// check for syscall tracing
 	tbnz	x16, #TIF_SYSCALL_TRACE, __sys_trace	// are we tracing syscalls?
Hi Larry,
On 05/22/2014 03:27 PM, Larry Bassel wrote:
Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the kernel_exit macro.
This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>

 /*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+	.macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	bl	context_tracking_user_exit
+	.if \restore == 1
+	/*
+	 * Save/restore needed during syscalls. Restore syscall arguments from
+	 * the values already saved on stack during kernel_entry.
+	 */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	.endif
+#endif
+	.endm
+
+	.macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	.if \save == 1
+	/*
+	 * Save/restore only needed on syscall fastpath, which uses
+	 * x0-x2.
+	 */
+	push	x2, x3
Why is x3 saved?
+	push	x0, x1
+	.endif
+	bl	context_tracking_user_enter
+	.if \save == 1
+	pop	x0, x1
+	pop	x2, x3
+	.endif
+#endif
+	.endm
Thanks, Christopher
+Mark Rutland
Christopher Covington cov@codeaurora.org writes:
Hi Larry,
On 05/22/2014 03:27 PM, Larry Bassel wrote:
Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the kernel_exit macro.
This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>

 /*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+	.macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	bl	context_tracking_user_exit
+	.if \restore == 1
+	/*
+	 * Save/restore needed during syscalls. Restore syscall arguments from
+	 * the values already saved on stack during kernel_entry.
+	 */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	.endif
+#endif
+	.endm
+
+	.macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	.if \save == 1
+	/*
+	 * Save/restore only needed on syscall fastpath, which uses
+	 * x0-x2.
+	 */
+	push	x2, x3
Why is x3 saved?
I'll respond here since I worked with Larry on the context save/restore part.
[insert rather embarrassing disclaimer of ignorance of arm64 assembly]
Based on my reading of the code, I figured only x0-x2 needed to be saved. However, based on some experiments with intentionally clobbering the registers[1] (as suggested by Mark Rutland) in order to make sure we're saving/restoring the right things, I discovered x3 was needed too (I missed updating the comment to mention x0-x3.)
Maybe Will/Catalin/Mark R. can shed some light here?
Kevin
[1]
From 8a8702b4d597d08def22221368beae5db2f4a8aa Mon Sep 17 00:00:00 2001
From: Kevin Hilman <khilman@linaro.org>
Date: Fri, 9 May 2014 13:37:43 -0700
Subject: [PATCH] KJH: test: clobber regs
---
 arch/arm64/kernel/entry.S | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 520da4c02ece..232f0200e88d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -36,6 +36,25 @@
 	.macro ct_user_exit, restore = 0
 #ifdef CONFIG_CONTEXT_TRACKING
 	bl	context_tracking_user_exit
+	movz	x0, #0xff, lsl #48
+	movz	x1, #0xff, lsl #48
+	movz	x2, #0xff, lsl #48
+	movz	x3, #0xff, lsl #48
+	movz	x4, #0xff, lsl #48
+	movz	x5, #0xff, lsl #48
+	movz	x6, #0xff, lsl #48
+	movz	x7, #0xff, lsl #48
+	movz	x8, #0xff, lsl #48
+	movz	x9, #0xff, lsl #48
+	movz	x10, #0xff, lsl #48
+	movz	x11, #0xff, lsl #48
+	movz	x12, #0xff, lsl #48
+	movz	x13, #0xff, lsl #48
+	movz	x14, #0xff, lsl #48
+	movz	x15, #0xff, lsl #48
+	movz	x16, #0xff, lsl #48
+	movz	x17, #0xff, lsl #48
+	movz	x18, #0xff, lsl #48
 	.if \restore == 1
 	/*
 	 * Save/restore needed during syscalls. Restore syscall arguments from
@@ -60,6 +79,25 @@
 	push	x0, x1
 	.endif
 	bl	context_tracking_user_enter
+	movz	x0, #0xff, lsl #48
+	movz	x1, #0xff, lsl #48
+	movz	x2, #0xff, lsl #48
+	movz	x3, #0xff, lsl #48
+	movz	x4, #0xff, lsl #48
+	movz	x5, #0xff, lsl #48
+	movz	x6, #0xff, lsl #48
+	movz	x7, #0xff, lsl #48
+	movz	x8, #0xff, lsl #48
+	movz	x9, #0xff, lsl #48
+	movz	x10, #0xff, lsl #48
+	movz	x11, #0xff, lsl #48
+	movz	x12, #0xff, lsl #48
+	movz	x13, #0xff, lsl #48
+	movz	x14, #0xff, lsl #48
+	movz	x15, #0xff, lsl #48
+	movz	x16, #0xff, lsl #48
+	movz	x17, #0xff, lsl #48
+	movz	x18, #0xff, lsl #48
 	.if \save == 1
 	pop	x0, x1
 	pop	x2, x3
On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
Christopher Covington cov@codeaurora.org writes:
On 05/22/2014 03:27 PM, Larry Bassel wrote:
Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the kernel_exit macro.
This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>

 /*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+	.macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	bl	context_tracking_user_exit
+	.if \restore == 1
+	/*
+	 * Save/restore needed during syscalls. Restore syscall arguments from
+	 * the values already saved on stack during kernel_entry.
+	 */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	.endif
+#endif
+	.endm
+
+	.macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	.if \save == 1
+	/*
+	 * Save/restore only needed on syscall fastpath, which uses
+	 * x0-x2.
+	 */
+	push	x2, x3
Why is x3 saved?
I'll respond here since I worked with Larry on the context save/restore part.
[insert rather embarrassing disclaimer of ignorance of arm64 assembly]
Based on my reading of the code, I figured only x0-x2 needed to be saved. However, based on some experiments with intentionally clobbering the registers[1] (as suggested by Mark Rutland) in order to make sure we're saving/restoring the right things, I discovered x3 was needed too (I missed updating the comment to mention x0-x3.)
Maybe Will/Catalin/Mark R. can shed some light here?
I haven't checked all the code paths but at least for pushing onto the stack we must keep it 16-byte aligned (architecture requirement).
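For reference, the entry.S push/pop helpers (arch/arm64/include/asm/assembler.h at the time) expand to stp/ldp on a 16-byte stack slot, which is what forces registers to travel in pairs; roughly:

	// "push x0, x1" becomes:
	stp	x0, x1, [sp, #-16]!	// pre-index write-back keeps sp 16-byte aligned
	// "pop x0, x1" becomes:
	ldp	x0, x1, [sp], #16	// post-index write-back releases the 16-byte slot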
On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
Christopher Covington cov@codeaurora.org writes:
On 05/22/2014 03:27 PM, Larry Bassel wrote:
Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the kernel_exit macro.
This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>

 /*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+	.macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	bl	context_tracking_user_exit
+	.if \restore == 1
+	/*
+	 * Save/restore needed during syscalls. Restore syscall arguments from
+	 * the values already saved on stack during kernel_entry.
+	 */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	.endif
+#endif
+	.endm
+
+	.macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	.if \save == 1
+	/*
+	 * Save/restore only needed on syscall fastpath, which uses
+	 * x0-x2.
+	 */
+	push	x2, x3
Why is x3 saved?
I'll respond here since I worked with Larry on the context save/restore part.
[insert rather embarrassing disclaimer of ignorance of arm64 assembly]
Based on my reading of the code, I figured only x0-x2 needed to be saved. However, based on some experiments with intentionally clobbering the registers[1] (as suggested by Mark Rutland) in order to make sure we're saving/restoring the right things, I discovered x3 was needed too (I missed updating the comment to mention x0-x3.)
Maybe Will/Catalin/Mark R. can shed some light here?
I haven't checked all the code paths but at least for pushing onto the stack we must keep it 16-byte aligned (architecture requirement).
Sure -- if modifying the stack we need to push/pop pairs of registers to keep it aligned. It might be better to use xzr as the dummy value in that case to make it clear that the value doesn't really matter.
That said, ct_user_enter is only called in kernel_exit before we restore the values off the stack, and the only register I can spot that we need to preserve is x0 for the syscall return value. I can't see x1 or x2 being used any more specially than the rest of the remaining registers. Am I missing something, or would it be sufficient to do the following?
	push	x0, xzr
	bl	context_tracking_user_enter
	pop	x0, xzr
Cheers, Mark.
On Fri, May 23, 2014 at 04:55:44PM +0100, Mark Rutland wrote:
On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
I haven't checked all the code paths but at least for pushing onto the stack we must keep it 16-byte aligned (architecture requirement).
Sure -- if modifying the stack we need to push/pop pairs of registers to keep it aligned. It might be better to use xzr as the dummy value in that case to make it clear that the value doesn't really matter.
That said, ct_user_enter is only called in kernel_exit before we restore the values off the stack, and the only register I can spot that we need to preserve is x0 for the syscall return value. I can't see x1 or x2 being used any more specially than the rest of the remaining registers. Am I missing something, or would it be sufficient to do the following?
	push	x0, xzr
	bl	context_tracking_user_enter
	pop	x0, xzr
... and if that works, then why are we using the stack instead of a callee-saved register?
Will
Mark Rutland mark.rutland@arm.com writes:
On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
Christopher Covington cov@codeaurora.org writes:
On 05/22/2014 03:27 PM, Larry Bassel wrote:
Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the kernel_exit macro.
This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>

 /*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+	.macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	bl	context_tracking_user_exit
+	.if \restore == 1
+	/*
+	 * Save/restore needed during syscalls. Restore syscall arguments from
+	 * the values already saved on stack during kernel_entry.
+	 */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	.endif
+#endif
+	.endm
+
+	.macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+	.if \save == 1
+	/*
+	 * Save/restore only needed on syscall fastpath, which uses
+	 * x0-x2.
+	 */
+	push	x2, x3
Why is x3 saved?
I'll respond here since I worked with Larry on the context save/restore part.
[insert rather embarrassing disclaimer of ignorance of arm64 assembly]
Based on my reading of the code, I figured only x0-x2 needed to be saved. However, based on some experiments with intentionally clobbering the registers[1] (as suggested by Mark Rutland) in order to make sure we're saving/restoring the right things, I discovered x3 was needed too (I missed updating the comment to mention x0-x3.)
Maybe Will/Catalin/Mark R. can shed some light here?
I haven't checked all the code paths but at least for pushing onto the stack we must keep it 16-byte aligned (architecture requirement).
Sure -- if modifying the stack we need to push/pop pairs of registers to keep it aligned. It might be better to use xzr as the dummy value in that case to make it clear that the value doesn't really matter.
That said, ct_user_enter is only called in kernel_exit before we restore the values off the stack, and the only register I can spot that we need to preserve is x0 for the syscall return value. I can't see x1 or x2 being used any more specially than the rest of the remaining registers. Am I missing something,
I don't think you're missing something. I had thought my experiment in clobbering registers uncovered that x1-x3 were also in use somewhere, but in trying to reproduce that now, it's clear only x0 is important.
or would it be sufficient to do the following?

	push	x0, xzr
	bl	context_tracking_user_enter
	pop	x0, xzr
Yes, this seems to work.
Following Will's suggestion of using a callee-saved register to save x0, the updated version now looks like this:
	.macro ct_user_enter, save = 0
#ifdef CONFIG_CONTEXT_TRACKING
	.if \save == 1
	/*
	 * We only have to save/restore x0 on the fast syscall path where
	 * x0 contains the syscall return.
	 */
	mov	x19, x0
	.endif
	bl	context_tracking_user_enter
	.if \save == 1
	mov	x0, x19
	.endif
#endif
	.endm
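(x19 works here because it is callee-saved under the AAPCS64 procedure-call standard, so the compiled context_tracking_user_enter has to preserve it across the call; and since ct_user_enter runs inside kernel_exit before the user's registers are reloaded from the stack frame, the temporary clobber of x19 is undone by that restore.)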
We'll update this as well as address the comments on PATCH 1/2 and send a v5.
Thanks guys for the review and guidance as I'm wandering a bit in the dark here in arm64 assembler land.
Cheers,
Kevin
Larry Bassel larry.bassel@linaro.org writes:
Implement and enable context tracking for arm64 (which is a prerequisite for NO_HZ_FULL support). This patchset builds upon earlier work by Kevin Hilman and is based on Will Deacon's tree.
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested this with NO_HZ_FULL on v3.15-rc5 merged with the aarch64 branch from Will's tree, on the Foundation model (with the DT modified so the arch timer doesn't enter C3_STOP mode).
Kevin