On Fri, Oct 16, 2020 at 01:06:36PM +0200, Peter Zijlstra wrote:
On Fri, Oct 09, 2020 at 12:42:53PM -0700, ira.weiny@intel.com wrote:
> > @@ -644,6 +663,8 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
> >
> >  	if ((tifp ^ tifn) & _TIF_SLD)
> >  		switch_to_sld(tifn);
> > +
> > +	pks_sched_in();
> >  }
> You seem to have lost the comment proposed here:
>
>   https://lkml.kernel.org/r/20200717083140.GW10769@hirez.programming.kicks-ass...
>
> It is useful and important information that the wrmsr normally doesn't
> happen.
Added back in here.
> > diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
> > index 3cf8f775f36d..30f65dd3d0c5 100644
> > --- a/arch/x86/mm/pkeys.c
> > +++ b/arch/x86/mm/pkeys.c
> > @@ -229,3 +229,31 @@ u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags)
> >  	return pk_reg;
> >  }
> > +
> > +DEFINE_PER_CPU(u32, pkrs_cache);
> > +
> > +/**
> > + * It should also be noted that the underlying WRMSR(MSR_IA32_PKRS) is not
> > + * serializing but still maintains ordering properties similar to WRPKRU.
> > + * The current SDM section on PKRS needs updating but should be the same as
> > + * that of WRPKRU.  So to quote from the WRPKRU text:
> > + *
> > + *	WRPKRU will never execute transiently. Memory accesses
> > + *	affected by PKRU register will not execute (even transiently)
> > + *	until all prior executions of WRPKRU have completed execution
> > + *	and updated the PKRU register.
> (whitespace damage; space followed by tabstop)

Fixed, thanks.
> > + */
> > +void write_pkrs(u32 new_pkrs)
> > +{
> > +	u32 *pkrs;
> > +
> > +	if (!static_cpu_has(X86_FEATURE_PKS))
> > +		return;
> > +
> > +	pkrs = get_cpu_ptr(&pkrs_cache);
> > +	if (*pkrs != new_pkrs) {
> > +		*pkrs = new_pkrs;
> > +		wrmsrl(MSR_IA32_PKRS, new_pkrs);
> > +	}
> > +	put_cpu_ptr(pkrs);
> > +}
> looks familiar that... :-)
Added you as a co-developer if that is ok?
Ira