On Sun, Sep 11, 2022 at 07:47:25AM +0200, Greg KH wrote:
On Thu, Sep 08, 2022 at 02:44:33PM +0200, Ben Hutchings wrote:
On Wed, 2022-09-07 at 23:09 -0700, Pawan Gupta wrote:
On Wed, Sep 07, 2022 at 02:23:58AM +0200, Ben Hutchings wrote:
- The added mitigation for PBRSB requires removing any RET
instructions executed between VM exit and the RSB filling (the required ordering is sketched below). In these older branches that hasn't been done, so the mitigation doesn't work.
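To illustrate the constraint, here is a sketch of the required ordering on the VM-exit path (the label is made up; FILL_RETURN_BUFFER, RSB_CLEAR_LOOPS and X86_FEATURE_RSB_VMEXIT are the upstream names):

	vmexit_work:
		/* The RSB may still hold guest-poisoned entries here,
		 * so no RET (and no unbalanced CALL/RET) may execute
		 * before the fill below.
		 */
		FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
		/* Only after the RSB has been refilled is a RET safe: */
		ret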
I checked 4.19 and 5.4, and I don't see any RET between VM exit and RSB filling. Could you please point me to a specific instance you are seeing?
Yes, you're right. The backported versions avoid this problem. They are quite different from the upstream commit, and I would have appreciated some explanation of this in their commit messages.
Ah, right. I will keep that in mind next time.
So, let's try again to move forward. I've attached a backport for 4.19 and 5.4 (only tested with the latter so far).
Why is the LFENCE in the single-entry-fill sequence okay on 32-bit kernels?
	#define __FILL_ONE_RETURN				\
		__FILL_RETURN_SLOT				\
		add $(BITS_PER_LONG/8), %_ASM_SP;		\
		lfence;
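For reference, the slot macro this builds on is (quoting upstream arch/x86/include/asm/nospec-branch.h from memory; the local label number may differ):

	#define __FILL_RETURN_SLOT			\
		ANNOTATE_INTRA_FUNCTION_CALL;		\
		call	772f;				\
		int3;					\
	772: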
This isn't exactly about whether the kernel is 32-bit vs 64-bit; it's about whether the code may run on a processor that lacks support for LFENCE (part of SSE2).
- SSE2 is architectural on x86_64, so 64-bit kernels can use LFENCE
unconditionally.
- PBRSB doesn't affect any of those old processors, so its mitigation
can use LFENCE unconditionally. (Those processors don't support VMX either.) A sketch of how the upstream fill macros handle this split follows below.
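Roughly, upstream splits the generic fill like this (quoted from memory, so treat whitespace and label numbers as approximate):

	#ifdef CONFIG_X86_64
	#define __FILL_RETURN_BUFFER(reg, nr)			\
		mov	$(nr/2), reg;				\
	771:							\
		__FILL_RETURN_SLOT				\
		__FILL_RETURN_SLOT				\
		add	$(BITS_PER_LONG/8) * 2, %_ASM_SP;	\
		dec	reg;					\
		jnz	771b;					\
		/* barrier for jnz misprediction */		\
		lfence;
	#else
	/*
	 * i386 can't rely on LFENCE being available, so it unrolls the
	 * fill instead of looping; with no conditional branch at the
	 * end, there is no misprediction to fence off.
	 */
	#define __FILL_RETURN_BUFFER(reg, nr)			\
		.rept nr;					\
		__FILL_RETURN_SLOT;				\
		.endr;						\
		add	$(BITS_PER_LONG/8) * nr, %_ASM_SP;
	#endif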
Ok, it seems that I need to take Ben's patch to resolve this. Pawan, if you object, please let us know.
I don't see any issue taking Ben's patch to resolve this.
The backport for 5.4 didn't apply cleanly on 4.19 and needed a minor change.
Attaching the patch for 4.19. It built fine with CONFIG_64BIT=n.
I don't see LFENCE in the i386 version of FILL_RETURN_BUFFER:
Dump of assembler code for function __switch_to_asm:
   0xc1d63e00 <+0>:	push   %ebp
   0xc1d63e01 <+1>:	push   %ebx
   0xc1d63e02 <+2>:	push   %edi
   0xc1d63e03 <+3>:	push   %esi
   0xc1d63e04 <+4>:	pushf
   0xc1d63e05 <+5>:	mov    %esp,0x69c(%eax)
   0xc1d63e0b <+11>:	mov    0x69c(%edx),%esp
   0xc1d63e11 <+17>:	mov    0x378(%edx),%ebx
   0xc1d63e17 <+23>:	mov    %ebx,%fs:0xc23b0e74
   0xc1d63e1e <+30>:	call   0xc1d63e24 <__switch_to_asm+36>	---> //FILL_RETURN_BUFFER
   0xc1d63e23 <+35>:	int3
   0xc1d63e24 <+36>:	call   0xc1d63e2a <__switch_to_asm+42>
   0xc1d63e29 <+41>:	int3
   0xc1d63e2a <+42>:	call   0xc1d63e30 <__switch_to_asm+48>
   0xc1d63e2f <+47>:	int3
   0xc1d63e30 <+48>:	call   0xc1d63e36 <__switch_to_asm+54>
   0xc1d63e35 <+53>:	int3
   0xc1d63e36 <+54>:	call   0xc1d63e3c <__switch_to_asm+60>
   0xc1d63e3b <+59>:	int3
   0xc1d63e3c <+60>:	call   0xc1d63e42 <__switch_to_asm+66>
[...]
   0xc1d63ecc <+204>:	call   0xc1d63ed2 <__switch_to_asm+210>
   0xc1d63ed1 <+209>:	int3
   0xc1d63ed2 <+210>:	call   0xc1d63ed8 <__switch_to_asm+216>
   0xc1d63ed7 <+215>:	int3
   0xc1d63ed8 <+216>:	call   0xc1d63ede <__switch_to_asm+222>
   0xc1d63edd <+221>:	int3
   0xc1d63ede <+222>:	add    $0x80,%esp
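If I am reading this right, it is consistent with the unrolled i386 fill: the final "add $0x80,%esp" pops 32 slots * 4 bytes = 0x80, and with no loop branch to mispredict there is nothing for an LFENCE to fence off.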