This patch series proposes fixes for several corner cases in the RISC-V
vector ptrace implementation:
- init vector context with proper vlenb, to avoid a zero vlenb being
read by an early-attached debugger
- follow gdbserver expectations and return ENODATA instead of EINVAL
if vector extension is supported but not yet activated for the
traced process
- validate input vector csr registers in ptrace, to maintain an accurate
view of the tracee's vector context across multiple halt/resume
debug cycles
For detailed descriptions, see the corresponding commit messages. A new
test suite, v_ptrace, is added under tools/testing/selftests/riscv/vector
to verify some of the vector ptrace functionality and corner cases.
Previous versions:
- v3: https://lore.kernel.org/linux-riscv/20251025210655.43099-1-geomatsi@gmail.c…
- v2: https://lore.kernel.org/linux-riscv/20250821173957.563472-1-geomatsi@gmail.…
- v1: https://lore.kernel.org/linux-riscv/20251007115840.2320557-1-geomatsi@gmail…
Changes in v4:
The form 'vsetvli x0, x0, ...' can only be used if VLMAX remains
unchanged (see section 6.2 of the vector spec). This condition was not
met by the initial values in the selftests with respect to the initial
zeroed context. QEMU accepted such values, but actual hardware (C908,
BananaPi CanMV Zero board) did not, setting vill. So fix the selftests
after testing on hardware:
- replace 'vsetvli x0, x0, ...' with 'vsetvli rd, x0, ...'
- the fixed instruction returns VLMAX, so use it in checks as well
- replace the hard-coded vlenb == 16 in the syscall test
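For reference, the fixed form reads VLMAX back instead of relying on the
'x0, x0' variant (a sketch; the destination register choice is arbitrary):

```asm
# 'vsetvli x0, x0, ...' requires VLMAX to stay unchanged (spec 6.2);
# with a zeroed initial context that precondition fails and hardware
# sets vill. Reading VLMAX into a register avoids the restriction:
vsetvli t0, x0, e32, m1, ta, ma    # t0 <- VLMAX for e32/m1
# t0 (VLMAX) can then be used directly in the selftest checks
```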
Changes in v3:
Address the review comments by Andy Chiu and rework the approach:
- drop forced vector context save entirely
- perform strict validation of vector csr regs in ptrace
Changes in v2:
- add a thread_info flag to allow forcing a vector context save
- force a vector context save after vector ptrace to ensure a valid
vector context in subsequent ptrace operations
- force a vector context save on the first context switch after vector
context init to get the proper vlenb
---
Ilya Mamay (1):
riscv: ptrace: return ENODATA for inactive vector extension
Sergey Matyukevich (8):
selftests: riscv: test ptrace vector interface
selftests: riscv: verify initial vector state with ptrace
riscv: vector: init vector context with proper vlenb
riscv: csr: define vtype registers elements
riscv: ptrace: validate input vector csr registers
selftests: riscv: verify ptrace rejects invalid vector csr inputs
selftests: riscv: verify ptrace accepts valid vector csr values
selftests: riscv: verify syscalls discard vector context
arch/riscv/include/asm/csr.h | 11 +
arch/riscv/kernel/ptrace.c | 72 +-
arch/riscv/kernel/vector.c | 12 +-
.../testing/selftests/riscv/vector/.gitignore | 1 +
tools/testing/selftests/riscv/vector/Makefile | 5 +-
.../testing/selftests/riscv/vector/v_ptrace.c | 754 ++++++++++++++++++
6 files changed, 847 insertions(+), 8 deletions(-)
create mode 100644 tools/testing/selftests/riscv/vector/v_ptrace.c
base-commit: e811c33b1f137be26a20444b79db8cbc1fca1c89
--
2.51.0
At the moment, direct injection of vLPIs can only be enabled on an
all-or-nothing, per-VM basis, causing unnecessary I/O performance
loss in cases where a VM's vCPU count exceeds the available vPEs. This
RFC introduces per-vCPU control over vLPI injection to realize the
potential I/O performance gain in such situations.
Background
----------
The value of dynamically enabling the direct injection of vLPIs on a
per-vCPU basis is the ability to run guest VMs with simultaneous
hardware-forwarded and software-forwarded message-signaled interrupts.
Currently, hardware-forwarded vLPI direct injection on a KVM guest
requires GICv4 and is enabled on a per-VM, all-or-nothing basis. vLPI
injection enablement happens in two stages:
1) At vGIC initialization, allocate direct injection structures for
each vCPU (doorbell IRQ, vPE table entry, virtual pending table,
vPEID).
2) When a PCI device is configured for passthrough, map its MSIs to
vLPIs using the structures allocated in step 1.
Step 1 is all-or-nothing; if any vCPU cannot be configured with the
vPE structures necessary for direct injection, the vPEs of all vCPUs
are torn down and direct injection is disabled VM-wide.
This universality of direct vLPI injection enablement raises several
issues, the most pressing being performance degradation on
overcommitted hosts.
VM-wide vLPI enablement creates resource inefficiency when guest
VMs have more vCPUs than the host has available vPEIDs. The number of
vPEIDs (and consequently, vPEs) a host can allocate is constrained by
hardware and defined by GICD_TYPER2.VID + 1 (ITS_MAX_VPEID). Since
direct injection requires a vCPU to be assigned a vPEID, at most
ITS_MAX_VPEID vCPUs can be configured for direct injection at a time.
Because vLPI direct injection is all-or-nothing on a VM, if a new guest
VM would exhaust remaining vPEIDs, all vCPUs on that VM would fall back
to hypervisor-forwarded LPIs, causing considerable I/O performance
degradation.
Such performance degradation is exemplified on hosts with CPU
overcommitment. Overcommitting an arbitrarily high number of vCPUs
enables a VM's vCPU count to easily exceed the host's available vPEIDs.
Even with marginally more vCPUs than vPEIDs, the current all-or-nothing
vLPI paradigm disables direct injection entirely. This creates two
problems: first, a single many-vCPU overcommitted VM loses all direct
injection despite having vPEIDs available; second, on multi-tenant
hosts, VMs booted first consume all vPEIDs, leaving later VMs without
direct injection regardless of their I/O intensity. Per-vCPU control
would allow userspace to allocate available vPEIDs across VMs based on
I/O workload rather than boot order or per-VM vCPU count. This per-vCPU
granularity recovers most of the direct injection performance benefit
instead of losing it completely.
To allow this per-vCPU granularity, this RFC introduces three new ioctls
to the KVM API that enable userspace to activate/deactivate direct vLPI
injection capability and resources for individual vCPUs ad hoc during VM
runtime.
This RFC proposes userspace control, rather than kernel control, over
vPEID allocation for simplicity of implementation, ease of testability,
and autonomy over resource usage. In the future, the vLPI enable/disable
building blocks from this RFC may be used to implement a full vPE
allocation policy in the kernel.
The solution comes in several parts
-----------------------------------
1) [P 1] General declarations (ioctl definitions/stubs, kconfig option)
2) [P 2] Conditionally disable auto vLPI injection init routines
To prevent vCPUs from exceeding vPEID allocation limits upon VM boot,
disable automatic vPEID allocation in the GICv4 initialization
routine when the per-vCPU kconfig is active. Likewise, disable
automatic hardware forwarding for PCI device-backed MSIs upon device
registration.
3) [P 3-6] Implement per-vCPU vLPI enablement routine, which:
a) Creates per-vCPU doorbell IRQ on new vCPU-scoped, rather than
VM-scoped, interrupt domain hierarchies.
b) Allocates per-vCPU vPE table entries and virtual pending table,
linking them to the vCPU's doorbell IRQ.
c) Iterates through interrupt translation table to set hardware
forwarding for all PCI device–backed interrupts targeting the
specific vCPU.
4) [P 7-8] Implement per-vCPU vLPI disablement routine, which:
a) Iterates through interrupt translation table to unset hardware
forwarding for all interrupts targeting the specific vCPU.
b) Frees per-vCPU vPE table entries, virtual pending table, and
doorbell IRQ, then removes vgic_dist's pointer to the vCPU's
freed vPE.
5) [P 9] Couple vSGI enablement with per-vCPU vPE allocation
Since vSGIs cannot be direct-injected without an allocated vPE on
the receiving vCPU, couple vSGI enablement with vLPI enablement
on GICv4.1.
6) [P 10-13] Write selftests for vLPI direct injection
PCI devices cannot be passed through to selftest guests, so
define an ioctl that mocks a hardware source for software-defined
MSI interrupts and sets vLPI "hardware" forwarding for the MSIs. Use
these vLPIs to selftest per-vCPU vLPI enablement/disablement ioctls.
Testing
-------
Testing has been carried out via selftests and QEMU-emulated guests.
Selftests have covered diverse vLPI configurations and race conditions.
These include:
1) Stress testing LPI injection across multiple vCPUs while
concurrently and repeatedly toggling the vCPUs' vLPI
injection capability.
2) Enabling/disabling vLPI direct injection while scheduling or
unscheduling a vCPU.
3) Allocating and freeing a single vPEID to multiple vCPUs, ensuring
reusability.
4) Attempting to allocate a vPEID when all are already allocated,
validating that an error is returned.
5) Calling enable/disable vLPI ioctls when GIC is not initialized.
6) Idempotent ioctl calls.
PCI device passthrough and interrupt injection to a QEMU guest
demonstrated:
1) Complete hypervisor circumvention when vLPI injection is enabled on
a vCPU, hypervisor forwarding when vLPI injection is disabled.
2) Interrupts are not lost when received during per-vCPU vLPI state
transitions.
Caveats
-------
1) Pending interrupts are flushed when vLPI injection is disabled for a
vCPU; hardware pending state is not transferred to software. This may
cause pending interrupts to be lost upon vPE disablement.
Unlike vSGIs, vLPIs do not expose their pending state through a
GICD_ISPENDR register, so we would need to read the vLPI's pending
state from the vPT. Doing so requires invalidating any vPT cache
associated with the vCPU's vPE, which means unmapping the vPE and
halting the vCPU. That would be prohibitively expensive and
unnecessary given that MSIs are usually recoverable by the driver.
2) Direct-injected vSGIs (GICv4.1) require vCPUs to have associated
vPEs. Since disabling vLPI injection on a vCPU frees its
vPE, vSGI direct injection must simultaneously be disabled as well.
At the moment, we use the per-vCPU vSGI toggle mechanism introduced
in commit bacf2c6 to enable/disable vSGI injection alongside vLPI
injection.
Maximilian Dittgen (13):
KVM: Introduce config option for per-vCPU vLPI enablement
KVM: arm64: Disable auto vCPU vPE assignment with per-vCPU vLPI config
KVM: arm64: Refactor out locked section of
kvm_vgic_v4_set_forwarding()
KVM: arm64: Implement vLPI QUERY ioctl for per-vCPU vLPI injection API
KVM: arm64: Implement vLPI ENABLE ioctl for per-vCPU vLPI injection
API
KVM: arm64: Resolve race between vCPU scheduling and vLPI enablement
KVM: arm64: Implement vLPI DISABLE ioctl for per-vCPU vLPI Injection
API
KVM: arm64: Make per-vCPU vLPI control ioctls atomic
KVM: arm64: Couple vSGI enablement with per-vCPU vPE allocation
KVM: selftests: fix MAPC RDbase target formatting in vgic_lpi_stress
KVM: Ioctl to set up userspace-injected MSIs as software-bypassing
vLPIs
KVM: arm64: selftests: Add support for stress testing direct-injected
vLPIs
KVM: arm64: selftests: Add test for per-vCPU vLPI control API
Documentation/virt/kvm/api.rst | 56 +++
arch/arm64/kvm/arm.c | 89 +++++
arch/arm64/kvm/vgic/vgic-its.c | 142 ++++++-
arch/arm64/kvm/vgic/vgic-v3.c | 14 +-
arch/arm64/kvm/vgic/vgic-v4.c | 370 +++++++++++++++++-
arch/arm64/kvm/vgic/vgic.h | 10 +
drivers/irqchip/Kconfig | 13 +
drivers/irqchip/irq-gic-v3-its.c | 58 ++-
drivers/irqchip/irq-gic-v4.c | 75 +++-
include/kvm/arm_vgic.h | 8 +
include/linux/irqchip/arm-gic-v3.h | 5 +
include/linux/irqchip/arm-gic-v4.h | 10 +-
include/linux/kvm_host.h | 11 +
include/uapi/linux/kvm.h | 22 ++
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../selftests/kvm/arm64/per_vcpu_vlpi.c | 274 +++++++++++++
.../selftests/kvm/arm64/vgic_lpi_stress.c | 181 ++++++++-
.../selftests/kvm/lib/arm64/gic_v3_its.c | 9 +-
18 files changed, 1307 insertions(+), 41 deletions(-)
create mode 100644 tools/testing/selftests/kvm/arm64/per_vcpu_vlpi.c
--
2.50.1 (Apple Git-155)
This patchset introduces a target resume capability to netconsole,
allowing it to recover targets when the underlying low-level interface
comes back online.
The patchset starts by refactoring the netconsole state representation
in order to allow representing deactivated targets (targets that are
disabled due to their interfaces going down).
It then modifies netconsole to handle NETDEV_UP events for such targets
and set up netpoll. Targets are matched with incoming interfaces
depending on how they were initially bound in netconsole (by MAC
address or interface name).
The patchset includes a selftest that validates netconsole target state
transitions and that a target is functional after being resumed.
Signed-off-by: Andre Carvalho <asantostc(a)gmail.com>
---
Changes in v5:
- patch 3: Set (de)enslaved target as DISABLED instead of DEACTIVATED to prevent
resuming it.
- selftest: Fix test cleanup by moving trap line to outside of loop and remove
unneeded 'local' keyword
- Rename maybe_resume_target to resume_target, add netconsole_ prefix to
process_resumable_targets.
- Hold device reference before calling __netpoll_setup.
- Link to v4: https://lore.kernel.org/r/20251116-netcons-retrigger-v4-0-5290b5f140c2@gmai…
Changes in v4:
- Simplify selftest cleanup, removing trap setup in loop.
- Drop netpoll helper (__setup_netpoll_hold) and manage reference inside
netconsole.
- Move resume_list processing logic to separate function.
- Link to v3: https://lore.kernel.org/r/20251109-netcons-retrigger-v3-0-1654c280bbe6@gmai…
Changes in v3:
- Resume by mac or interface name depending on how target was created.
- Attempt to resume target without holding target list lock, by moving
the target to a temporary list. This is required as netpoll may
attempt to allocate memory.
- Link to v2: https://lore.kernel.org/r/20250921-netcons-retrigger-v2-0-a0e84006237f@gmai…
Changes in v2:
- Attempt to resume target in the same thread, instead of using a
workqueue.
- Add wrapper around __netpoll_setup (patch 4).
- Renamed resume_target to maybe_resume_target and moved conditionals to
inside its implementation, keeping code more clear.
- Verify that device addr matches target mac address when target was
setup using mac.
- Update selftest to cover targets bound by mac and interface name.
- Fix typo in selftest comment and sort tests alphabetically in
Makefile.
- Link to v1:
https://lore.kernel.org/r/20250909-netcons-retrigger-v1-0-3aea904926cf@gmai…
---
Andre Carvalho (3):
netconsole: convert 'enabled' flag to enum for clearer state management
netconsole: resume previously deactivated target
selftests: netconsole: validate target resume
Breno Leitao (2):
netconsole: add target_state enum
netconsole: add STATE_DEACTIVATED to track targets disabled by low level
drivers/net/netconsole.c | 155 +++++++++++++++++----
tools/testing/selftests/drivers/net/Makefile | 1 +
.../selftests/drivers/net/lib/sh/lib_netcons.sh | 35 ++++-
.../selftests/drivers/net/netcons_resume.sh | 97 +++++++++++++
4 files changed, 254 insertions(+), 34 deletions(-)
---
base-commit: a057e8e4ac5b1ddd12be590e2e039fa08d0c8aa4
change-id: 20250816-netcons-retrigger-a4f547bfc867
Best regards,
--
Andre Carvalho <asantostc(a)gmail.com>
This patch series introduces LANDLOCK_SCOPE_MEMFD_EXEC, a new Landlock
scoping mechanism that restricts execution of anonymous memory file
descriptors (memfd) created via memfd_create(2). This addresses security
gaps where processes can bypass W^X policies and execute arbitrary code
through anonymous memory objects.
Fixes: https://github.com/landlock-lsm/linux/issues/37
SECURITY PROBLEM
================
Current Landlock filesystem restrictions do not cover memfd objects,
allowing processes to:
1. Read-to-execute bypass: Create writable memfd, inject code,
then execute via mmap(PROT_EXEC) or direct execve()
2. Anonymous execution: Execute code without touching the filesystem via
execve("/proc/self/fd/N") where N is a memfd descriptor
3. Cross-domain access violations: Pass memfd between processes to
bypass domain restrictions
These scenarios can occur in sandboxed environments where filesystem
access is restricted but memfd creation remains possible.
IMPLEMENTATION
==============
The implementation adds hierarchical execution control through domain
scoping:
Core Components:
- is_memfd_file(): Reliable memfd detection via "memfd:" dentry prefix
- domain_is_scoped(): Cross-domain hierarchy checking (moved to domain.c)
- LSM hooks: mmap_file, file_mprotect, bprm_creds_for_exec
- Creation-time restrictions: hook_file_alloc_security
Security Matrix:
Execution decisions follow domain hierarchy rules preventing both
same-domain bypass attempts and cross-domain access violations while
preserving legitimate hierarchical access patterns.
Domain Hierarchy with LANDLOCK_SCOPE_MEMFD_EXEC:
===============================================
Root (no domain) - No restrictions
|
+-- Domain A [SCOPE_MEMFD_EXEC] Layer 1
| +-- memfd_A (tagged with Domain A as creator)
| |
| +-- Domain A1 (child) [NO SCOPE] Layer 2
| | +-- Inherits Layer 1 restrictions from parent
| | +-- memfd_A1 (can create, inherits restrictions)
| | +-- Domain A1a [SCOPE_MEMFD_EXEC] Layer 3
| | +-- memfd_A1a (tagged with Domain A1a)
| |
| +-- Domain A2 (child) [SCOPE_MEMFD_EXEC] Layer 2
| +-- memfd_A2 (tagged with Domain A2 as creator)
| +-- CANNOT access memfd_A1 (different subtree)
|
+-- Domain B [SCOPE_MEMFD_EXEC] Layer 1
+-- memfd_B (tagged with Domain B as creator)
+-- CANNOT access ANY memfd from Domain A subtree
Execution Decision Matrix:
========================
Executor->  |  A  | A1  | A1a | A2  |  B  | Root
Creator     |     |     |     |     |     |
------------|-----|-----|-----|-----|-----|-----
Domain A    |  X  |  X  |  X  |  X  |  X  |  Y
Domain A1   |  Y  |  X  |  X  |  X  |  X  |  Y
Domain A1a  |  Y  |  Y  |  X  |  X  |  X  |  Y
Domain A2   |  Y  |  X  |  X  |  X  |  X  |  Y
Domain B    |  X  |  X  |  X  |  X  |  X  |  Y
Root        |  Y  |  Y  |  Y  |  Y  |  Y  |  Y
Legend: Y = Execution allowed, X = Execution denied
Scenarios Covered:
- Direct mmap(PROT_EXEC) on memfd files
- Two-stage mmap(PROT_READ) + mprotect(PROT_EXEC) bypass attempts
- execve("/proc/self/fd/N") anonymous execution
- execveat() and fexecve() file descriptor execution
- Cross-process memfd inheritance and IPC passing
TESTING
=======
All patches have been validated with:
- scripts/checkpatch.pl --strict (clean)
- Selftests covering same-domain restrictions, cross-domain
hierarchy enforcement, and regular file isolation
- KUnit tests for memfd detection edge cases
DISCLAIMER
==========
My understanding of Landlock scoping semantics may be limited, but this
implementation reflects my current understanding based on available
documentation and code analysis. I welcome feedback and corrections
regarding the scoping logic and domain hierarchy enforcement.
Signed-off-by: Abhinav Saxena <xandfury(a)gmail.com>
---
Abhinav Saxena (4):
landlock: add LANDLOCK_SCOPE_MEMFD_EXEC scope
landlock: implement memfd detection
landlock: add memfd exec LSM hooks and scoping
selftests/landlock: add memfd execution tests
include/uapi/linux/landlock.h | 5 +
security/landlock/.kunitconfig | 1 +
security/landlock/audit.c | 4 +
security/landlock/audit.h | 1 +
security/landlock/cred.c | 14 -
security/landlock/domain.c | 67 ++++
security/landlock/domain.h | 4 +
security/landlock/fs.c | 405 ++++++++++++++++++++-
security/landlock/limits.h | 2 +-
security/landlock/task.c | 67 ----
.../selftests/landlock/scoped_memfd_exec_test.c | 325 +++++++++++++++++
11 files changed, 812 insertions(+), 83 deletions(-)
---
base-commit: 5b74b2eff1eeefe43584e5b7b348c8cd3b723d38
change-id: 20250716-memfd-exec-ac0d582018c3
Best regards,
--
Abhinav Saxena <xandfury(a)gmail.com>
Currently the vDSO selftests use the time-related types from libc.
This happens to work on glibc today but will break with other libc
implementations or on distributions that switch to 64-bit time
everywhere.
The kernel's UAPI headers provide the proper types to use with the vDSO
(and raw syscalls) but are not necessarily compatible with libc types.
Introduce a new header which makes the UAPI headers compatible with the
libc.
The series also contains some related cleanups.
Signed-off-by: Thomas Weißschuh <thomas.weissschuh(a)linutronix.de>
---
Changes in v2:
- Use __kernel_old_time_t in vdso_time_t.
- Add vdso_syscalls.h.
- Add a test for the time() function.
- Validate return value of syscall(clock_getres) in vdso_test_abi
- Link to v1: https://lore.kernel.org/r/20251111-vdso-test-types-v1-0-03b31f88c659@linutr…
---
Thomas Weißschuh (14):
Revert "selftests: vDSO: parse_vdso: Use UAPI headers instead of libc headers"
selftests: vDSO: Introduce vdso_types.h
selftests: vDSO: Introduce vdso_syscalls.h
selftests: vDSO: vdso_test_gettimeofday: Remove nolibc checks
selftests: vDSO: vdso_test_gettimeofday: Use types from vdso_types.h
selftests: vDSO: vdso_test_abi: Use types from vdso_types.h
selftests: vDSO: vdso_test_abi: Validate return value of syscall(clock_getres)
selftests: vDSO: vdso_test_abi: Use system call wrappers from vdso_syscalls.h
selftests: vDSO: vdso_test_correctness: Drop SYS_getcpu fallbacks
selftests: vDSO: vdso_test_correctness: Make ts_leq() and tv_leq() more generic
selftests: vDSO: vdso_test_correctness: Use types from vdso_types.h
selftests: vDSO: vdso_test_correctness: Use system call wrappers from vdso_syscalls.h
selftests: vDSO: vdso_test_correctness: Use facilities from parse_vdso.c
selftests: vDSO: vdso_test_correctness: Add a test for time()
tools/testing/selftests/vDSO/Makefile | 6 +-
tools/testing/selftests/vDSO/parse_vdso.c | 3 +-
tools/testing/selftests/vDSO/vdso_syscalls.h | 93 ++++++++++
tools/testing/selftests/vDSO/vdso_test_abi.c | 46 +++--
.../testing/selftests/vDSO/vdso_test_correctness.c | 190 +++++++++++----------
.../selftests/vDSO/vdso_test_gettimeofday.c | 9 +-
tools/testing/selftests/vDSO/vdso_types.h | 70 ++++++++
7 files changed, 285 insertions(+), 132 deletions(-)
---
base-commit: 1b2eb8c1324859864f4aa79dc3cfbb2f7ef5c524
change-id: 20251110-vdso-test-types-68ce0c712b79
Best regards,
--
Thomas Weißschuh <thomas.weissschuh(a)linutronix.de>
The `FIXTURE(args)` macro defines an empty `struct _test_data_args`,
leading to `sizeof(struct _test_data_args)` evaluating to 0. This
caused a build error due to a compiler warning on a `memset` call
with a zero size argument.
Adding a dummy member to the struct ensures its size is non-zero,
resolving the build issue.
Signed-off-by: Wake Liu <wakel(a)google.com>
---
tools/testing/selftests/futex/functional/futex_requeue_pi.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/futex/functional/futex_requeue_pi.c b/tools/testing/selftests/futex/functional/futex_requeue_pi.c
index f299d75848cd..000fec468835 100644
--- a/tools/testing/selftests/futex/functional/futex_requeue_pi.c
+++ b/tools/testing/selftests/futex/functional/futex_requeue_pi.c
@@ -52,6 +52,7 @@ struct thread_arg {
FIXTURE(args)
{
+ char dummy;
};
FIXTURE_SETUP(args)
--
2.52.0.rc1.455.g30608eb744-goog