[ based on kvm/next ]
Unmapping virtual machine guest memory from the host kernel's direct map
is a successful mitigation against Spectre-style transient execution
issues: if the kernel page tables do not contain entries pointing to
guest memory, then any attempted speculative read through the direct map
will necessarily be blocked by the MMU before any observable
microarchitectural side effects happen. This means that Spectre gadgets
and similar cannot be used to target virtual machine memory. Roughly
60% of speculative execution issues fall into this category [1, Table
1].
This patch series extends guest_memfd with the ability to remove its
memory from the host kernel's direct map, so that the above protection
can be attained for KVM guests whose memory is backed by guest_memfd.
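For illustration, creating such a guest_memfd from userspace could look
roughly like the sketch below. This is not code from the series; it assumes
the flag is exposed in the UAPI headers under the name
GUEST_MEMFD_FLAG_NO_DIRECT_MAP used in the patch titles.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: create a guest_memfd whose memory is removed from the
 * host direct map.  GUEST_MEMFD_FLAG_NO_DIRECT_MAP is the flag name
 * used in the patch titles of this series.
 */
static int create_unmapped_gmem(int vm_fd, uint64_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size  = size,
		.flags = GUEST_MEMFD_FLAG_NO_DIRECT_MAP,
	};

	/* Returns a new guest_memfd file descriptor, or -1 on error. */
	return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
}
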
Additionally, a Firecracker branch with support for these VMs can be
found on GitHub [2].
For more details, please refer to the v5 cover letter. No substantial
changes in design have taken place since.
See also related write() syscall support in guest_memfd [3] where
the interoperation between the two features is described.
Changes since v7:
- David: separate patches for adding x86 and ARM support
- Dave/Will: drop support for disabling TLB flushes
v7: https://lore.kernel.org/kvm/20250924151101.2225820-1-patrick.roy@campus.lmu…
v6: https://lore.kernel.org/kvm/20250912091708.17502-1-roypat@amazon.co.uk
v5: https://lore.kernel.org/kvm/20250828093902.2719-1-roypat@amazon.co.uk
v4: https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk
RFCv3: https://lore.kernel.org/kvm/20241030134912.515725-1-roypat@amazon.co.uk
RFCv2: https://lore.kernel.org/kvm/20240910163038.1298452-1-roypat@amazon.co.uk
RFCv1: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk
[1] https://download.vusec.net/papers/quarantine_raid23.pdf
[2] https://github.com/firecracker-microvm/firecracker/tree/feature/secret-hidi…
[3] https://lore.kernel.org/kvm/20251114151828.98165-1-kalyazin@amazon.com
Patrick Roy (13):
x86: export set_direct_map_valid_noflush to KVM module
x86/tlb: export flush_tlb_kernel_range to KVM module
mm: introduce AS_NO_DIRECT_MAP
KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate
KVM: guest_memfd: Add flag to remove from direct map
KVM: x86: define kvm_arch_gmem_supports_no_direct_map()
KVM: arm64: define kvm_arch_gmem_supports_no_direct_map()
KVM: selftests: load elf via bounce buffer
KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd
!= -1
KVM: selftests: Add guest_memfd based vm_mem_backing_src_types
KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing
selftests
KVM: selftests: stuff vm_mem_backing_src_type into vm_shape
KVM: selftests: Test guest execution from direct map removed gmem
Documentation/virt/kvm/api.rst | 22 ++++---
arch/arm64/include/asm/kvm_host.h | 13 ++++
arch/x86/include/asm/kvm_host.h | 9 +++
arch/x86/include/asm/tlbflush.h | 3 +-
arch/x86/mm/pat/set_memory.c | 1 +
arch/x86/mm/tlb.c | 1 +
include/linux/kvm_host.h | 14 ++++
include/linux/pagemap.h | 16 +++++
include/linux/secretmem.h | 18 ------
include/uapi/linux/kvm.h | 1 +
lib/buildid.c | 4 +-
mm/gup.c | 19 ++----
mm/mlock.c | 2 +-
mm/secretmem.c | 8 +--
.../testing/selftests/kvm/guest_memfd_test.c | 17 ++++-
.../testing/selftests/kvm/include/kvm_util.h | 37 ++++++++---
.../testing/selftests/kvm/include/test_util.h | 8 +++
tools/testing/selftests/kvm/lib/elf.c | 8 +--
tools/testing/selftests/kvm/lib/io.c | 23 +++++++
tools/testing/selftests/kvm/lib/kvm_util.c | 59 +++++++++--------
tools/testing/selftests/kvm/lib/test_util.c | 8 +++
tools/testing/selftests/kvm/lib/x86/sev.c | 1 +
.../selftests/kvm/pre_fault_memory_test.c | 1 +
.../selftests/kvm/set_memory_region_test.c | 52 +++++++++++++--
.../kvm/x86/private_mem_conversions_test.c | 7 +-
virt/kvm/guest_memfd.c | 64 +++++++++++++++++--
26 files changed, 314 insertions(+), 102 deletions(-)
base-commit: e0c26d47def7382d7dbd9cad58bc653aed75737a
--
2.50.1
The resctrl selftest currently exhibits several failures on Hygon CPUs
due to missing vendor detection and edge-case handling specific to
Hygon's architecture.
This patch series addresses three distinct issues:
1. Missing CPU vendor detection, causing the test to fail with
"# Can not get vendor info..." on Hygon CPUs.
2. A division-by-zero crash in SNC detection on Hygon CPUs.
3. Incorrect handling of non-contiguous CBM support on Hygon CPUs.
These changes enable the resctrl selftest to run successfully on
Hygon CPUs that support Platform QoS features.
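For reference, Hygon detection ultimately comes down to matching the
"HygonGenuine" vendor_id string; a rough standalone sketch is below (the
actual selftest uses its own get_vendor() helper and defines, so the names
here are illustrative only).

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Sketch only: return true if /proc/cpuinfo reports a Hygon CPU. */
static bool cpu_is_hygon(void)
{
	FILE *f = fopen("/proc/cpuinfo", "r");
	char line[256];
	bool hygon = false;

	if (!f)
		return false;

	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "vendor_id") &&
		    strstr(line, "HygonGenuine")) {
			hygon = true;
			break;
		}
	}
	fclose(f);
	return hygon;
}
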
Changelog:
v2:
- Patch 1: switch all of the vendor id bitmasks to use BIT() (Reinette)
- Patch 2: add Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
- Patch 3: add Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
add a maintainer note to highlight it is not a candidate for
backport (Reinette)
Xiaochen Shen (3):
selftests/resctrl: Add CPU vendor detection for Hygon
selftests/resctrl: Fix a division by zero error on Hygon
selftests/resctrl: Fix non-contiguous CBM check for Hygon
tools/testing/selftests/resctrl/cat_test.c | 4 ++--
tools/testing/selftests/resctrl/resctrl.h | 6 ++++--
tools/testing/selftests/resctrl/resctrl_tests.c | 2 ++
tools/testing/selftests/resctrl/resctrlfs.c | 10 ++++++++++
4 files changed, 18 insertions(+), 4 deletions(-)
--
2.47.3
Currently, x86, RISC-V, and LoongArch use the Generic Entry, which makes
maintainers' work easier and the code more elegant. arm64 has already
successfully switched to the Generic IRQ Entry in commit
b3cf07851b6c ("arm64: entry: Switch to generic IRQ entry"), so it is
time to completely convert arm64 to the Generic Entry.
The goal is to bring arm64 in line with other architectures that already
use the generic entry infrastructure, reducing duplicated code and
making it easier to share future changes in entry/exit paths, such as
"Syscall User Dispatch".
This patch set is rebased on v6.18-rc7. Performance was measured on
Kunpeng 920 using "perf bench basic syscall" with "arm64.nopauth
selinux=0 audit=1".
After switching to the Generic Entry, the performance is as follows:
| Metric | W/O Generic Framework | With Generic Framework | Change |
| ---------- | --------------------- | ---------------------- | ------ |
| Total time | 2.130 [sec] | 2.235 [sec] | ↑4.90% |
| usecs/op | 0.213095 | 0.223512 | ↑4.89% |
| ops/sec | 4,692,753 | 4,474,044 | ↓4.89% |
Compared to the earlier arch-specific handling, the performance decreased
by approximately 4.9%.
With syscall_get_arguments() [1], el0_svc_common() and syscall_exit_work()
optimized, the performance is as follows:
| Metric | W/O Generic Entry | With Generic Entry opt| Change |
| ---------- | ----------------- | ------------------ | ------ |
| Total time | 2.130 [sec] | 2.134 [sec] | ↑0.18% |
| usecs/op | 0.213095 | 0.213414 | ↑0.15% |
| ops/sec | 4,692,753 | 4,685,737 | ↓0.15% |
Therefore, after these optimizations, arm64 system call performance remains
almost unchanged.
It was tested successfully with the following test cases on Kunpeng 920 and
the QEMU virt platform:
- Perf tests.
- Different `dynamic preempt` mode switch.
- Pseudo NMI tests.
- Stress-ng CPU stress test.
- Hackbench stress test.
- MTE test case in Documentation/arch/arm64/memory-tagging-extension.rst
and all test cases in tools/testing/selftests/arm64/mte/*.
- "sud" selftest testcase.
- get_set_sud, get_syscall_info, set_syscall_info, peeksiginfo
in tools/testing/selftests/ptrace.
- breakpoint_test_arm64 in selftests/breakpoints.
- syscall-abi and ptrace in tools/testing/selftests/arm64/abi
- fp-ptrace, sve-ptrace, za-ptrace in selftests/arm64/fp.
- vdso_test_getrandom in tools/testing/selftests/vDSO
- Strace tests.
The test QEMU configuration is as follows:
qemu-system-aarch64 \
-M virt,gic-version=3,virtualization=on,mte=on \
-cpu max,pauth-impdef=on \
-kernel Image \
-smp 8,sockets=1,cores=4,threads=2 \
-m 512m \
-nographic \
-no-reboot \
-device virtio-rng-pci \
-append "root=/dev/vda rw console=ttyAMA0 kgdboc=ttyAMA0,115200 \
earlycon preempt=voluntary irqchip.gicv3_pseudo_nmi=1" \
-drive if=none,file=images/rootfs.ext4,format=raw,id=hd0 \
-device virtio-blk-device,drive=hd0 \
[1]: https://lore.kernel.org/all/20251201120633.1193122-3-ruanjinjie@huawei.com/
Changes in v9:
- Move "Return early for ptrace_report_syscall_entry() error" patch ahead
to make it not introduce a regression.
- Not check _TIF_SECCOMP/SYSCALL_EMU for syscall_exit_work() in
a separate patch.
- Do not report_syscall_exit() for PTRACE_SYSEMU_SINGLESTEP in a separate
patch.
- Add two performance patch to improve the arm64 performance.
- Add Reviewed-by.
- Link to v8: https://lore.kernel.org/all/20251126071446.3234218-1-ruanjinjie@huawei.com/
Changes in v8:
- Rename "report_syscall_enter()" to "report_syscall_entry()".
- Add ptrace_save_reg() to avoid duplication.
- Remove unused _TIF_WORK_MASK in a standalone patch.
- Align syscall_trace_enter() return value with the generic version.
- Use "scno" instead of regs->syscallno in el0_svc_common().
- Move rseq_syscall() ahead in a standalone patch to make the change clearer.
- Rename "syscall_trace_exit()" to "syscall_exit_work()".
- Keep the goto in el0_svc_common().
- Pass no argument to __secure_computing() and check against -1 rather than -1L.
- Remove "Add has_syscall_work() helper" patch.
- Move "Add syscall_exit_to_user_mode_prepare() helper" patch later.
- Add miss header for asm/entry-common.h.
- Update the implementation of arch_syscall_is_vdso_sigreturn().
- Add "ARCH_SYSCALL_WORK_EXIT" to be defined as "SECCOMP | SYSCALL_EMU"
to keep the behaviour unchanged.
- Add more testcases test.
- Add Reviewed-by.
- Update the commit message.
- Link to v7: https://lore.kernel.org/all/20251117133048.53182-1-ruanjinjie@huawei.com/
Changes in v7:
- Support "Syscall User Dispatch" by implementing
arch_syscall_is_vdso_sigreturn() as kemal suggested.
- Add aarch64 support for the "sud" selftest testcase, which was tested OK
with this patch series.
- Fix the kernel test robot warning for arch_ptrace_report_syscall_entry()
and arch_ptrace_report_syscall_exit() in asm/entry-common.h.
- Add perf syscall performance test.
- Link to v6: https://lore.kernel.org/all/20250916082611.2972008-1-ruanjinjie@huawei.com/
Changes in v6:
- Rebased on v6.17-rc5-next as the arm64 generic IRQ entry has been merged.
- Update the commit message.
- Link to v5: https://lore.kernel.org/all/20241206101744.4161990-1-ruanjinjie@huawei.com/
Changes in v5:
- Do not change arm32 and keep the interrupts_enabled() macro for the
GICv3 driver.
- Move irqentry_state definition into arch/arm64/kernel/entry-common.c.
- Avoid removing the __enter_from_*() and __exit_to_*() wrappers.
- Update "irqentry_state_t ret/irq_state" to "state"
to keep it consistently.
- Use the generic IRQ entry header for PREEMPT_DYNAMIC after splitting
the generic entry.
- Also refactor the ARM64 syscall code.
- Introduce arch_ptrace_report_syscall_entry/exit(), instead of
arch_pre/post_report_syscall_entry/exit() to simplify code.
- Make the syscall patches clearly separated.
- Update the commit message.
- Link to v4: https://lore.kernel.org/all/20241025100700.3714552-1-ruanjinjie@huawei.com/
Changes in v4:
- Rework/cleanup split into a few patches as Mark suggested.
- Replace the interrupts_enabled() macro with regs_irqs_disabled() instead
of leaving it in place.
- Remove rcu and lockdep state in pt_regs by using temporary
irqentry_state_t as Mark suggested.
- Remove some unnecessary intermediate functions to make the code clearer.
- Rework the preempt IRQ and PREEMPT_DYNAMIC code
to make the switch clearer.
- arch_prepare_*_entry/exit() -> arch_pre_*_entry/exit().
- Expand the arch functions comment.
- Move the arch functions closer to their callers.
- Declare saved_reg in the for block.
- Remove arch_exit_to_kernel_mode_prepare(), arch_enter_from_kernel_mode().
- Adjust "Add few arch functions to use generic entry" patch to be
the penultimate.
- Update the commit message.
- Add suggested-by.
- Link to v3: https://lore.kernel.org/all/20240629085601.470241-1-ruanjinjie@huawei.com/
Changes in v3:
- Test the MTE test cases.
- Handle forget_syscall() in arch_post_report_syscall_entry().
- Make the arch functions not use __weak as Thomas suggested, so move
them to entry-common.h, and fold arch_forget_syscall() into
arch_post_report_syscall_entry() as suggested.
- Move report_single_step() to thread_info.h for arm64.
- Change __always_inline to inline, and add inline to the other arch
functions.
- Remove the unused signal.h include from entry-common.h.
- Add Suggested-by.
- Update the commit message.
Changes in v2:
- Add tested-by.
- Fix a bug where arch_post_report_syscall_entry() was not called in
syscall_trace_enter() if ptrace_report_syscall_entry() returned non-zero.
- Refactor report_syscall().
- Add comment for arch_prepare_report_syscall_exit().
- Adjust entry-common.h header file inclusion to alphabetical order.
- Update the commit message.
Jinjie Ruan (15):
arm64: Remove unused _TIF_WORK_MASK
arm64/ptrace: Split report_syscall()
arm64/ptrace: Return early for ptrace_report_syscall_entry() error
arm64/ptrace: Refactor syscall_trace_enter/exit()
arm64: ptrace: Move rseq_syscall() before audit_syscall_exit()
arm64: syscall: Rework el0_svc_common()
arm64/ptrace: Not check _TIF_SECCOMP/SYSCALL_EMU for
syscall_exit_work()
arm64/ptrace: Do not report_syscall_exit() for
PTRACE_SYSEMU_SINGLESTEP
arm64/ptrace: Expand secure_computing() in place
arm64/ptrace: Use syscall_get_arguments() helper
entry: Split syscall_exit_to_user_mode_work() for arch reuse
entry: Add arch_ptrace_report_syscall_entry/exit()
arm64: entry: Convert to generic entry
arm64: Inline el0_svc_common()
entry: Inline syscall_exit_work()
kemal (1):
selftests: sud_test: Support aarch64
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/entry-common.h | 76 ++++++++++++++
arch/arm64/include/asm/syscall.h | 19 +++-
arch/arm64/include/asm/thread_info.h | 22 +----
arch/arm64/kernel/debug-monitors.c | 7 ++
arch/arm64/kernel/ptrace.c | 94 ------------------
arch/arm64/kernel/signal.c | 2 +-
arch/arm64/kernel/syscall.c | 29 ++----
include/linux/entry-common.h | 98 ++++++++++++++++---
kernel/entry/syscall-common.c | 60 +++++-------
.../syscall_user_dispatch/sud_test.c | 4 +
11 files changed, 220 insertions(+), 193 deletions(-)
--
2.34.1
This series extends BPF's cryptographic capabilities by adding kfuncs for
SHA hashing and ECDSA signature verification. These functions enable BPF
programs to perform cryptographic operations for use cases such as content
verification, integrity checking, and data authentication.
BPF programs increasingly need to verify data integrity and authenticity in
networking, security, and observability contexts. While BPF already supports
symmetric encryption/decryption, it lacks support for:
1. Cryptographic hashing - needed for content verification, fingerprinting,
and preparing message digests for signature operations
2. Asymmetric signature verification - needed to verify signed data without
requiring the signing key in the datapath
These capabilities enable use cases such as:
- Verifying signed network packets or application data in XDP/TC programs
- Implementing integrity checks in tracing and security monitoring
- Building zero-trust security models where BPF programs verify credentials
- Content-addressed storage and deduplication in BPF-based filesystems
Implementation:
The implementation follows BPF's existing crypto patterns (a usage sketch
follows the list below):
1. Uses bpf_dynptr for safe memory access without page fault risks
2. Leverages the kernel's existing crypto library (lib/crypto/sha256.c and
crypto/ecdsa.c) rather than reimplementing algorithms
3. Provides context-based API for ECDSA to enable key reuse and support
multiple program types (syscall, XDP, TC)
4. Includes comprehensive selftests with NIST test vectors
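As a rough illustration of how a BPF program would use the new hashing
kfunc: the sketch below assumes the bpf_crypto_hash() prototype and the
"hash"/"sha256" type/algo strings from the series description, while the
ctx create/release kfuncs are the existing ones.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Existing crypto kfuncs plus the proposed hash kfunc (prototype assumed). */
struct bpf_crypto_ctx *bpf_crypto_ctx_create(const struct bpf_crypto_params *params,
					     u32 params__sz, int *err) __ksym;
void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) __ksym;
int bpf_crypto_hash(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src,
		    const struct bpf_dynptr *dst) __ksym;

char msg[] = "abc";	/* NIST "abc" test vector input */
__u8 digest[32];	/* SHA-256 output */

SEC("syscall")
int sha256_abc(void *ctx)
{
	struct bpf_crypto_params params = { .type = "hash", .algo = "sha256" };
	struct bpf_dynptr src, dst;
	struct bpf_crypto_ctx *cctx;
	int err = 0;

	cctx = bpf_crypto_ctx_create(&params, sizeof(params), &err);
	if (!cctx)
		return err;

	/* Wrap input and output buffers in dynptrs for safe access. */
	bpf_dynptr_from_mem(msg, sizeof(msg) - 1, 0, &src);
	bpf_dynptr_from_mem(digest, sizeof(digest), 0, &dst);
	err = bpf_crypto_hash(cctx, &src, &dst);

	bpf_crypto_ctx_release(cctx);
	return err;
}

char _license[] SEC("license") = "GPL";
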
Patch 1: bpf: Extend bpf_crypto_type with hash operations
- Adds hash operation callbacks to bpf_crypto_type structure
- Adds hash() and digestsize() function pointers
- Must come before crypto module to maintain bisectability
Patch 2: crypto: Add BPF hash algorithm type registration module
- Adds bpf_crypto_shash module in crypto/ subsystem
- Registers hash type with BPF crypto infrastructure
- Enables hash algorithm access through unified bpf_crypto_type interface
- Implements callbacks: alloc_tfm, free_tfm, hash, digestsize, get_flags
- Manages shash_desc lifecycle internally
Patch 3: bpf: Add SHA hash kfunc for cryptographic hashing
- Adds bpf_crypto_hash() kfunc for SHA-256/384/512
- Updates bpf_crypto_ctx_create() to support keyless operations
- Protected by CONFIG_CRYPTO_HASH2 guards
- Uses kernel's crypto library implementations
- Fixed u64 types for dynptr sizes to prevent truncation
Patch 4: selftests/bpf: Add tests for bpf_crypto_hash kfunc
- Tests basic functionality with NIST "abc" test vectors
- Validates error handling for invalid parameters (zero-length input)
- Ensures correct hash output for SHA-256, SHA-384, and SHA-512
- Adds CONFIG_CRYPTO_HASH2 and CONFIG_CRYPTO_SHA512 to selftest config
- Refactored test setup code to reduce duplication
Patch 5: bpf: Add ECDSA signature verification kfuncs
- Context-based API: bpf_ecdsa_ctx_create/acquire/release pattern
- Supports NIST curves (P-256, P-384, P-521)
- Adds bpf_ecdsa_verify() for signature verification
- Includes size query functions: keysize, digestsize, maxsize
- Enables use in non-sleepable contexts via pre-allocated contexts
- Uses the crypto_sig API with p1363 format (r || s signatures); see the
sketch below
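On the kernel side, the verification ultimately funnels into the crypto_sig
API. The sketch below is illustrative only (it assumes the kfunc context
wraps a struct crypto_sig transform, which the summary above does not spell
out) and is not the code of this series.

#include <linux/err.h>
#include <crypto/sig.h>

/* Sketch only: verify a P1363 (r || s) ECDSA-P256 signature over a digest. */
static int ecdsa_p256_verify_sketch(const void *pub, unsigned int pub_len,
				    const void *sig, unsigned int sig_len,
				    const void *digest, unsigned int digest_len)
{
	struct crypto_sig *tfm;
	int err;

	tfm = crypto_alloc_sig("p1363(ecdsa-nist-p256)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_sig_set_pubkey(tfm, pub, pub_len);
	if (!err)
		err = crypto_sig_verify(tfm, sig, sig_len, digest, digest_len);

	crypto_free_sig(tfm);
	return err;
}
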
Patch 6: selftests/bpf: Add tests for ECDSA signature verification
- Tests valid signature acceptance with RFC 6979 test vectors for P-256
- Tests invalid signature rejection
- Tests size query functions (keysize, digestsize, maxsize)
- Uses well-known NIST test vectors with "sample" message
- Adds CONFIG_CRYPTO_ECDSA to selftest config
v2:
- Fixed redundant __bpf_dynptr_is_rdonly() checks (Vadim)
- Added BPF hash algorithm type registration module in crypto/ subsystem
- Added CONFIG_CRYPTO_HASH2 guards around bpf_crypto_hash() kfunc and its
BTF registration, matching the pattern used for CONFIG_CRYPTO_ECDSA
- Added mandatory digestsize validation for hash operations
v3:
- Fixed patch ordering - header changes now in separate first commit before
crypto module to ensure bisectability (bot+bpf-ci)
- Fixed type mismatch - changed u32 to u64 for dynptr sizes in
bpf_crypto_hash() to match __bpf_dynptr_size() return type (Mykyta)
- Added CONFIG_CRYPTO_ECDSA to selftest config (Song)
- Refactored test code duplication with setup_skel() helper (Song)
- Added copyright notices to all new files
Daniel Hodges (6):
bpf: Extend bpf_crypto_type with hash operations
crypto: Add BPF hash algorithm type registration module
bpf: Add SHA hash kfunc for cryptographic hashing
selftests/bpf: Add tests for bpf_crypto_hash kfunc
bpf: Add ECDSA signature verification kfuncs
selftests/bpf: Add tests for ECDSA signature verification kfuncs
crypto/Makefile | 3 +
crypto/bpf_crypto_shash.c | 95 ++++++
include/linux/bpf_crypto.h | 2 +
kernel/bpf/crypto.c | 306 +++++++++++++++++-
tools/testing/selftests/bpf/config | 3 +
.../selftests/bpf/prog_tests/crypto_hash.c | 147 +++++++++
.../selftests/bpf/prog_tests/ecdsa_verify.c | 75 +++++
.../testing/selftests/bpf/progs/crypto_hash.c | 142 ++++++++
.../selftests/bpf/progs/ecdsa_verify.c | 160 +++++++++
9 files changed, 925 insertions(+), 8 deletions(-)
create mode 100644 crypto/bpf_crypto_shash.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/crypto_hash.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/ecdsa_verify.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_hash.c
create mode 100644 tools/testing/selftests/bpf/progs/ecdsa_verify.c
--
2.51.0
Patch series "Fix va_high_addr_switch.sh test failure - again", v1.
There are two issues with the va_high_addr_switch test. One is that the
test's return value is ignored in va_high_addr_switch.sh. The second is
that va_high_addr_switch requires 6 hugepages but only 5 are reserved.
Besides that, the nr_hugepages setup in run_vmtests.sh for arm64 can be
done in va_high_addr_switch.sh too.
This patch (of 3):
The exit value should be the return value of va_high_addr_switch, otherwise
a test failure would be silently ignored.
Fixes: d9d957bd7b61 ("selftests/mm: alloc hugepages in va_high_addr_switch test")
CC: Luiz Capitulino <luizcap@redhat.com>
Signed-off-by: Chunyu Hu <chuhu@redhat.com>
---
tools/testing/selftests/mm/va_high_addr_switch.sh | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tools/testing/selftests/mm/va_high_addr_switch.sh b/tools/testing/selftests/mm/va_high_addr_switch.sh
index a7d4b02b21dd..f89fe078a8e6 100755
--- a/tools/testing/selftests/mm/va_high_addr_switch.sh
+++ b/tools/testing/selftests/mm/va_high_addr_switch.sh
@@ -114,4 +114,6 @@ save_nr_hugepages
# 4 keep_mapped pages, and one for tmp usage
setup_nr_hugepages 5
./va_high_addr_switch --run-hugetlb
+retcode=$?
restore_nr_hugepages
+exit $retcode
--
2.49.0
As described in the help string, the user might want to disable these
tests if they don't like to see stacktraces/BUGs etc. in their kernel log.
However, if they enable PANIC_ON_OOPS, these tests also crash the
machine, which it's safe to assume _almost_ nobody wants.
One might argue that _absolutely_ nobody ever wants their kernel to
crash so this should just be a hard dependency instead of a default.
However, since this is rather special code that's anyway concerned with
deliberately doing "bad" things, the normal rules don't seem to apply,
hence prefer flexibility and allow users to set up a crashing Kconfig if
they so choose.
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
lib/kunit/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/kunit/Kconfig b/lib/kunit/Kconfig
index 50ecf55d2b9c8a82f2aff7a0b4156bd6179b0a2f..498cc51e493dc9a819e012b8082fb765f25512b9 100644
--- a/lib/kunit/Kconfig
+++ b/lib/kunit/Kconfig
@@ -28,7 +28,7 @@ config KUNIT_FAULT_TEST
bool "Enable KUnit tests which print BUG stacktraces"
depends on KUNIT_TEST
depends on !UML
- default y
+ default !PANIC_ON_OOPS
help
Enables fault handling tests for the KUnit framework. These tests may
trigger a kernel BUG(), and the associated stack trace, even when they
---
base-commit: 7bc16e72ddb993d706f698c2f6cee694e485f557
change-id: 20251207-kunit-fault-no-panic-e9bdce848031
Best regards,
--
Brendan Jackman <jackmanb@google.com>
Hi,
the kernel selftests do not currently support glibc v2.35 and older.
This is primarily because glibc v2.36 added support for syscall API
functions such as fsopen, move_mount, fsmount, open_tree, and fsconfig,
and those API functions are now used in several tests.
As a result, glibc v2.35 and older are no longer supported, at least
not for the affected tests.
Historically the selftest framework implemented such functions with
wrappers named sys_<syscall>. This means that sys_<syscall> and <syscall>
functions are now used in parallel.
I see a number of possibilities to solve the problem.
1) Do nothing. Document somewhere that glibc v2.35 and older is not
supported by the kernel selftests.
2) Document that glibc v2.35 and older is not supported, drop
all wrappers implementing the sys_<syscall> functions, and rename
the calling code to <syscall>.
3) Rename all calls to sys_<syscall> to <syscall> and modify the
wrapper code to only implement those functions for glibc v2.35 and
older (a sketch of this approach follows the list).
4) Add defines to the existing wrapper code to also define <syscall>
in addition to sys_<syscall>, and leave the code otherwise alone.
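For illustration, 3) or 4) could look roughly like the sketch below:
provide the wrapper only when glibc does not already export the function
(fsopen() appeared in glibc v2.36). The exact guard and placement are
illustrative.

#include <features.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __GLIBC_PREREQ
#define __GLIBC_PREREQ(maj, min) 0	/* assume no glibc wrappers */
#endif

/* Sketch only: define fsopen() ourselves for glibc v2.35 and older. */
#if !__GLIBC_PREREQ(2, 36)
static inline int fsopen(const char *fsname, unsigned int flags)
{
	return syscall(__NR_fsopen, fsname, flags);
}
#endif
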
What would be the preferred solution? My personal preference would be 3),
followed by 4).
Thanks,
Guenter