This unintended LRU eviction issue was observed while developing the
selftest for
"[PATCH bpf-next v10 0/8] bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags for percpu maps" [1].
When updating an existing element in lru_hash or lru_percpu_hash maps,
the current implementation calls prealloc_lru_pop() to get a new node
before checking if the key already exists. If the map is full, this
triggers LRU eviction and removes an existing element, even though the
update operation only needs to modify the value in-place.
In the selftest of the aforementioned patch, this had to be worked around
by reserving an extra entry to avoid triggering eviction in
__htab_lru_percpu_map_update_elem(). However, the underlying issue remains
problematic because:
1. Users may unexpectedly lose entries when updating existing keys in a
full map.
2. The eviction overhead is unnecessary for existing key updates.
This patchset fixes the issue by first checking if the key exists before
allocating a new node. If the key is found, update the value using the extra
LRU node without triggering any eviction. Only proceed with node allocation
if the key does not exist.
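To make the user-visible symptom concrete, here is a minimal user-space
sketch (not the selftest from this series): it fills an lru_hash map via
libbpf, updates an existing key in place and counts how many entries
survive. It needs CAP_BPF/root, and whether an eviction is actually
observed depends on the kernel and on LRU internals, so treat it as an
illustration rather than a guaranteed reproducer. With this series applied,
the in-place update should leave the other entries intact.

#include <stdio.h>
#include <bpf/bpf.h>

int main(void)
{
	const __u32 max_entries = 256;
	__u32 key, val = 0, missing = 0;
	int fd, err;

	fd = bpf_map_create(BPF_MAP_TYPE_LRU_HASH, "lru_demo",
			    sizeof(key), sizeof(val), max_entries, NULL);
	if (fd < 0) {
		perror("bpf_map_create");
		return 1;
	}

	/* Fill the map to capacity. */
	for (key = 0; key < max_entries; key++)
		bpf_map_update_elem(fd, &key, &val, BPF_ANY);

	/* Update an existing key in place; before this series the update
	 * path pops a new LRU node first, which may evict an unrelated
	 * entry from the full map. */
	key = 0;
	val = 42;
	err = bpf_map_update_elem(fd, &key, &val, BPF_EXIST);

	for (key = 0; key < max_entries; key++)
		if (bpf_map_lookup_elem(fd, &key, &val))
			missing++;

	printf("update(BPF_EXIST) = %d, entries missing afterwards: %u\n",
	       err, missing);
	return 0;
}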
Links:
[1] https://lore.kernel.org/bpf/20251117162033.6296-1-leon.hwang@linux.dev/
Changes:
v2 -> v3:
* Rebase onto the latest tree to fix CI build failures.
* Free special fields of 'l_old' on the non-error path (per bot).
v1 -> v2:
* Tidy hash handling in LRU code.
* Factor out bpf_lru_node_reset_state helper.
* Factor out bpf_lru_move_next_inactive_rotation helper.
* Update element using preallocated extra elements in order to avoid
breaking the update atomicity (per Alexei).
* Check values on other CPUs in tests (per bot).
* v1: https://lore.kernel.org/bpf/20251202153032.10118-1-leon.hwang@linux.dev/
Leon Hwang (5):
bpf: lru: Tidy hash handling in LRU code
bpf: lru: Factor out bpf_lru_node_reset_state helper
bpf: lru: Factor out bpf_lru_move_next_inactive_rotation helper
bpf: lru: Fix unintended eviction when updating lru hash maps
selftests/bpf: Add tests to verify no unintended eviction when
updating lru_[percpu_,]hash maps
kernel/bpf/bpf_lru_list.c | 228 ++++++++++++++----
kernel/bpf/bpf_lru_list.h | 10 +-
kernel/bpf/hashtab.c | 96 +++++++-
.../selftests/bpf/prog_tests/htab_update.c | 129 ++++++++++
4 files changed, 408 insertions(+), 55 deletions(-)
--
2.52.0
Nicely enough, MS defines a button type for a pressurepad touchpad [2]
and it looks like most touchpad vendors fill this in.
The selftests require a bit of prep work (and a hack for the test
itself) - hidtools 0.12 requires python-libevdev 0.13 which in turn
provides constructors for unknown properties.
[2] https://learn.microsoft.com/en-us/windows-hardware/design/component-guideli…
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
---
Changes in v2:
- rebased on top of 6.18
- hid-multitouch changes split out into a separate patch
- Patches reordered for slightly nicer history, tests changes are
grouped together
- Link to v1: https://lore.kernel.org/r/20251121-wip-hid-pressurepad-v1-0-e32e5565a527@wh…
---
Peter Hutterer (4):
HID: multitouch: set INPUT_PROP_PRESSUREPAD based on Digitizer/Button Type
selftests/hid: require hidtools 0.12
selftests/hid: use an enum class for the different button types
selftests/hid: add a test for the Digitizer/Button Type pressurepad
drivers/hid/hid-multitouch.c | 12 ++++-
tools/testing/selftests/hid/tests/conftest.py | 14 +++++
.../testing/selftests/hid/tests/test_multitouch.py | 61 +++++++++++++++++-----
3 files changed, 73 insertions(+), 14 deletions(-)
---
base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
change-id: 20251111-wip-hid-pressurepad-8a800cdf1813
Best regards,
--
Peter Hutterer <peter.hutterer@who-t.net>
Since the merge with v6.19-rc, my CI has been broken because of the newly
enabled -fms-extensions.
Add the missing flags when generating the CFLAGS for bpf.o to solve this,
so the tests keep running while the patches are applied.
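For context, this is the kind of construct -fms-extensions permits: an
anonymous struct member declared through its tag, whose fields become
directly accessible from the containing struct. The type names below are
invented for the example and are not taken from the failing HID sources; a
file like this only compiles when the flag is passed, which is why the BPF
object CFLAGS need to gain it.

struct hw_state {
	int ctrl;
	int status;
};

struct hid_dev_example {
	struct hw_state;	/* anonymous member: needs -fms-extensions */
	int irq;
};

int read_ctrl(struct hid_dev_example *d)
{
	/* hw_state's fields are reachable directly through the parent. */
	return d->ctrl;
}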
Cheers,
Benjamin
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
Benjamin Tissoires (2):
HID: bpf: fix bpf compilation with -fms-extensions
selftests/hid: fix bpf compilations due to -fms-extensions
drivers/hid/bpf/progs/Makefile | 6 ++++--
tools/testing/selftests/hid/Makefile | 2 ++
2 files changed, 6 insertions(+), 2 deletions(-)
---
base-commit: fde4ce068d1bccacf1e2d6a28697a3847f28e0a6
change-id: 20260106-wip-fix-bpf-compilation-4707e0faacb1
Best regards,
--
Benjamin Tissoires <bentiss@kernel.org>
pytest can run unittest-based testsuites, like kunit_tool_test.py.
It has more features than the standard runner.
Unfortunately a few minor issues currently break this.
Adapt the testsuite to work with pytest.
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
---
Changes in v2:
- Rebase onto current kselftest/kunit branch
- Pick up review tags from LKML
- Link to v1: https://lore.kernel.org/r/20251230-kunit-pytest-v1-0-e2dae0dae200@linutroni…
---
Thomas Weißschuh (2):
kunit: tool: test: Rename test_data_path() to _test_data_path()
kunit: tool: test: Don't rely on implicit working directory change
tools/testing/kunit/kunit_tool_test.py | 61 +++++++++++++++++-----------------
1 file changed, 31 insertions(+), 30 deletions(-)
---
base-commit: ab150c2bbafe9425759eca1e45e08d4ad1456818
change-id: 20251230-kunit-pytest-259a1eb36a42
Best regards,
--
Thomas Weißschuh <thomas.weissschuh@linutronix.de>
These two patches improve the PSP selftests when no PSP devices are found
(report test failures instead of a Python exception) and when multiple PSP
devices are present (use the first one instead of failing).
Cosmin Ratiu (2):
selftests: drv-net: psp: Use first device when multiple are present
selftests: drv-net: psp: Don't fail psp_responder when no PSP devs
found
.../selftests/drivers/net/psp_responder.c | 21 ++++++++++---------
1 file changed, 11 insertions(+), 10 deletions(-)
--
2.45.0
While debugging issues related to aarch64-only systems I ran into
speedbumps due to the lack of detail in the results reported when the
guest register read and reset value preservation tests were run: they
generated an immediately fatal assert without indicating which register
was being tested. Update these tests to report a result per register,
making it much easier to see what problem is being reported.
A similar, though less severe, issue exists with the validation of the
individual bitfields in registers due to the use of immediately fatal
asserts. Update those asserts to be standard kselftest reports.
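As a hedged sketch of the reporting change (not the literal diff): the KVM
selftest harness' TEST_ASSERT() aborts the whole binary on the first
mismatch, while ksft_test_result() from tools/testing/selftests/kselftest.h
emits one TAP line per register and lets the remaining registers still be
checked.

#include <inttypes.h>
#include <stdint.h>

#include "kselftest.h"

static void check_id_reg(const char *name, uint64_t got, uint64_t want)
{
	/* Before: TEST_ASSERT(got == want, "unexpected value");
	 * which stops the run without naming the offending register. */

	/* After: a named, non-fatal per-register result. */
	ksft_test_result(got == want, "%s: 0x%" PRIx64 " == 0x%" PRIx64 "\n",
			 name, got, want);
}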
Finally we have a fix for spurious errors on some NV systems.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
Changes in v4:
- Correct check for 32 bit ID registers.
- Link to v3: https://patch.msgid.link/20251219-kvm-arm64-set-id-regs-aarch64-v3-0-bfa474…
Changes in v3:
- Rebase onto v6.19-rc1.
- Link to v2: https://patch.msgid.link/20251114-kvm-arm64-set-id-regs-aarch64-v2-0-672f21…
Changes in v2:
- Add a fix for spurious failures with 64 bit only guests.
- Link to v1: https://patch.msgid.link/20251030-kvm-arm64-set-id-regs-aarch64-v1-0-96fe0d…
---
Mark Brown (5):
KVM: selftests: arm64: Report set_id_reg reads of test registers as tests
KVM: selftests: arm64: Report register reset tests individually
KVM: selftests: arm64: Make set_id_regs bitfield validity checks non-fatal
KVM: selftests: arm64: Skip all 32 bit IDs when set_id_regs is aarch64 only
KVM: selftests: arm64: Use is_aarch32_id_reg() in test_vm_ftr_id_regs()
tools/testing/selftests/kvm/arm64/set_id_regs.c | 159 ++++++++++++++++++------
1 file changed, 119 insertions(+), 40 deletions(-)
---
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
change-id: 20251028-kvm-arm64-set-id-regs-aarch64-ebb77969401c
Best regards,
--
Mark Brown <broonie@kernel.org>
This patch series suggests fixes for several corner cases in the RISC-V
vector ptrace implementation:
- init vector context with proper vlenb, so that an early-attached
debugger does not read a zero vlenb
- follow gdbserver expectations and return ENODATA instead of EINVAL
if the vector extension is supported but not yet activated for the
traced process
- validate input vector csr registers in ptrace, to maintain an accurate
view of the tracee's vector context across multiple halt/resume
debug cycles
For a detailed description see the appropriate commit messages. A new test
suite, validate_v_ptrace, is added to tools/testing/selftests/riscv/vector
to verify some of the vector ptrace functionality and corner cases.
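For a concrete picture of what this means for debuggers, here is a minimal
tracer sketch (error handling trimmed, assumes kernel headers that define
NT_RISCV_VECTOR); it only inspects the return code, so it makes no
assumption about the regset layout, and it is an illustration rather than a
copy of the new selftest:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/elf.h>		/* NT_RISCV_VECTOR */

int main(void)
{
	unsigned char buf[8192];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	pid_t pid = fork();

	if (pid == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);		/* stop before touching vector state */
		_exit(0);
	}

	waitpid(pid, NULL, 0);
	if (ptrace(PTRACE_GETREGSET, pid, NT_RISCV_VECTOR, &iov) == 0)
		printf("vector regset read, %zu bytes\n", iov.iov_len);
	else if (errno == ENODATA)
		printf("V supported, but not yet active in the tracee\n");
	else
		printf("vector regset unavailable: %s\n", strerror(errno));

	kill(pid, SIGKILL);
	return 0;
}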
So far tested on the following platforms:
- test in QEMU rv32/rv64
- test on c908 (BananaPi CanMV K230D Zero)
- test on c906 (MangoPi MQ Pro)
Previous versions:
- v4: https://lore.kernel.org/linux-riscv/20251108194207.1257866-1-geomatsi@gmail…
- v3: https://lore.kernel.org/linux-riscv/20251025210655.43099-1-geomatsi@gmail.c…
- v2: https://lore.kernel.org/linux-riscv/20250821173957.563472-1-geomatsi@gmail.…
- v1: https://lore.kernel.org/linux-riscv/20251007115840.2320557-1-geomatsi@gmail…
Changes in v5:
- add support and minimal set of tests for XTheadVector
Changes in v4:
The form 'vsetvli x0, x0, ...' can only be used if VLMAX remains
unchanged, see section 6.2 of the RVV spec. This condition was not met by
the initial values in the selftests w.r.t. the initial zeroed context.
QEMU accepted such values, but actual hardware (c908, BananaPi CanMV Zero
board) did not, setting vill. So fix the selftests after testing on
hardware (see the sketch after the changelog):
- replace 'vsetvli x0, x0, ...' by 'vsetvli rd, x0, ...'
- the fixed instruction returns VLMAX, so use it in checks as well
- replace the fixed vlenb == 16 assumption in the syscall test
Changes in v3:
Address the review comments by Andy Chiu and rework the approach:
- drop forced vector context save entirely
- perform strict validation of vector csr regs in ptrace
Changes in v2:
- add thread_info flag to allow to force vector context save
- force vector context save after vector ptrace to ensure valid vector
context in the next ptrace operations
- force vector context save on the first context switch after vector
context init to get proper vlenb
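As referenced in the v4 notes above, a hedged sketch of the vsetvli form
the selftests now use: 'vsetvli rd, x0, ...' sets vl to VLMAX for the
requested vtype and returns VLMAX, whereas 'vsetvli x0, x0, ...' is only
legal when VLMAX stays unchanged (RVV spec, section 6.2) and sets vill
otherwise. It needs a toolchain and CPU/QEMU with the V extension and is
an illustration, not the literal test code.

static inline unsigned long vlmax_e32m1(void)
{
	unsigned long vlmax;

	/* rd != x0, rs1 == x0: set vl to VLMAX and return VLMAX. */
	asm volatile("vsetvli %0, x0, e32, m1, ta, ma"
		     : "=r" (vlmax) : : "memory");
	return vlmax;
}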
---
Ilya Mamay (1):
riscv: ptrace: return ENODATA for inactive vector extension
Sergey Matyukevich (8):
riscv: vector: init vector context with proper vlenb
riscv: csr: define vtype register elements
riscv: ptrace: validate input vector csr registers
selftests: riscv: test ptrace vector interface
selftests: riscv: verify initial vector state with ptrace
selftests: riscv: verify syscalls discard vector context
selftests: riscv: verify ptrace rejects invalid vector csr inputs
selftests: riscv: verify ptrace accepts valid vector csr values
arch/riscv/include/asm/csr.h | 17 +
arch/riscv/kernel/ptrace.c | 98 +-
arch/riscv/kernel/vector.c | 12 +-
.../testing/selftests/riscv/vector/.gitignore | 2 +
tools/testing/selftests/riscv/vector/Makefile | 10 +-
.../selftests/riscv/vector/v_helpers.c | 23 +
.../selftests/riscv/vector/v_helpers.h | 2 +
.../riscv/vector/validate_v_ptrace.c | 919 ++++++++++++++++++
8 files changed, 1075 insertions(+), 8 deletions(-)
create mode 100644 tools/testing/selftests/riscv/vector/validate_v_ptrace.c
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
--
2.52.0
This patch set introduces the BPF_F_CPU and BPF_F_ALL_CPUS flags for
percpu maps, as the need for a BPF_F_ALL_CPUS flag for percpu_array maps
was discussed in the thread of
"[PATCH bpf-next v3 0/4] bpf: Introduce global percpu data" [1].
The goal of the BPF_F_ALL_CPUS flag is to reduce data caching overhead in
light skeletons by allowing a single value to be reused to update values
across all CPUs. This avoids the M:N problem, where M cached values are
needed to update a map on a kernel with N CPUs.
The BPF_F_CPU flag is accompanied by *flags*-embedded cpu info, which
specifies the target CPU for the operation:
* For lookup operations: the flag together with the embedded cpu info
enables querying the value on the specified CPU.
* For update operations: the flag together with the embedded cpu info
enables updating the value for the specified CPU.
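A hedged usage sketch of this calling convention: the target CPU rides in
the upper 32 bits of the flags argument next to BPF_F_CPU, while
BPF_F_ALL_CPUS replicates a single value to every CPU's slot.
BPF_F_CPU/BPF_F_ALL_CPUS are the UAPI flags added by this series; the two
wrappers are only for illustration.

#include <bpf/bpf.h>

int percpu_update_on_cpu(int map_fd, const void *key, const void *value,
			 __u32 cpu)
{
	__u64 flags = BPF_F_CPU | ((__u64)cpu << 32);	/* cpu in high 32 bits */

	return bpf_map_update_elem(map_fd, key, value, flags);
}

int percpu_update_all_cpus(int map_fd, const void *key, const void *value)
{
	/* One value-sized buffer instead of num_possible_cpus() copies. */
	return bpf_map_update_elem(map_fd, key, value, BPF_F_ALL_CPUS);
}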
Links:
[1] https://lore.kernel.org/bpf/20250526162146.24429-1-leon.hwang@linux.dev/
Changes:
v12 -> v13:
* No changes, rebased on latest tree.
v11 -> v12:
* Dropped the v11 changes.
* Stabilized the lru_percpu_hash map test by keeping an extra spare entry,
which can be used temporarily during updates to avoid unintended LRU
evictions.
v10 -> v11:
* Support the combination of BPF_EXIST and BPF_F_CPU/BPF_F_ALL_CPUS for
update operations.
* Fix unstable lru_percpu_hash map test using the combination of
BPF_EXIST and BPF_F_CPU/BPF_F_ALL_CPUS to avoid LRU eviction
(reported by Alexei).
v9 -> v10:
* Add tests to verify array and hash maps do not support BPF_F_CPU and
BPF_F_ALL_CPUS flags.
* Address comment from Andrii:
* Copy map value using copy_map_value_long for percpu_cgroup_storage
maps in a separate patch.
v8 -> v9:
* Change value type from u64 to u32 in selftests.
* Address comments from Andrii:
* Keep value_size unaligned and update everywhere for consistency when
cpu flags are specified.
* Update value by getting pointer for percpu hash and percpu
cgroup_storage maps.
v7 -> v8:
* Address comments from Andrii:
* Check BPF_F_LOCK when updating percpu_array, percpu_hash and
lru_percpu_hash maps.
* Refactor flags check in __htab_map_lookup_and_delete_batch().
* Keep value_size unaligned and copy value using copy_map_value() in
__htab_map_lookup_and_delete_batch() when BPF_F_CPU is specified.
* Update warn message in libbpf's validate_map_op().
* Update comment of libbpf's bpf_map__lookup_elem().
v6 -> v7:
* Get correct value size for percpu_hash and lru_percpu_hash in
update_batch API.
* Set 'count' as 'max_entries' in test cases for lookup_batch API.
* Address comment from Alexei:
* Move cpu flags check into bpf_map_check_op_flags().
v5 -> v6:
* Move bpf_map_check_op_flags() from 'bpf.h' to 'syscall.c'.
* Address comments from Alexei:
* Drop the refactoring code of data copying logic for percpu maps.
* Drop bpf_map_check_op_flags() wrappers.
v4 -> v5:
* Address comments from Andrii:
* Refactor data copying logic for all percpu maps.
* Drop this_cpu_ptr() micro-optimization.
* Drop cpu check in libbpf's validate_map_op().
* Enhance bpf_map_check_op_flags() using *allowed flags* instead of
'extra_flags_mask'.
v3 -> v4:
* Address comments from Andrii:
* Remove unnecessary map_type check in bpf_map_value_size().
* Reduce code churn.
* Remove unnecessary do_delete check in
__htab_map_lookup_and_delete_batch().
* Introduce bpf_percpu_copy_to_user() and bpf_percpu_copy_from_user().
* Rename check_map_flags() to bpf_map_check_op_flags() with
extra_flags_mask.
* Add human-readable pr_warn() explanations in validate_map_op().
* Use flags in bpf_map__delete_elem() and
bpf_map__lookup_and_delete_elem().
* Drop "for alignment reasons".
v3 link: https://lore.kernel.org/bpf/20250821160817.70285-1-leon.hwang@linux.dev/
v2 -> v3:
* Address comments from Alexei:
* Use BPF_F_ALL_CPUS instead of BPF_ALL_CPUS magic.
* Introduce these two cpu flags for all percpu maps.
* Address comments from Jiri:
* Reduce some unnecessary u32 cast.
* Refactor more generic map flags check function.
* A code style issue.
v2 link: https://lore.kernel.org/bpf/20250805163017.17015-1-leon.hwang@linux.dev/
v1 -> v2:
* Address comments from Andrii:
* Embed cpu info as high 32 bits of *flags* totally.
* Use ERANGE instead of E2BIG.
* Few format issues.
Leon Hwang (7):
bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags
bpf: Add BPF_F_CPU and BPF_F_ALL_CPUS flags support for percpu_array
maps
bpf: Add BPF_F_CPU and BPF_F_ALL_CPUS flags support for percpu_hash
and lru_percpu_hash maps
bpf: Copy map value using copy_map_value_long for
percpu_cgroup_storage maps
bpf: Add BPF_F_CPU and BPF_F_ALL_CPUS flags support for
percpu_cgroup_storage maps
libbpf: Add BPF_F_CPU and BPF_F_ALL_CPUS flags support for percpu maps
selftests/bpf: Add cases to test BPF_F_CPU and BPF_F_ALL_CPUS flags
include/linux/bpf-cgroup.h | 4 +-
include/linux/bpf.h | 35 +-
include/uapi/linux/bpf.h | 2 +
kernel/bpf/arraymap.c | 29 +-
kernel/bpf/hashtab.c | 94 +++--
kernel/bpf/local_storage.c | 27 +-
kernel/bpf/syscall.c | 37 +-
tools/include/uapi/linux/bpf.h | 2 +
tools/lib/bpf/bpf.h | 8 +
tools/lib/bpf/libbpf.c | 26 +-
tools/lib/bpf/libbpf.h | 21 +-
.../selftests/bpf/prog_tests/percpu_alloc.c | 328 ++++++++++++++++++
.../selftests/bpf/progs/percpu_alloc_array.c | 32 ++
13 files changed, 560 insertions(+), 85 deletions(-)
--
2.52.0