The series consists of two parts. The first part provides a quick fix for
the issue reported in a recent thread[1]. The issue happens when a platform
has non-uniform vector register lengths (vlenb) across cores. Specifically,
patch 1 adds a comment at a callsite of riscv_setup_vsize() to clarify how
vlenb is observed by the system. Patch 2 fixes the issue by failing the
boot process of a secondary core if its vlenb mismatches the value already
observed by the system.
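For illustration, a minimal sketch of the idea behind patch 2 (the actual
change lands in head.S/smpboot.c and is not this code), assuming the existing
riscv_v_vsize bookkeeping of 32 registers * vlenb bytes:

  static int riscv_v_check_vsize(void)
  {
          unsigned long this_vsize = csr_read(CSR_VLENB) * 32;

          if (riscv_v_vsize && riscv_v_vsize != this_vsize) {
                  pr_warn("CPU%d: vlenb mismatch, refusing to bring this hart online\n",
                          smp_processor_id());
                  return -EOPNOTSUPP;
          }
          return 0;
  }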
The second part of the series provides a finer-grained view of the Vector
extension. Patch 3 gives the legacy ISA string parser the ability to expand
single-letter extensions into their sub-extensions. Patches 3 and 4 introduce
Zve32x, Zve32f, Zve64x, Zve64f, Zve64d for ISA parsing and hwprobe. Patch 5
updates all callsites so that Vector sub-extensions are supported by the
kernel to the fullest extent.
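As a rough userspace illustration of the finer-grained reporting, a hwprobe
query for one of the new bits (the RISCV_HWPROBE_EXT_ZVE32X name is the one
this series adds to the uapi; treat it as an assumption of this sketch):

  #include <asm/hwprobe.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdio.h>

  int main(void)
  {
          struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

          /* one key/value pair, all CPUs (cpusetsize = 0, cpus = NULL) */
          if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
                  return 1;

          printf("Zve32x is %sreported\n",
                 (pair.value & RISCV_HWPROBE_EXT_ZVE32X) ? "" : "not ");
          return 0;
  }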
The two parts of the series are sent together to ease the effort of
picking up dependent patches. The first part can be merged independently
of the second if necessary.
The series is tested on QEMU, verifying that booting, Vector program
context switching, and the signal, ptrace, and prctl interfaces work
when we only report partial V from the ISA.
Note that the signal test was performed after applying the commit
c27fa53b858b ("riscv: Fix vector state restore in rt_sigreturn()")
This series should apply on the risc-v for-next branch on top of
the commit ba5ea59f768f ("riscv: Do not save the scratch CSR during suspend")
[1]: https://lore.kernel.org/all/20240228-vicinity-cornstalk-4b8eb5fe5730@spud/T…
Changes in v4:
- Add a patch to trigger prctl test on ZVE32X (9)
- Add a patch to fix an integer promotion bug in hwprobe (8) (see the sketch below)
- Fix a build fail on !CONFIG_RISCV_ISA_V (7)
- Add more comment in the assembly code change (2)
- Link to v3: https://lore.kernel.org/r/20240318-zve-detection-v3-0-e12d42107fa8@sifive.c…
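For context on the integer promotion item above, a hedged illustration of the
bug class rather than the exact patch: once extension bits move past bit 31, a
plain int shift silently loses them, so the macro has to force 64-bit
arithmetic.

  /* Illustration only: shifting a plain int past bit 31 is broken. */
  #define EXT_BIT_BROKEN(n)       (1 << (n))      /* wrong once n >= 31 */
  #define EXT_BIT_FIXED(n)        (1ULL << (n))   /* 64-bit arithmetic */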
Changelog v3:
- Include correct maintainers and mailing list into CC.
- Cleanup isa string parser code (3)
- Adjust extensions order and name (4, 5)
- Refine commit message (6)
Changelog v2:
- Update comments and commit messages (1, 2, 7)
- Refine isa_exts[] lists for zve extensions (4)
- Add a patch for dt-binding (5)
- Make ZVE* extensions depend on has_vector(ZVE32X) (6, 7)
---
Andy Chiu (9):
riscv: vector: add a comment when calling riscv_setup_vsize()
riscv: smp: fail booting up smp if inconsistent vlen is detected
riscv: cpufeature: call match_isa_ext() for single-letter extensions
riscv: cpufeature: add zve32[xf] and zve64[xfd] isa detection
dt-bindings: riscv: add Zve32[xf] Zve64[xfd] ISA extension description
riscv: hwprobe: add zve Vector subextensions into hwprobe interface
riscv: vector: adjust minimum Vector requirement to ZVE32X
hwprobe: fix integer promotion in RISCV_HWPROBE_EXT macro
selftest: run vector prctl test for ZVE32X
Documentation/arch/riscv/hwprobe.rst | 15 ++++++
.../devicetree/bindings/riscv/extensions.yaml | 30 ++++++++++++
arch/riscv/include/asm/hwcap.h | 5 ++
arch/riscv/include/asm/switch_to.h | 2 +-
arch/riscv/include/asm/vector.h | 25 ++++++----
arch/riscv/include/asm/xor.h | 2 +-
arch/riscv/include/uapi/asm/hwprobe.h | 7 ++-
arch/riscv/kernel/cpufeature.c | 56 ++++++++++++++++++----
arch/riscv/kernel/head.S | 19 +++++---
arch/riscv/kernel/kernel_mode_vector.c | 4 +-
arch/riscv/kernel/process.c | 4 +-
arch/riscv/kernel/signal.c | 6 +--
arch/riscv/kernel/smpboot.c | 14 ++++--
arch/riscv/kernel/sys_hwprobe.c | 13 ++++-
arch/riscv/kernel/vector.c | 15 +++---
arch/riscv/lib/uaccess.S | 2 +-
.../testing/selftests/riscv/vector/vstate_prctl.c | 6 +--
17 files changed, 174 insertions(+), 51 deletions(-)
---
base-commit: ba5ea59f768f67d127b319b26ba209ff67e0d9a5
change-id: 20240318-zve-detection-50106d2da527
Best regards,
--
Andy Chiu <andy.chiu(a)sifive.com>
The test is inspired by the pmu_event_filter_test implemented for x86. On
the arm64 platform, the same ability to set a PMU event filter exists
through the KVM_ARM_VCPU_PMU_V3_FILTER attribute, so add the test for arm64.
The series first creates the helper functions which can be used
for vPMU-related tests. Then, it implements the test.
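For reference, a minimal sketch of how a filter is installed on a vCPU fd with
that attribute, using the raw KVM_SET_DEVICE_ATTR ioctl (the test itself goes
through the kvm_device_attr_set() helper, see the v5 changelog below):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Allow only the events in [base_event, base_event + nevents). */
  static int set_pmu_event_filter(int vcpu_fd, __u16 base_event, __u16 nevents)
  {
          struct kvm_pmu_event_filter filter = {
                  .base_event = base_event,
                  .nevents    = nevents,
                  .action     = KVM_PMU_EVENT_ALLOW,
          };
          struct kvm_device_attr attr = {
                  .group = KVM_ARM_VCPU_PMU_V3_CTRL,
                  .attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
                  .addr  = (__u64)&filter,
          };

          return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
  }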
Changelog:
----------
v6->v7:
- Rebased to v6.9-rc3.
v5->v6:
- Rebased to v6.9-rc1.
- Collect RB.
- Add multiple filter test.
v4->v5:
- Rebased to v6.8-rc6.
- Refactor the helper function, making it fine-grained and easy to use.
- Naming improvements.
- Use the kvm_device_attr_set() helper.
- Make the test descriptor array readable and clean.
- Delete the patch which moves the pmu related helper to vpmu.h.
- Remove the kvm_supports_pmu_event_filter() function since nobody will run
this on an old kernel.
v3->v4:
- Rebased to v6.8-rc2.
v2->v3:
- Check the PMCEID in guest code instead of the PMU event count, since
different hardware may produce different event counts; checking the PMCEID
makes the test stable across platforms. [Eric]
- Fixed some typos and improved the commit messages.
v1->v2:
- Improve the commit message. [Eric]
- Fix the bug in [enable|disable]_counter. [Raghavendra & Marc]
- Add a check for whether KVM has the KVM_ARM_VCPU_PMU_V3_FILTER attribute.
- Check whether the host PMU supports the test event through PMCEID0.
- Split the test_invalid_filter() to another patch. [Eric]
v1: https://lore.kernel.org/all/20231123063750.2176250-1-shahuang@redhat.com/
v2: https://lore.kernel.org/all/20231129072712.2667337-1-shahuang@redhat.com/
v3: https://lore.kernel.org/all/20240116060129.55473-1-shahuang@redhat.com/
v4: https://lore.kernel.org/all/20240202025659.5065-1-shahuang@redhat.com/
v5: https://lore.kernel.org/all/20240229065625.114207-1-shahuang@redhat.com/
v6: https://lore.kernel.org/all/20240326033706.117189-1-shahuang@redhat.com/
Shaoqin Huang (3):
KVM: selftests: aarch64: Add helper function for the vpmu vcpu
creation
KVM: selftests: aarch64: Introduce pmu_event_filter_test
KVM: selftests: aarch64: Add invalid filter test in
pmu_event_filter_test
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/pmu_event_filter_test.c | 336 ++++++++++++++++++
.../kvm/aarch64/vpmu_counter_access.c | 33 +-
.../selftests/kvm/include/aarch64/vpmu.h | 28 ++
4 files changed, 372 insertions(+), 26 deletions(-)
create mode 100644 tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
create mode 100644 tools/testing/selftests/kvm/include/aarch64/vpmu.h
--
2.40.1
RFC v8:
=======
Major Changes:
--------------
- Fixed build error generated by patch-by-patch build.
- Applied docs suggestions from Randy.
RFC v7:
=======
Major Changes:
--------------
This revision largely rebases on top of net-next and addresses the feedback
RFCv6 received from folks, namely Jakub, Yunsheng, Arnd, David, & Pavel.
The series remains in RFC because the queue-API ndos defined in this
series are not yet implemented. I have a GVE implementation I carry out
of tree for my testing. An upstreamable GVE implementation is in the
works. Aside from that, in my estimation all the patches are ready for
review/merge. Please do take a look.
As usual the full devmem TCP changes including the full GVE driver
implementation is here:
https://github.com/mina/linux/commits/tcpdevmem-v7/
Detailed changelog:
- Use admin-perm in netlink API.
- Addressed feedback from Jakub with regards to netlink API
implementation.
- Renamed devmem.c functions to something more appropriate for that
file.
- Improve the performance seen through the page_pool benchmark.
- Fix the value definition of all the SO_DEVMEM_* uapi.
- Various fixes to documentation.
Perf - page-pool benchmark:
---------------------------
Improved performance of bench_page_pool_simple.ko tests compared to v6:
https://pastebin.com/raw/v5dYRg8L
net-next base: 8 cycle fast path.
RFC v6: 10 cycle fast path.
RFC v7: 9 cycle fast path.
RFC v7 with CONFIG_DMA_SHARED_BUFFER disabled: 8 cycle fast path,
same as baseline.
Perf - Devmem TCP benchmark:
----------------------------
Perf is about the same regardless of the changes in v7, namely the
removal of the static_branch_unlikely to improve the page_pool benchmark
performance:
189/200gbps bi-directional throughput with RX devmem TCP and regular TCP
TX i.e. ~95% line rate.
RFC v6:
=======
Major Changes:
--------------
This revision largely rebases on top of net-next and addresses the little
feedback RFCv5 received.
The series remains in RFC because the queue-API ndos defined in this
series are not yet implemented. I have a GVE implementation I carry out
of tree for my testing. An upstreamable GVE implementation is in the
works. Aside from that, in my estimation all the patches are ready for
review/merge. Please do take a look.
As usual the full devmem TCP changes including the full GVE driver
implementation is here:
https://github.com/mina/linux/commits/tcpdevmem-v6/
This version also comes with some performance data recorded in the cover
letter (see below changelog).
Detailed changelog:
- Rebased on top of the merged netmem_ref changes.
- Converted skb->dmabuf to skb->readable (Pavel). Pavel's original
suggestion was to remove the skb->dmabuf flag entirely, but when I
looked into it closely, I found that if we remove the flag we have to
dereference the shinfo(skb) pointer to obtain the first frag in order
to tell whether an skb is readable or not. This can cause a
performance regression if it dirties the cache line when the
shinfo(skb) was not really needed. Instead, I converted the skb->dmabuf
flag into a generic skb->readable flag which can be re-used by io_uring
0-copy RX (see the sketch after this changelog).
- Squashed a few locking optimizations from Eric Dumazet in the RX path
and the DEVMEM_DONTNEED setsockopt.
- Expanded the tests a bit. Added validation for invalid scenarios and
added some more coverage.
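A small sketch of the cache-line trade-off behind the skb->readable item
above; helper and field names here are placeholders for illustration, not
the series' actual API:

  /* With a flag, the check stays within the skb itself: */
  static inline bool skb_is_readable(const struct sk_buff *skb)
  {
          return skb->readable;
  }

  /*
   * Without the flag, deciding readability means looking at the first frag,
   * which dereferences skb_shinfo(skb) and may dirty an extra cache line
   * even when the caller never needed the shinfo area.
   */
  static inline bool skb_is_readable_no_flag(const struct sk_buff *skb)
  {
          return !frag_is_net_iov(&skb_shinfo(skb)->frags[0]);
  }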
Perf - page-pool benchmark:
---------------------------
bench_page_pool_simple.ko tests with and without these changes:
https://pastebin.com/raw/ncHDwAbn
AFAIK the number that really matters in the perf tests is
'tasklet_page_pool01_fast_path Per elem'. This one measures at about 8
cycles without the changes, with roughly 1 cycle of noise in some
results.
With the patches this regresses to 9 cycles, again with about 1 cycle
of noise when running the test repeatedly.
Lastly I tried disabling the static_branch_unlikely() in the
netmem_is_net_iov() check. To my surprise, disabling the
static_branch_unlikely() check brings the fast path back to 8 cycles,
but the 1 cycle noise remains.
Perf - Devmem TCP benchmark:
----------------------------
189/200gbps bi-directional throughput with RX devmem TCP and regular TCP
TX i.e. ~95% line rate.
Major changes in RFC v5:
========================
1. Rebased on top of 'Abstract page from net stack' series and used the
new netmem type to refer to LSB set pointers instead of re-using
struct page.
2. Downgraded this series back to RFC and called it RFC v5. This is
because this series is now dependent on 'Abstract page from net
stack'[1] and the queue API. Both are removed from the series to
reduce the patch count, and those bits are fairly independent or
prerequisite work.
3. Reworked the page_pool devmem support to use netmem and for some
more unified handling.
4. Reworked the reference counting of net_iov (renamed from
page_pool_iov) to use pp_ref_count for refcounting.
The full changes including the dependent series and GVE page pool
support is here:
https://github.com/mina/linux/commits/tcpdevmem-rfcv5/
[1] https://patchwork.kernel.org/project/netdevbpf/list/?series=810774
Major changes in v1:
====================
1. Implemented MVP queue API ndos to remove the userspace-visible
driver reset.
2. Fixed issues in the napi_pp_put_page() devmem frag unref path.
3. Removed RFC tag.
Addressed many smaller comments across all the patches (patches have
individual changelogs).
Full tree including the rest of the GVE driver changes:
https://github.com/mina/linux/commits/tcpdevmem-v1
Changes in RFC v3:
==================
1. Pulled in the memory-provider dependency from Jakub's RFC[1] to make the
series reviewable and mergeable.
2. Implemented multi-rx-queue binding which was a todo in v2.
3. Fix to cmsg handling.
The sticking point in RFC v2[2] was the device reset required to refill
the device rx-queues after the dmabuf bind/unbind. The suggested
solution, as I understand it, is a subset of the per-queue management
ops Jakub suggested, or something similar:
https://lore.kernel.org/netdev/20230815171638.4c057dcd@kernel.org/
This is not addressed in this revision, because:
1. This point was discussed at netconf & netdev and there is openness to
using the current approach of requiring a device reset.
2. Implementing individual queue resetting seems to be difficult for my
test bed with GVE. My prototype to test this ran into issues with the
rx-queues not coming back up properly if reset individually. At the
moment I'm unsure if it's a mistake in the POC or a genuine issue in
the virtualization stack behind GVE, which currently doesn't test
individual rx-queue restart.
3. Our use cases are not bothered by requiring a device reset to refill
the buffer queues, and we'd like to support NICs that run into this
limitation when resetting individual queues.
My thought is that drivers that have trouble with per-queue configs can
use the support in this series, while drivers that support new netdev
ops to reset individual queues can automatically reset the queue as
part of the dma-buf bind/unbind.
The same approach with device resets is presented again for consideration
with other sticking points addressed.
Only the rx devmem path is proposed for merge in this proposal. For a
snapshot of my entire tree, which includes the GVE POC page pool support &
device memory support:
https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3
[1] https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.…
[2] https://lore.kernel.org/netdev/CAHS8izOVJGJH5WF68OsRWFKJid1_huzzUK+hpKbLcL4…
Changes in RFC v2:
==================
The sticking point in RFC v1[1] was the dma-buf pages approach we used to
deliver the device memory to the TCP stack. RFC v2 is a proof-of-concept
that attempts to resolve this by implementing scatterlist support in the
networking stack, such that we can import the dma-buf scatterlist
directly. This is the approach proposed at a high level here[2].
Detailed changes:
1. Replaced dma-buf pages approach with importing scatterlist into the
page pool.
2. Replace the dma-buf pages centric API with a netlink API.
3. Removed the TX path implementation - there is no issue with
implementing the TX path with scatterlist approach, but leaving
out the TX path makes it easier to review.
4. Functionality is tested with this proposal, but I have not conducted
perf testing yet. I'm not sure there are regressions, but I removed
perf claims from the cover letter until they can be re-confirmed.
5. Added Signed-off-by: contributors to the implementation.
6. Fixed some bugs with the RX path since RFC v1.
Any feedback welcome, but specifically the biggest pending questions
needing feedback IMO are:
1. Feedback on the scatterlist-based approach in general.
2. Netlink API (Patch 1 & 2).
3. Approach to handle all the drivers that expect to receive pages from
the page pool (Patch 6).
[1] https://lore.kernel.org/netdev/dfe4bae7-13a0-3c5d-d671-f61b375cb0b4@gmail.c…
[2] https://lore.kernel.org/netdev/CAHS8izPm6XRS54LdCDZVd0C75tA1zHSu6jLVO8nzTLX…
==================
* TL;DR:
Device memory TCP (devmem TCP) is a proposal for transferring data to and/or
from device memory efficiently, without bouncing the data to a host memory
buffer.
* Problem:
A large number of data transfers have device memory as the source and/or
destination. Accelerators have drastically increased the volume of such transfers.
Some examples include:
- ML accelerators transferring large amounts of training data from storage into
GPU/TPU memory. In some cases ML training setup time can be as long as 50% of
TPU compute time; improving data transfer throughput & efficiency can help
improve GPU/TPU utilization.
- Distributed training, where ML accelerators, such as GPUs on different hosts,
exchange data among them.
- Distributed raw block storage applications transfer large amounts of data with
remote SSDs; much of this data does not require host processing.
Today, the majority of Device-to-Device data transfers over the network are
implemented as the following low-level operations: Device-to-Host copy,
Host-to-Host network transfer, and Host-to-Device copy.
The implementation is suboptimal, especially for bulk data transfers, and can
put significant strains on system resources, such as host memory bandwidth,
PCIe bandwidth, etc. One important reason behind the current state is the
kernel’s lack of semantics to express device to network transfers.
* Proposal:
In this patch series we attempt to optimize this use case by implementing
socket APIs that enable the user to:
1. send device memory across the network directly, and
2. receive incoming network packets directly into device memory.
Packet _payloads_ go directly from the NIC to device memory for receive and from
device memory to NIC for transmit.
Packet _headers_ go to/from host memory and are processed by the TCP/IP stack
normally. The NIC _must_ support header split to achieve this.
Advantages:
- Alleviate host memory bandwidth pressure, compared to existing
network-transfer + device-copy semantics.
- Alleviate PCIe BW pressure, by limiting data transfer to the lowest level
of the PCIe tree, compared to the traditional path, which sends data through the
root complex.
* Patch overview:
** Part 1: netlink API
Gives userspace the ability to bind a dma-buf to an RX queue.
** Part 2: scatterlist support
Currently the standard for device memory sharing is DMABUF, which doesn't
generate struct pages. On the other hand, the networking stack (skbs, drivers,
and the page pool) operates on pages. We have 2 options:
1. Generate struct pages for dmabuf device memory, or,
2. Modify the networking stack to process scatterlist.
Approach #1 was attempted in RFC v1. RFC v2 implements approach #2.
** Part 3: page pool support
We piggyback on the page pool memory providers proposal:
https://github.com/kuba-moo/linux/tree/pp-providers
It allows the page pool to define a memory provider that provides the
page allocation and freeing. It helps abstract most of the device memory
TCP changes from the driver.
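Roughly, the provider is an ops table the page pool calls into instead of the
page allocator. The shape below is illustrative only (the field names are this
sketch's assumption, not the proposal's exact interface):

  struct mem_provider_ops {
          int        (*init)(struct page_pool *pool);
          void       (*destroy)(struct page_pool *pool);
          netmem_ref (*alloc_netmem)(struct page_pool *pool, gfp_t gfp);
          bool       (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
  };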
** Part 4: support for unreadable skb frags
Page pool iovs are not accessible by the host; we implement changes
throughout the networking stack to correctly handle skbs with unreadable
frags.
** Part 5: recvmsg() APIs
We define the userspace APIs to send and receive device memory.
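A hedged sketch of the receive side, along the lines of what the ncdevmem
selftest exercises: payload frags arrive as cmsgs describing offsets into the
bound dma-buf, and the frag tokens are handed back with SO_DEVMEM_DONTNEED
once consumed. The flag, cmsg, and struct names follow the series' uapi
patches; treat the exact spellings as assumptions of this sketch.

  #include <sys/socket.h>
  /* dmabuf_cmsg/dmabuf_token, MSG_SOCK_DEVMEM and SO_DEVMEM_DONTNEED come
   * from the series' uapi headers; names assumed here. */

  static int recv_devmem_frags(int sock)
  {
          char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 16];
          struct msghdr msg = { .msg_control = ctrl,
                                .msg_controllen = sizeof(ctrl) };
          struct cmsghdr *cm;

          /* payload stays in device memory; only headers/cmsg metadata
           * reach the host */
          if (recvmsg(sock, &msg, MSG_SOCK_DEVMEM) <= 0)
                  return -1;

          for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                  struct dmabuf_cmsg *dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);
                  struct dmabuf_token tok = { .token_start = dc->frag_token,
                                              .token_count = 1 };

                  /* ... consume dc->frag_size bytes at dc->frag_offset
                   * in the dma-buf ... */

                  /* return the frag to the kernel once done with it */
                  setsockopt(sock, SOL_SOCKET, SO_DEVMEM_DONTNEED,
                             &tok, sizeof(tok));
          }
          return 0;
  }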
Not included with this series is the GVE devmem TCP support, just to
simplify the review. Code available here if desired:
https://github.com/mina/linux/tree/tcpdevmem
This series is built on top of net-next with Jakub's pp-providers changes
cherry-picked.
* NIC dependencies:
1. (strict) Devmem TCP requires the NIC to support header split, i.e. the
capability to split incoming packets into a header + payload and to put
each into a separate buffer. Devmem TCP works by using device memory
for the packet payload, and host memory for the packet headers.
2. (optional) Devmem TCP works better with flow steering support & RSS support,
i.e. the NIC's ability to steer flows into certain rx queues. This allows the
sysadmin to enable devmem TCP on a subset of the rx queues, and steer
devmem TCP traffic onto these queues and non-devmem TCP traffic elsewhere.
The NIC I have access to with these properties is the GVE with DQO support
running in Google Cloud, but any NIC that supports these features would suffice.
I may be able to help reviewers bring up devmem TCP on their NICs.
* Testing:
The series includes a udmabuf kselftest that shows a simple use case of
devmem TCP and validates the entire data path end to end without
a dependency on a specific dmabuf provider.
** Test Setup
Kernel: net-next with this series and memory provider API cherry-picked
locally.
Hardware: Google Cloud A3 VMs.
NIC: GVE with header split & RSS & flow steering support.
Cc: Pavel Begunkov <asml.silence(a)gmail.com>
Cc: David Wei <dw(a)davidwei.uk>
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Yunsheng Lin <linyunsheng(a)huawei.com>
Cc: Shailend Chand <shailend(a)google.com>
Cc: Harshitha Ramamurthy <hramamurthy(a)google.com>
Cc: Shakeel Butt <shakeel.butt(a)linux.dev>
Cc: Jeroen de Borst <jeroendb(a)google.com>
Cc: Praveen Kaligineedi <pkaligineedi(a)google.com>
Jakub Kicinski (1):
net: page_pool: create hooks for custom page providers
Mina Almasry (13):
queue_api: define queue api
net: netdev netlink api to bind dma-buf to a net device
netdev: support binding dma-buf to netdevice
netdev: netdevice devmem allocator
page_pool: convert to use netmem
page_pool: devmem support
memory-provider: dmabuf devmem memory provider
net: support non paged skb frags
net: add support for skbs with unreadable frags
tcp: RX path for devmem TCP
net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags
net: add devmem TCP documentation
selftests: add ncdevmem, netcat for devmem TCP
Documentation/netlink/specs/netdev.yaml | 57 +++
Documentation/networking/devmem.rst | 256 +++++++++++
Documentation/networking/index.rst | 1 +
arch/alpha/include/uapi/asm/socket.h | 6 +
arch/mips/include/uapi/asm/socket.h | 6 +
arch/parisc/include/uapi/asm/socket.h | 6 +
arch/sparc/include/uapi/asm/socket.h | 6 +
include/linux/netdevice.h | 3 +
include/linux/skbuff.h | 73 +++-
include/linux/socket.h | 1 +
include/net/devmem.h | 124 ++++++
include/net/netdev_queues.h | 27 ++
include/net/netdev_rx_queue.h | 2 +
include/net/netmem.h | 234 +++++++++-
include/net/page_pool/helpers.h | 155 +++++--
include/net/page_pool/types.h | 33 +-
include/net/sock.h | 2 +
include/net/tcp.h | 5 +-
include/trace/events/page_pool.h | 29 +-
include/uapi/asm-generic/socket.h | 6 +
include/uapi/linux/netdev.h | 19 +
include/uapi/linux/uio.h | 17 +
net/bpf/test_run.c | 5 +-
net/core/Makefile | 2 +-
net/core/datagram.c | 6 +
net/core/dev.c | 6 +-
net/core/devmem.c | 425 ++++++++++++++++++
net/core/gro.c | 8 +-
net/core/netdev-genl-gen.c | 23 +
net/core/netdev-genl-gen.h | 6 +
net/core/netdev-genl.c | 107 +++++
net/core/page_pool.c | 364 +++++++++-------
net/core/skbuff.c | 110 ++++-
net/core/sock.c | 61 +++
net/ipv4/esp4.c | 2 +-
net/ipv4/tcp.c | 254 ++++++++++-
net/ipv4/tcp_input.c | 13 +-
net/ipv4/tcp_ipv4.c | 9 +
net/ipv4/tcp_minisocks.c | 2 +
net/ipv4/tcp_output.c | 5 +-
net/ipv6/esp6.c | 2 +-
net/packet/af_packet.c | 4 +-
tools/include/uapi/linux/netdev.h | 19 +
tools/testing/selftests/net/.gitignore | 1 +
tools/testing/selftests/net/Makefile | 5 +
tools/testing/selftests/net/ncdevmem.c | 546 ++++++++++++++++++++++++
46 files changed, 2776 insertions(+), 277 deletions(-)
create mode 100644 Documentation/networking/devmem.rst
create mode 100644 include/net/devmem.h
create mode 100644 net/core/devmem.c
create mode 100644 tools/testing/selftests/net/ncdevmem.c
--
2.44.0.478.gd926399ef9-goog
Commit 809216233555 ("selftests/harness: remove use of LINE_MAX") introduced
asprintf into kselftest_harness.h. asprintf is a GNU extension and needs
_GNU_SOURCE to be defined either prior to including headers or via the
-D_GNU_SOURCE flag passed to the compiler.
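The define has to land before the first libc include for the asprintf()
prototype to become visible, e.g.:

  #define _GNU_SOURCE     /* or pass -D_GNU_SOURCE via the Makefile's CFLAGS */
  #include <stdio.h>

  static char *make_path(const char *dir, const char *file)
  {
          char *p;

          /* asprintf() is a GNU extension; without _GNU_SOURCE it is undeclared */
          if (asprintf(&p, "%s/%s", dir, file) < 0)
                  return NULL;
          return p;
  }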
Edward Liaw (10):
selftests/sgx: Compile with -D_GNU_SOURCE
selftests/alsa: Compile with -D_GNU_SOURCE
selftests/hid: Compile with -D_GNU_SOURCE
selftests/kvm: Define _GNU_SOURCE
selftests/nci: Compile with -D_GNU_SOURCE
selftests/net: Define _GNU_SOURCE
selftests/prctl: Compile with -D_GNU_SOURCE
selftests/rtc: Compile with -D_GNU_SOURCE
selftests/tdx: Compile with -D_GNU_SOURCE
selftests/user_events: Compiled with -D_GNU_SOURCE
tools/testing/selftests/alsa/Makefile | 1 +
tools/testing/selftests/hid/Makefile | 2 +-
tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c | 2 ++
tools/testing/selftests/nci/Makefile | 2 +-
tools/testing/selftests/net/bind_wildcard.c | 1 +
tools/testing/selftests/net/ip_local_port_range.c | 1 +
tools/testing/selftests/net/reuseaddr_ports_exhausted.c | 2 ++
tools/testing/selftests/prctl/Makefile | 1 +
tools/testing/selftests/rtc/Makefile | 2 +-
tools/testing/selftests/sgx/Makefile | 2 +-
tools/testing/selftests/sgx/sigstruct.c | 2 --
tools/testing/selftests/tdx/Makefile | 2 +-
tools/testing/selftests/user_events/Makefile | 2 +-
tools/testing/selftests/user_events/abi_test.c | 1 -
14 files changed, 14 insertions(+), 9 deletions(-)
--
2.45.0.rc0.197.gbae5840b3b-goog
When I introduced HID-BPF, I mentioned that we should ship the HID-BPF
programs in the kernel when they are fixes so that everybody can benefit
from them.
I tried multiple times to do so but I was confronted with a tough problem:
how can I make the kernel load them automatically?
I went over a few solutions, but it always came down to something either
ugly or not satisfying (like forcing `bpftool` to be compiled
first, or not being able to insert them as a module).
OTOH, I was working with Peter on `udev-hid-bpf`[0] as a proof of
concept of what a minimal loader should look like. This allowed me to
experiment with the BPF files and how they should look.
And after further thought, I realized that `udev-hid-bpf` could very
well play the role of the `kmod load` that we currently have:
- the kernel handles the device normally
- a udev event is emitted
- a udev rule fires `udev-hid-bpf`, which loads the appropriate HID-BPF
file(s) based on the modalias
Given that most HID devices are supposed to work at a minimal level when
connected without any driver, this makes the HID-BPF programs nice
to have but not critical. We can then postpone the HID-BPF loading until
userspace is ready.
Working with HID-BPF is also a much better user experience for end users
(as I predicted). All they have to do is to go to the `udev-hid-bpf`
project, fetch an artifact from the MR that concerns them, run
`install.sh` (no compilation required), and their devices are fixed
(minus some back and forth when the HID-BPF program needs some changes).
So I already have that loader available, and it works well enough for
our users. But the missing piece was still how to "upstream" those BPF
fixes.
That's where this patch series comes in: we simply store the fixes in
the kernel under `drivers/hid/bpf/progs`, provide a way to compile them,
and also add tests for them in the selftests dir.
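For a feel of what lives under `drivers/hid/bpf/progs`, a rough sketch of the
shape of such a fix (macro and section names follow this series'
hid_bpf_helpers.h; the device IDs and the body are purely illustrative, not
one of the actual fixes):

  #include "vmlinux.h"
  #include "hid_bpf.h"
  #include "hid_bpf_helpers.h"
  #include <bpf/bpf_tracing.h>

  #define VID_EXAMPLE 0x1234      /* placeholder vendor/product IDs */
  #define PID_EXAMPLE 0x5678

  HID_BPF_CONFIG(
          HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, VID_EXAMPLE, PID_EXAMPLE)
  );

  SEC("fmod_ret/hid_bpf_rdesc_fixup")
  int BPF_PROG(hid_fix_rdesc_example, struct hid_bpf_ctx *hctx)
  {
          __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4096 /* size */);

          if (!data)
                  return 0; /* EPERM check */

          /* patch the offending byte(s) of the report descriptor here */

          return 0;
  }

  char _license[] SEC("license") = "GPL";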
Once a program is accepted here, for convenience, the same program will
move from a "testing" directory to a "stable" directory on
`udev-hid-bpf`. This way, distributions don't need to track when a new
program is added here; they can just ship the "stable" ones from
`udev-hid-bpf`.
Signed-off-by: Benjamin Tissoires <bentiss(a)kernel.org>
[0] https://gitlab.freedesktop.org/libevdev/udev-hid-bpf/
---
Benjamin Tissoires (18):
HID: do not assume HAT Switch logical max < 8
HID: bpf: add first in-tree HID-BPF fix for the XPPen Artist 24
HID: bpf: add in-tree HID-BPF fix for the XPPen Artist 16
HID: bpf: add in-tree HID-BPF fix for the HP Elite Presenter Mouse
HID: bpf: add in-tree HID-BPF fix for the IOGear Kaliber Gaming MMOmentum mouse
HID: bpf: add in-tree HID-BPF fix for the Wacom ArtPen
HID: bpf: add in-tree HID-BPF fix for the XBox Elite 2 over Bluetooth
HID: bpf: add in-tree HID-BPF fix for the Huion Kamvas Pro 19
HID: bpf: add in-tree HID-BPF fix for the Raptor Mach 2
selftests/hid: import base_device.py from hid-tools
selftests/hid: add support for HID-BPF pre-loading before starting a test
selftests/hid: tablets: reduce the number of pen state
selftests/hid: tablets: add a couple of XP-PEN tablets
selftests/hid: tablets: also check for XP-Pen offset correction
selftests/hid: add Huion Kamvas Pro 19 tests
selftests/hid: import base_gamepad.py from hid-tools
selftests/hid: move the gamepads definitions in the test file
selftests/hid: add tests for the Raptor Mach 2 joystick
drivers/hid/bpf/progs/FR-TEC__Raptor-Mach-2.bpf.c | 185 ++++++
drivers/hid/bpf/progs/HP__Elite-Presenter.bpf.c | 58 ++
drivers/hid/bpf/progs/Huion__Kamvas-Pro-19.bpf.c | 290 +++++++++
.../hid/bpf/progs/IOGEAR__Kaliber-MMOmentum.bpf.c | 59 ++
drivers/hid/bpf/progs/Makefile | 91 +++
.../hid/bpf/progs/Microsoft__XBox-Elite-2.bpf.c | 133 ++++
drivers/hid/bpf/progs/README | 102 +++
drivers/hid/bpf/progs/Wacom__ArtPen.bpf.c | 173 +++++
drivers/hid/bpf/progs/XPPen__Artist24.bpf.c | 229 +++++++
drivers/hid/bpf/progs/XPPen__ArtistPro16Gen2.bpf.c | 274 ++++++++
drivers/hid/bpf/progs/hid_bpf.h | 15 +
drivers/hid/bpf/progs/hid_bpf_helpers.h | 170 +++++
include/linux/hid.h | 6 +-
tools/testing/selftests/hid/tests/base.py | 87 ++-
tools/testing/selftests/hid/tests/base_device.py | 421 ++++++++++++
tools/testing/selftests/hid/tests/base_gamepad.py | 238 +++++++
tools/testing/selftests/hid/tests/test_gamepad.py | 457 ++++++++++++-
tools/testing/selftests/hid/tests/test_tablet.py | 723 +++++++++++++++------
18 files changed, 3507 insertions(+), 204 deletions(-)
---
base-commit: 3e78a6c0d3e02e4cf881dc84c5127e9990f939d6
change-id: 20240328-bpf_sources-be1f3c617c5e
Best regards,
--
Benjamin Tissoires <bentiss(a)kernel.org>
From: Geliang Tang <tanggeliang(a)kylinos.cn>
v4:
- fix a bug in v3: it should be 'if (err)', not 'if (!err)'.
- move "selftests/bpf: Use log_err in network_helpers" out of this
series.
v3:
- add two more patches.
- use log_err instead of ASSERT in v3.
- let send_recv_data return int as Martin suggested.
v2:
Address Martin's comments for v1 (thanks.)
- drop patch 1, "export send_byte helper".
- drop "WRITE_ONCE(arg.stop, 0)".
- rebased.
send_recv_data will be re-used in MPTCP BPF tests, but those are not
included in this set because they depend on other patches that are not
in bpf-next yet. They will be sent as another set soon.
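A short usage sketch of the exported helper (the prototype below is assumed
from this series; the real one lives in network_helpers.h):

  #include "network_helpers.h"

  /* Push 10 MiB from client to server and verify it all arrived;
   * send_recv_data() returns 0 on success (it returns int since v3). */
  static int run_transfer(int server_fd, int client_fd)
  {
          return send_recv_data(server_fd, client_fd, 10 * 1024 * 1024);
  }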
Geliang Tang (3):
selftests/bpf: Add struct send_recv_arg
selftests/bpf: Export send_recv_data helper
selftests/bpf: Support nonblock for send_recv_data
tools/testing/selftests/bpf/network_helpers.c | 103 ++++++++++++++++++
tools/testing/selftests/bpf/network_helpers.h | 1 +
.../selftests/bpf/prog_tests/bpf_tcp_ca.c | 71 +-----------
3 files changed, 105 insertions(+), 70 deletions(-)
--
2.40.1