Hi there:
I am probably just doing something wrong, but I tried to run
tools/testing/selftests/drivers/net/stats.py today and hit what is
possibly a bug.
Background info: Python 3.12.3
I'm using net-next at commit 9dd15d5088e9 ("Merge branch
'sparx5-port-mirroring'") with a couple of driver modifications added on
top that don't seem relevant to the two test failures I'm hitting:
1. "loopback has no stats", and
2. "Try to get stats for lowest unused ifindex but not 0"
Both of these tests expect the ynl library to raise an exception, but I
don't think it does. From tools/net/ynl/lib/ynl.py, in the _ops method:
    if nl_msg.error:
        raise NlError(nl_msg)
    if nl_msg.done:
        if nl_msg.extack:
            print("Netlink warning:")
            print(nl_msg)
And the code in net/core/netdev-genl.c seems to set:
	else {
		NL_SET_BAD_ATTR(info->extack,
				info->attrs[NETDEV_A_QSTATS_IFINDEX]);
		err = netdev ? -EOPNOTSUPP : -ENODEV;
which is what cli.py says:
$ ./cli.py --spec ../../../Documentation/netlink/specs/netdev.yaml \
--dump qstats-get --json '{"ifindex": "1"}'
Netlink warning:
nl_len = 28 (12) nl_flags = 0x202 nl_type = 3
extack: {'bad-attr': '.ifindex'}
[]
That seems to be the warning printout from the tools/net/ynl/lib/ynl.py
snippet above, not an NlError. An NlError is what you get if you try
ifindex 0, which is listed as out of range in the YAML spec:
$ ./cli.py --spec ../../../Documentation/netlink/specs/netdev.yaml \
--dump qstats-get --json '{"ifindex": "0"}'
Netlink error: Numerical result out of range
nl_len = 108 (92) nl_flags = 0x300 nl_type = 2
error: -34
extack: {'msg': 'integer out of range', 'policy': {'min-value': 1,
'max-value': 4294967295, 'type': 'u32'}, 'bad-attr': '.ifindex'}
I'm not sure whether:
1. tools/net/ynl/lib/ynl.py should be raising NlError when there is an
extack in this case (I think this is probably the way to go?), or
2. the tests should be changed so that they don't expect an exception to be
raised but (ideally?) hide the warning report from tools/net/ynl/lib/ynl.py
when the warning is expected.
I don't know Python at all, so this is likely not the right fix, but
here's a small change I made to get the test passing (a similar change
was made for the test which follows).
The following patch is not intended to be seriously considered for
application, just to highlight the issue I am hitting:
diff --git a/tools/testing/selftests/drivers/net/stats.py b/tools/testing/selftests/drivers/net/stats.py
index 7a7b16b180e2..d9f5d1f3ed34 100755
--- a/tools/testing/selftests/drivers/net/stats.py
+++ b/tools/testing/selftests/drivers/net/stats.py
@@ -115,9 +115,8 @@ def qstat_by_ifindex(cfg) -> None:
         ksft_eq(cm.exception.nl_msg.extack['bad-attr'], '.ifindex')
 
     # loopback has no stats
-    with ksft_raises(NlError) as cm:
-        netfam.qstats_get({"ifindex": 1}, dump=True)
-    ksft_eq(cm.exception.nl_msg.error, -95)
-    ksft_eq(cm.exception.nl_msg.extack['bad-attr'], '.ifindex')
+    stats = netfam.qstats_get({"ifindex": 1}, dump=True)
+    ksft_eq(stats, [])
Thanks,
Joe
I had to explain how to run the driver tests twice already.
Improve the README so we can just point to it.
Also fix a potential problem with the SSH remote sessions.
Jakub Kicinski (4):
selftests: drv-net: force pseudo-terminal allocation in ssh
selftests: drv-net: extend the README with more info and example
selftests: drv-net: reimplement the config parser
selftests: drv-net: validate the environment
.../testing/selftests/drivers/net/README.rst | 95 ++++++++++++++++---
.../selftests/drivers/net/lib/py/env.py | 46 ++++++---
.../drivers/net/lib/py/remote_ssh.py | 2 +-
3 files changed, 118 insertions(+), 25 deletions(-)
--
2.44.0
From: donsheng <dongsheng.x.zhang(a)intel.com>
If the host is booted with the "default_hugepagesz=1G" kernel command-line
parameter, running the NX hugepage test fails with an "Invalid argument"
error at the TEST_ASSERT line in kvm_util.c's __vm_mem_region_delete()
function:
static void __vm_mem_region_delete(struct kvm_vm *vm,
				   struct userspace_mem_region *region,
				   bool unlink)
{
	int ret;

	...

	ret = munmap(region->mmap_start, region->mmap_size);
	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));

	...
}
The NX hugepage test creates a VM with a 6M data slot backed by huge
pages. If the default hugetlb page size is set to 1G, calling mmap() with
MAP_HUGETLB and a length of 6M will succeed, but calling its matching
munmap() will fail. Documentation/admin-guide/mm/hugetlbpage.rst
documents this behavior:
"Syscalls that operate on memory backed by hugetlb pages only have their lengths
aligned to the native page size of the processor; they will normally fail with
errno set to EINVAL or exclude hugetlb pages that extend beyond the length if
not hugepage aligned. For example, munmap(2) will fail if memory is backed by
a hugetlb page and the length is smaller than the hugepage size."
Explicitly use MAP_HUGE_2MB in conjunction with MAP_HUGETLB to fix the issue.
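For reference, the failure mode can be reproduced outside of the test with
a few lines of C. This is a standalone sketch, assuming a host booted with
default_hugepagesz=1G, at least one 1G page reserved, and some 2M pages
available for the second mapping:

#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_2MB
#define MAP_HUGE_2MB	(21 << 26)	/* 21 == log2(2M), MAP_HUGE_SHIFT == 26 */
#endif

int main(void)
{
	size_t len = 6 << 20;	/* 6M, like the test's data slot */

	/* With a 1G default hugetlb size this mmap() succeeds... */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* ...but the matching munmap() fails with EINVAL, because 6M is
	 * smaller than the 1G hugepage backing the mapping.
	 */
	if (munmap(p, len))
		perror("munmap");	/* prints "munmap: Invalid argument" */

	/* Requesting 2M pages explicitly keeps both calls hugepage
	 * aligned, which is what the patch below does for the test.
	 */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_2MB,
		 -1, 0);
	if (p != MAP_FAILED)
		munmap(p, len);	/* succeeds: 6M is a multiple of 2M */
	return 0;
}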
Signed-off-by: donsheng <dongsheng.x.zhang(a)intel.com>
Suggested-by: Zide Chen <zide.chen(a)intel.com>
---
tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index 17bbb96fc4df..146e9033e206 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -129,7 +129,7 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB_2MB,
 				    HPAGE_GPA, HPAGE_SLOT,
 				    HPAGE_SLOT_NPAGES, 0);
--
2.43.0
RFC v8:
=======
Major Changes:
--------------
- Fixed build error generated by patch-by-patch build.
- Applied docs suggestions from Randy.
RFC v7:
=======
Major Changes:
--------------
This revision largely rebases on top of net-next and addresses the feedback
RFCv6 received from folks, namely Jakub, Yunsheng, Arnd, David, & Pavel.
The series remains in RFC because the queue-API ndos defined in this
series are not yet implemented. I have a GVE implementation I carry out
of tree for my testing. An upstreamable GVE implementation is in the
works. Aside from that, in my estimation all the patches are ready for
review/merge. Please do take a look.
As usual the full devmem TCP changes, including the full GVE driver
implementation, are here:
https://github.com/mina/linux/commits/tcpdevmem-v7/
Detailed changelog:
- Use admin-perm in netlink API.
- Addressed feedback from Jakub with regards to netlink API
implementation.
- Renamed devmem.c functions to something more appropriate for that
file.
- Improved the performance seen through the page_pool benchmark.
- Fixed the value definitions of all the SO_DEVMEM_* uapi.
- Various fixes to documentation.
Perf - page-pool benchmark:
---------------------------
Improved performance of bench_page_pool_simple.ko tests compared to v6:
https://pastebin.com/raw/v5dYRg8L
net-next base: 8 cycle fast path.
RFC v6: 10 cycle fast path.
RFC v7: 9 cycle fast path.
RFC v7 with CONFIG_DMA_SHARED_BUFFER disabled: 8 cycle fast path,
same as baseline.
Perf - Devmem TCP benchmark:
----------------------------
Perf is about the same regardless of the changes in v7, namely the
removal of the static_branch_unlikely to improve the page_pool benchmark
performance:
189/200gbps bi-directional throughput with RX devmem TCP and regular TCP
TX, i.e. ~95% line rate.
RFC v6:
=======
Major Changes:
--------------
This revision largely rebases on top of net-next and addresses the little
feedback RFCv5 received.
The series remains in RFC because the queue-API ndos defined in this
series are not yet implemented. I have a GVE implementation I carry out
of tree for my testing. An upstreamable GVE implementation is in the
works. Aside from that, in my estimation all the patches are ready for
review/merge. Please do take a look.
As usual the full devmem TCP changes, including the full GVE driver
implementation, are here:
https://github.com/mina/linux/commits/tcpdevmem-v6/
This version also comes with some performance data recorded in the cover
letter (see below changelog).
Detailed changelog:
- Rebased on top of the merged netmem_ref changes.
- Converted skb->dmabuf to skb->readable (Pavel). Pavel's original
suggestion was to remove the skb->dmabuf flag entirely, but when I
looked into it closely, I found that if we remove the flag, we have to
dereference the shinfo(skb) pointer to obtain the first frag to tell
whether an skb is readable or not. This can cause a
performance regression if it dirties the cache line when the
shinfo(skb) was not really needed. Instead, I converted the skb->dmabuf
flag into a generic skb->readable flag which can be re-used by io_uring
0-copy RX.
- Squashed a few locking optimizations from Eric Dumazet in the RX path
and the DEVMEM_DONTNEED setsockopt.
- Expanded the tests a bit. Added validation for invalid scenarios and
added some more coverage.
Perf - page-pool benchmark:
---------------------------
bench_page_pool_simple.ko tests with and without these changes:
https://pastebin.com/raw/ncHDwAbn
AFAIK the number that really matters in the perf tests is the
'tasklet_page_pool01_fast_path Per elem'. This one measures at about 8
cycles without the changes, but there is some 1 cycle noise in some
results.
With the patches this regresses to 9 cycles, again with an occasional
1 cycle of noise when running this test repeatedly.
Lastly I tried disabling the static_branch_unlikely() in the
netmem_is_net_iov() check. To my surprise, disabling the
static_branch_unlikely() check reduces the fast path back to 8 cycles,
but the 1 cycle noise remains.
Perf - Devmem TCP benchmark:
----------------------------
189/200gbps bi-directional throughput with RX devmem TCP and regular TCP
TX, i.e. ~95% line rate.
Major changes in RFC v5:
========================
1. Rebased on top of 'Abstract page from net stack' series and used the
new netmem type to refer to LSB set pointers instead of re-using
struct page.
2. Downgraded this series back to RFC and called it RFC v5. This is
because this series is now dependent on 'Abstract page from net
stack'[1] and the queue API. Both are removed from the series to
reduce the patch count, and those bits are fairly independent or
prerequisite work.
3. Reworked the page_pool devmem support to use netmem and for some
more unified handling.
4. Reworked the reference counting of net_iov (renamed from
page_pool_iov) to use pp_ref_count for refcounting.
The full changes including the dependent series and GVE page pool
support is here:
https://github.com/mina/linux/commits/tcpdevmem-rfcv5/
[1] https://patchwork.kernel.org/project/netdevbpf/list/?series=810774
Major changes in v1:
====================
1. Implemented MVP queue API ndos to remove the userspace-visible
driver reset.
2. Fixed issues in the napi_pp_put_page() devmem frag unref path.
3. Removed RFC tag.
Addressed many smaller comments across all the patches (patches have
individual changelogs).
Full tree including the rest of the GVE driver changes:
https://github.com/mina/linux/commits/tcpdevmem-v1
Changes in RFC v3:
==================
1. Pulled in the memory-provider dependency from Jakub's RFC[1] to make the
series reviewable and mergeable.
2. Implemented multi-rx-queue binding which was a todo in v2.
3. Fixed cmsg handling.
The sticking point in RFC v2[2] was the device reset required to refill
the device rx-queues after the dmabuf bind/unbind. The suggested
solution, as I understand it, is a subset of the per-queue management
ops Jakub proposed, or something similar:
https://lore.kernel.org/netdev/20230815171638.4c057dcd@kernel.org/
This is not addressed in this revision, because:
1. This point was discussed at netconf & netdev and there is openness to
using the current approach of requiring a device reset.
2. Implementing individual queue resetting seems to be difficult for my
test bed with GVE. My prototype to test this ran into issues with the
rx-queues not coming back up properly if reset individually. At the
moment I'm unsure if it's a mistake in the POC or a genuine issue in
the virtualization stack behind GVE, which currently doesn't test
individual rx-queue restart.
3. Our use cases are not bothered by requiring a device reset to refill
the buffer queues, and we'd like to support NICs that run into this
limitation when resetting individual queues.
My thought is that drivers that have trouble with per-queue configs can
use the support in this series, while drivers that support new netdev
ops to reset individual queues can automatically reset the queue as
part of the dma-buf bind/unbind.
The same approach with device resets is presented again for consideration
with other sticking points addressed.
Only the RX devmem path is proposed for merge in this series. For a
snapshot of my entire tree, which includes the GVE POC page pool support &
device memory support:
https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3
[1] https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.…
[2] https://lore.kernel.org/netdev/CAHS8izOVJGJH5WF68OsRWFKJid1_huzzUK+hpKbLcL4…
Changes in RFC v2:
==================
The sticking point in RFC v1[1] was the dma-buf pages approach we used to
deliver the device memory to the TCP stack. RFC v2 is a proof-of-concept
that attempts to resolve this by implementing scatterlist support in the
networking stack, such that we can import the dma-buf scatterlist
directly. This is the approach proposed at a high level here[2].
Detailed changes:
1. Replaced the dma-buf pages approach with importing the scatterlist
into the page pool.
2. Replaced the dma-buf pages centric API with a netlink API.
3. Removed the TX path implementation - there is no issue with
implementing the TX path with scatterlist approach, but leaving
out the TX path makes it easier to review.
4. Functionality is tested with this proposal, but I have not conducted
perf testing yet. I'm not sure there are regressions, but I removed
perf claims from the cover letter until they can be re-confirmed.
5. Added Signed-off-by tags from contributors to the implementation.
6. Fixed some bugs with the RX path since RFC v1.
Any feedback welcome, but specifically the biggest pending questions
needing feedback IMO are:
1. Feedback on the scatterlist-based approach in general.
2. Netlink API (Patch 1 & 2).
3. Approach to handle all the drivers that expect to receive pages from
the page pool (Patch 6).
[1] https://lore.kernel.org/netdev/dfe4bae7-13a0-3c5d-d671-f61b375cb0b4@gmail.c…
[2] https://lore.kernel.org/netdev/CAHS8izPm6XRS54LdCDZVd0C75tA1zHSu6jLVO8nzTLX…
==================
* TL;DR:
Device memory TCP (devmem TCP) is a proposal for transferring data to and/or
from device memory efficiently, without bouncing the data to a host memory
buffer.
* Problem:
A large number of data transfers have device memory as the source and/or
destination. Accelerators have drastically increased the volume of such
transfers.
Some examples include:
- ML accelerators transferring large amounts of training data from storage into
GPU/TPU memory. In some cases ML training setup time can be as long as
50% of TPU compute time; improving data transfer throughput & efficiency
can help improve GPU/TPU utilization.
- Distributed training, where ML accelerators, such as GPUs on different hosts,
exchange data among themselves.
- Distributed raw block storage applications transferring large amounts of data
with remote SSDs; much of this data does not require host processing.
Today, the majority of Device-to-Device data transfers over the network
are implemented as the following low-level operations: Device-to-Host
copy, Host-to-Host network transfer, and Host-to-Device copy.
The implementation is suboptimal, especially for bulk data transfers, and can
put significant strains on system resources, such as host memory bandwidth,
PCIe bandwidth, etc. One important reason behind the current state is the
kernel’s lack of semantics to express device-to-network transfers.
* Proposal:
In this patch series we attempt to optimize this use case by implementing
socket APIs that enable the user to:
1. send device memory across the network directly, and
2. receive incoming network packets directly into device memory.
Packet _payloads_ go directly from the NIC to device memory for receive and from
device memory to NIC for transmit.
Packet _headers_ go to/from host memory and are processed by the TCP/IP stack
normally. The NIC _must_ support header split to achieve this.
Advantages:
- Alleviate host memory bandwidth pressure, compared to existing
network-transfer + device-copy semantics.
- Alleviate PCIe BW pressure, by limiting data transfer to the lowest level
of the PCIe tree, compared to the traditional path which sends data
through the root complex.
* Patch overview:
** Part 1: netlink API
Gives the user the ability to bind a dma-buf to an RX queue.
** Part 2: scatterlist support
Currently the standard for device memory sharing is DMABUF, which doesn't
generate struct pages. On the other hand, the networking stack (skbs, drivers,
and the page pool) operates on pages. We have 2 options:
1. Generate struct pages for dmabuf device memory, or
2. Modify the networking stack to process scatterlists.
Approach #1 was attempted in RFC v1. RFC v2 implements approach #2.
** Part 3: page pool support
We piggy back on page pool memory providers proposal:
https://github.com/kuba-moo/linux/tree/pp-providers
It allows the page pool to define a memory provider that provides the
page allocation and freeing. It helps abstract most of the device memory
TCP changes from the driver.
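The hook is essentially a per-page-pool ops structure along these lines
(a rough sketch of the shape as I understand the pp-providers branch; the
names and signatures here are assumptions, not the final API):

struct memory_provider_ops {
	int (*init)(struct page_pool *pool);
	void (*destroy)(struct page_pool *pool);
	/* hand out / take back netmem_refs instead of struct pages */
	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
};

The dmabuf devmem provider in this series then implements such ops to hand
out net_iovs backed by the bound dma-buf, so drivers keep calling the
usual page_pool API.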
** Part 4: support for unreadable skb frags
Page pool iovs are not accessible by the host; we implement changes
throughout the networking stack to correctly handle skbs with unreadable
frags.
** Part 5: recvmsg() APIs
We define user APIs to send and receive device memory.
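As a rough illustration of what the RX side looks like to userspace, the
flow is along these lines. This is a sketch based on my reading of the
uapi patches; the MSG_SOCK_DEVMEM flag, the SO_DEVMEM_* names and the
struct dmabuf_cmsg layout are this series' additions and should be
checked against the patches rather than taken from here:

#include <string.h>
#include <sys/socket.h>
#include <linux/uio.h>		/* struct dmabuf_cmsg, added by this series */

static void recv_devmem(int fd)
{
	char data[64], ctrl[512];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;

	/* ask for devmem frags to be described via cmsgs instead of
	 * being copied into the iov
	 */
	if (recvmsg(fd, &msg, MSG_SOCK_DEVMEM) < 0)
		return;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		struct dmabuf_cmsg dmsg;

		if (cm->cmsg_level != SOL_SOCKET ||
		    cm->cmsg_type != SO_DEVMEM_DMABUF)
			continue;
		memcpy(&dmsg, CMSG_DATA(cm), sizeof(dmsg));

		/* dmsg.frag_offset/frag_size locate the payload inside
		 * the bound dma-buf; process it there, then give the
		 * frag token back (SO_DEVMEM_DONTNEED) so the page_pool
		 * can reuse the buffer; the exact token format is per
		 * this series' uapi.
		 */
		setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
			   &dmsg.frag_token, sizeof(dmsg.frag_token));
	}
}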
Not included with this series is the GVE devmem TCP support, just to
simplify the review. Code available here if desired:
https://github.com/mina/linux/tree/tcpdevmem
This series is built on top of net-next with Jakub's pp-providers changes
cherry-picked.
* NIC dependencies:
1. (strict) Devmem TCP requires the NIC to support header split, i.e. the
capability to split incoming packets into a header + payload and to put
each into a separate buffer. Devmem TCP works by using device memory
for the packet payload, and host memory for the packet headers.
2. (optional) Devmem TCP works better with flow steering support & RSS support,
i.e. the NIC's ability to steer flows into certain rx queues. This allows the
sysadmin to enable devmem TCP on a subset of the rx queues, and steer
devmem TCP traffic onto these queues, and non-devmem TCP traffic elsewhere.
The NIC I have access to with these properties is the GVE with DQO support
running in Google Cloud, but any NIC that supports these features would suffice.
I may be able to help reviewers bring up devmem TCP on their NICs.
* Testing:
The series includes a udmabuf kselftest that shows a simple use case of
devmem TCP and validates the entire data path end to end without
a dependency on a specific dmabuf provider.
** Test Setup
Kernel: net-next with this series and memory provider API cherry-picked
locally.
Hardware: Google Cloud A3 VMs.
NIC: GVE with header split & RSS & flow steering support.
Cc: Pavel Begunkov <asml.silence(a)gmail.com>
Cc: David Wei <dw(a)davidwei.uk>
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Yunsheng Lin <linyunsheng(a)huawei.com>
Cc: Shailend Chand <shailend(a)google.com>
Cc: Harshitha Ramamurthy <hramamurthy(a)google.com>
Cc: Shakeel Butt <shakeel.butt(a)linux.dev>
Cc: Jeroen de Borst <jeroendb(a)google.com>
Cc: Praveen Kaligineedi <pkaligineedi(a)google.com>
Jakub Kicinski (1):
net: page_pool: create hooks for custom page providers
Mina Almasry (13):
queue_api: define queue api
net: netdev netlink api to bind dma-buf to a net device
netdev: support binding dma-buf to netdevice
netdev: netdevice devmem allocator
page_pool: convert to use netmem
page_pool: devmem support
memory-provider: dmabuf devmem memory provider
net: support non paged skb frags
net: add support for skbs with unreadable frags
tcp: RX path for devmem TCP
net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags
net: add devmem TCP documentation
selftests: add ncdevmem, netcat for devmem TCP
Documentation/netlink/specs/netdev.yaml | 57 +++
Documentation/networking/devmem.rst | 256 +++++++++++
Documentation/networking/index.rst | 1 +
arch/alpha/include/uapi/asm/socket.h | 6 +
arch/mips/include/uapi/asm/socket.h | 6 +
arch/parisc/include/uapi/asm/socket.h | 6 +
arch/sparc/include/uapi/asm/socket.h | 6 +
include/linux/netdevice.h | 3 +
include/linux/skbuff.h | 73 +++-
include/linux/socket.h | 1 +
include/net/devmem.h | 124 ++++++
include/net/netdev_queues.h | 27 ++
include/net/netdev_rx_queue.h | 2 +
include/net/netmem.h | 234 +++++++++-
include/net/page_pool/helpers.h | 155 +++++--
include/net/page_pool/types.h | 33 +-
include/net/sock.h | 2 +
include/net/tcp.h | 5 +-
include/trace/events/page_pool.h | 29 +-
include/uapi/asm-generic/socket.h | 6 +
include/uapi/linux/netdev.h | 19 +
include/uapi/linux/uio.h | 17 +
net/bpf/test_run.c | 5 +-
net/core/Makefile | 2 +-
net/core/datagram.c | 6 +
net/core/dev.c | 6 +-
net/core/devmem.c | 425 ++++++++++++++++++
net/core/gro.c | 8 +-
net/core/netdev-genl-gen.c | 23 +
net/core/netdev-genl-gen.h | 6 +
net/core/netdev-genl.c | 107 +++++
net/core/page_pool.c | 364 +++++++++-------
net/core/skbuff.c | 110 ++++-
net/core/sock.c | 61 +++
net/ipv4/esp4.c | 2 +-
net/ipv4/tcp.c | 254 ++++++++++-
net/ipv4/tcp_input.c | 13 +-
net/ipv4/tcp_ipv4.c | 9 +
net/ipv4/tcp_minisocks.c | 2 +
net/ipv4/tcp_output.c | 5 +-
net/ipv6/esp6.c | 2 +-
net/packet/af_packet.c | 4 +-
tools/include/uapi/linux/netdev.h | 19 +
tools/testing/selftests/net/.gitignore | 1 +
tools/testing/selftests/net/Makefile | 5 +
tools/testing/selftests/net/ncdevmem.c | 546 ++++++++++++++++++++++++
46 files changed, 2776 insertions(+), 277 deletions(-)
create mode 100644 Documentation/networking/devmem.rst
create mode 100644 include/net/devmem.h
create mode 100644 net/core/devmem.c
create mode 100644 tools/testing/selftests/net/ncdevmem.c
--
2.44.0.478.gd926399ef9-goog
This is a follow-up to sleepable bpf_timer[0].
When discussing sleepable bpf_timer, it was thought that we should give
bpf_wq a try, as the 2 APIs are similar but distinct enough to justify a
new one.
So here it is.
I tried to keep as much code as possible common in kernel/bpf/helpers.c,
but I couldn't avoid some code duplication in kernel/bpf/verifier.c.
This series introduces basic bpf_wq support (see the usage sketch below):
- creation is supported
- assignment is supported
- running a simple bpf_wq is also supported.
We will probably need to extend the API further with:
- a full delayed_work API (can be piggy backed on top with a correct
flag)
- bpf_wq_cancel() <- apparently not, this would be shooting ourselves in
the foot
- bpf_wq_cancel_sync() (for sleepable programs)
- documentation
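To make the basic support concrete, here is a minimal sketch of what a
program using it might look like. The kfunc names are taken from the
patch titles below; the exact signatures are my assumptions:

/* SPDX-License-Identifier: GPL-2.0 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* assumed signatures, per the bpf_wq_init/_set_callback/_start patches */
extern int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags) __ksym;
extern int bpf_wq_set_callback(struct bpf_wq *wq,
			       int (*cb)(void *map, int *key, void *value),
			       unsigned int flags) __ksym;
extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __ksym;

struct elem {
	struct bpf_wq work;	/* embedded in a map value, like bpf_timer */
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} amap SEC(".maps");

/* runs later from a workqueue, i.e. in a sleepable context */
static int wq_cb(void *map, int *key, void *value)
{
	return 0;
}

SEC("tc")
int defer_work(struct __sk_buff *skb)
{
	struct elem *val;
	int key = 0;

	val = bpf_map_lookup_elem(&amap, &key);
	if (!val)
		return 0;
	if (bpf_wq_init(&val->work, &amap, 0))
		return 0;
	if (bpf_wq_set_callback(&val->work, wq_cb, 0))
		return 0;
	bpf_wq_start(&val->work, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";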
---
For reference, the use cases I have in mind:
---
Basically, I need to be able to defer a HID-BPF program for the
following reasons (from the aforementioned patch):
1. defer an event:
Sometimes we receive an out-of-proximity event, but the device can not
be trusted enough, and we need to ensure that we won't receive another
one in the following n milliseconds. So we need to wait those n
milliseconds, and possibly re-inject that event into the stack.
2. inject new events in reaction to one given event:
We might want to transform one given event into several. This is the
case for macro keys where a single key press is supposed to send
a sequence of key presses. But this could also be used to patch a
faulty behavior, if a device forgets to send a release event.
3. communicate with the device in reaction to one event:
We might want to communicate back to the device after a given event.
For example a device might send us an event saying that it came back
from sleeping state and needs to be re-initialized.
Currently we can achieve that by keeping a userspace program around,
raise a bpf event, and let that userspace program inject the events and
commands.
However, we are keeping that program alive as a daemon just to schedule
commands. There is no logic in it, so it doesn't really justify an
actual userspace wakeup. A kernel workqueue seems simpler to handle.
bpf_timers currently run in a soft IRQ context; this patch series
implements a sleepable context for them.
Cheers,
Benjamin
To: Alexei Starovoitov <ast(a)kernel.org>
To: Daniel Borkmann <daniel(a)iogearbox.net>
To: Andrii Nakryiko <andrii(a)kernel.org>
To: Martin KaFai Lau <martin.lau(a)linux.dev>
To: Eduard Zingerman <eddyz87(a)gmail.com>
To: Song Liu <song(a)kernel.org>
To: Yonghong Song <yonghong.song(a)linux.dev>
To: John Fastabend <john.fastabend(a)gmail.com>
To: KP Singh <kpsingh(a)kernel.org>
To: Stanislav Fomichev <sdf(a)google.com>
To: Hao Luo <haoluo(a)google.com>
To: Jiri Olsa <jolsa(a)kernel.org>
To: Mykola Lysenko <mykolal(a)fb.com>
To: Shuah Khan <shuah(a)kernel.org>
Cc: <bpf(a)vger.kernel.org>
Cc: <linux-kernel(a)vger.kernel.org>
Cc: <linux-kselftest(a)vger.kernel.org>
Signed-off-by: Benjamin Tissoires <bentiss(a)kernel.org>
[0] https://lore.kernel.org/all/20240408-hid-bpf-sleepable-v6-0-0499ddd91b94@ke…
---
Changes in v2:
- took previous review into account
- mainly dropped BPF_F_WQ_SLEEPABLE
- Link to v1: https://lore.kernel.org/r/20240416-bpf_wq-v1-0-c9e66092f842@kernel.org
---
Benjamin Tissoires (16):
bpf: make timer data struct more generic
bpf: replace bpf_timer_init with a generic helper
bpf: replace bpf_timer_set_callback with a generic helper
bpf: replace bpf_timer_cancel_and_free with a generic helper
bpf: add support for bpf_wq user type
tools: sync include/uapi/linux/bpf.h
bpf: verifier: bail out if the argument is not a map
bpf: add support for KF_ARG_PTR_TO_WORKQUEUE
bpf: allow struct bpf_wq to be embedded in arraymaps and hashmaps
selftests/bpf: add bpf_wq tests
bpf: wq: add bpf_wq_init
selftests/bpf: wq: add bpf_wq_init() checks
bpf: wq: add bpf_wq_set_callback_impl
selftests/bpf: add checks for bpf_wq_set_callback()
bpf: add bpf_wq_start
selftests/bpf: wq: add bpf_wq_start() checks
include/linux/bpf.h | 13 +-
include/linux/bpf_verifier.h | 1 +
include/uapi/linux/bpf.h | 4 +
kernel/bpf/arraymap.c | 18 +-
kernel/bpf/btf.c | 17 +
kernel/bpf/hashtab.c | 55 +++-
kernel/bpf/helpers.c | 348 ++++++++++++++++-----
kernel/bpf/syscall.c | 15 +-
kernel/bpf/verifier.c | 154 ++++++++-
tools/include/uapi/linux/bpf.h | 4 +
tools/testing/selftests/bpf/bpf_experimental.h | 7 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 5 +
.../selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h | 1 +
tools/testing/selftests/bpf/prog_tests/wq.c | 41 +++
tools/testing/selftests/bpf/progs/wq.c | 186 +++++++++++
tools/testing/selftests/bpf/progs/wq_failures.c | 144 +++++++++
16 files changed, 910 insertions(+), 103 deletions(-)
---
base-commit: ffa6b26b4d8a0520b78636ca9373ab842cb3b1a8
change-id: 20240411-bpf_wq-fe24e8d24f5e
Best regards,
--
Benjamin Tissoires <bentiss(a)kernel.org>
Hi!
Implement support for tests which require access to a remote system /
endpoint which can generate traffic.
This series concludes the "groundwork" for upstream driver tests.
I wanted to support the three models which came up in discussions:
- SW testing with netdevsim
- "local" testing with two ports on the same system in a loopback
- "remote" testing via SSH
so there is a tiny bit of an abstraction which wraps up how "remote"
commands are executed. Otherwise hopefully there's nothing surprising.
I'm only adding a ping test. I had a bigger one written but I was
worried we'd get into discussing the details of the test itself
and how I chose to hack up netdevsim, instead of the test infra...
So that test will be a follow up :)
v5:
- fix rand port generation, and wrap it in a helper in case
the random thing proves to be flaky
- reuseaddr
- explicitly select the address family
v4: https://lore.kernel.org/all/20240418233844.2762396-1-kuba@kernel.org
- improve coding style of patch 5
- switch from netcat to socat (patch 6)
- support exit_wait for bkg() in context manager
- add require_XYZ() helpers (patch 7)
- increase timeouts a little (1,3 -> 5 sec)
v3: https://lore.kernel.org/all/20240417231146.2435572-1-kuba@kernel.org
- first two patches are new
- make Remote::cmd() return Popen() object (patch 3)
- always operate on absolute paths (patch 3)
- last two patches are new
v2: https://lore.kernel.org/all/20240416004556.1618804-1-kuba@kernel.org
- rename endpoint -> remote
- use 2001:db8:: v6 prefix
- add a note about persistent SSH connections
- add the kernel config
v1: https://lore.kernel.org/all/20240412233705.1066444-1-kuba@kernel.org
Jakub Kicinski (7):
selftests: drv-net: define endpoint structures
selftests: drv-net: factor out parsing of the env
selftests: drv-net: construct environment for running tests which
require an endpoint
selftests: drv-net: add a trivial ping test
selftests: net: support matching cases by name prefix
selftests: drv-net: add a TCP ping test case (and useful helpers)
selftests: drv-net: add require_XYZ() helpers for validating env
tools/testing/selftests/drivers/net/Makefile | 5 +-
.../testing/selftests/drivers/net/README.rst | 33 ++++
.../selftests/drivers/net/lib/py/__init__.py | 1 +
.../selftests/drivers/net/lib/py/env.py | 175 ++++++++++++++++--
.../selftests/drivers/net/lib/py/remote.py | 15 ++
.../drivers/net/lib/py/remote_netns.py | 21 +++
.../drivers/net/lib/py/remote_ssh.py | 39 ++++
tools/testing/selftests/drivers/net/ping.py | 51 +++++
.../testing/selftests/net/lib/py/__init__.py | 1 +
tools/testing/selftests/net/lib/py/ksft.py | 13 +-
tools/testing/selftests/net/lib/py/netns.py | 31 ++++
tools/testing/selftests/net/lib/py/utils.py | 60 +++++-
12 files changed, 416 insertions(+), 29 deletions(-)
create mode 100644 tools/testing/selftests/drivers/net/lib/py/remote.py
create mode 100644 tools/testing/selftests/drivers/net/lib/py/remote_netns.py
create mode 100644 tools/testing/selftests/drivers/net/lib/py/remote_ssh.py
create mode 100755 tools/testing/selftests/drivers/net/ping.py
create mode 100644 tools/testing/selftests/net/lib/py/netns.py
--
2.44.0
From: Geliang Tang <tanggeliang(a)kylinos.cn>
This patchset uses more network helpers in test_sock_addr.c. First of
all, patch 2 is needed to make network_helpers.c independent of
test_progs.c; then network_helpers.h can be included into
test_sock_addr.c without compile errors.
Patches 1 and 2 also address Martin's comments on the previous series.
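For illustration, the shape of the calls test_sock_addr.c moves to is
roughly the following (a sketch; the exact signatures, and NULL meaning
default opts, are per the series):

#include <unistd.h>
#include "network_helpers.h"

static int roundtrip(void)
{
	struct sockaddr_storage addr;
	socklen_t len;
	int srv, cli;

	/* build a sockaddr from family/string/port in one call */
	if (make_sockaddr(AF_INET, "127.0.0.1", 8080, &addr, &len))
		return -1;
	srv = start_server_addr(SOCK_STREAM, &addr, len, NULL);
	if (srv < 0)
		return -1;
	cli = connect_to_addr(SOCK_STREAM, &addr, len, NULL);
	close(srv);
	if (cli < 0)
		return -1;
	close(cli);
	return 0;
}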
v2:
- Only a few minor cleanups to patch 5.
Geliang Tang (5):
selftests/bpf: Fix a fd leak in error paths in open_netns
selftests/bpf: Use log_err in open_netns/close_netns
selftests/bpf: Use start_server_addr in test_sock_addr
selftests/bpf: Use connect_to_addr in test_sock_addr
selftests/bpf: Use make_sockaddr in test_sock_addr
tools/testing/selftests/bpf/Makefile | 3 +-
tools/testing/selftests/bpf/network_helpers.c | 20 ++-
tools/testing/selftests/bpf/network_helpers.h | 1 +
.../selftests/bpf/prog_tests/empty_skb.c | 2 +
.../bpf/prog_tests/ip_check_defrag.c | 2 +
.../selftests/bpf/prog_tests/tc_redirect.c | 2 +-
.../selftests/bpf/prog_tests/test_tunnel.c | 4 +
.../selftests/bpf/prog_tests/xdp_metadata.c | 16 ++
tools/testing/selftests/bpf/test_sock_addr.c | 138 +++---------------
9 files changed, 63 insertions(+), 125 deletions(-)
--
2.40.1
Adding GHCB support for selftests. The code is very similar to the ucall
functionality; I didn't refactor anything common out since, with just two
instances, I was unsure that is required. If pulling out common code
between those two is preferred please let me know. The series only adds a
single usage of the GHCB, which is a special outsb GHCB exit that allows
passing the 64-bit ucall pointer. In future series we can test more GHCB
functionality of KVM. I'd like to base some SNP smoke tests off of this
and the current SEV selftest work.
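For a rough picture, the guest side of the GHCB-based ucall could look
something like the sketch below. This is not the series' code: the
ghcb_set_*() accessors follow the setters/getters patch, UCALL_PIO_PORT
and wrmsr() come from the existing selftest library, and the IOIO
encoding in exit_info_1 is an assumption to be checked against the GHCB
spec:

/* Guest-side sketch: describe an outsb to the ucall port in the shared
 * GHCB, stash the 64-bit ucall pointer in the scratch field, then
 * VMGEXIT so the host can pick it up.
 */
static void sev_es_ucall(struct ghcb *ghcb, uint64_t ghcb_gpa,
			 uint64_t ucall_ptr)
{
	ghcb_set_sw_exit_code(ghcb, SVM_EXIT_IOIO);
	/* string (bit 2) OUT (bit 0 clear) to the ucall PIO port,
	 * port number in bits 16-31
	 */
	ghcb_set_sw_exit_info_1(ghcb,
				((uint64_t)UCALL_PIO_PORT << 16) | (1 << 2));
	ghcb_set_sw_scratch(ghcb, ucall_ptr);

	/* tell the hypervisor where the GHCB is, then exit to it */
	wrmsr(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
	asm volatile("rep; vmmcall");	/* VMGEXIT */
}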
base-commit: 40e09b3ccfacc640d58e1e3d6b8f29b2db0a9848
Cc: Vishal Annapurve <vannapurve(a)google.com>
Cc: Ackerley Tng <ackerleytng(a)google.com>
Cc: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Claudio Imbrenda <imbrenda(a)linux.ibm.com>
Cc: Sean Christopherson <seanjc(a)google.com>
Cc: Carlos Bilbao <carlos.bilbao(a)amd.com>
Cc: Tom Lendacky <thomas.lendacky(a)amd.com>
Cc: Michael Roth <michael.roth(a)amd.com>
Cc: kvm(a)vger.kernel.org
Cc: linux-kselftest(a)vger.kernel.org
Signed-off-by: Peter Gonda <pgonda(a)google.com>
Peter Gonda (6):
Add GHCB with setters and getters
Add arch specific additional guest pages
Add vm_vaddr_alloc_pages_shared()
Add GHCB allocations and helpers
Add is_sev_enabled() helpers
Add ability for SEV-ES guests to use ucalls via GHCB
tools/testing/selftests/kvm/Makefile | 2 +-
.../selftests/kvm/include/kvm_util_base.h | 4 +
.../selftests/kvm/include/x86_64/sev.h | 7 +
.../selftests/kvm/include/x86_64/svm.h | 106 +++++++++++++
tools/testing/selftests/kvm/lib/kvm_util.c | 22 ++-
.../selftests/kvm/lib/x86_64/processor.c | 8 +
tools/testing/selftests/kvm/lib/x86_64/sev.c | 149 ++++++++++++++++++
.../testing/selftests/kvm/lib/x86_64/ucall.c | 17 ++
.../selftests/kvm/x86_64/sev_smoke_test.c | 22 +--
9 files changed, 313 insertions(+), 24 deletions(-)
--
2.44.0.478.gd926399ef9-goog