Hi all,
We are students from the State University of Campinas with an interest in contributing to the kernel. We are part of LKCAMP, a student group that focuses on researching and contributing to open source software. Our group has organized kernel hackathons in the past [1] that resulted in successful contributions, and we would like to continue the effort this year.
This time, we were thinking about writing KUnit tests for data structures in `lib/` (or converting existing lib test code), similarly to our previous hackathon. We are currently considering a few candidates:
- lib/kfifo.c
- lib/llist.c
- tools/testing/scatterlist
- tools/testing/radix-tree
We would like to know if these are good candidates, and also ask for suggestions of other code that could benefit from having KUnit tests.
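To make the idea concrete, below is a rough sketch of the kind of test we have in mind for lib/llist.c (the llist and KUnit helpers are real, but the test itself is only illustrative):

#include <kunit/test.h>
#include <linux/llist.h>

/* Check that llist_add() reports whether the list was previously empty. */
static void llist_add_test(struct kunit *test)
{
	struct llist_head head;
	struct llist_node node;

	init_llist_head(&head);
	KUNIT_EXPECT_TRUE(test, llist_empty(&head));

	/* llist_add() returns true when the list was empty before the add. */
	KUNIT_EXPECT_TRUE(test, llist_add(&node, &head));
	KUNIT_EXPECT_FALSE(test, llist_empty(&head));
}

static struct kunit_case llist_test_cases[] = {
	KUNIT_CASE(llist_add_test),
	{}
};

static struct kunit_suite llist_test_suite = {
	.name = "llist",
	.test_cases = llist_test_cases,
};
kunit_test_suite(llist_test_suite);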
Thanks!
Artur Alves
[1] https://lore.kernel.org/dri-devel/20211011152333.gm5jkaog6b6nbv5w@notapiano/
From: Geliang Tang <tanggeliang(a)kylinos.cn>
bpf_prog5 and bpf_prog7 were removed from progs/test_sockmap_kern.h in
commit d79a32129b21 ("bpf: Selftests, remove prints from sockmap tests"),
so there are now only 9 progs in it, not 11:
SEC("sk_skb1")
int bpf_prog1(struct __sk_buff *skb)
SEC("sk_skb2")
int bpf_prog2(struct __sk_buff *skb)
SEC("sk_skb3")
int bpf_prog3(struct __sk_buff *skb)
SEC("sockops")
int bpf_sockmap(struct bpf_sock_ops *skops)
SEC("sk_msg1")
int bpf_prog4(struct sk_msg_md *msg)
SEC("sk_msg2")
int bpf_prog6(struct sk_msg_md *msg)
SEC("sk_msg3")
int bpf_prog8(struct sk_msg_md *msg)
SEC("sk_msg4")
int bpf_prog9(struct sk_msg_md *msg)
SEC("sk_msg5")
int bpf_prog10(struct sk_msg_md *msg)
This patch updates the array sizes of prog_fd[], prog_attach_type[] and
prog_type[] from 11 to 9 accordingly.
Fixes: d79a32129b21 ("bpf: Selftests, remove prints from sockmap tests")
Signed-off-by: Geliang Tang <tanggeliang(a)kylinos.cn>
---
tools/testing/selftests/bpf/test_sockmap.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
index 92752f5eeded..4499b3cfc3a6 100644
--- a/tools/testing/selftests/bpf/test_sockmap.c
+++ b/tools/testing/selftests/bpf/test_sockmap.c
@@ -63,7 +63,7 @@ int passed;
int failed;
int map_fd[9];
struct bpf_map *maps[9];
-int prog_fd[11];
+int prog_fd[9];
int txmsg_pass;
int txmsg_redir;
@@ -1793,8 +1793,6 @@ int prog_attach_type[] = {
BPF_SK_MSG_VERDICT,
BPF_SK_MSG_VERDICT,
BPF_SK_MSG_VERDICT,
- BPF_SK_MSG_VERDICT,
- BPF_SK_MSG_VERDICT,
};
int prog_type[] = {
@@ -1807,8 +1805,6 @@ int prog_type[] = {
BPF_PROG_TYPE_SK_MSG,
BPF_PROG_TYPE_SK_MSG,
BPF_PROG_TYPE_SK_MSG,
- BPF_PROG_TYPE_SK_MSG,
- BPF_PROG_TYPE_SK_MSG,
};
static int populate_progs(char *bpf_file)
--
2.43.0
The va_high_addr_switch memory selftest exercises some corner cases
related to allocation and page/hugepage faulting around the switch
boundary. Currently, the page size and hugepage size are statically
defined. Post FEAT_LPA2, the Aarch64 Linux kernel adds support for the 4k
and 16k translation granules on higher addresses; we restructure the test
to support this. In addition, we avoid invoking the binary twice from the
shell script, to reduce test noise.
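For illustration only (not the actual patch), the run-time detection that the restructuring relies on has roughly the following shape: read the default hugepage size from /proc/meminfo and the base page size via sysconf(), instead of hard-coding them:

#include <stdio.h>
#include <unistd.h>

/* Illustrative helper: default hugepage size in bytes, 0 on failure. */
static unsigned long default_hugepage_size(void)
{
	unsigned long size_kb = 0;
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "Hugepagesize: %lu kB", &size_kb) == 1)
			break;
	}
	fclose(f);
	return size_kb * 1024;
}

int main(void)
{
	printf("page size: %ld\n", sysconf(_SC_PAGESIZE));
	printf("default hugepage size: %lu\n", default_hugepage_size());
	return 0;
}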
Dev Jain (2):
selftests/mm: va_high_addr_switch: Reduce test noise
selftests/mm: va_high_addr_switch: Dynamically initialize testcases to
enable LPA2 testing
.../selftests/mm/va_high_addr_switch.c | 454 +++++++++---------
.../selftests/mm/va_high_addr_switch.sh | 4 -
2 files changed, 232 insertions(+), 226 deletions(-)
--
2.34.1
From: donsheng <dongsheng.x.zhang(a)intel.com>
If the host was booted with the "default_hugepagesz=1G" kernel command-line
parameter, the NX hugepage test fails with "Invalid argument" at the
TEST_ASSERT() in kvm_util.c's __vm_mem_region_delete() function:
static void __vm_mem_region_delete(struct kvm_vm *vm,
struct userspace_mem_region *region,
bool unlink)
{
int ret;
...
ret = munmap(region->mmap_start, region->mmap_size);
TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
...
}
The NX hugepage test creates a VM with a 6M data slot backed by huge
pages. If the default hugetlb page size is 1G, an mmap() with MAP_HUGETLB
and a length of 6M succeeds, but the matching munmap() fails.
Documentation/admin-guide/mm/hugetlbpage.rst documents this behavior:
"Syscalls that operate on memory backed by hugetlb pages only have their lengths
aligned to the native page size of the processor; they will normally fail with
errno set to EINVAL or exclude hugetlb pages that extend beyond the length if
not hugepage aligned. For example, munmap(2) will fail if memory is backed by
a hugetlb page and the length is smaller than the hugepage size."
Explicitly use MAP_HUGE_2MB in conjunction with MAP_HUGETLB to fix the issue.
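For reference, the behavior can be reproduced outside the selftest with a small standalone program like the one below (illustrative only, not part of the patch); with the explicit 2MB hint, the 6M length stays hugepage aligned and the matching munmap() succeeds:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_2MB
#define MAP_HUGE_2MB	(21 << 26)	/* log2(2MB) << MAP_HUGE_SHIFT */
#endif

int main(void)
{
	size_t len = 6 * 1024 * 1024;
	void *p;

	/* Without MAP_HUGE_2MB, the mapping uses the default hugepage size,
	 * which is 1G when the host boots with default_hugepagesz=1G. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_2MB,
		 -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* 6M is a multiple of 2M, so this munmap() succeeds; with 1G pages
	 * it would fail with EINVAL as described above. */
	if (munmap(p, len)) {
		perror("munmap");
		return 1;
	}

	return 0;
}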
Signed-off-by: donsheng <dongsheng.x.zhang(a)intel.com>
Suggested-by: Zide Chen <zide.chen(a)intel.com>
---
tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index 17bbb96fc4df..146e9033e206 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -129,7 +129,7 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
vcpu = vm_vcpu_add(vm, 0, guest_code);
- vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB_2MB,
HPAGE_GPA, HPAGE_SLOT,
HPAGE_SLOT_NPAGES, 0);
--
2.43.0
From: Geliang Tang <tanggeliang(a)kylinos.cn>
This patchset uses the post_socket_cb and post_connect_cb callbacks of
struct network_helper_opts to refactor do_test() in bpf_tcp_ca.c, moving
dctcp-specific test code out of do_test() and into test_dctcp().
Patch 3 adds a new member in post_socket_opts and patch 4 adds a new
callback in network_helper_opts. I'm not sure if this is going too far.
v2:
- rebased on commit "selftests/bpf: Add test for the use of new args in
cong_control"
Geliang Tang (4):
selftests/bpf: Use post_socket_cb in connect_to_fd_opts
selftests/bpf: Use start_server_addr in bpf_tcp_ca
selftests/bpf: Use connect_to_fd_opts in do_test in bpf_tcp_ca
selftests/bpf: Add post_connect_cb callback
tools/testing/selftests/bpf/network_helpers.c | 13 +-
tools/testing/selftests/bpf/network_helpers.h | 8 +-
.../selftests/bpf/prog_tests/bpf_tcp_ca.c | 111 ++++++++++++------
3 files changed, 86 insertions(+), 46 deletions(-)
--
2.43.0
From: "Steven Rostedt (Google)" <rostedt(a)goodmis.org>
The kprobe_eventname.tc test checks whether a kprobe can be attached to a
function whose name contains ".isra.". It loops through the kallsyms file
for all the functions that have ".isra." in their name, checks whether
each one exists in the available_filter_functions file, and, if it does,
uses it to attach a kprobe.
The issue is that kprobes cannot attach to functions that are listed more
than once in available_filter_functions. With the latest kernel, the
function that is found is rapl_event_update.isra.0:
# grep rapl_event_update.isra.0 /sys/kernel/tracing/available_filter_functions
rapl_event_update.isra.0
rapl_event_update.isra.0
It is listed twice. This causes the kprobe attachment to fail, which in
turn fails the test. Instead of just picking the first function found in
available_filter_functions, pick the first one that is listed only once
in that file.
Cc: stable(a)vger.kernel.org
Fixes: 604e3548236de ("selftests/ftrace: Select an existing function in kprobe_eventname test")
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
index 1f6981ef7afa..ba19b81cef39 100644
--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
+++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
@@ -30,7 +30,8 @@ find_dot_func() {
fi
grep " [tT] .*\.isra\..*" /proc/kallsyms | cut -f 3 -d " " | while read f; do
- if grep -s $f available_filter_functions; then
+ cnt=`grep -s $f available_filter_functions | wc -l`;
+ if [ $cnt -eq 1 ]; then
echo $f
break
fi
--
2.43.0
Post FEAT_LPA2, Aarch64 extends the 4KB and 16KB translation granules to
large virtual addresses. Currently, the test is skipped for those granule
sizes because the page sizes are statically defined; working around that
would mean breaking the nice array of structs used for adding testcases.
Instead, don't skip the test, and encourage the user to manually change
the macros.
Signed-off-by: Dev Jain <dev.jain(a)arm.com>
---
.../testing/selftests/mm/va_high_addr_switch.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/mm/va_high_addr_switch.c b/tools/testing/selftests/mm/va_high_addr_switch.c
index cfbc501290d3..ba862f51d395 100644
--- a/tools/testing/selftests/mm/va_high_addr_switch.c
+++ b/tools/testing/selftests/mm/va_high_addr_switch.c
@@ -292,12 +292,24 @@ static int supported_arch(void)
#elif defined(__x86_64__)
return 1;
#elif defined(__aarch64__)
- return getpagesize() == PAGE_SIZE;
+ return 1;
#else
return 0;
#endif
}
+#if defined(__aarch64__)
+void failure_message(void)
+{
+ printf("TEST MAY FAIL: Are you running on a pagesize other than 64K?\n");
+ printf("If yes, please change macros manually. Ensure to change the\n");
+ printf("address macros too if running defconfig on 16K pagesize,\n");
+ printf("since userspace VA = 47 bits post FEAT_LPA2.\n");
+}
+#else
+void failure_message(void) {}
+#endif
+
int main(int argc, char **argv)
{
int ret;
@@ -308,5 +320,8 @@ int main(int argc, char **argv)
ret = run_test(testcases, ARRAY_SIZE(testcases));
if (argc == 2 && !strcmp(argv[1], "--run-hugetlb"))
ret = run_test(hugetlb_testcases, ARRAY_SIZE(hugetlb_testcases));
+
+ if (ret)
+ failure_message();
return ret;
}
--
2.39.2
The compaction_test memory selftest introduces fragmentation in memory
and then tries to allocate as many hugepages as possible. This series
addresses some problems with it.
On Aarch64, if nr_hugepages == 0, the test trivially succeeds: no
division-by-zero exception is raised, so compaction_index becomes 0,
which is less than 3. We fix that by checking for division by zero.
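To make that first fix concrete, the guard has roughly the following shape (illustrative sketch only, not the actual diff; names, units, and the "less than 3" pass criterion are taken from this cover letter, not from the test source):

#include <stdio.h>

/* Without the guard, a zero hugepage count on Aarch64 silently yields a
 * compaction_index of 0 and the check passes; with it, the case fails. */
static int check_compaction(unsigned long mem_free, unsigned long nr_hugepages,
			    unsigned long hugepage_size)
{
	unsigned long compaction_index;

	if (!nr_hugepages) {
		fprintf(stderr, "no hugepages were allocated\n");
		return -1;
	}

	compaction_index = mem_free / (nr_hugepages * hugepage_size);
	return compaction_index < 3 ? 0 : -1;
}

int main(void)
{
	/* With nr_hugepages == 0, the guarded version now reports failure. */
	return check_compaction(4UL << 20, 0, 2 << 10) ? 1 : 0;
}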
Secondly, correctly set the number of hugepages to zero before trying
to set a large number of them.
Now, consider a situation in which, at the start of the test, a non-zero
number of hugepages has already been set (while running the entire
selftests/mm suite, or manually by the admin). The test operates on 80%
of memory to avoid invoking the OOM-killer, and because some memory is
already blocked by hugepages, the chance of OOM-killing increases.
Also, since the mem_free value used in check_compaction() is taken before
we set nr_hugepages to zero, the chance that compaction_index will be
small is very high if the preset nr_hugepages was high, leading to a
bogus test success.
This series applies on top of the stable 6.9 kernel.
Changes in v2:
- Handle an unsigned long number of hugepages
- Combine the first patch (previously standalone) with this series
Link to v1:
https://lore.kernel.org/all/20240513082842.4117782-1-dev.jain@arm.com/
https://lore.kernel.org/all/20240515093633.54814-1-dev.jain@arm.com/
Dev Jain (3):
selftests/mm: compaction_test: Fix bogus test success on Aarch64
selftests/mm: compaction_test: Fix incorrect write of zero to
nr_hugepages
selftests/mm: compaction_test: Fix bogus test success and reduce
probability of OOM-killer invocation
tools/testing/selftests/mm/compaction_test.c | 85 ++++++++++++++------
1 file changed, 60 insertions(+), 25 deletions(-)
--
2.34.1