v2:
- Added an enable check in executor.c to prevent incorrect error output from
kunit_tool.py when run against a KUnit-disabled kernel
- kunit_tool.py now passes kunit.enable=1
- Flipped around logic of new config to KUNIT_DEFAULT_ENABLED
- Modules containing tests are now loaded but not executed
- Various message/description text clean up
There are some use cases where the same kernel binary is desired for both
production and testing. This poses a problem for users of KUnit, as built-in
tests will automatically run at startup and test modules can still be loaded,
leaving the kernel in an unsafe state. There is a "test" taint flag that gets
set if a test runs, but nothing prevents the execution.
This series adds the kunit.enable module parameter, which must be set to true,
in addition to KUNIT being enabled, for KUnit tests to run. The default value
is true, preserving backwards compatibility. However, for the
production+testing use case, the new config option KUNIT_DEFAULT_ENABLED can
be set to N, requiring the tester to opt in by passing kunit.enable=1 to
the kernel.
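For illustration, a minimal sketch of the gating idea (the parameter and
Kconfig names match this cover letter, but the helper name, function name,
and placement shown here are illustrative assumptions, not necessarily the
exact patch):

    #include <linux/module.h>
    #include <linux/moduleparam.h>
    #include <linux/printk.h>

    /* Default follows the new Kconfig option; kunit.enable=0/1 overrides it. */
    static bool enable_param = IS_ENABLED(CONFIG_KUNIT_DEFAULT_ENABLED);
    module_param_named(enable, enable_param, bool, 0);
    MODULE_PARM_DESC(enable, "Enable KUnit tests");

    bool kunit_enabled(void)
    {
            return enable_param;
    }

    /* In the built-in test executor, bail out early when disabled. */
    void kunit_run_all_tests(void)
    {
            if (!kunit_enabled()) {
                    pr_info("kunit: disabled, set kunit.enable=1 to run tests\n");
                    return;
            }
            /* ... iterate over and run the built-in test suites ... */
    }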
Joe Fradley (2):
kunit: add kunit.enable to enable/disable KUnit test
kunit: no longer call module_info(test, "Y") for kunit modules
.../admin-guide/kernel-parameters.txt | 6 +++++
include/kunit/test.h | 3 ++-
lib/kunit/Kconfig | 11 +++++++++
lib/kunit/executor.c | 4 ++++
lib/kunit/test.c | 24 +++++++++++++++++++
tools/testing/kunit/kunit_kernel.py | 1 +
6 files changed, 48 insertions(+), 1 deletion(-)
--
2.37.1.595.g718a3a8f04-goog
There is a spelling mistake in a debug message. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
---
tools/testing/selftests/kvm/demand_paging_test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 779ae54f89c4..56035d94c653 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -225,7 +225,7 @@ static void setup_demand_paging(struct kvm_vm *vm,
PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n",
is_minor ? "MINOR" : "MISSING",
- is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY");
+ is_minor ? "UFFDIO_CONTINUE" : "UFFDIO_COPY");
/* In order to get minor faults, prefault via the alias. */
if (is_minor) {
--
2.37.1
Our memory management kernel CI testing at Red Hat uses the VM
selftests and we have run into two problems:
First, our LTP tests overlap with the VM selftests.
We want to avoid unhelpful redundancy in our testing practices.
Second, we have observed that the current run_vmtests.sh reports overall
failure or ambiguous results when a machine lacks the hardware necessary to
perform one or more of the tests, e.g. the ksm tests that require more than
one NUMA node.
We want to be able to run the vm selftests suited to particular hardware.
Add the ability to run one or more groups of vm tests via run_vmtests.sh
instead of simply all-or-none in order to solve these problems.
Preserve existing default behavior of running all tests when the script
is invoked with no arguments.
Documentation of test groups is included in the patch as follows:
# ./run_vmtests.sh [ -h || --help ]
usage: ./tools/testing/selftests/vm/run_vmtests.sh [ -h | -t "<categories>"]
-t: specify specific categories of tests to run
-h: display this message
The default behavior is to run all tests.
Alternatively, specific groups of tests can be run by passing a string
to the -t argument containing one or more of the following categories
separated by spaces:
- mmap
tests for mmap(2)
- gup_test
tests for gup using gup_test interface
- userfaultfd
tests for userfaultfd(2)
- compaction
a test for the patch "Allow compaction of unevictable pages"
- mlock
tests for mlock(2)
- mremap
tests for mremap(2)
- hugevm
tests for very large virtual address space
- vmalloc
vmalloc smoke tests
- hmm
hmm smoke tests
- madv_populate
test madvise(2) MADV_POPULATE_{READ,WRITE} options
- memfd_secret
test memfd_secret(2)
- process_mrelease
test process_mrelease(2)
- ksm
ksm tests that do not require >=2 NUMA nodes
- ksm_numa
ksm tests that require >=2 NUMA nodes
- pkey
memory protection key tests
example: ./run_vmtests.sh -t "hmm mmap ksm"
Changes from v4:
- fix imprecise checking in test_selected
- drop conditional setup/cleanup of hugetlb
Changes from v3:
- rename variable TEST_ITEMS as VM_TEST_ITEMS
Changes from v2:
- rebase onto the mm-everything branch in
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git
- integrate this functionality with the new tests
Changes from v1:
- use a command line argument to pass the test categories to the
script instead of an environment variable
- remove novel prints to avoid messing with extant parsers of this
script
- update the usage text
Signed-off-by: Joel Savitz <jsavitz@redhat.com>
---
tools/testing/selftests/vm/run_vmtests.sh | 232 +++++++++++++++-------
1 file changed, 155 insertions(+), 77 deletions(-)
diff --git a/tools/testing/selftests/vm/run_vmtests.sh b/tools/testing/selftests/vm/run_vmtests.sh
index 249295a10f56..f115e9a6d3a1 100755
--- a/tools/testing/selftests/vm/run_vmtests.sh
+++ b/tools/testing/selftests/vm/run_vmtests.sh
@@ -1,6 +1,6 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
-#please run as root
+# Please run as root
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
@@ -8,15 +8,76 @@ ksft_skip=4
mnt=./huge
exitcode=0
-#get huge pagesize and freepages from /proc/meminfo
-while read -r name size unit; do
- if [ "$name" = "HugePages_Free:" ]; then
- freepgs="$size"
+usage() {
+ cat <<EOF
+usage: ${BASH_SOURCE[0]:-$0} [ -h | -t "<categories>"]
+ -t: specify specific categories of tests to run
+ -h: display this message
+
+The default behavior is to run all tests.
+
+Alternatively, specific groups of tests can be run by passing a string
+to the -t argument containing one or more of the following categories
+separated by spaces:
+- mmap
+ tests for mmap(2)
+- gup_test
+ tests for gup using gup_test interface
+- userfaultfd
+ tests for userfaultfd(2)
+- compaction
+ a test for the patch "Allow compaction of unevictable pages"
+- mlock
+ tests for mlock(2)
+- mremap
+ tests for mremap(2)
+- hugevm
+ tests for very large virtual address space
+- vmalloc
+ vmalloc smoke tests
+- hmm
+ hmm smoke tests
+- madv_populate
+ test madvise(2) MADV_POPULATE_{READ,WRITE} options
+- memfd_secret
+ test memfd_secret(2)
+- process_mrelease
+ test process_mrelease(2)
+- ksm
+ ksm tests that do not require >=2 NUMA nodes
+- ksm_numa
+ ksm tests that require >=2 NUMA nodes
+- pkey
+ memory protection key tests
+example: ./run_vmtests.sh -t "hmm mmap ksm"
+EOF
+ exit 0
+}
+
+
+while getopts "ht:" OPT; do
+ case ${OPT} in
+ "h") usage ;;
+ "t") VM_SELFTEST_ITEMS=${OPTARG} ;;
+ esac
+done
+shift $((OPTIND -1))
+
+# default behavior: run all tests
+VM_SELFTEST_ITEMS=${VM_SELFTEST_ITEMS:-default}
+
+test_selected() {
+ if [ "$VM_SELFTEST_ITEMS" == "default" ]; then
+ # If no VM_SELFTEST_ITEMS are specified, run all tests
+ return 0
fi
- if [ "$name" = "Hugepagesize:" ]; then
- hpgsize_KB="$size"
+ # If test selected argument is one of the test items
+ if [[ " ${VM_SELFTEST_ITEMS[*]} " =~ " ${1} " ]]; then
+ return 0
+ else
+ return 1
fi
-done < /proc/meminfo
+}
# Simple hugetlbfs tests have a hardcoded minimum requirement of
# huge pages totaling 256MB (262144KB) in size. The userfaultfd
@@ -28,7 +89,17 @@ hpgsize_MB=$((hpgsize_KB / 1024))
half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
needmem_KB=$((half_ufd_size_MB * 2 * 1024))
-#set proper nr_hugepages
+# get huge pagesize and freepages from /proc/meminfo
+while read -r name size unit; do
+ if [ "$name" = "HugePages_Free:" ]; then
+ freepgs="$size"
+ fi
+ if [ "$name" = "Hugepagesize:" ]; then
+ hpgsize_KB="$size"
+ fi
+done < /proc/meminfo
+
+# set proper nr_hugepages
if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then
nr_hugepgs=$(cat /proc/sys/vm/nr_hugepages)
needpgs=$((needmem_KB / hpgsize_KB))
@@ -57,140 +128,147 @@ else
exit 1
fi
-#filter 64bit architectures
+# filter 64bit architectures
ARCH64STR="arm64 ia64 mips64 parisc64 ppc64 ppc64le riscv64 s390x sh64 sparc64 x86_64"
if [ -z "$ARCH" ]; then
ARCH=$(uname -m 2>/dev/null | sed -e 's/aarch64.*/arm64/')
fi
VADDR64=0
-echo "$ARCH64STR" | grep "$ARCH" && VADDR64=1
+echo "$ARCH64STR" | grep "$ARCH" &>/dev/null && VADDR64=1
# Usage: run_test [test binary] [arbitrary test arguments...]
run_test() {
- local title="running $*"
- local sep=$(echo -n "$title" | tr "[:graph:][:space:]" -)
- printf "%s\n%s\n%s\n" "$sep" "$title" "$sep"
-
- "$@"
- local ret=$?
- if [ $ret -eq 0 ]; then
- echo "[PASS]"
- elif [ $ret -eq $ksft_skip ]; then
- echo "[SKIP]"
- exitcode=$ksft_skip
- else
- echo "[FAIL]"
- exitcode=1
- fi
+ if test_selected ${CATEGORY}; then
+ echo "running: $1"
+ local title="running $*"
+ local sep=$(echo -n "$title" | tr "[:graph:][:space:]" -)
+ printf "%s\n%s\n%s\n" "$sep" "$title" "$sep"
+
+ "$@"
+ local ret=$?
+ if [ $ret -eq 0 ]; then
+ echo "[PASS]"
+ elif [ $ret -eq $ksft_skip ]; then
+ echo "[SKIP]"
+ exitcode=$ksft_skip
+ else
+ echo "[FAIL]"
+ exitcode=1
+ fi
+ fi # test_selected
}
mkdir "$mnt"
mount -t hugetlbfs none "$mnt"
-run_test ./hugepage-mmap
+CATEGORY="hugetlb" run_test ./hugepage-mmap
shmmax=$(cat /proc/sys/kernel/shmmax)
shmall=$(cat /proc/sys/kernel/shmall)
echo 268435456 > /proc/sys/kernel/shmmax
echo 4194304 > /proc/sys/kernel/shmall
-run_test ./hugepage-shm
+CATEGORY="hugetlb" run_test ./hugepage-shm
echo "$shmmax" > /proc/sys/kernel/shmmax
echo "$shmall" > /proc/sys/kernel/shmall
-run_test ./map_hugetlb
+CATEGORY="hugetlb" run_test ./map_hugetlb
-run_test ./hugepage-mremap "$mnt"/huge_mremap
-rm -f "$mnt"/huge_mremap
+CATEGORY="hugetlb" run_test ./hugepage-mremap "$mnt"/huge_mremap
+test_selected "hugetlb" && rm -f "$mnt"/huge_mremap
-run_test ./hugepage-vmemmap
+CATEGORY="hugetlb" run_test ./hugepage-vmemmap
-run_test ./hugetlb-madvise "$mnt"/madvise-test
-rm -f "$mnt"/madvise-test
+CATEGORY="hugetlb" run_test ./hugetlb-madvise "$mnt"/madvise-test
+test_selected "hugetlb" && rm -f "$mnt"/madvise-test
-echo "NOTE: The above hugetlb tests provide minimal coverage. Use"
-echo " https://github.com/libhugetlbfs/libhugetlbfs.git for"
-echo " hugetlb regression testing."
+if test_selected "hugetlb"; then
+ echo "NOTE: These hugetlb tests provide minimal coverage. Use"
+ echo " https://github.com/libhugetlbfs/libhugetlbfs.git for"
+ echo " hugetlb regression testing."
+fi
-run_test ./map_fixed_noreplace
+CATEGORY="mmap" run_test ./map_fixed_noreplace
# get_user_pages_fast() benchmark
-run_test ./gup_test -u
+CATEGORY="gup_test" run_test ./gup_test -u
# pin_user_pages_fast() benchmark
-run_test ./gup_test -a
+CATEGORY="gup_test" run_test ./gup_test -a
# Dump pages 0, 19, and 4096, using pin_user_pages:
-run_test ./gup_test -ct -F 0x1 0 19 0x1000
+CATEGORY="gup_test" run_test ./gup_test -ct -F 0x1 0 19 0x1000
-run_test ./userfaultfd anon 20 16
-run_test ./userfaultfd anon:dev 20 16
+CATEGORY="userfaultfd" run_test ./userfaultfd anon 20 16
+CATEGORY="userfaultfd" run_test ./userfaultfd anon:dev 20 16
# Hugetlb tests require source and destination huge pages. Pass in half the
# size ($half_ufd_size_MB), which is used for *each*.
-run_test ./userfaultfd hugetlb "$half_ufd_size_MB" 32
-run_test ./userfaultfd hugetlb:dev "$half_ufd_size_MB" 32
-run_test ./userfaultfd hugetlb_shared "$half_ufd_size_MB" 32 "$mnt"/uffd-test
+CATEGORY="userfaultfd" run_test ./userfaultfd hugetlb "$half_ufd_size_MB" 32
+CATEGORY="userfaultfd" run_test ./userfaultfd hugetlb:dev "$half_ufd_size_MB" 32
+CATEGORY="userfaultfd" run_test ./userfaultfd hugetlb_shared "$half_ufd_size_MB" 32 "$mnt"/uffd-test
rm -f "$mnt"/uffd-test
-run_test ./userfaultfd hugetlb_shared:dev "$half_ufd_size_MB" 32 "$mnt"/uffd-test
+CATEGORY="userfaultfd" run_test ./userfaultfd hugetlb_shared:dev "$half_ufd_size_MB" 32 "$mnt"/uffd-test
rm -f "$mnt"/uffd-test
-run_test ./userfaultfd shmem 20 16
-run_test ./userfaultfd shmem:dev 20 16
-
-#cleanup
-umount "$mnt"
-rm -rf "$mnt"
-echo "$nr_hugepgs" > /proc/sys/vm/nr_hugepages
+CATEGORY="userfaultfd" run_test ./userfaultfd shmem 20 16
+CATEGORY="userfaultfd" run_test ./userfaultfd shmem:dev 20 16
+
+# cleanup (only needed when running hugetlb tests)
+if test_selected "hugetlb"; then
+ umount "$mnt"
+ rm -rf "$mnt"
+ echo "$nr_hugepgs" > /proc/sys/vm/nr_hugepages
+fi
-run_test ./compaction_test
+CATEGORY="compaction" run_test ./compaction_test
-run_test sudo -u nobody ./on-fault-limit
+CATEGORY="mlock" run_test sudo -u nobody ./on-fault-limit
-run_test ./map_populate
+CATEGORY="mmap" run_test ./map_populate
-run_test ./mlock-random-test
+CATEGORY="mlock" run_test ./mlock-random-test
-run_test ./mlock2-tests
+CATEGORY="mlock" run_test ./mlock2-tests
-run_test ./mrelease_test
+CATEGORY="process_mrelease" run_test ./mrelease_test
-run_test ./mremap_test
+CATEGORY="mremap" run_test ./mremap_test
-run_test ./thuge-gen
+CATEGORY="hugetlb" run_test ./thuge-gen
if [ $VADDR64 -ne 0 ]; then
- run_test ./virtual_address_range
+ CATEGORY="hugevm" run_test ./virtual_address_range
# virtual address 128TB switch test
- run_test ./va_128TBswitch.sh
+ CATEGORY="hugevm" run_test ./va_128TBswitch.sh
fi # VADDR64
# vmalloc stability smoke test
-run_test ./test_vmalloc.sh smoke
+CATEGORY="vmalloc" run_test ./test_vmalloc.sh smoke
-run_test ./mremap_dontunmap
+CATEGORY="mremap" run_test ./mremap_dontunmap
-run_test ./test_hmm.sh smoke
+CATEGORY="hmm" run_test ./test_hmm.sh smoke
# MADV_POPULATE_READ and MADV_POPULATE_WRITE tests
-run_test ./madv_populate
+CATEGORY="madv_populate" run_test ./madv_populate
-run_test ./memfd_secret
+CATEGORY="memfd_secret" run_test ./memfd_secret
# KSM MADV_MERGEABLE test with 10 identical pages
-run_test ./ksm_tests -M -p 10
+CATEGORY="ksm" run_test ./ksm_tests -M -p 10
# KSM unmerge test
-run_test ./ksm_tests -U
+CATEGORY="ksm" run_test ./ksm_tests -U
# KSM test with 10 zero pages and use_zero_pages = 0
-run_test ./ksm_tests -Z -p 10 -z 0
+CATEGORY="ksm" run_test ./ksm_tests -Z -p 10 -z 0
# KSM test with 10 zero pages and use_zero_pages = 1
-run_test ./ksm_tests -Z -p 10 -z 1
+CATEGORY="ksm" run_test ./ksm_tests -Z -p 10 -z 1
# KSM test with 2 NUMA nodes and merge_across_nodes = 1
-run_test ./ksm_tests -N -m 1
+CATEGORY="ksm_numa" run_test ./ksm_tests -N -m 1
# KSM test with 2 NUMA nodes and merge_across_nodes = 0
-run_test ./ksm_tests -N -m 0
+CATEGORY="ksm_numa" run_test ./ksm_tests -N -m 0
# protection_keys tests
if [ $VADDR64 -eq 0 ]; then
- run_test ./protection_keys_32
+ CATEGORY="pkey" run_test ./protection_keys_32
else
- run_test ./protection_keys_64
+ CATEGORY="pkey" run_test ./protection_keys_64
fi
exit $exitcode
--
2.31.1
Fix the comment to accurately describe the test and recently added
SYSTEM_SUSPEND test case.
What was once psci_cpu_on_test was renamed and extended to squeeze in a
test case for PSCI SYSTEM_SUSPEND. Nonetheless, the author of those
changes (whoever they may be...) failed to update the file comment to
reflect what had changed.
Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
Forgetting the name of the darned UAPI event. Tsk tsk.
tools/testing/selftests/kvm/aarch64/psci_test.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/psci_test.c b/tools/testing/selftests/kvm/aarch64/psci_test.c
index f7621f6e938e..e0b9e81a3e09 100644
--- a/tools/testing/selftests/kvm/aarch64/psci_test.c
+++ b/tools/testing/selftests/kvm/aarch64/psci_test.c
@@ -1,12 +1,14 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * psci_cpu_on_test - Test that the observable state of a vCPU targeted by the
- * CPU_ON PSCI call matches what the caller requested.
+ * psci_test - Tests relating to KVM's PSCI implementation.
*
* Copyright (c) 2021 Google LLC.
*
- * This is a regression test for a race between KVM servicing the PSCI call and
- * userspace reading the vCPUs registers.
+ * This test includes:
+ * - A regression test for a race between KVM servicing the PSCI CPU_ON call
+ * and userspace reading the targeted vCPU's registers.
+ * - A test for KVM's handling of PSCI SYSTEM_SUSPEND and the associated
+ * KVM_SYSTEM_EVENT_SUSPEND UAPI.
*/
#define _GNU_SOURCE
base-commit: 568035b01cfb107af8d2e4bd2fb9aea22cf5b868
--
2.37.1.595.g718a3a8f04-goog
This patchset adds test cases for the qdisc modules to the tc-testing test
suite.
Lastly, thanks to Victor for his testing and suggestions.
With the test cases added locally, the test results are as follows:
./tdc.py -c atm
ok 1 7628 - Create ATM with default setting
ok 2 390a - Delete ATM with valid handle
ok 3 32a0 - Show ATM class
ok 4 6310 - Dump ATM stats
./tdc.py -c choke
ok 1 8937 - Create CHOKE with default setting
ok 2 48c0 - Create CHOKE with min packet setting
ok 3 38c1 - Create CHOKE with max packet setting
ok 4 234a - Create CHOKE with ecn setting
ok 5 4380 - Create CHOKE with burst setting
ok 6 48c7 - Delete CHOKE with valid handle
ok 7 4398 - Replace CHOKE with min setting
ok 8 0301 - Change CHOKE with limit setting
./tdc.py -c codel
ok 1 983a - Create CODEL with default setting
ok 2 38aa - Create CODEL with limit packet setting
ok 3 9178 - Create CODEL with target setting
ok 4 78d1 - Create CODEL with interval setting
ok 5 238a - Create CODEL with ecn setting
ok 6 939c - Create CODEL with ce_threshold setting
ok 7 8380 - Delete CODEL with valid handle
ok 8 289c - Replace CODEL with limit setting
ok 9 0648 - Change CODEL with limit setting
./tdc.py -c etf
ok 1 34ba - Create ETF with default setting
ok 2 438f - Create ETF with delta nanos setting
ok 3 9041 - Create ETF with deadline_mode setting
ok 4 9a0c - Create ETF with skip_sock_check setting
ok 5 2093 - Delete ETF with valid handle
./tdc.py -c fq
ok 1 983b - Create FQ with default setting
ok 2 38a1 - Create FQ with limit packet setting
ok 3 0a18 - Create FQ with flow_limit setting
ok 4 2390 - Create FQ with quantum setting
ok 5 845b - Create FQ with initial_quantum setting
ok 6 9398 - Create FQ with maxrate setting
ok 7 342c - Create FQ with nopacing setting
ok 8 6391 - Create FQ with refill_delay setting
ok 9 238b - Create FQ with low_rate_threshold setting
ok 10 7582 - Create FQ with orphan_mask setting
ok 11 4894 - Create FQ with timer_slack setting
ok 12 324c - Create FQ with ce_threshold setting
ok 13 424a - Create FQ with horizon time setting
ok 14 89e1 - Create FQ with horizon_cap setting
ok 15 32e1 - Delete FQ with valid handle
ok 16 49b0 - Replace FQ with limit setting
ok 17 9478 - Change FQ with limit setting
./tdc.py -c gred
ok 1 8942 - Create GRED with default setting
ok 2 5783 - Create GRED with grio setting
ok 3 8a09 - Create GRED with limit setting
ok 4 48cb - Create GRED with ecn setting
ok 5 763a - Change GRED setting
ok 6 8309 - Show GRED class
./tdc.py -c hhf
ok 1 4812 - Create HHF with default setting
ok 2 8a92 - Create HHF with limit setting
ok 3 3491 - Create HHF with quantum setting
ok 4 ba04 - Create HHF with reset_timeout setting
ok 5 4238 - Create HHF with admit_bytes setting
ok 6 839f - Create HHF with evict_timeout setting
ok 7 a044 - Create HHF with non_hh_weight setting
ok 8 32f9 - Change HHF with limit setting
ok 9 385e - Show HHF class
./tdc.py -c pfifo_fast
ok 1 900c - Create pfifo_fast with default setting
ok 2 7470 - Dump pfifo_fast stats
ok 3 b974 - Replace pfifo_fast with different handle
ok 4 3240 - Delete pfifo_fast with valid handle
ok 5 4385 - Delete pfifo_fast with invalid handle
./tdc.py -c plug
ok 1 3289 - Create PLUG with default setting
ok 2 0917 - Create PLUG with block setting
ok 3 483b - Create PLUG with release setting
ok 4 4995 - Create PLUG with release_indefinite setting
ok 5 389c - Create PLUG with limit setting
ok 6 384a - Delete PLUG with valid handle
ok 7 439a - Replace PLUG with limit setting
ok 8 9831 - Change PLUG with limit setting
./tdc.py -c sfb
ok 1 3294 - Create SFB with default setting
ok 2 430a - Create SFB with rehash setting
ok 3 3410 - Create SFB with db setting
ok 4 49a0 - Create SFB with limit setting
ok 5 1241 - Create SFB with max setting
ok 6 3249 - Create SFB with target setting
ok 7 30a9 - Create SFB with increment setting
ok 8 239a - Create SFB with decrement setting
ok 9 9301 - Create SFB with penalty_rate setting
ok 10 2a01 - Create SFB with penalty_burst setting
ok 11 3209 - Change SFB with rehash setting
ok 12 5447 - Show SFB class
./tdc.py -c sfq
ok 1 7482 - Create SFQ with default setting
ok 2 c186 - Create SFQ with limit setting
ok 3 ae23 - Create SFQ with perturb setting
ok 4 a430 - Create SFQ with quantum setting
ok 5 4539 - Create SFQ with divisor setting
ok 6 b089 - Create SFQ with flows setting
ok 7 99a0 - Create SFQ with depth setting
ok 8 7389 - Create SFQ with headdrop setting
ok 9 6472 - Create SFQ with redflowlimit setting
ok 10 8929 - Show SFQ class
./tdc.py -c skbprio
ok 1 283e - Create skbprio with default setting
ok 2 c086 - Create skbprio with limit setting
ok 3 6733 - Change skbprio with limit setting
ok 4 2958 - Show skbprio class
./tdc.py -c taprio
ok 1 ba39 - Add taprio Qdisc to multi-queue device (8 queues)
ok 2 9462 - Add taprio Qdisc with multiple sched-entry
ok 3 8d92 - Add taprio Qdisc with txtime-delay
ok 4 d092 - Delete taprio Qdisc with valid handle
ok 5 8471 - Show taprio class
ok 6 0a85 - Add taprio Qdisc to single-queue device
./tdc.py -c tbf
ok 1 6430 - Create TBF with default setting
ok 2 0518 - Create TBF with mtu setting
ok 3 320a - Create TBF with peakrate setting
ok 4 239b - Create TBF with latency setting
ok 5 c975 - Create TBF with overhead setting
ok 6 948c - Create TBF with linklayer setting
ok 7 3549 - Replace TBF with mtu
ok 8 f948 - Change TBF with latency time
ok 9 2348 - Show TBF class
./tdc.py -c teql
ok 1 84a0 - Create TEQL with default setting
ok 2 7734 - Create TEQL with multiple device
ok 3 34a9 - Delete TEQL with valid handle
ok 4 6289 - Show TEQL stats
---
v3: add config
v2: modify subject prefix
---
Zhengchao Shao (15):
selftests/tc-testing: add selftests for atm qdisc
selftests/tc-testing: add selftests for choke qdisc
selftests/tc-testing: add selftests for codel qdisc
selftests/tc-testing: add selftests for etf qdisc
selftests/tc-testing: add selftests for fq qdisc
selftests/tc-testing: add selftests for gred qdisc
selftests/tc-testing: add selftests for hhf qdisc
selftests/tc-testing: add selftests for pfifo_fast qdisc
selftests/tc-testing: add selftests for plug qdisc
selftests/tc-testing: add selftests for sfb qdisc
selftests/tc-testing: add selftests for sfq qdisc
selftests/tc-testing: add selftests for skbprio qdisc
selftests/tc-testing: add selftests for taprio qdisc
selftests/tc-testing: add selftests for tbf qdisc
selftests/tc-testing: add selftests for teql qdisc
tools/testing/selftests/tc-testing/config | 15 +
.../tc-testing/tc-tests/qdiscs/atm.json | 94 +++++
.../tc-testing/tc-tests/qdiscs/choke.json | 188 +++++++++
.../tc-testing/tc-tests/qdiscs/codel.json | 211 ++++++++++
.../tc-testing/tc-tests/qdiscs/etf.json | 117 ++++++
.../tc-testing/tc-tests/qdiscs/fq.json | 395 ++++++++++++++++++
.../tc-testing/tc-tests/qdiscs/gred.json | 164 ++++++++
.../tc-testing/tc-tests/qdiscs/hhf.json | 210 ++++++++++
.../tc-tests/qdiscs/pfifo_fast.json | 119 ++++++
.../tc-testing/tc-tests/qdiscs/plug.json | 188 +++++++++
.../tc-testing/tc-tests/qdiscs/sfb.json | 279 +++++++++++++
.../tc-testing/tc-tests/qdiscs/sfq.json | 232 ++++++++++
.../tc-testing/tc-tests/qdiscs/skbprio.json | 95 +++++
.../tc-testing/tc-tests/qdiscs/taprio.json | 135 ++++++
.../tc-testing/tc-tests/qdiscs/tbf.json | 211 ++++++++++
.../tc-testing/tc-tests/qdiscs/teql.json | 97 +++++
16 files changed, 2750 insertions(+)
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/atm.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/choke.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/etf.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/gred.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/pfifo_fast.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/plug.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/skbprio.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/tbf.json
create mode 100644 tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json
--
2.17.1
Hi.
First, I hope you and your relatives are doing well.
Normally, when a BPF ring buffer is full, producers cannot write any more and
need to wait for the consumer to read some data.
As a consequence, calling bpf_ringbuf_reserve() from eBPF code returns NULL.
This contribution adds a new flag to make a BPF ring buffer overwritable.
Perf ring buffers already implement an option to be overwritable. In order to
avoid data corruption, the data is written backward, see
commit 9ecda41acb97 ("perf/core: Add ::write_backward attribute to perf event").
This patch series re-uses the same idea from perf ring buffers but in BPF ring
buffers.
So, calling bpf_ringbuf_reserve() on an overwritable BPF ring buffer never
returns NULL.
As a consequence, the oldest data will be overwritten by the newest, so the
consumer will lose data.
Overwritable ring buffers are useful for BPF programs that are permanently
enabled but rarely read, only on demand, for example when a user requests an
investigation of a problem. We would like to use this in the Traceloop
project [1].
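For illustration, here is a minimal sketch of how a BPF program might declare
and use such a buffer; the map flag name BPF_F_OVERWRITE below is only a
placeholder for the new flag, not necessarily the UAPI name introduced by
this series:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 4096);
            __uint(map_flags, BPF_F_OVERWRITE); /* placeholder flag name */
    } events SEC(".maps");

    SEC("tracepoint/syscalls/sys_enter_execve")
    int trace_execve(void *ctx)
    {
            __u64 *e;

            /* On an overwritable buffer, reserve never returns NULL; the
             * oldest records are overwritten instead. The check is kept
             * for the regular, non-overwritable case. */
            e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
            if (!e)
                    return 0;

            *e = bpf_ktime_get_ns();
            bpf_ringbuf_submit(e, 0);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";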
The self test added in this series was tested and validated in a VM:
you@vm# ./share/linux/tools/testing/selftests/bpf/test_progs -t ringbuf_over
Can't find bpf_testmod.ko kernel module: -2
WARNING! Selftests relying on bpf_testmod.ko will be skipped.
#135 ringbuf_over_writable:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
You can also test the libbpf implementation by using the last patch of this
series which should be applied to iovisor/bcc:
you@home$ cd /path/to/iovisor/bcc
you@home$ git am -3 v2-0005-for-test-purpose-only-Add-toy-to-play-with-BPF-ri.patch
you@home$ cd /path/to/linux/tools/lib/bpf
you@home$ make -j$(nproc)
you@home$ cp libbpf.a /path/to/iovisor/bcc/libbpf-tools/.output
you@home$ cd /path/to/iovisor/bcc/libbpf-tools/
you@home$ make -j toy
# Start your VM and copy toy executable inside it.
root@vm-amd64:~# ./share/toy &
[1] 287
root@vm-amd64:~# for i in {1..16}; do ls > /dev/null; done
16
15
14
13
12
11
10
9
root@vm-amd64:~# ls > /dev/null && ls > /dev/null
18
17
As you can see, the first eight events are overwritten.
If you see any way to improve this contribution, feel free to share.
Changes since:
v1:
* Made producers write backward like the perf ring buffer, to avoid memory
corruption.
* Added libbpf implementation to consume all events available.
* Added selftest.
* Added documentation.
Francis Laniel (5):
bpf: Make ring buffer overwritable.
selftests: Add BPF overwritable ring buffer self tests.
docs/bpf: Add documentation for overwritable ring buffer.
libbpf: Add implementation to consume overwritable BPF ring buffer.
for test purpose only: Add toy to play with BPF ring.
...-only-Add-toy-to-play-with-BPF-ring-.patch | 147 ++++++++++++++++
Documentation/bpf/ringbuf.rst | 18 +-
include/uapi/linux/bpf.h | 3 +
kernel/bpf/ringbuf.c | 43 +++--
tools/include/uapi/linux/bpf.h | 3 +
tools/lib/bpf/ringbuf.c | 106 ++++++++++++
tools/testing/selftests/bpf/Makefile | 5 +-
.../bpf/prog_tests/ringbuf_overwritable.c | 158 ++++++++++++++++++
.../bpf/progs/test_ringbuf_overwritable.c | 61 +++++++
9 files changed, 531 insertions(+), 13 deletions(-)
create mode 100644 0001-for-test-purpose-only-Add-toy-to-play-with-BPF-ring-.patch
create mode 100644 tools/testing/selftests/bpf/prog_tests/ringbuf_overwritable.c
create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf_overwritable.c
Best regards and thank you in advance.
---
[1] https://github.com/kinvolk/traceloop
Traceloop was presented at LPC 2020 (https://lpc.events/event/7/contributions/667/)
--
2.25.1