Hi,
I've tried to build kselftests for several years now, but I always find the build broken, which makes me wonder if the instructions are broken or something. I follow the instructions in Documentation/dev-tools/kselftest.rst and start with "make -C tools/testing/selftests". Here are the errors I get on the upstream commit 16d72dd4891fecc1e1bf7ca193bb7d5b9804c038:
error: unable to create target: 'No available targets are compatible with triple "bpf"'
1 error generated.
Makefile:259: recipe for target 'elfdep' failed
Makefile:156: recipe for target 'all' failed
Makefile:106: recipe for target '/linux/tools/testing/selftests/bpf/libbpf.a' failed
test_execve.c:4:10: fatal error: cap-ng.h: No such file or directory
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/capabilities/test_execve' failed
gpio-mockup-chardev.c:20:10: fatal error: libmount.h: No such file or directory
<builtin>: recipe for target 'gpio-mockup-chardev' failed
fuse_mnt.c:17:10: fatal error: fuse.h: No such file or directory
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/memfd/fuse_mnt' failed
collect2: error: ld returned 1 exit status
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/mqueue/mq_open_tests' failed
reuseport_bpf_numa.c:24:10: fatal error: numa.h: No such file or directory
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/net/reuseport_bpf_numa' failed
mlock-random-test.c:8:10: fatal error: sys/capability.h: No such file or directory
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/vm/mlock-random-test' failed
Here is the full log:
https://gist.githubusercontent.com/dvyukov/47430636e160f297b657df5ba2efa82b/...
I have libelf-dev installed. Do I need to install something else? Or run some other command?
Thanks
Hi Dmitry,
On 6/11/19 4:30 AM, Dmitry Vyukov wrote:
error: unable to create target: 'No available targets are compatible with triple "bpf"'
1 error generated.
Makefile:259: recipe for target 'elfdep' failed
Makefile:156: recipe for target 'all' failed
Makefile:106: recipe for target '/linux/tools/testing/selftests/bpf/libbpf.a' failed
test_execve.c:4:10: fatal error: cap-ng.h: No such file or directory
These errors are due to missing dependencies. You will need:
libmount-dev
libcap-ng-dev
libelf-dev
and, for bpf to build, also clang.
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/capabilities/test_execve' failed
gpio-mockup-chardev.c:20:10: fatal error: libmount.h: No such file or directory
<builtin>: recipe for target 'gpio-mockup-chardev' failed
fuse_mnt.c:17:10: fatal error: fuse.h: No such file or directory
libfuse-dev is missing.
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/memfd/fuse_mnt' failed
collect2: error: ld returned 1 exit status
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/mqueue/mq_open_tests' failed
Needs libpopt-dev
reuseport_bpf_numa.c:24:10: fatal error: numa.h: No such file or directory
Needs libnuma-dev
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/net/reuseport_bpf_numa' failed
mlock-random-test.c:8:10: fatal error: sys/capability.h: No such file or directory
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/vm/mlock-random-test' failed
I have libelf-dev installed. Do I need to install something else? Or run some other command?
ii libelf-dev:amd 0.170-0.4ubu amd64 libelf1 development libraries and
ii libelf1:amd64 0.170-0.4ubu amd64 library to read and write ELF fil
All of the above built for me on Linux 5.2-rc4. Try installing all of these and let me know if you still see problems.
thanks,
-- Shuah
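(As a concrete sketch of the above, assuming a Debian/Ubuntu build host, the packages named so far would typically be pulled in with something like:

  $ sudo apt-get install libmount-dev libcap-ng-dev libelf-dev libfuse-dev libpopt-dev libnuma-dev clang

Exact package names can differ between releases; libcap-dev and the llvm packages come up later in the thread.)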
On Tue, Jun 11, 2019 at 5:16 PM shuah shuah@kernel.org wrote:
Hi Shuah,
Thanks for the quick reply!
I've installed these: libmount-dev, libcap-ng-dev, libfuse-dev, libpopt-dev and libnuma-dev. libelf-dev I already had, and for clang I switched to the distro-provided one.
This reduced the number of errors, but I still see some:
clang: error: unable to execute command: Broken pipe
clang: error: clang frontend command failed due to signal (use -v to see invocation)
Makefile:259: recipe for target 'elfdep' failed
Makefile:156: recipe for target 'all' failed
Makefile:106: recipe for target '/linux/tools/testing/selftests/bpf/libbpf.a' failed
timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
../../lib.mk:138: recipe for target '/linux/tools/testing/selftests/networking/timestamping/timestamping' failed
mlock-random-test.c:8:10: fatal error: sys/capability.h: No such file or directory
../lib.mk:138: recipe for target '/linux/tools/testing/selftests/vm/mlock-random-test' failed
Full log: https://gist.githubusercontent.com/dvyukov/5c334e7e7e136909cb66b23b9fb7d439/...
On 6/11/19 10:03 AM, Dmitry Vyukov wrote:
clang: error: unable to execute command: Broken pipe
clang: error: clang frontend command failed due to signal (use -v to see invocation)
Makefile:259: recipe for target 'elfdep' failed
Makefile:156: recipe for target 'all' failed
Makefile:106: recipe for target '/linux/tools/testing/selftests/bpf/libbpf.a' failed
Getting the bpf compile to work takes a few steps. If I remember correctly, you will need llvm as well. Here is what I have on my system:
ii libllvm6.0:amd 1:6.0-1ubunt amd64 Modular compiler and toolchain te
ii llvm 1:6.0-41~exp amd64 Low-Level Virtual Machine (LLVM)
ii llvm-6.0 1:6.0-1ubunt amd64 Modular compiler and toolchain te
ii llvm-6.0-dev 1:6.0-1ubunt amd64 Modular compiler and toolchain te
un llvm-6.0-doc <none> <none> (no description available)
ii llvm-6.0-runti 1:6.0-1ubunt amd64 Modular compiler and toolchain te
ii llvm-runtime 1:6.0-41~exp amd64 Low-Level Virtual Machine (LLVM),
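(For reference, a sketch of pulling in those LLVM pieces on a Debian/Ubuntu host, assuming the distro's 6.0 packages as listed above; names and versions differ by release:

  $ sudo apt-get install clang llvm llvm-6.0 llvm-6.0-dev llvm-6.0-runtime
)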
timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
../../lib.mk:138: recipe for target '/linux/tools/testing/selftests/networking/timestamping/timestamping' failed
mlock-random-test.c:8:10: fatal error: sys/capability.h: No such file or directory
Do you have libcap-dev installed?
ii libcap-dev:amd 1:2.25-1.2 amd64 POSIX 1003.1e capabilities (devel
ii libcap-ng-dev 0.7.7-3.1 amd64 Development and header files for
ii libcap-ng0:amd 0.7.7-3.1 amd64 An alternate POSIX capabilities l
ii libcap2:amd64 1:2.25-1.2 amd64 POSIX 1003.1e capabilities (libra
ii libcap2-bin 1:2.25-1.2 amd64 POSIX 1003.1e capabilities (utili
un libcap2-dev <none> <none> (no description available)
thanks,
-- Shuah
On Tue, Jun 11, 2019 at 9:20 PM shuah shuah@kernel.org wrote:
Do you have libcap-dev installed?
I've installed libcap-dev and that resolved the missing header.
I've also installed llvm llvm-6.0 llvm-6.0-dev llvm-6.0-doc libllvm6.0 llvm-6.0-runtime llvm-runtime, and that fixed the crashing compiler. But the bpf tests build was still failing due to a missing libelf. I had the library, though, so I went and removed some random files: tools/testing/selftests/bpf/{feature,FEATURE-DUMP.libbpf}. Don't ask me why these.
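(Those two paths look like the cached output of the tools/build feature-detection probes: FEATURE-DUMP.libbpf and the feature/ directory record whether libelf and friends were found at the time of the first build, so a result cached before libelf-dev was installed would keep reporting it as missing. Rather than deleting files by hand, a clean of the bpf selftests should presumably have the same effect, roughly:

  $ make -C tools/testing/selftests/bpf clean
  $ make -C tools/testing/selftests TARGETS=bpf
)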
I am now down to just 1 build error:
  CC       /usr/local/google/home/dvyukov/src/linux/tools/testing/selftests/bpf/str_error.o
timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
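(The SIOCGSTAMP failure looks like a userspace-header mismatch: the SIOCGSTAMP definition was being reshuffled in the uapi headers around this time, so a build against the distro's older /usr/include does not see it. Regenerating the in-tree headers first may help, though this is only a guess and untested here:

  $ make headers_install
  $ make -C tools/testing/selftests
)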
On Wed, Jun 12, 2019 at 10:51 AM Dmitry Vyukov dvyukov@google.com wrote:
Is this a non-fatal error? Usually when make produces errors, one expects that nothing was done and the build was aborted mid-way, but by now make does seem to produce some test binaries.
Reading the doc further, these commands seem to implicitly assume that the tests will run right on my host machine:
$ make -C tools/testing/selftests run_tests
$ make kselftest
Is that right? At least I don't see how it would be configured to run them somewhere else. Or does it use something like qemu by default to run the kernel under test? If it runs the tests on the host, it can't work for me: I don't have the test kernel installed and there is no way I can do this. Policy rules aside, this is a yet-untested kernel, so by installing it I would be risking losing my whole machine and all data...
What am I missing?
On Wed, Jun 12, 2019 at 11:09 AM Dmitry Vyukov dvyukov@google.com wrote:
Reading further, I see the "Install selftests" and "Running installed selftests" sections. Is that something I can use to copy the pre-built tests to the test machine? The sections don't spell it out, so I am just second-guessing. Or what is the purpose of installing?
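(From what I can tell, that is indeed what install is for: it gathers the built tests plus the runner scripts into one self-contained directory that can be copied to the target. A hedged sketch using kselftest_install.sh, which has to be run from the selftests directory and, I believe, wants an existing destination directory (the /tmp/kselftest path is just an example):

  $ mkdir -p /tmp/kselftest
  $ cd tools/testing/selftests
  $ ./kselftest_install.sh /tmp/kselftest
  # copy /tmp/kselftest to the test machine, then on the target:
  $ cd kselftest && ./run_kselftest.sh
)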
The "Running installed selftests" section says: "Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests".
What is the "Kselftest tarball"? Where does one get one? I don't see any mentions of "tarball" anywhere else in the doc.
On Wed, Jun 12, 2019 at 11:13 AM Dmitry Vyukov dvyukov@google.com wrote:
Running ./kselftest_install.sh I am getting:
/bin/sh: llvm-readelf: command not found
I can't find any package that would provide this. I happened to have a custom llvm build that has that binary, but I am interested in how I was supposed to get it, for the purposes of documentation and reuse of the instructions by others.
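(llvm-readelf is not in the 6.0 packages mentioned earlier; it only shows up in newer LLVM releases, so presumably the intended path is a newer llvm, either from the distro or from apt.llvm.org. The version and paths below are an assumption, not a tested recipe:

  $ sudo apt-get install llvm-8
  $ export PATH=/usr/lib/llvm-8/bin:$PATH
)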
After adding my llvm-readelf to PATH, I am then getting:
make[1]: *** No rule to make target 'emit_tests'. Stop.
Looks like an error, or is it?
It produced something in the output dir, so I copied that to the test machine and tried to run run_kselftest.sh there, but it failed too:
~/kselftest# ./run_kselftest.sh
./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found
./run_kselftest.sh: 4: .: Can't open ./kselftest/runner.sh
Is there some set of prerequisites that I am supposed to install there? Since the target may have a non-x86 arch and a custom distro, any additional dependency there may be very painful to get...
On Wed, Jun 12, 2019 at 11:19 AM Dmitry Vyukov dvyukov@google.com wrote:
Hi Shuah,
I am asking lots of questions, but I did not explain my motivation and end goal. I am trying to understand the overall state of kernel testing better: (1) whether there are working instructions for running kernel tests that I can give to a new team member or a new external kernel developer, and (2) if/how I can ask a kernel developer fixing a bug to add a regression test and ensure that it works. Note that in these cases the user may not have much specific expertise (any unsaid/implicit thing may be a showstopper), may not have infinite motivation/time (and may give up at the first excuse to do so), and may not have specific interest/expertise in the tested subsystem (e.g. a drive-by fix). So now I am trying to follow this route myself, documenting the steps here.
Back to: ./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found
dpkg on my host says that this binary comes from coreutils:
$ dpkg -S realpath
coreutils: /usr/bin/realpath
but installing coreutils does not help. It seems that I need to install realpath itself, but that package happens to be broken on my test distro:
# apt-get install realpath
Reading package lists... Done
Building dependency tree... Done
The following NEW packages will be installed: realpath
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 16.7 kB of archives.
After this operation, 115 kB of additional disk space will be used.
Err http://deb.debian.org/debian/ wheezy/main realpath amd64 1.18
  404 Not Found [IP: 151.101.120.204 80]
Failed to fetch http://deb.debian.org/debian/pool/main/r/realpath/realpath_1.18_amd64.deb 404 Not Found [IP: 151.101.120.204 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
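(realpath only became part of coreutils in later Debian releases; the wheezy userspace here predates that and the standalone package has since disappeared from the main mirror. A crude workaround, assuming the runner only needs realpath to resolve its own directory, is a shim on top of readlink -f, which wheezy's coreutils does provide:

  # cat > /usr/local/bin/realpath <<'EOF'
  #!/bin/sh
  exec readlink -f "$@"
  EOF
  # chmod +x /usr/local/bin/realpath
)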
I've switched to another distro (fortunately I had another one pre-built), and run_kselftest.sh started running tests. But now I have even more questions :)
1. Meta-question: the "Running a subset of selftests" section talks about "subsystems". How can I map a source file I changed in a drive-by fix to a subsystem? Say I changed net/ipv6/netfilter/nft_redir_ipv6.c or drivers/usb/c67x00/c67x00-drv.c; what subsystems do I need to run?
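(There does not seem to be anything automatic for this mapping. A rough approximation is to ask get_maintainer.pl which subsystem a file belongs to and then pick the matching directory names under tools/testing/selftests as TARGETS; the choice of "net netfilter" below is just an illustration, and a matching selftest directory may not exist for every subsystem:

  $ ./scripts/get_maintainer.pl -f net/ipv6/netfilter/nft_redir_ipv6.c
  $ make -C tools/testing/selftests TARGETS="net netfilter" run_tests
)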
2. All C tests seem to fail with:
# ./test_maps: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.26' not found (required by ./test_maps)
which means that my image is still unsuitable. How can I get an image that is suitable for running the tests?
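(One workaround for the glibc mismatch, when the target image cannot easily be refreshed, is to link the C tests statically on the build host. This is only a sketch and is not guaranteed to work for every test, since some set their own link flags:

  $ make -C tools/testing/selftests LDFLAGS=-static
)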
3. Lots of the tests that do run (probably shell tests) fail or are skipped with errors that are cryptic to me, like:
# Cannot find device "ip6gre11"
# selftests: [SKIP] Could not run test without the ip xdpgeneric support
# modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.1.0+/modules.dep.bin'
# selftests: bpf: test_tc_edt.sh # nc is not available not ok 40 selftests: bpf: test_tc_edt.sh
Say I either want to run tests for a specific subsystem because I am doing a drive-by fix (a typical newcomer/good-Samaritan scenario), or I want to run as many tests as possible (a typical CI scenario). Is there a way to bulk-satisfy all these prerequisites (configs, binaries and whatever else they are asking for)?
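(On the kernel-config side there is at least a bulk mechanism: many selftest directories ship a 'config' fragment listing the options they need, and those can be merged into the test kernel's .config, for example:

  $ ./scripts/kconfig/merge_config.sh -m .config tools/testing/selftests/*/config
  $ make olddefconfig

Userspace prerequisites such as ip, nc or the modules for the running kernel still have to come from the test image itself.)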
4. There is a test that consistently reboots my machine:
# selftests: breakpoints: step_after_suspend_test
[ 514.024889] PM: suspend entry (deep)
[ 514.025959] PM: Syncing filesystems ... done.
[ 514.051573] Freezing user space processes ... (elapsed 0.001 seconds) done.
[ 514.054140] OOM killer disabled.
[ 514.054764] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[ 514.057695] printk: Suspending console(s) (use no_console_suspend to debug)
early console in extract_kernel
input_data: 0x0000000007ddc2e9
input_len: 0x0000000002c26bf0
output: 0x0000000001000000
output_len: 0x0000000008492a48
kernel_total_size: 0x0000000009a26000
trampoline_32bit: 0x000000000009d000
Decompressing Linux... Parsing ELF... done.
Booting the kernel.
[ 0.000000] Linux version 5.0.0 (gcc version 7.3.0 (Debian 7.3.0-18)) #7 SMP PREEMPT Wed Jun 12 11:38:12 CEST 2019
Is it a bug in the test? In the kernel? How is this supposed to work, and what am I supposed to do with it?
5. There is a test that triggers a use-after-free:
[ 262.639848][ C1] BUG: KASAN: use-after-free in ip6gre_tunnel_lookup+0x1a27/0x1ae0
I went back to v5.0 (3+ months old), and I see that it's also the case there. Do you know if anybody is running these tests? With KMEMLEAK, LOCKDEP, etc.?
6. Do we know what code coverage these tests currently achieve? What's covered and what's not, overall and per subsystem?
Thanks
On Wed, Jun 12, 2019 at 1:05 PM Dmitry Vyukov dvyukov@google.com wrote:
On Wed, Jun 12, 2019 at 11:19 AM Dmitry Vyukov dvyukov@google.com wrote:
On Tue, Jun 11, 2019 at 9:20 PM shuah shuah@kernel.org wrote:
On 6/11/19 10:03 AM, Dmitry Vyukov wrote: > On Tue, Jun 11, 2019 at 5:16 PM shuah shuah@kernel.org wrote: >> >> Hi Dmitry, >> >> On 6/11/19 4:30 AM, Dmitry Vyukov wrote: >>> Hi, >>> >>> I've tried to build kselftests for several years now, but I always >>> find the build broken. Which makes me wonder if the instructions are >>> broken or something. I follow the instructions in >>> Documentation/dev-tools/kselftest.rst and start with "make -C >>> tools/testing/selftests". Here is the errors I get on the upstream >>> commit 16d72dd4891fecc1e1bf7ca193bb7d5b9804c038: >>>> error: unable to create target: 'No available targets are compatible >>> with triple "bpf"' >>> 1 error generated. >>> Makefile:259: recipe for target 'elfdep' failed >>> Makefile:156: recipe for target 'all' failed >>> Makefile:106: recipe for target >>> '/linux/tools/testing/selftests/bpf/libbpf.a' failed >>> test_execve.c:4:10: fatal error: cap-ng.h: No such file or directory >> >> These errors are due to missing dependencies. You will need >> >> libmount-dev >> libcap-ng-dev >> libelf-dev >> >> for bpf to build and also clang >> >>> ../lib.mk:138: recipe for target >>> '/linux/tools/testing/selftests/capabilities/test_execve' failed >>> gpio-mockup-chardev.c:20:10: fatal error: libmount.h: No such file or directory > <builtin>: recipe for target 'gpio-mockup-chardev' failed >>> fuse_mnt.c:17:10: fatal error: fuse.h: No such file or directory >> >> libfuse-dev is missing. >> >>> ../lib.mk:138: recipe for target >>> '/linux/tools/testing/selftests/memfd/fuse_mnt' failed >>> collect2: error: ld returned 1 exit status >>> ../lib.mk:138: recipe for target >>> '/linux/tools/testing/selftests/mqueue/mq_open_tests' failed >> >> Needs libpopt-dev >> >>> reuseport_bpf_numa.c:24:10: fatal error: numa.h: No such file or directory >> >> Needs libnuma-dev >> >>> ../lib.mk:138: recipe for target >>> '/linux/tools/testing/selftests/net/reuseport_bpf_numa' failed >>> mlock-random-test.c:8:10: fatal error: sys/capability.h: No such file >>> or directory > ../lib.mk:138: recipe for target >>> '/linux/tools/testing/selftests/vm/mlock-random-test' failed >>> >>> Here is full log: >>> >>> https://gist.githubusercontent.com/dvyukov/47430636e160f297b657df5ba2efa82b/... >>> >>> I have libelf-dev installed. Do I need to install something else? Or >>> run some other command? >> >> ii libelf-dev:amd 0.170-0.4ubu amd64 libelf1 development >> libraries and >> ii libelf1:amd64 0.170-0.4ubu amd64 library to read and write >> ELF fil >> >> >> All of the above built for me on Linux 5.2-rc4. Try installing all of >> these and let me know if you still see problems. > > > Hi Shuah, > > Thanks for quick reply! > > I've installed these: libmount-dev libcap-ng-dev libfuse-dev > libpopt-dev libnuma-dev. > libelf-dev I already had. And for clang I switched to distro-provided one. > > This reduced number of errors, but I still see some: > > clang: error: unable to execute command: Broken pipe > clang: error: clang frontend command failed due to signal (use -v to > see invocation) > Makefile:259: recipe for target 'elfdep' failed > Makefile:156: recipe for target 'all' failed > Makefile:106: recipe for target > '/linux/tools/testing/selftests/bpf/libbpf.a' failed
Getting bpf compile to work take a few steps. If I remember correctly, You will need llvm as well. Here is what I have on my system:
ii libllvm6.0:amd 1:6.0-1ubunt amd64 Modular compiler and toolchain te ii llvm 1:6.0-41~exp amd64 Low-Level Virtual Machine (LLVM) ii llvm-6.0 1:6.0-1ubunt amd64 Modular compiler and toolchain te ii llvm-6.0-dev 1:6.0-1ubunt amd64 Modular compiler and toolchain te un llvm-6.0-doc <none> <none> (no description available) ii llvm-6.0-runti 1:6.0-1ubunt amd64 Modular compiler and toolchain te ii llvm-runtime 1:6.0-41~exp amd64 Low-Level Virtual Machine (LLVM),
> timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in > this function); did you mean ‘SIOCGSTAMPNS’? > ../../lib.mk:138: recipe for target > '/linux/tools/testing/selftests/networking/timestamping/timestamping' > failed > mlock-random-test.c:8:10: fatal error: sys/capability.h: No such file > or directory
Do you have libcap-dev installed?
ii libcap-dev:amd 1:2.25-1.2 amd64 POSIX 1003.1e capabilities (devel ii libcap-ng-dev 0.7.7-3.1 amd64 Development and header files for ii libcap-ng0:amd 0.7.7-3.1 amd64 An alternate POSIX capabilities l ii libcap2:amd64 1:2.25-1.2 amd64 POSIX 1003.1e capabilities (libra ii libcap2-bin 1:2.25-1.2 amd64 POSIX 1003.1e capabilities (utili un libcap2-dev <none> <none> (no description available)
I've installed libcap-dev and resolved the missing header.
I've also installed llvm llvm-6.0 llvm-6.0-dev llvm-6.0-doc libllvm6.0 llvm-6.0-runtime llvm-runtime and it fixed crashing compiler. But bpf tests build was still failing due to missing libelf. But I had the library, so I went and removed some random files: tools/testing/selftests/bpf/{feature,FEATURE-DUMP.libbpf}. Don't ask me why these.
I am now down to just 1 build error:
CC /usr/local/google/home/dvyukov/src/linux/tools/testing/selftests/bpf/str_error.o timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
Is this a non-fatal error? Usually when make produces errors, one expects that nothing is done and it was aborted mid-way. But make seems to produce some test binaries by now.
Reading the doc further, these command seem to implicitly assume that the tests will run right on my host machine:
$ make -C tools/testing/selftests run_tests $ make kselftest
Is it right? At least I don't see how it's configured to run them somewhere else? Or it uses something like qemu by default to run the kernel under test? If it runs the tests on the host, it can't work for me. I don't have the test kernel installed and there is no way I can do this. Policy rules aside, this is yet untested kernel, so by installing it I am risking losing my whole machine and all data...
What am I missing?
Reading further. "Install selftests" and "Running installed selftests" sections. Is it something I can use to copy the pre-built tests to the test machine? The sections don't spell it, so I am just trying to second guess. Or what's the purpose of installing?
The "Running installed selftests" section says: "Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests".
What is the "Kselftest tarball"? Where does one get one? I don't see any mentions of "tarball" anywhere else in the doc.
Running ./kselftest_install.sh I am getting:
/bin/sh: llvm-readelf: command not found
I can't find any package that would provide this. I happened to have a custom llvm build that has that binary, but I am interested how I was supposed to get this for the purposes of documentation and reuse of instructions by others.
After adding my llvm-readelf to PATH, I am then getting:
make[1]: *** No rule to make target 'emit_tests'. Stop.
Looks like an error, or is it?
It produced something in the output dir, so copied that to the test machine and tried to run run_kselftest.sh there, but it failed too:
~/kselftest# ./run_kselftest.sh ./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found ./run_kselftest.sh: 4: .: Can't open ./kselftest/runner.sh
Is there some kind of prerequisites that I am supposed to install there? Since the target may have non-x86 arch and a custom distro, any additional dependency there may be very painful to get...
Hi Shuah,
I am asking lots of questions, but I did not provide my motivation and end goal. I am trying to understand overall state of the kernel testing better and understand (1) if there are working instructions to run kernel testing that I can give to a new team member or a new external kernel developer, (2) if/how I can ask a kernel developer fixing a bug to add a regression test and ensure that it works. Note in these cases a user may not have lots of specific expertise (e.g. any unsaid/implicit thing may be a showstopper) and/or don't have infinite motivation/time (may give up given a single excuse to do so) and/or don't have specific interest/expertise in the tested subsystem (e.g. a drive-by fix). So now I am trying to follow this route myself, documenting steps here.
Back to: ./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found
dpkg on my host says that this binary comes from coreutils:
$ dpkg -S realpath
coreutils: /usr/bin/realpath
but installing coreutils does not help. It seems that I need to install realpath itself. But that package happens to be broken on my test distro:
# apt-get install realpath
Reading package lists... Done
Building dependency tree... Done
The following NEW packages will be installed: realpath
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 16.7 kB of archives.
After this operation, 115 kB of additional disk space will be used.
Err http://deb.debian.org/debian/ wheezy/main realpath amd64 1.18
  404 Not Found [IP: 151.101.120.204 80]
Failed to fetch http://deb.debian.org/debian/pool/main/r/realpath/realpath_1.18_amd64.deb  404 Not Found [IP: 151.101.120.204 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
I've switched to another distro (fortunately I had another one pre-built), and run_kselftest.sh started running tests. But now I have even more questions :)
- Meta-question: the "Running a subset of selftests" section talks about "subsystems". How can I map a source file I changed in a drive-by fix to a subsystem? Say, if I changed net/ipv6/netfilter/nft_redir_ipv6.c or drivers/usb/c67x00/c67x00-drv.c, what subsystems do I need to run?
- All C tests seem to fail with:
# ./test_maps: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.26' not found (required by ./test_maps)
Which means that my image is still unsuitable. How can I get an image that is suitable for running the tests?
- Lots of the tests that do run (probably the shell tests) fail or are skipped with errors that are cryptic to me, like:
# Cannot find device "ip6gre11"
# selftests: [SKIP] Could not run test without the ip xdpgeneric support
# modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.1.0+/modules.dep.bin'
# selftests: bpf: test_tc_edt.sh
# nc is not available
not ok 40 selftests: bpf: test_tc_edt.sh
Say, I either want to run tests for a specific subsystem because I am doing a drive-by fix (a typical newcomer/good-Samaritan scenario), or I want to run as many tests as possible (a typical CI scenario). Is there a way to bulk-satisfy all these prerequisites (configs, binaries, and whatever else they ask for)?
- There is a test that consistently reboots my machine:
# selftests: breakpoints: step_after_suspend_test
[  514.024889] PM: suspend entry (deep)
[  514.025959] PM: Syncing filesystems ... done.
[  514.051573] Freezing user space processes ... (elapsed 0.001 seconds) done.
[  514.054140] OOM killer disabled.
[  514.054764] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[  514.057695] printk: Suspending console(s) (use no_console_suspend to debug)
early console in extract_kernel
input_data: 0x0000000007ddc2e9
input_len: 0x0000000002c26bf0
output: 0x0000000001000000
output_len: 0x0000000008492a48
kernel_total_size: 0x0000000009a26000
trampoline_32bit: 0x000000000009d000
Decompressing Linux... Parsing ELF... done.
Booting the kernel.
[    0.000000] Linux version 5.0.0 (gcc version 7.3.0 (Debian 7.3.0-18)) #7 SMP PREEMPT Wed Jun 12 11:38:12 CEST 2019
Is it a bug in the test? in the kernel? Or how is this supposed to work/what am I supposed to do with this?
- There is a test that triggers a use-after-free:
[ 262.639848][ C1] BUG: KASAN: use-after-free in ip6gre_tunnel_lookup+0x1a27/0x1ae0
I went back to v5.0 (3+ months old), and I see that it's also the case there. Do you know if anybody is running these tests? With KMEMLEAK, LOCKDEP, etc.?
- Do we know what code coverage these tests currently achieve? What's covered? What's not? Overall percentage, per subsystem, etc.?
Thanks
I've deleted the test that caused the reboot for now, and the run_kselftest.sh script finished running, but I can't understand the result. Did it all pass? Or did something fail? What failed? The output is as follows:
... screens of output ...
# make swap with zram device(s)
[ 5335.215175] Adding 1020k swap on /dev/zram0.  Priority:-2 extents:1 across:1020k SSFS
# done with /dev/zram0
# zram making zram mkswap and swapon: OK
# zram swapoff: OK
# zram cleanup
[ 5335.420825] zram0: detected capacity change from 1048576 to 0
# zram02 : [PASS]
ok 1 selftests: zram: zram.sh
~/kselftest#
docs say:
"The above commands by default run the tests and print full pass/fail report. Kselftest supports "summary" option to make it easier to understand the test results.... $ make summary=1 kselftest "
But this is for make. How can I understand the result with run_kselftest.sh?
I've deleted the test that caused reboot for now and the
run_kselftest.sh script finished running, but I can't understand the result. Is it all passed? Or something failed? What failed? The output is as follows:
... screens of output ... # make swap with zram device(s) [ 5335.215175] Adding 1020k swap on /dev/zram0. Priority:-2 extents:1 across:1020k SSFS # done with /dev/zram0 # zram making zram mkswap and swapon: OK # zram swapoff: OK # zram cleanup [ 5335.420825] zram0: detected capacity change from 1048576 to 0 # zram02 : [PASS] ok 1 selftests: zram: zram.sh ~/kselftest#
Hi Dmitry,
This is the 6th email from you in a span of 3 hours! I am just going to respond to this last one. Please try to summarize your questions instead of sending an email storm, so it will be easier to parse and more productive for both of us.
I am not sure what you are asking here. kselftest has a bunch of tests, and you have to look at individual test results to see if they passed or failed. If you are looking for an aggregate result, kselftest doesn't keep track of that.
docs say:
"The above commands by default run the tests and print full pass/fail report. Kselftest supports "summary" option to make it easier to understand the test results.... $ make summary=1 kselftest "
If you want to build and run tests on the same test system, use the following commands after building the kernel and rebooting into the newly installed kernel:
make kselftest
or
make -C tools/testing/selftests run_tests
If you want to just build and install to the default location, run:
make -C tools/testing/selftests install
This installs under tools/testing/selftests/install and creates run_kselftest.sh
You can specify INSTALL_PATH as follows:
export INSTALL_PATH=/tmp/kselftest; make -C tools/testing/selftests install
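Putting it together, a build-machine to test-machine flow could look roughly like this (the test machine name and paths are placeholders, adjust to your setup):

$ make -C tools/testing/selftests
$ export INSTALL_PATH=/tmp/kselftest
$ make -C tools/testing/selftests install
$ scp -r /tmp/kselftest testbox:

Then, on the test machine, after booting the kernel under test:

$ cd ~/kselftest && ./run_kselftest.sh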
I am seeing an error
/bin/sh: 1: llvm-readelf: not found
make[1]: *** No rule to make target 'emit_tests'. Stop.
chmod u+x /tmp/kselftest/run_kselftest.sh
in both of the above. Something broke and I will look into it. This doesn't break the install or the run_kselftest.sh generation, though.
About summary option:
make -C tools/testing/selftests/ summary=1 install should probably generate a summary from the emit_tests target, but I am not sure if it is working.
thanks, -- Shuah
On Wed, Jun 12, 2019 at 6:45 PM shuah shuah@kernel.org wrote:
Hi Dmitry,
This is the 6th email from you in a span of 3 hours! I am just going to respond this last one. Please try to summarize your questions instead of sending email storm, so it will be easier to parse and more productive for both of us.
Hi Shuah,
Sorry for that. Let me combine all current questions in a more structured way.
My motivation: I am trying to understand what it takes to run/add kernel tests, in particular for the purpose of providing working instructions for running kernel tests to a new team member or a new external kernel developer, and whether it's feasible to ask a kernel developer fixing a bug to add a regression test and ensure that it works. Note that in these cases a user may not have lots of specific expertise (e.g. any unsaid/implicit thing may be a showstopper), may not have infinite motivation/time (may give up given a single excuse to do so), and may not have specific interest/expertise in the tested subsystem (e.g. a drive-by fix). So now I am trying to follow this route myself, documenting the steps.
1. You suggested installing a bunch of packages. That helped to some degree. Is there a way to figure out what packages one needs to install to build the tests, other than asking you?
2. The build of the bpf tests was broken even after installing all the required packages. Deleting some random files helped (tools/testing/selftests/bpf/{feature,FEATURE-DUMP.libbpf}). Is this something to fix in kselftests? Deleting random files was a chaotic action which I can't explain to anybody.
3. I am still getting 1 build error:
CC /usr/local/google/home/dvyukov/src/linux/tools/testing/selftests/bpf/str_error.o
timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
What should I do to fix this?
4. Are individual test errors supposed to be fatal? Or can I just ignore a single error and proceed? I've tried to proceed, but I am not sure if I will get some unexplainable errors later because of that. By default I would assume that any errors during make are fatal.
5. The instructions on running tests:
$ make -C tools/testing/selftests run_tests
$ make kselftest
Do they assume that the tests will run right on my host machine? It's not stated/explained anywhere, but I don't see how "make kselftest" can use my usual setup, because it doesn't know about it. I cannot run tests on the host. Policy rules aside, this is an as-yet-untested kernel, so by installing it I am risking losing my whole machine. Reading further, the "Install selftests" and "Running installed selftests" sections seem to be a way to run tests on another machine. Is that correct? Are there any other options? There seem to be a bunch of implicit unsaid things, so I am asking in case I am missing some even simpler way to run the tests. Or, otherwise, what is the purpose of "installing" tests?
6. The "Running installed selftests" section says: "Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests".
What is the "Kselftest tarball"? Where does one get one? I don't see any mentions of "tarball" anywhere else in the doc.
7. What image am I supposed to use to run kselftests? Say my goal is either running as many tests as possible (a CI scenario) or running tests for a specific subsystem (a drive-by fix scenario). None of the images that I have seem to be suitable. One fails with:
./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found
and there is no clear path to fixing this. After I guessed the right package to install, it turned out to be broken in the distro. In another image all C programs fail to run with:
./test_maps: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.26'
How is one supposed to get an image suitable for running kselftests?
8. Lots of tests fail or are skipped with errors that are cryptic to me, like:
# Cannot find device "ip6gre11"
# selftests: [SKIP] Could not run test without the ip xdpgeneric support
# modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.1.0+/modules.dep.bin'
# selftests: bpf: test_tc_edt.sh
# nc is not available
not ok 40 selftests: bpf: test_tc_edt.sh
Say, I either want to run tests for a specific subsystem because I am doing a drive-by fix (a typical newcomer/good-Samaritan scenario), or I want to run as many tests as possible (a typical CI scenario). Is there a way to bulk-satisfy all these prerequisites (configs, binaries, and whatever else they ask for)?
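(For the drive-by case, the closest thing I know of is scripts/get_maintainer.pl, but that maps a changed file to maintainers rather than to kselftest targets:

$ ./scripts/get_maintainer.pl -f net/ipv6/netfilter/nft_redir_ipv6.c

so it still leaves me guessing which test target to run.)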
9. There is a test somewhere in the middle that consistently reboots my machine:
# selftests: breakpoints: step_after_suspend_test
[  514.024889] PM: suspend entry (deep)
[  514.025959] PM: Syncing filesystems ... done.
[  514.051573] Freezing user space processes ... (elapsed 0.001 seconds) done.
[  514.054140] OOM killer disabled.
[  514.054764] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[  514.057695] printk: Suspending console(s) (use no_console_suspend to debug)
early console in extract_kernel
...
Is it a bug in the test? in the kernel? Or how is this supposed to work/what am I supposed to do with this?
10. Do you know if anybody is running kselftests? Running as in running continuously, noticing new failures, reporting those failures, keeping the tests green, etc. I am asking because one of the tests triggers a use-after-free, and I checked that it was the same 3+ months ago. And I have some vague memories of trying to run kselftests 3 or so years ago, and there was a bunch of use-after-frees as well.
11. Do we know what code coverage kselftests currently achieve? What's covered? What's not? Overall percentage, per subsystem, etc.?
12. I am asking about the aggregate result because that's usually the first thing anybody needs (both devs testing a change and a CI). You said that kselftest does not keep track of the aggregate result. So the intended usage is always storing all output to a file and then grepping it for "[SKIP]" and "[FAIL]". Is that correct?
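I.e., is the intended workflow something along these lines (just my guess at how it is meant to be used)?

$ ./run_kselftest.sh 2>&1 | tee kselftest.log
$ grep -E 'not ok|\[FAIL\]|\[SKIP\]' kselftest.log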
Thanks in advance for bearing with me.
On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
On Wed, Jun 12, 2019 at 6:45 PM shuah shuah@kernel.org wrote:
Hi Dmitry,
This is the 6th email from you in a span of 3 hours! I am just going to respond this last one. Please try to summarize your questions instead of sending email storm, so it will be easier to parse and more productive for both of us.
Hi Shuah,
Sorry for that. Let me combine all current questions in a more structured way.
My motivation: I am trying to understand what does it take to run/add kernel tests in particular for the purpose of providing working instructions to run kernel test to a new team member or a new external kernel developer, and if it's feasible to ask a kernel developer fixing a bug to add a regression test and ensure that it works. Note in these cases a user may not have lots of specific expertise (e.g. any unsaid/implicit thing may be a showstopper) and/or don't have infinite motivation/time (may give up given a single excuse to do so) and/or don't have specific interest/expertise in the tested subsystem (e.g. a drive-by fix). So now I am trying to follow this route myself, documenting steps.
- You suggested to install a bunch of packages. That helped to some
degree. Is there a way to figure out what packages one needs to install to build the tests other than asking you?
I have to go through discovery at times when new tests get added. I consider this a part of being an open source developer: figuring out dependencies for compiling and running. I don't have a magic answer for you, and there is no way to make sure all dependencies will be documented.
- Build of bpf tests was broken after installing all required
packages. It helped to delete some random files (tools/testing/selftests/bpf/{feature,FEATURE-DUMP.libbpf}). Is it something to fix in kselftests? Deleting random files was a chaotic action which I can't explain to anybody.
I am still getting 1 build error:
CC /usr/local/google/home/dvyukov/src/linux/tools/testing/selftests/bpf/str_error.o
timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
What should I do to fix this?
I am not seeing that on my system. I suspect you are still missing some packages and/or headers.
- Are individual test errors are supposed to be fatal? Or I can just
ignore a single error and proceed?
Individual test errors aren't fatal; the run completes and reports the errors from the individual tests.
I've tried to proceed, but I am not sure if I will get some unexplainable errors later because of that. By default I would assume that any errors during make are fatal.
Kselftest is a suite of developer tests. Please read the documentation.
The instructions on running tests:
$ make -C tools/testing/selftests run_tests
$ make kselftest
Do they assume that the tests will run right on my host machine? It's not stated/explained anywhere, but I don't see how "make kselftest" can use my usual setup because it don't know about it.
You have to tailor it to your environment. This is really for kernel developers and test rings that routinely test kernels under development.
I cannot run tests on the host. Policy rules aside, this is yet untested kernel, so by installing it I am risking losing my whole machine.
This is just like running kernel make. Build happens on the system. The idea is that kernel developers use these tests to test their code.
Reading further, "Install selftests" and "Running installed selftests" sections seem to be a way to run tests on another machine. Is it correct? Are there any other options? There seems to be a bunch of implicit unsaid things, so I am asking in case I am missing some even simpler way to run tests. Or otherwise, what is the purpose of "installing" tests?
- The "Running installed selftests" section says:
"Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests".
Right. You have to generate it, as documented in kselftest.rst. kselftest_install.sh will install the compiled tests and run_kselftest.sh.
You can run gen_kselftest_tar.sh to create a tarball and unpack it on your test system.
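For example, something like this (if I remember the script's arguments correctly, it takes the tarball format):

$ cd tools/testing/selftests
$ ./gen_kselftest_tar.sh targz

Then copy the resulting tarball to the test system, unpack it, and run run_kselftest.sh from the unpacked directory.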
What is the "Kselftest tarball"? Where does one get one? I don't see any mentions of "tarball" anywhere else in the doc.
Please see above. You can generate the tarball yourself using "tar".
- What image am I supposed to use to run kselftests? Say, my goal is
either running as many tests as possible (CI scenario), or tests for a specific subsystem (a drive-by fix scenario). All images that I have do not seem to be suitable. One is failing with: ./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found And there is no clear path to fix this. After I guessed the right package to install, it turned out to be broken in the distro. In another image all C programs fail to run with: ./test_maps: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.26'
How is one supposed to get an image suitable for running kselftests?
When you say image, what is an image in this context?
You build a new kernel, install it, boot the new kernel, and run the selftests. If your kernel build system is different from your test system, then you build and install the kernel, build and install kselftest, and copy them over to the test system.
- Lots of tests fail/skipped with some cryptic for me errors like:
# Cannot find device "ip6gre11"
# selftests: [SKIP] Could not run test without the ip xdpgeneric support
Right, that means the kernel doesn't support the feature.
# modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.1.0+/modules.dep.bin'
You don't have the module built-in.
# selftests: bpf: test_tc_edt.sh # nc is not available not ok 40 selftests: bpf: test_tc_edt.sh
Say, I either want to run tests for a specific subsystem because I am doing a drive-by fix (a typical newcomer/good Samaritan scenario), or I want to run as many tests as possible (a typical CI scenario). Is there a way to bulk satisfy all these prerequisite (configs, binaries and whatever they are asking for)?
- There is a test somewhere in the middle that consistently reboots my machine:
# selftests: breakpoints: step_after_suspend_test [ 514.024889] PM: suspend entry (deep) [ 514.025959] PM: Syncing filesystems ... done. [ 514.051573] Freezing user space processes ... (elapsed 0.001 seconds) done. [ 514.054140] OOM killer disabled. [ 514.054764] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done. [ 514.057695] printk: Suspending console(s) (use no_console_suspend to debug) early console in extract_kernel ...
Yes. Some tests require a reboot, and you will want to avoid those if you don't want to run them. Please look at kselftest.rst.
Is it a bug in the test? in the kernel? Or how is this supposed to work/what am I supposed to do with this?
- Do you know if anybody is running kselftests? Running as in
running continuously, noticing new failures, reporting these failures, keeping them green, etc. I am asking because one of the tests triggers a use-after-free and I checked it was the same 3+ months ago. And I have some vague memories of trying to run kselftests 3 or so years ago, and there was a bunch of use-after-free's as well.
Yes, the Linaro test rings run them, and kernel developers do. I am cc'ing Naresh and Anders to help with tips on how they run the tests in their environment. They have several test systems on which they install and run the tests routinely on all stable releases.
Naresh and Anders! Can you share your process for running kselftest in the Linaro test farm? Thanks in advance.
- Do we know what's the current code coverage achieved by kselftests?
What's covered? What's not? Overall percent/per-subsystem/etc?
No idea.
- I am asking about the aggregate result, because that's usually the
first thing anybody needs (both devs testing a change and a CI). You said that kselftest does not keep track of the aggregate result. So the intended usage is always storing all output to a file and then grepping it for "[SKIP]" and "[FAIL]". Is it correct?
As I explained, Kselftest will not give you the aggregate result.
thanks, -- Shuah
Hello!
On Wed, 12 Jun 2019 at 14:32, shuah shuah@kernel.org wrote:
On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
[...]
- You suggested to install a bunch of packages. That helped to some
degree. Is there a way to figure out what packages one needs to install to build the tests other than asking you?
I have to go through discovery at times when new tests get added. I consider this a part of being a open source developer figuring out dependencies for compiling and running. I don't have a magic answer for you and there is no way to make sure all dependencies will be documented.
This is something we, as users of Kselftests, would very much like to see improved. We also go by trial and error to find out what is missing, but keeping up with new tests or subsystems is often difficult, and things tend to remain broken (in usage) for some time, until we have the resources to look into it and fix it. The config fragments are an excellent example of how the test developers and the framework complement each other to make things work. Even documenting dependencies would go a long way as a starting point, but I do believe that the test writers should do that, rather than the users having to figure out everything that is needed to run their tests.
Maybe a precheck() on the tests in order to ensure that the needed binaries are around?
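Something as simple as this would already help; a minimal sketch (the helper and the convention of sourcing it are hypothetical, not an existing kselftest API; exit code 4 is, as far as I know, what the kselftest runner treats as SKIP):

# precheck: skip the test if any required binary is missing
precheck() {
	for bin in "$@"; do
		if ! command -v "$bin" > /dev/null 2>&1; then
			echo "SKIP: required binary '$bin' not found"
			exit 4
		fi
	done
}

# example usage at the top of a network test script:
precheck ip tc nc ethtool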
For what it's worth, this is the list of run-time dependency packages for OpenEmbedded: bash bc ethtool fuse-utils iproute2 iproute2-tc iputils-ping iputils-ping6 ncurses perl sudo python3-argparse python3-datetime python3-json python3-pprint python3-subprocess util-linux-uuidgen cpupower glibc-utils. We are probably missing a few.
[...]
- Do you know if anybody is running kselftests? Running as in
running continuously, noticing new failures, reporting these failures, keeping them green, etc. I am asking because one of the tests triggers a use-after-free and I checked it was the same 3+ months ago. And I have some vague memories of trying to run kselftests 3 or so years ago, and there was a bunch of use-after-free's as well.
Yes Linaro test rings run them and kernel developers do. I am cc'ing Naresh and Anders to help with tips on how they run tests in their environment. They have several test systems that they install tests and run tests routine on all stable releases.
Naresh and Anders! Can you share your process for running kselftest in Linaro test farm. Thanks in advance.
They're both in time zones where it's better to be sleeping at the moment, so I'll let them chime in with more info tomorrow (their time). I can share that we, as part of LKFT [1], run Kselftests with Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on arm, aarch64, x86, and x86-64, *very* often: Our test counter recently exceeded 5 million! You can see today's mainline results of Kselftests [2] and all tests therein.
We do not build our kernels with KASAN, though, so our test runs don't exhibit that bug.
Greetings!
Daniel Díaz daniel.diaz@linaro.org
[1] https://lkft.linaro.org/ [2] https://qa-reports.linaro.org/lkft/linux-mainline-oe/build/v5.2-rc4-20-gaa72...
On Thu, 13 Jun 2019 at 02:43, Daniel Díaz daniel.diaz@linaro.org wrote:
Hello!
On Wed, 12 Jun 2019 at 14:32, shuah shuah@kernel.org wrote:
On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
[...]
- You suggested to install a bunch of packages. That helped to some
degree. Is there a way to figure out what packages one needs to install to build the tests other than asking you?
I have to go through discovery at times when new tests get added. I consider this a part of being a open source developer figuring out dependencies for compiling and running. I don't have a magic answer for you and there is no way to make sure all dependencies will be documented.
This is something we, as users of Kselftests, would very much like to see improved. We also go by trial-and-error finding out what is missing, but keeping up with the new tests or subsystems is often difficult and tend to remain broken (in usage) for some time, until we have the resources to look into that and fix it. The config fragments is an excellent example of how the test developers and the framework complement each other to make things work. Even documenting dependencies would go a long way, as a starting point, but I do believe that the test writers should do that and not the users go figure out what all is needed to run their tests.
Maybe a precheck() on the tests in order to ensure that the needed binaries are around?
For what it's worth, this is the list of run-time dependencies package for OpenEmbedded: bash bc ethtool fuse-utils iproute2 iproute2-tc iputils-ping iputils-ping6 ncurses perl sudo python3-argparse python3-datetime python3-json python3-pprint python3-subprocess util-linux-uuidgen cpupower glibc-utils. We are probably missing a few.
[...]
- Do you know if anybody is running kselftests? Running as in
running continuously, noticing new failures, reporting these failures, keeping them green, etc. I am asking because one of the tests triggers a use-after-free and I checked it was the same 3+ months ago. And I have some vague memories of trying to run kselftests 3 or so years ago, and there was a bunch of use-after-free's as well.
Yes Linaro test rings run them and kernel developers do. I am cc'ing Naresh and Anders to help with tips on how they run tests in their environment. They have several test systems that they install tests and run tests routine on all stable releases.
Naresh and Anders! Can you share your process for running kselftest in Linaro test farm. Thanks in advance.
They're both in time zones where it's better to be sleeping at the moment, so I'll let them chime in with more info tomorrow (their time). I can share that we, as part of LKFT [1], run Kselftests with Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on arm, aarch64, x86, and x86-64, *very* often: Our test counter recently exceeded 5 million! You can see today's mainline results of Kselftests [2] and all tests therein.
Thanks Daniel.
In the recent past we have found kernel oopses, bugs, and warnings while running the kselftest suite in our environment. It is worth running them in CI. Linaro's test farm has been reporting these issues to kernel subsystem maintainers and test authors, and they have investigated and fixed them.
Some test cases are known to fail due to missing dependencies, which could be Kconfig options or userland packages. In one more case we see failures when running the latest test cases on older kernel branches. We have marked these as known failures, XFAIL [3]. qa-reports will parse the actual results and show the applied xfails in blue.
Best regards Naresh Kamboju
Daniel Díaz daniel.diaz@linaro.org
[1] https://lkft.linaro.org/ [2] https://qa-reports.linaro.org/lkft/linux-mainline-oe/build/v5.2-rc4-20-gaa72...
[3] https://github.com/Linaro/qa-reports-known-issues/blob/master/kselftests-pro...
On 6/12/19 3:12 PM, Daniel Díaz wrote:
Hello!
On Wed, 12 Jun 2019 at 14:32, shuah shuah@kernel.org wrote:
On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
[...]
- You suggested to install a bunch of packages. That helped to some
degree. Is there a way to figure out what packages one needs to install to build the tests other than asking you?
I have to go through discovery at times when new tests get added. I consider this a part of being a open source developer figuring out dependencies for compiling and running. I don't have a magic answer for you and there is no way to make sure all dependencies will be documented.
This is something we, as users of Kselftests, would very much like to see improved. We also go by trial-and-error finding out what is missing, but keeping up with the new tests or subsystems is often difficult and tend to remain broken (in usage) for some time, until we have the resources to look into that and fix it. The config fragments is an excellent example of how the test developers and the framework complement each other to make things work. Even documenting dependencies would go a long way, as a starting point, but I do believe that the test writers should do that and not the users go figure out what all is needed to run their tests.
Maybe a precheck() on the tests in order to ensure that the needed binaries are around?
Right. Take a look at the x86 test Makefile; it handles that. Tests can handle these checks in their Makefile, not at run-time.
I will be happy to take patches adding checks similar to what x86 does. These shouldn't fail the kselftest build; they should just print out the missing dependencies.
This way users can go install them.
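For illustration, the idea boils down to probing whether the compiler can build a trivial program and only warning when it cannot; roughly, in shell terms (a sketch of the approach, not the actual Makefile rule):

$ echo 'int main(void) { return 0; }' | gcc -m32 -xc - -o /dev/null 2>/dev/null || echo "warning: no 32-bit toolchain, 32-bit x86 tests will not be built"

Other test Makefiles could do the same kind of probe for their libraries and tools.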
For what it's worth, this is the list of run-time dependencies package for OpenEmbedded: bash bc ethtool fuse-utils iproute2 iproute2-tc iputils-ping iputils-ping6 ncurses perl sudo python3-argparse python3-datetime python3-json python3-pprint python3-subprocess util-linux-uuidgen cpupower glibc-utils. We are probably missing a few.
Sure, see above.
thanks, -- Shuah
On Wed, Jun 12, 2019 at 11:13 PM Daniel Díaz daniel.diaz@linaro.org wrote:
Hello!
On Wed, 12 Jun 2019 at 14:32, shuah shuah@kernel.org wrote:
On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
[...]
- You suggested to install a bunch of packages. That helped to some
degree. Is there a way to figure out what packages one needs to install to build the tests other than asking you?
I have to go through discovery at times when new tests get added. I consider this a part of being a open source developer figuring out dependencies for compiling and running. I don't have a magic answer for you and there is no way to make sure all dependencies will be documented.
This is something we, as users of Kselftests, would very much like to see improved. We also go by trial-and-error finding out what is missing, but keeping up with the new tests or subsystems is often difficult and tend to remain broken (in usage) for some time, until we have the resources to look into that and fix it. The config fragments is an excellent example of how the test developers and the framework complement each other to make things work. Even documenting dependencies would go a long way, as a starting point, but I do believe that the test writers should do that and not the users go figure out what all is needed to run their tests.
Maybe a precheck() on the tests in order to ensure that the needed binaries are around?
Hi Daniel,
The Automated Testing effort: https://elinux.org/Automated_Testing is working on a standard for test metadata description that will capture required configs, hardware, runtime dependencies, etc. I am not sure what the current progress is, though.
Documenting or doing a precheck is a useful first step. But ultimately this needs to be machine-readable metadata, so that it's possible to, say, enable as many tests as possible in a CI rather than simply skip tests. A skipped test is better than a falsely failing test, but it still does not give any test coverage.
For what it's worth, this is the list of run-time dependencies package for OpenEmbedded: bash bc ethtool fuse-utils iproute2 iproute2-tc iputils-ping iputils-ping6 ncurses perl sudo python3-argparse python3-datetime python3-json python3-pprint python3-subprocess util-linux-uuidgen cpupower glibc-utils. We are probably missing a few.
Something like this would save me (and thousands of other people) some time.
[...]
- Do you know if anybody is running kselftests? Running as in
running continuously, noticing new failures, reporting these failures, keeping them green, etc. I am asking because one of the tests triggers a use-after-free and I checked it was the same 3+ months ago. And I have some vague memories of trying to run kselftests 3 or so years ago, and there was a bunch of use-after-free's as well.
Yes Linaro test rings run them and kernel developers do. I am cc'ing Naresh and Anders to help with tips on how they run tests in their environment. They have several test systems that they install tests and run tests routine on all stable releases.
Naresh and Anders! Can you share your process for running kselftest in Linaro test farm. Thanks in advance.
They're both in time zones where it's better to be sleeping at the moment, so I'll let them chime in with more info tomorrow (their time). I can share that we, as part of LKFT [1], run Kselftests with Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on arm, aarch64, x86, and x86-64, *very* often: Our test counter recently exceeded 5 million! You can see today's mainline results of Kselftests [2] and all tests therein.
We do not build our kernels with KASAN, though, so our test runs don't exhibit that bug.
But you are aware of KASAN, right? Do you have any plans to use it? Dynamic tools significantly improve runtime testing efficiency. Otherwise a test may expose a use-after-free, an out-of-bounds write, an information leak, a potential deadlock, a memory leak, etc., and still be considered "everything is fine". Some of these bugs may even be as bad as remote code execution. I would expect that catching these would be a reasonable price for running the tests somewhat less often :) Each of these tools requires a one-off investment for deployment, but then gives you a constant benefit on each run. If you are interested I can go into more detail, as we do lots of this on syzbot. Besides catching more bugs, there is also an interesting possibility of systematically testing all error paths.
Hello!
On Thu, 13 Jun 2019 at 09:22, Dmitry Vyukov dvyukov@google.com wrote:
On Wed, Jun 12, 2019 at 11:13 PM Daniel Díaz daniel.diaz@linaro.org wrote:
Maybe a precheck() on the tests in order to ensure that the needed binaries are around?
Hi Daniel, The Automated Testing effort: https://elinux.org/Automated_Testing is working on a standard for test metadata description which will capture required configs, hardware, runtime-dependencies, etc. I am not sure what's the current progress, though.
We just had the monthly call one hour ago. You should join our next call! Details are in the Wiki link you shared.
Documenting or doing a precheck is a useful first step. But ultimately this needs to be in machine-readable meta-data. So that it's possible to, say, enable as much tests as possible on a CI, rather then simply skip tests. A skipped test is better then a falsely failed test, but it still does not give any test coverage.
I agree. We discussed some of this in an impromptu microsummit at Linaro Connect BKK19 a few months back, i.e. a way to encapsulate tests and tests' definitions. Tim Bird is leading that effort; the minutes of today's call will be sent to the mailing list, so keep an eye on his update!
[...] we, as part of LKFT [1], run Kselftests with Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on arm, aarch64, x86, and x86-64, *very* often: Our test counter recently exceeded 5 million!
I was wrong by an order of magnitude: It's currently at 51.7 million tests.
We do not build our kernels with KASAN, though, so our test runs don't exhibit that bug.
But you are aware of KASAN, right? Do you have any plans to use it?
Not at the moment. We are redesigning our entire build and test infrastructure, and this is something that we are considering for our next iteration.
If you are interested I can go into more details as we do lots of this on syzbot. Besides catching more bugs there is also an interesting possibility of systematically testing all error paths.
Definitely join us on the Automated Testing monthly call; next one is July 11th. There are efforts on several fronts on testing the kernel, and we all are eager to contribute to improving the kernel test infrastructure.
Greetings!
Daniel Díaz daniel.diaz@linaro.org
On Thu, Jun 13, 2019 at 4:58 PM Daniel Díaz daniel.diaz@linaro.org wrote:
Hello!
On Thu, 13 Jun 2019 at 09:22, Dmitry Vyukov dvyukov@google.com wrote:
On Wed, Jun 12, 2019 at 11:13 PM Daniel Díaz daniel.diaz@linaro.org wrote:
Maybe a precheck() on the tests in order to ensure that the needed binaries are around?
Hi Daniel, The Automated Testing effort: https://elinux.org/Automated_Testing is working on a standard for test metadata description which will capture required configs, hardware, runtime-dependencies, etc. I am not sure what's the current progress, though.
We just had the monthly call one hour ago. You should join our next call! Details are in the Wiki link you shared.
Documenting or doing a precheck is a useful first step. But ultimately this needs to be in machine-readable meta-data. So that it's possible to, say, enable as much tests as possible on a CI, rather then simply skip tests. A skipped test is better then a falsely failed test, but it still does not give any test coverage.
I agree. We discussed some of this in an impromptu microsummit at Linaro Connect BKK19 a few months back, i.e. a way to encapsulate tests and tests' definitions. Tim Bird is leading that effort; the minutes of today's call will be sent to the mailing list, so keep an eye on his update!
[...] we, as part of LKFT [1], run Kselftests with Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on arm, aarch64, x86, and x86-64, *very* often: Our test counter recently exceeded 5 million!
I was wrong by an order of magnitude: It's currently at 51.7 million tests.
w00t!
We do not build our kernels with KASAN, though, so our test runs don't exhibit that bug.
But you are aware of KASAN, right? Do you have any plans to use it?
Not at the moment. We are redesigning our entire build and test infrastructure, and this is something that we are considering for our next iteration.
If you are interested I can go into more details as we do lots of this on syzbot. Besides catching more bugs there is also an interesting possibility of systematically testing all error paths.
Definitely join us on the Automated Testing monthly call; next one is July 11th. There are efforts on several fronts on testing the kernel, and we all are eager to contribute to improving the kernel test infrastructure.
Thanks, I will try to join the August one. On Jul 11 I will be at a conference.
On Wed, Jun 12, 2019 at 9:32 PM shuah shuah@kernel.org wrote:
On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
On Wed, Jun 12, 2019 at 6:45 PM shuah shuah@kernel.org wrote:
Hi Dmitry,
This is the 6th email from you in a span of 3 hours! I am just going to respond this last one. Please try to summarize your questions instead of sending email storm, so it will be easier to parse and more productive for both of us.
Hi Shuah,
Sorry for that. Let me combine all current questions in a more structured way.
My motivation: I am trying to understand what does it take to run/add kernel tests in particular for the purpose of providing working instructions to run kernel test to a new team member or a new external kernel developer, and if it's feasible to ask a kernel developer fixing a bug to add a regression test and ensure that it works. Note in these cases a user may not have lots of specific expertise (e.g. any unsaid/implicit thing may be a showstopper) and/or don't have infinite motivation/time (may give up given a single excuse to do so) and/or don't have specific interest/expertise in the tested subsystem (e.g. a drive-by fix). So now I am trying to follow this route myself, documenting steps.
- You suggested to install a bunch of packages. That helped to some
degree. Is there a way to figure out what packages one needs to install to build the tests other than asking you?
I have to go through discovery at times when new tests get added. I consider this a part of being a open source developer figuring out dependencies for compiling and running. I don't have a magic answer for you and there is no way to make sure all dependencies will be documented.
The problem with this is that all of this needs to be figured out again and again, thousands of times. And since testing is not always considered a first-class citizen by developers, any requirement to do an additional step leads to testing not being done at all, tests not being added at all, etc. For, say, building the kernel, users are usually ready to jump through all possible hoops just because they don't have any other choice; testing, however, needs to be as simple as possible to actually be used.
Documenting is good. But everything captured in machine-usable form is even better, because it allows further automation. One way to do it is to either provide a reference system, or provide a minimal system and have each test specify all of its additional requirements (kernel configs, hardware, host packages, target packages, etc.). Then this can be verified by installing everything specified and building/running the test. If it works, anybody is able to trivially repeat the same. If it does not work, we have caught a broken/regressed test.
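To make this concrete, here is a purely hypothetical sketch (neither the per-test deps.txt files nor this convention exist today): each test directory would carry a plain list of required packages, and a CI could then do:

$ for pkg in $(cat tools/testing/selftests/*/deps.txt 2>/dev/null | sort -u); do sudo apt-get install -y "$pkg"; done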
- Build of bpf tests was broken after installing all required
packages. It helped to delete some random files (tools/testing/selftests/bpf/{feature,FEATURE-DUMP.libbpf}). Is it something to fix in kselftests? Deleting random files was a chaotic action which I can't explain to anybody.
I am still getting 1 build error:
CC /usr/local/google/home/dvyukov/src/linux/tools/testing/selftests/bpf/str_error.o
timestamping.c:249:19: error: ‘SIOCGSTAMP’ undeclared (first use in this function); did you mean ‘SIOCGSTAMPNS’?
What should I do to fix this?
I am not seeing that on my system. I suspect you are still missing some packages and/or headers.
Shouldn't the test use the headers from the kernel source? Docs say:
* First use the headers inside the kernel source and/or git repo, and then the system headers. Headers for the kernel release as opposed to headers installed by the distro on the system should be the primary focus to be able to find regressions.
And that makes sense, because it both avoids the problem of installing an unknown set of packages and, more importantly, allows testing things that are not yet present in the headers of all distros.
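If the in-tree headers are indeed meant to be used, then presumably something like the following should be enough to get them picked up (this is my guess at the intended flow, not something the doc spells out):

$ make headers_install
$ make -C tools/testing/selftests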
- Are individual test errors are supposed to be fatal? Or I can just
ignore a single error and proceed?
Individual test errors aren't fatal and the run completes reporting errors from individual tests.
I've tried to proceed, but I am not sure if I will get some unexplainable errors later because of that. By default I would assume that any errors during make are fatal.
Kselftest is a suite of developer tests. Please read the documentation.
I read Documentation/dev-tools/kselftest.rst again but I don't see anything about this. Which part of the doc are you referring to?
The instructions on running tests:
$ make -C tools/testing/selftests run_tests
$ make kselftest
Do they assume that the tests will run right on my host machine? It's not stated/explained anywhere, but I don't see how "make kselftest" can use my usual setup because it don't know about it.
You have to tailor to it your environment. This is really for kernel developers and test rings that routines test kernels under development.
It's good when a system allows fine-tuning by experts when they are willing to do so. But it's equally important to provide a default one-step way of using the system for people who don't yet have the expertise, or who may not be willing to invest lots of time. I think it would be positive for kernel code quality to allow new kernel developers to run the kernel tests as the first thing they do with the kernel, rather than requiring years of experience first. What do you think about reducing the entry barrier for kernel testing?
I cannot run tests on the host. Policy rules aside, this is yet untested kernel, so by installing it I am risking losing my whole machine.
This is just like running kernel make. Build happens on the system. The idea is that kernel developers use these tests to test their code.
Reading further, "Install selftests" and "Running installed selftests" sections seem to be a way to run tests on another machine. Is it correct? Are there any other options? There seems to be a bunch of implicit unsaid things, so I am asking in case I am missing some even simpler way to run tests. Or otherwise, what is the purpose of "installing" tests?
- The "Running installed selftests" section says:
"Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests".
Right. You have to generate it. As documented in the kselftest.rst. kselftest_install.sh will install compiled tests and run_skefltest.sh
You can run gen_kselftest_tar.sh to create tarball and unpack it on your test system.
What is the "Kselftest tarball"? Where does one get one? I don't see any mentions of "tarball" anywhere else in the doc.
Please see above. You can generate tarball yourself using "tar"
I see. gen_kselftest_tar.sh may be something worth mentioning in the doc.
- What image am I supposed to use to run kselftests? Say, my goal is
either running as many tests as possible (CI scenario), or tests for a specific subsystem (a drive-by fix scenario). All images that I have do not seem to be suitable. One is failing with: ./run_kselftest.sh: 2: ./run_kselftest.sh: realpath: not found And there is no clear path to fix this. After I guessed the right package to install, it turned out to be broken in the distro. In another image all C programs fail to run with: ./test_maps: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.26'
How is one supposed to get an image suitable for running kselftests?
When you say image - what is image in this context.
You build new kernel, install it, boot the new kernel and run selftests. If your kernel build system is different from test system, then you build and install kernel and build kselftest and install kselftest and copy them over to the test system.
By image I mean the disk image, i.e. what provides the user-space system, binaries, etc. kselftests seem to make a bunch of assumptions about the disk image required to run them, so I am looking for a way to create an image suitable for running kselftest.
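For example, I would imagine building a minimal Debian rootfs with the run-time bits preinstalled, roughly along these lines (the package list is a guess, not something I have verified):

$ sudo debootstrap --include=bash,bc,ethtool,iproute2,iputils-ping,perl,sudo stable rootfs http://deb.debian.org/debian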
- Lots of tests fail/skipped with some cryptic for me errors like:
# Cannot find device "ip6gre11"
# selftests: [SKIP] Could not run test without the ip xdpgeneric support
Right that means the kernel doesn't support the feature.
# modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.1.0+/modules.dep.bin'
You don't have the module built-in.
# selftests: bpf: test_tc_edt.sh # nc is not available not ok 40 selftests: bpf: test_tc_edt.sh
Say, I either want to run tests for a specific subsystem because I am doing a drive-by fix (a typical newcomer/good Samaritan scenario), or I want to run as many tests as possible (a typical CI scenario). Is there a way to bulk satisfy all these prerequisite (configs, binaries and whatever they are asking for)?
- There is a test somewhere in the middle that consistently reboots my machine:
# selftests: breakpoints: step_after_suspend_test [ 514.024889] PM: suspend entry (deep) [ 514.025959] PM: Syncing filesystems ... done. [ 514.051573] Freezing user space processes ... (elapsed 0.001 seconds) done. [ 514.054140] OOM killer disabled. [ 514.054764] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done. [ 514.057695] printk: Suspending console(s) (use no_console_suspend to debug) early console in extract_kernel ...
Yes. Some tests require reboot and you want to avoid those, if you don't want to run them. Please look at the kselftest.rst.
I've read it several times. I don't see anything relevant; which part are you referring to? I've searched for "reboot" and "breakpoint" specifically, but I don't see any matches either.
Wouldn't it be more useful to disable this test by default? Users interested specifically in this test could enable it themselves. As it is, it does not lead to useful behavior for anybody else.
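For now I simply deleted that test from the install directory. I assume one can also restrict a run to specific targets, along the lines of what the "Running a subset of selftests" section describes, e.g.:

$ make -C tools/testing/selftests TARGETS="net timers" run_tests

but an opt-in for tests with side effects like suspend/reboot still seems preferable.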
Is it a bug in the test? in the kernel? Or how is this supposed to work/what am I supposed to do with this?
- Do you know if anybody is running kselftests? Running as in
running continuously, noticing new failures, reporting these failures, keeping them green, etc. I am asking because one of the tests triggers a use-after-free and I checked it was the same 3+ months ago. And I have some vague memories of trying to run kselftests 3 or so years ago, and there was a bunch of use-after-free's as well.
Yes Linaro test rings run them and kernel developers do. I am cc'ing Naresh and Anders to help with tips on how they run tests in their environment. They have several test systems that they install tests and run tests routine on all stable releases.
Naresh and Anders! Can you share your process for running kselftest in Linaro test farm. Thanks in advance.
- Do we know what's the current code coverage achieved by kselftests?
What's covered? What's not? Overall percent/per-subsystem/etc?
No idea.
- I am asking about the aggregate result, because that's usually the
first thing anybody needs (both devs testing a change and a CI). You said that kselftest does not keep track of the aggregate result. So the intended usage is always storing all output to a file and then grepping it for "[SKIP]" and "[FAIL]". Is it correct?
As I explained, Kselftest will not give you the aggregate result.
Is it just not implemented? Or is it some kind of principled position? It's just a bit surprising to me, because the aggregate result is the first thing anybody looks for after running a set of tests...