## Build
- kernel: 5.18.19
- git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
- git branch: linux-5.18.y
- git commit: 22a992953741ad79c07890d3f4104585e52ef26b
- git describe: cea142b
- test details: https://qa-reports.linaro.org/lkft/ltp/build/cea142b
## Test Regressions (compared to 98140f3)
- qemu_i386, ltp-controllers
- cpuacct_100_100
- qemu_x86_64, ltp-cve
- cve-2018-1000204
## Metric Regressions (compared to 98140f3)
No metric regressions found.
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
## Test Fixes (compared to 98140f3)
- qemu_arm64, ltp-controllers
- cpuacct_100_100
## Metric Fixes (compared to 98140f3)
No metric fixes found.
## Test result summary
total: 12630, pass: 10739, fail: 161, skip: 1730, xfail: 0
## Build Summary
## Test suites summary
- log-parser-boot
- log-parser-test
- ltp-cap_bounds
- ltp-commands
- ltp-containers
- ltp-controllers
- ltp-cpuhotplug
- ltp-crypto
- ltp-cve
- ltp-dio
- ltp-fcntl-locktests
- ltp-filecaps
- ltp-fs
- ltp-fs_bind
- ltp-fs_perms_simple
- ltp-fsx
- ltp-hugetlb
- ltp-io
- ltp-ipc
- ltp-math
- ltp-mm
- ltp-nptl
- ltp-pty
- ltp-sched
- ltp-securebits
- ltp-syscalls
- ltp-tracing
--
Linaro LKFT
https://lkft.linaro.org
Hi all,
## Build
- kernel: 5.18.19
- git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
- git branch: linux-5.18.y
- git commit: 22a992953741ad79c07890d3f4104585e52ef26b
- git describe: cea142b
- test details: https://qa-reports.linaro.org/lkft/ltp/build/cea142b
## Test Regressions (compared to 98140f3)
- qemu_i386, ltp-controllers
- cpuacct_100_100
- qemu_x86_64, ltp-cve
- cve-2018-1000204
OK, 3252ea38d ("ioctl_sg01: Add max_runtime") didn't help.
Looking at the log [1], I don't see anything obvious about why the test times out:
tst_test.c:1524: TINFO: Timeout per run is 0h 00m 30s
ioctl_sg01.c:81: TINFO: Found SCSI device /dev/sg1
Test timeouted, sending SIGKILL!
tst_test.c:1575: TINFO: If you are running on slow machine, try exporting LTP_TIMEOUT_MUL > 1
tst_test.c:1577: TBROK: Test killed! (timeout?)
Summary: passed 0 failed 0 broken 1 skipped 0 warnings 0
@lkft I haven't found dmesg output from after the tests start running in the test details [2]. Is there any? I really like that you keep the history [3], thanks! It'd be great if you could print the test name into dmesg, so that it is visible which test caused a particular message / kernel oops.
Also, it'd be great if you could add a header for each test with the test name, or at least a blank line to separate the end of the summary.
Kind regards,
Petr
[1] https://qa-reports.linaro.org/lkft/ltp/build/cea142b/testrun/11956785/suite/...
[2] https://qa-reports.linaro.org/lkft/ltp/build/cea142b/testrun/11956785/suite/...
[3] https://qa-reports.linaro.org/lkft/ltp/build/cea142b/testrun/11956785/suite/...
## Metric Regressions (compared to 98140f3)
No metric regressions found.
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
## Test Fixes (compared to 98140f3)
- qemu_arm64, ltp-controllers
- cpuacct_100_100
## Metric Fixes (compared to 98140f3)
No metric fixes found.
## Test result summary
total: 12630, pass: 10739, fail: 161, skip: 1730, xfail: 0
## Build Summary
## Test suites summary
- log-parser-boot
- log-parser-test
- ltp-cap_bounds
- ltp-commands
- ltp-containers
- ltp-controllers
- ltp-cpuhotplug
- ltp-crypto
- ltp-cve
- ltp-dio
- ltp-fcntl-locktests
- ltp-filecaps
- ltp-fs
- ltp-fs_bind
- ltp-fs_perms_simple
- ltp-fsx
- ltp-hugetlb
- ltp-io
- ltp-ipc
- ltp-math
- ltp-mm
- ltp-nptl
- ltp-pty
- ltp-sched
- ltp-securebits
- ltp-syscalls
- ltp-tracing
On Mon, Sep 19, 2022 at 11:14 AM Petr Vorel <pvorel@suse.cz> wrote:
Hi all,
## Build
- kernel: 5.18.19
- git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
- git branch: linux-5.18.y
- git commit: 22a992953741ad79c07890d3f4104585e52ef26b
- git describe: cea142b
- test details: https://qa-reports.linaro.org/lkft/ltp/build/cea142b
## Test Regressions (compared to 98140f3)
- qemu_i386, ltp-controllers
- cpuacct_100_100
- qemu_x86_64, ltp-cve
- cve-2018-1000204
OK, 3252ea38d ("ioctl_sg01: Add max_runtime") didn't help.
Looking at the log [1], I don't see anything obvious about why the test times out:
The time tst_pollute_memory() consumes is proportional to the amount of free RAM, so it is hard to find one fixed value of max_runtime that fits all test platforms.
From my experience, if you limit this test to run only on small machines (e.g. RAM <= 32G), it performs well on any bare metal or VM, with no timeout ever.
tst_test.c:1524: TINFO: Timeout per run is 0h 00m 30s
ioctl_sg01.c:81: TINFO: Found SCSI device /dev/sg1
Test timeouted, sending SIGKILL!
tst_test.c:1575: TINFO: If you are running on slow machine, try exporting LTP_TIMEOUT_MUL > 1
tst_test.c:1577: TBROK: Test killed! (timeout?)
Summary: passed 0 failed 0 broken 1 skipped 0 warnings 0
@lkft I haven't found dmesg output from after the tests start running in the test details [2]. Is there any? I really like that you keep the history [3], thanks! It'd be great if you could print the test name into dmesg, so that it is visible which test caused a particular message / kernel oops.
Also, it'd be great if you could add a header for each test with the test name, or at least a blank line to separate the end of the summary.
Kind regards,
Petr
[1] https://qa-reports.linaro.org/lkft/ltp/build/cea142b/testrun/11956785/suite/...
[2] https://qa-reports.linaro.org/lkft/ltp/build/cea142b/testrun/11956785/suite/...
[3] https://qa-reports.linaro.org/lkft/ltp/build/cea142b/testrun/11956785/suite/...
## Metric Regressions (compared to 98140f3)
No metric regressions found.
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
## Test Fixes (compared to 98140f3)
- qemu_arm64, ltp-controllers
- cpuacct_100_100
## Metric Fixes (compared to 98140f3)
No metric fixes found.
## Test result summary
total: 12630, pass: 10739, fail: 161, skip: 1730, xfail: 0
## Build Summary
## Test suites summary
- log-parser-boot
- log-parser-test
- ltp-cap_bounds
- ltp-commands
- ltp-containers
- ltp-controllers
- ltp-cpuhotplug
- ltp-crypto
- ltp-cve
- ltp-dio
- ltp-fcntl-locktests
- ltp-filecaps
- ltp-fs
- ltp-fs_bind
- ltp-fs_perms_simple
- ltp-fsx
- ltp-hugetlb
- ltp-io
- ltp-ipc
- ltp-math
- ltp-mm
- ltp-nptl
- ltp-pty
- ltp-sched
- ltp-securebits
- ltp-syscalls
- ltp-tracing
--
Mailing list info: https://lists.linux.it/listinfo/ltp
On Mon, Sep 19, 2022 at 11:27 AM Li Wang <liwang@redhat.com> wrote:
On Mon, Sep 19, 2022 at 11:14 AM Petr Vorel <pvorel@suse.cz> wrote:
Hi all,
## Build
- kernel: 5.18.19
- git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
- git branch: linux-5.18.y
- git commit: 22a992953741ad79c07890d3f4104585e52ef26b
- git describe: cea142b
- test details: https://qa-reports.linaro.org/lkft/ltp/build/cea142b
## Test Regressions (compared to 98140f3)
- qemu_i386, ltp-controllers
- cpuacct_100_100
- qemu_x86_64, ltp-cve
- cve-2018-1000204
OK, 3252ea38d ("ioctl_sg01: Add max_runtime") didn't help.
Looking at the log [1], I don't see anything obvious about why the test times out:
The time tst_pollute_memory() consumes is proportional to the amount of free RAM, so it is hard to find one fixed value of max_runtime that fits all test platforms.
From my experience, if you limit this test to run only on small machines (e.g. RAM <= 32G), it performs well on any bare metal or VM, with no timeout ever.
Btw, we did that by setting a test filter before running LTP. We could also add a .max_mem_avail field to the tst_test struct to achieve that, but I'm not sure it's worth doing at this moment.
Hi!
The time tst_pollute_memory() consumes is proportional to the amount of free RAM, so it is hard to find one fixed value of max_runtime that fits all test platforms.
From my experience, if you limit this test to run only on small machines (e.g. RAM <= 32G), it performs well on any bare metal or VM, with no timeout ever.
Btw, we did that by setting a test filter before running LTP. We could also add a .max_mem_avail field to the tst_test struct to achieve that, but I'm not sure it's worth doing at this moment.
The proper solution will be adding separate timeouts for setup/cleanup and limiting the setup runtime to something sensible, but that is something for the next development cycle.
On Mon, Sep 19, 2022 at 4:26 PM Cyril Hrubis <chrubis@suse.cz> wrote:
Hi!
The time tst_pollute_memory() consumes is proportional to the amount of free RAM, so it is hard to find one fixed value of max_runtime that fits all test platforms.
From my experience, if you limit this test to run only on small machines (e.g. RAM <= 32G), it performs well on any bare metal or VM, with no timeout ever.
Btw, we did that by setting a test filter before running LTP. We could also add a .max_mem_avail field to the tst_test struct to achieve that, but I'm not sure it's worth doing at this moment.
The proper solution will be adding separate timeouts for setup/cleanup and limiting the setup runtime to something sensible, but that is
Separate timeouts for setup/cleanup would break the integrity of the setup functions; my concern is that if tst_pollute_memory() stops before completing, the main test part is meaningless, right?
Or, I may misunderstand you here.
something for the next development cycle.
+1
Hi!
Separate timeouts for setup/cleanup would break the integrity of the setup functions; my concern is that if tst_pollute_memory() stops before completing, the main test part is meaningless, right?
Or, I may misunderstand you here.
The more we pollute, the higher the probability that we hit a piece of memory that contains non-zero bytes. That's why Martin is reluctant to stop polluting memory prematurely.
And separating the timeouts for setup/cleanup is of course only part of the solution. We can also make the test setup smarter by measuring the pollution speed and aborting early if it's too slow, but let's discuss that once we are done with the release.
On 19. 09. 22 5:13, Petr Vorel wrote:
Hi all,
## Build
- kernel: 5.18.19
- git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
- git branch: linux-5.18.y
- git commit: 22a992953741ad79c07890d3f4104585e52ef26b
- git describe: cea142b
- test details: https://qa-reports.linaro.org/lkft/ltp/build/cea142b
## Test Regressions (compared to 98140f3)
- qemu_i386, ltp-controllers
- cpuacct_100_100
- qemu_x86_64, ltp-cve
- cve-2018-1000204
OK, 3252ea38d ("ioctl_sg01: Add max_runtime") didn't help.
Looking at the log [1], I don't see anything obvious about why the test times out:
tst_test.c:1524: TINFO: Timeout per run is 0h 00m 30s
I do. The line above is supposed to say "Timeout per run is 1h 00m 30s" instead. Whatever LTP version this was, it did not have the ioctl_sg01 max_runtime patch applied.
On 19. 09. 22 5:13, Petr Vorel wrote:
Hi all,
## Build
- kernel: 5.18.19
- git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
- git branch: linux-5.18.y
- git commit: 22a992953741ad79c07890d3f4104585e52ef26b
- git describe: cea142b
FYI: my comments below are about this line.
- test details: https://qa-reports.linaro.org/lkft/ltp/build/cea142b
## Test Regressions (compared to 98140f3)
- qemu_i386, ltp-controllers
- cpuacct_100_100
- qemu_x86_64, ltp-cve
- cve-2018-1000204
OK, 3252ea38d ("ioctl_sg01: Add max_runtime") didn't help.
Looking at the log [1], I don't see anything obvious about why the test times out:
tst_test.c:1524: TINFO: Timeout per run is 0h 00m 30s
I do. The line above is supposed to say "Timeout per run is 1h 00m 30s" instead. Whatever LTP version this was, it did not have the ioctl_sg01 max_runtime patch applied.
Hi Martin,
thanks for the info. I expected the line above to document the LTP version, i.e. cea142b73 ("df01.sh: Convert to TST_ALL_FILESYSTEMS=1"), which contains .max_runtime = 3600 (i.e. 1 hour runtime + 30 sec for basic cleanup). Although "git describe" could refer to any git repository.
Kind regards, Petr