Summary
------------------------------------------------------------------------
kernel: 4.4.91
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.4.y
git commit: c030c36a88cdc54a5d657c0a2ee630ba495d5538
git describe: v4.4.91
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.4-oe/build/v4.4.91
Regressions (compared to build v4.4.90-51-g033261bc6633)
------------------------------------------------------------------------
x15 - arm: kselftest: * rtctest
* test src: https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.13.tar.xz
Boards, architectures and test suites:
-------------------------------------
juno-r2 - arm64
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-timers-tests

x15 - arm
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests

dell-poweredge-r200 - x86_64
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-timers-tests
Documentation - https://collaborate.linaro.org/display/LKFT/Email+Reports
On 9 October 2017 at 10:25, Linaro QA qa-reports@linaro.org wrote:
Summary
kernel: 4.4.91
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.4.y
git commit: c030c36a88cdc54a5d657c0a2ee630ba495d5538
git describe: v4.4.91
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.4-oe/build/v4.4.91
Regressions (compared to build v4.4.90-51-g033261bc6633)
x15 - arm: kselftest: * rtctest
* test src: https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.13.tar.xz
This looks like a test case issue. The log looks as follows:
PIE delta error: 0.036778 should be close to 0.031250 selftests: rtctest [FAIL]
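For reference, the rtc selftest enables periodic interrupts (PIE) at a fixed rate and checks that the measured interrupt period stays close to 1/frequency; the 0.031250 above is the expected period at 32 Hz. The sketch below shows the general shape of that check, not the actual selftests/timers/rtctest.c source; the device path, 32 Hz rate, and 25% tolerance are illustrative assumptions.

/*
 * Minimal sketch of the kind of check behind the "PIE delta error"
 * message -- not the actual selftests/timers/rtctest.c code.  The
 * device path, 32 Hz rate, and 25% tolerance are assumptions.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
	unsigned long freq = 32;            /* periodic interrupt rate, Hz */
	double expected = 1.0 / freq;       /* 0.031250 s at 32 Hz */
	double tolerance = expected / 4;    /* assumed margin */
	struct timespec start, end;
	unsigned long data;
	double delta;
	int fd = open("/dev/rtc0", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/rtc0");
		return 1;
	}
	if (ioctl(fd, RTC_IRQP_SET, freq) < 0 ||
	    ioctl(fd, RTC_PIE_ON, 0) < 0) {
		perror("ioctl");
		return 1;
	}

	/* each read() blocks until the next periodic interrupt fires */
	read(fd, &data, sizeof(data));      /* discard: may be mid-period */
	clock_gettime(CLOCK_MONOTONIC, &start);
	read(fd, &data, sizeof(data));
	clock_gettime(CLOCK_MONOTONIC, &end);

	delta = (end.tv_sec - start.tv_sec) +
		(end.tv_nsec - start.tv_nsec) / 1e9;

	ioctl(fd, RTC_PIE_OFF, 0);
	close(fd);

	if (delta > expected + tolerance) {
		printf("PIE delta error: %f should be close to %f\n",
		       delta, expected);
		return 1;
	}
	printf("period %f ok (expected %f)\n", delta, expected);
	return 0;
}

On a busy or slow board the measured delta can drift past the margin even when the RTC hardware is fine, which is why this reads as a test case issue rather than a kernel regression.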
Apart from that there were a few tests that didn't complete due to setup issues:
- LTP syscalls on juno (arm64) - problem with msgctl11 which is omitted on other boards.
- LTP sched on x86 - running on NFS fails
- LTP timers on x15 (arm) - investigating the problem
milosz
Boards, architectures and test suites:
juno-r2 - arm64
- boot
- kselftest
- libhugetlbfs
- ltp-cap_bounds-tests
- ltp-commands-tests
- ltp-containers-tests
- ltp-fcntl-locktests-tests
- ltp-filecaps-tests
- ltp-fs-tests
- ltp-fs_bind-tests
- ltp-fs_perms_simple-tests
- ltp-fsx-tests
- ltp-hugetlb-tests
- ltp-io-tests
- ltp-ipc-tests
- ltp-math-tests
- ltp-nptl-tests
- ltp-pty-tests
- ltp-sched-tests
- ltp-securebits-tests
- ltp-timers-tests
x15 - arm
- boot
- kselftest
- libhugetlbfs
- ltp-cap_bounds-tests
- ltp-commands-tests
- ltp-containers-tests
- ltp-fcntl-locktests-tests
- ltp-filecaps-tests
- ltp-fs-tests
- ltp-fs_bind-tests
- ltp-fs_perms_simple-tests
- ltp-fsx-tests
- ltp-hugetlb-tests
- ltp-io-tests
- ltp-ipc-tests
- ltp-math-tests
- ltp-nptl-tests
- ltp-pty-tests
- ltp-sched-tests
- ltp-securebits-tests
- ltp-syscalls-tests
dell-poweredge-r200 - x86_64
- boot
- kselftest
- libhugetlbfs
- ltp-cap_bounds-tests
- ltp-commands-tests
- ltp-containers-tests
- ltp-fcntl-locktests-tests
- ltp-filecaps-tests
- ltp-fs-tests
- ltp-fs_bind-tests
- ltp-fs_perms_simple-tests
- ltp-fsx-tests
- ltp-hugetlb-tests
- ltp-io-tests
- ltp-ipc-tests
- ltp-math-tests
- ltp-nptl-tests
- ltp-pty-tests
- ltp-securebits-tests
- ltp-syscalls-tests
- ltp-timers-tests
Documentation - https://collaborate.linaro.org/display/LKFT/Email+Reports
--
Linaro QA (beta)
https://qa-reports.linaro.org
On Mon, Oct 09, 2017 at 10:30:05AM +0100, Milosz Wasilewski wrote:
On 9 October 2017 at 10:25, Linaro QA qa-reports@linaro.org wrote:
Summary
kernel: 4.4.91
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.4.y
git commit: c030c36a88cdc54a5d657c0a2ee630ba495d5538
git describe: v4.4.91
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.4-oe/build/v4.4.91
Regressions (compared to build v4.4.90-51-g033261bc6633)
x15 - arm: kselftest: * rtctest
* test src: https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.13.tar.xz
This looks like a test case issue. The log looks as follows:
PIE delta error: 0.036778 should be close to 0.031250 selftests: rtctest [FAIL]
Apart from that there were a few tests that didn't complete due to setup issues:
- LTP syscalls on juno (arm64) - problem with msgctl11 which is
omitted on other boards.
- LTP sched on x86 - running on NFS fails
- LTP timers on x15 (arm) - investigating the problem
When can we start to "trust" these results? Right now they are saying "no regressions" from previous tests, yet there were failures on a previous test from what I remember, so I don't know if this specific run of testing actually is any better or not.
And have you all tried breaking something (a build, a test, etc.) to see if it is caught by the system? Based on the last 4.9-rc1 release, I think something is still wrong in that area...
thanks,
greg k-h
On Tue, Oct 10, 2017 at 04:43:09PM +0200, Greg KH wrote:
On Mon, Oct 09, 2017 at 10:30:05AM +0100, Milosz Wasilewski wrote:
Apart from that there were a few tests that didn't complete due to setup issues:
- LTP syscalls on juno (arm64) - problem with msgctl11 which is
omitted on other boards.
- LTP sched on x86 - running on NFS fails
- LTP timers on x15 (arm) - investigating the problem
When can we start to "trust" these results? Right now they are saying "no regressions" from previous tests, yet there were failures on a previous test from what I remember, so I don't know if this specific run of testing actually is any better or not.
I suspect we want to be showing the delta to some fixed baseline (ideally totally clean results!) rather than the previous test run, or including a full list of unexpected failures in the report. Otherwise any issue that doesn't get fixed immediately ends up getting hidden in the reporting which isn't ideal.
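One hypothetical shape for that: keep a checked-in baseline of expected results and diff every run against it, so a known failure only drops out of the report when someone deliberately updates the baseline. A minimal sketch, with the one-result-per-line file format and file names invented for illustration:

/*
 * Sketch: compare a run against a fixed baseline rather than the
 * previous run.  Input files hold one "test-name PASS|FAIL" pair per
 * line; the format and file names are invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX 4096

struct result { char name[128]; char status[16]; };

static int load(const char *path, struct result *r)
{
	FILE *f = fopen(path, "r");
	int n = 0;

	if (!f) {
		perror(path);
		exit(1);
	}
	while (n < MAX && fscanf(f, "%127s %15s", r[n].name, r[n].status) == 2)
		n++;
	fclose(f);
	return n;
}

static const char *lookup(const struct result *r, int n, const char *name)
{
	for (int i = 0; i < n; i++)
		if (!strcmp(r[i].name, name))
			return r[i].status;
	return NULL;
}

int main(int argc, char **argv)
{
	static struct result base[MAX], cur[MAX];
	int nb, nc, regressions = 0;

	if (argc != 3) {
		fprintf(stderr, "usage: %s baseline.txt current.txt\n", argv[0]);
		return 2;
	}
	nb = load(argv[1], base);
	nc = load(argv[2], cur);

	/*
	 * Anything failing now that the baseline expects to pass is
	 * reported, no matter how many runs ago it first broke.
	 */
	for (int i = 0; i < nc; i++) {
		const char *expect = lookup(base, nb, cur[i].name);

		if (expect && !strcmp(expect, "PASS") &&
		    strcmp(cur[i].status, "PASS")) {
			printf("REGRESSION: %s (%s)\n",
			       cur[i].name, cur[i].status);
			regressions++;
		}
	}
	return regressions ? 1 : 0;
}

With that scheme something like msgctl11 keeps showing up in every report until the baseline is explicitly updated to mark it a known failure, instead of silently disappearing after one run.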
And have you all tried breaking something (a build, a test, etc.) to see if it is caught by the system? Based on the last 4.9-rc1 release, I think something is still wrong in that area...
Clearly our stable upstream software presents too little challenge for automated test systems! :)
On Tue, Oct 10, 2017 at 03:59:31PM +0100, Mark Brown wrote:
On Tue, Oct 10, 2017 at 04:43:09PM +0200, Greg KH wrote:
On Mon, Oct 09, 2017 at 10:30:05AM +0100, Milosz Wasilewski wrote:
Apart from that there were a few tests that didn't complete due to setup issues:
- LTP syscalls on juno (arm64) - problem with msgctl11 which is
omitted on other boards.
- LTP sched on x86 - running on NFS fails
- LTP timers on x15 (arm) - investigating the problem
When can we start to "trust" these results? Right now they are saying "no regressions" from previous tests, yet there were failures on a previous test from what I remember, so I don't know if this specific run of testing actually is any better or not.
I suspect we want to be showing the delta to some fixed baseline (ideally totally clean results!) rather than the previous test run, or including a full list of unexpected failures in the report. Otherwise any issue that doesn't get fixed immediately ends up getting hidden in the reporting which isn't ideal.
I agree.
And have you all tried breaking something (a build, a test, etc.) to see if it is caught by the system? Based on the last 4.9-rc1 release, I think something is still wrong in that area...
Clearly our stable upstream software presents too little challenge for automated test systems! :)
Heh.
Ok, the "why did it build" issue is now found out, you all didn't have the needed CONFIG option enabled. Which makes me ask, like I just did in public, can you all do better build testing for these arches? Like 'allmodconfig' or 'allyesconfig' or heck, a round of 'randconfig' would be good if possible...
thanks,
greg k-h
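As a rough sketch of the config coverage being asked for above: a thin driver that walks each architecture through allmodconfig, allyesconfig and randconfig builds. The arch/cross-compiler pairs, tree location (current directory) and job count are assumptions, and a shell loop in CI would do the job just as well:

/*
 * Sketch: per-arch config-coverage build testing.  The cross-compiler
 * prefixes, tree location (current directory) and -j8 are assumptions.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *configs[] = { "allmodconfig", "allyesconfig", "randconfig" };
	const char *arches[][2] = {
		{ "arm",    "arm-linux-gnueabihf-" },
		{ "arm64",  "aarch64-linux-gnu-" },
		{ "x86_64", "" },
	};
	char cmd[512];
	int failed = 0;

	for (size_t a = 0; a < sizeof(arches) / sizeof(arches[0]); a++) {
		for (size_t c = 0; c < sizeof(configs) / sizeof(configs[0]); c++) {
			/* clean, generate the config, then build */
			snprintf(cmd, sizeof(cmd),
				 "make ARCH=%s CROSS_COMPILE=%s mrproper && "
				 "make ARCH=%s CROSS_COMPILE=%s %s && "
				 "make ARCH=%s CROSS_COMPILE=%s -j8",
				 arches[a][0], arches[a][1],
				 arches[a][0], arches[a][1], configs[c],
				 arches[a][0], arches[a][1]);
			printf("+ %s\n", cmd);
			if (system(cmd) != 0) {
				fprintf(stderr, "BUILD FAILED: %s %s\n",
					arches[a][0], configs[c]);
				failed++;
			}
		}
	}
	return failed ? 1 : 0;
}

A randconfig pass is only useful if the generated .config is saved whenever a build fails, so the breakage can be reproduced later.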
On Oct 10, 2017, at 9:59 AM, Mark Brown broonie@kernel.org wrote:
On Tue, Oct 10, 2017 at 04:43:09PM +0200, Greg KH wrote:
On Mon, Oct 09, 2017 at 10:30:05AM +0100, Milosz Wasilewski wrote:
Apart from that there were a few tests that didn't complete due to setup issues:
- LTP syscalls on juno (arm64) - problem with msgctl11 which is
omitted on other boards.
- LTP sched on x86 - running on NFS fails
- LTP timers on x15 (arm) - investigating the problem
When can we start to "trust" these results? Right now they are saying "no regressions" from previous tests, yet there were failures on a previous test from what I remember, so I don't know if this specific run of testing actually is any better or not.
I suspect we want to be showing the delta to some fixed baseline (ideally totally clean results!) rather than the previous test run, or including a full list of unexpected failures in the report. Otherwise any issue that doesn't get fixed immediately ends up getting hidden in the reporting which isn't ideal.
There’s a whole thread I started on this last week about why the handling of known failures and regressions needs to change.
Comments welcome. ;-)
And have you all tried breaking something (a build, a test, etc.) to see if it is caught by the system? Based on the last 4.9-rc1 release, I think something is still wrong in that area...
Clearly our stable upstream software presents too little challenge for automated test systems! :)
Oo Oo, who gets to hit the big red candy button? :-)
With each new board we’ve added, getting the kernel configs in order has certainly turned up and validated errors.
More formally though, I’d put this in the category of a system health check, and yes, it seems like a useful roughly weekly exercise.
On Tue, Oct 10, 2017 at 02:02:30PM -0500, Tom Gall wrote:
I suspect we want to be showing the delta to some fixed baseline (ideally totally clean results!) rather than the previous test run, or including a full list of unexpected failures in the report. Otherwise any issue that doesn't get fixed immediately ends up getting hidden in the reporting which isn't ideal.
There’s a whole thread I started on this last week and why known failures and regressions need to change.
Comments welcome. ;-)
Not sure where you started that thread; I can't seem to see it anywhere, but perhaps I'm missing something.
kernel-build-reports@lists.linaro.org