On Dec 5, 2017, at 12:24 AM, Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
On Mon, Dec 04, 2017 at 03:12:45PM -0600, Tom Gall wrote:
On Dec 4, 2017, at 9:59 AM, Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
This is the start of the stable review cycle for the 4.14.4 release. There are 95 patches in this series; all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed Dec 6 16:00:27 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.4-rc1.gz
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
and the diffstat can be found below.
thanks,
greg k-h
Compiled, booted and ran the following package unit tests without regressions on x86_64:

boringssl:
  go test target : 0/0/5764/5764/5764 PASS
  ssl_test       : 10 pass
  crypto_test    : 28 pass
e2fsprogs:
  make check     : 340 pass
sqlite:
  make test      : 143914 pass
drm:
  make check     : 15 pass
  modetest, drmdevice : pass
alsa-lib:
  make check     : 2 pass
bluez:
  make check     : 25 pass
libusb:
  stress         : 4 pass
How do the above tests stress the kernel?
Depends entirely on the package in question.
Sure, and to no one's surprise, a lot of package unit tests don't do much that's particularly interesting beyond the package itself.
But there is sometimes an interesting subset that drives some amount of work in the kernel. That's the useful stuff.
Take bluez and its use of CONFIG_CRYPTO_USER_API.
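As a minimal sketch of what that looks like (my own illustrative example, not code from the bluez test suite; the "sha1" choice is arbitrary), a userspace test can drive the kernel's crypto code through the AF_ALG socket interface that CONFIG_CRYPTO_USER_API and friends expose:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	/* "hash"/"sha1" is an arbitrary example; bluez exercises its
	 * own set of algorithms through this same socket interface. */
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha1",
	};
	unsigned char digest[20];	/* SHA-1 digest size */
	int tfm, op;
	unsigned i;

	/* The transform and the hashing below run in kernel code, so
	 * a failure or wrong digest here points at a kernel
	 * regression, not at a userspace crypto library. */
	tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	if (tfm < 0 || bind(tfm, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("AF_ALG");
		return 1;
	}

	op = accept(tfm, NULL, 0);	/* one operation handle */
	if (op < 0) {
		perror("accept");
		return 1;
	}

	write(op, "test", 4);
	read(op, digest, sizeof(digest));

	for (i = 0; i < sizeof(digest); i++)
		printf("%02x", digest[i]);
	printf("\n");

	close(op);
	close(tfm);
	return 0;
}

So when a package's test suite passes through an interface like that, it really is kernel code under test.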
Aren't they just verifications that the source code in the package is correct?
So if there’s some useful subset, that’s what I’m looking for.
I guess it proves something, but have you ever seen the above regress in _any_ kernel release?
Past regressions make for a good test.
I know the drm developers have a huge test suite that they use to verify their kernel changes; why not use that?
Good feedback, thanks.
thanks,
greg k-h