Hi Lorenzo
On Thu, Oct 17, 2024 at 11:38 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Thu, Oct 17, 2024 at 12:49:40PM -0700, Jeff Xu wrote:
On Thu, Oct 17, 2024 at 12:00 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Thu, Oct 17, 2024 at 11:47:15AM -0700, Jeff Xu wrote:
On Thu, Oct 17, 2024 at 11:29 AM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Thu, Oct 17, 2024 at 11:14:20AM -0700, Jeff Xu wrote:
Hi Lorenzo and Muhammad
Reviving this thread since the merge window is closed and we have more time to review/work on this code in the next few weeks.
On Fri, Sep 13, 2024 at 3:50 PM Jeff Xu jeffxu@chromium.org wrote:
> Hi Lorenzo
>
> On Sat, Sep 7, 2024 at 12:28 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
> > I also suggest we figure out this FAIL_TEST_IF_FALSE() thing at this point too - I may be missing something, but I cannot for the life of me understand why we have to assert negations only, and other self tests do not do this.
>
> My most test-infra related comments come from Muhammad Usama Anjum (added into this email), e.g. assert is not recommended. [1]
>
> [1] https://lore.kernel.org/all/148fc789-3c03-4490-a653-6a4e58f336b6@collabora.c...

Specifically regarding Lorenzo's comments about FAIL_TEST_IF_FALSE:
Muhammad Usama Anjum doesn't want assert being used in selftests (see [1] above), and I quote: "We don't want to terminate the test if one test fails because of assert. We want the sub-tests to get executed independent of other tests.
ksft_test_result(condition, fmt, ...); ksft_test_result_pass(fmt, ...);"
FAIL_TEST_IF_FALSE is a wrapper around the ksft_test_result() helper, and a replacement for assert.
Please let me know if you have questions on this and Muhammad might also help to clarify the requirement if needed.
Thanks -Jeff
Right, this is about not failing the test, i.e. the equivalent of an expect rather than an assert, which makes sense.
What I'm saying is we should have something more like
EXPECT_TRUE() EXPECT_FALSE()
etc.
Which would avoid these confusing
FAIL_TEST_IF_FALSE(!expr)
FAIL_TEST_IF_FALSE(expr) is the right way to use this macro.
But you don't only test positive conditions, you also test negative ones.
So it is not a problem with the macro, but with where it is used?
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
Take this example: it would be assert(!ret).
A syscall usually returns 0 to indicate success and a negative value on error, and developers are used to (!ret); it is a common pattern for checking a syscall return code, e.g. assert(!ret).
Or do you have specific examples of code that caused confusion?
'Fail test if false false thing' is really confusing and hard to read.
I struggle to understand your tests as a result.
I understand 'fail test if false' is expressive in a way, but it's really hard to parse.
If you just read it as assert(), would that be easier? (Or do you just not like assert()?)
Obviously it's also misleading in that you're saying 'fail the test _later_ if false', which I hadn't even realised...
It's well established in basically all normal test suites that:
- assert = fail test _here_ if this fails (actually a valid thing to do if you assert something that means the test simply cannot reasonably continue if that condition is false).
- expect = the test will now fail, but carry on.
I mean you work for a company that does this :) [0] this is a very well established precedent.
Let's use expect as an example. Say I create a new macro, TEST_EXPECT_TRUE, which has basically the same syntax as FAIL_TEST_IF_FALSE; I'm not sure how it is different: you still have !ret in the code.
ret = sys_mseal(ptr, size);
TEST_EXPECT_TRUE(!ret);
Or is the FAIL_xxx_IF_FALSE pattern more unreadable than EXPECT_TRUE? Maybe...
It has the same syntax as assert(expr), e.g.:
From the assert(3) man page: assert(expr) - abort the program if assertion is false
FAIL_TEST_IF_FALSE is a replacement for assert: instead of aborting the program, it just reports a failure for this test.
So it doesn't do what assert does at all, because assert _does_ terminate execution on failure...
I don't know what you mean: the test case will fail, but the next test case will run. That is the point: mseal_test continues to run all test cases to the end, even if one of them fails.
We are writing unit tests in a test framework, let's use very well established industry practices please.
Also note that you don't even need to reinvent the wheel, there is a fully-featured test harness available in tools/testing/selftests/kselftest_harness.h with both ASSERT_xxx() and EXPECT_xxx() helpers.
The EXPECT_xxx() doesn't take care of reporting though, or maybe it needs to be combined with FIXTURE_SETUP and FIXTURE_TEARDOWN. I haven't spent much time on those, but at a brief look it seems to be for repeating some tests, which doesn't exactly fit what I need, e.g. the sealed memory won't be unmapped. It is possible those tests could be adapted to use test fixtures, but I don't see a significant gain in that.
I've used it extensively myself and it works well.
I'd basically suggest you use that. Though moving existing tests to that would be some churn.
On the other hand I really can't accept patches which are totally unreadable to me, so you'll need to fix this one way or another, and the churn is worth it as a one-time cost to be honest.
Is this still confusing? (FAIL_TEST_IF_FALSE is already a descriptive name, and the syntax of assert is well known.)
It's a super misleading name, as it says nothing about _when_ the test fails. Also, the syntax of assert() may be well known, but you don't call this function assert, you don't reference assert anywhere, and you don't do what assert() does, so, you know, that's not a great example.
The semantics of unit test frameworks are very well known, and already implemented for you, and also do not require you to do unreadable double negations for no reason, so let's use those please.
As stated previously, I'm not sure whether the test fixture benefits mseal_test at this moment, but I'm open to a future conversion when I have time for it. For now, I'd like to get those tests in so we can properly detect possible regressions in memory sealing.
What would help you better review this code? Would the below help?
s/FAIL_TEST_IF_FALSE/TEST_EXPECT_TRUE/g
Jeff, you're falling into your usual pattern of being unreasonably argumentative for apparently no reason and I really don't have time to constantly respond inline when you're just ignoring what I tell you.
You do this on nearly all code review and this just isn't working. If you want to effectively be involved in mseal you need to LISTEN.
The more you do this the less patient everybody gets with you and the less likely your series will ever get merged. This is not good for mseal or anybody involved.
On this issue - either use sensible macros that YOU ARE DEFINING, not assert.h, but YOU, that allow you to evaluate whether a condition is true or false - or I will not accept your unreadable test code.
It's that simple and I'm done discussing this.
Thanks for your time discussing this.
Please, if I may say, when presenting your argument, keep it technical and avoid personal attacks. Attacking the person rather than using logic to refute their argument is the “ad hominem fallacy” [1] and will make it harder to get your point across.
[1] https://www.txst.edu/philosophy/resources/fallacy-definitions/ad-hominem.htm....
Additionally, the mseal_test was reviewed by Muhammad Usama Anjum during the original RFC discussion. IIUC, Muhammad has domain knowledge of the selftest infra, and I have relied on his comments and implemented all of them.
I'm not saying there is no room for improvement, but it should happen in more respectful and constructive ways. In any case, I hope we share a common interest and passion for getting more test coverage to avoid future regressions. Given that we had two regressions caught during code review in the past and a pending regression to fix at this moment in the memory sealing area, the benefit of additional test coverage is obvious.
Specifically on the FAIL_TEST_IF_FALSE macro: during the course of this discussion, your comments started with, and I quote:
“ Why do we not have a FAIL_TEST_IF_TRUE()? This is crazy. Would be nice to have something human-readable like ASSERT_EQ() or ASSERT_TRUE() or ASSERT_FALSE().”
“This is beyond horrible. You really have to add more asserts.”
To which I responded:
“ASSERT_EQ and ASSERT_TRUE are not recommended by the self-test. The FAIL_TEST_IF_FALSE wrapper will take care of some of the admin tasks related to the self-test infra, such as counting how many tests are failing.”
And your question: “why we have to assert negations only, and other self tests do not do this.”
And my response: "My most test-infra related comments comes from Muhammad Usama Anjum" (added into this email), e.g. assert is not recommended.[1] , [1] https://lore.kernel.org/all/148fc789-3c03-4490-a653-6a4e58f336b6@collabora.c..."
And my additional attempt to clarify your question about negations: “So it is not a problem with the macro, but with where it is used? ret = sys_mseal(ptr, size); FAIL_TEST_IF_FALSE(!ret); Take this example, it would be assert(!ret). A syscall usually returns 0 to indicate success and a negative value on error, and developers are used to (!ret); it is a common pattern for checking a syscall return code, e.g. assert(!ret)”
And I offer an alternative approach for macro naming: "ret = sys_mseal(ptr, size); TEST_EXPECT_TRUE(!ret);"
Which you didn’t respond to directly.
Given the situation, I think it might be best to let domain experts from the test-infra team, such as Muhammad, suggest a better replacement for this macro.
Best regards, -Jeff