On Fri, Oct 18, 2024 at 11:01:40AM -0700, Jeff Xu wrote:
Jeff, you're falling into your usual pattern of being unreasonably argumentative for apparently no reason and I really don't have time to constantly respond inline when you're just ignoring what I tell you.
You do this on nearly all code review and this just isn't working. If you want to effectively be involved in mseal you need to LISTEN.
The more you do this the less patient everybody gets with you and the less likely your series will ever get merged. This is not good for mseal or anybody involved.
On this issue - either use sensible macros that YOU ARE DEFINING, not assert.h, but YOU, that allow you to evaluate whether a condition is true or false - or I will not accept your unreadable test code.
It's that simple and I'm done discussing this.
Thanks for your time discussing this.
Please, if I may say, when presenting your argument, keep it technical and avoid personal attacks. Using personal attacks rather than logic to refute an argument is the “ad hominem” fallacy [1] and will make it harder to get your point across.
This is not how ad hominem works, Jeff, it's a common misunderstanding of what that means.
It'd be the case if in replying to a specific technical point I was to respond with something personal. That is not what is happening here.
In fact it is not personal at all - code review consists of 1. technical points and 2. how well the conversation goes on review. If 2 is absolutely failing, it is absolutely fair and pertinent to bring that up.
What is happening here is that we have spent several months trying to explain to you very very simple points:
1. Have macros that assert both truth and falsity so you don't have some inverted and unreadable assert (a sketch of what this could look like follows the list below).
2. Do not use arbitrary 'magic numbers'.
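For illustration only, a minimal sketch of both points. All names here are hypothetical, not the series' actual code; the ksft_test_result_*() calls are the standard kselftest.h reporting helpers, so the failure-counting concern is still handled:

#include "../kselftest.h"

/* Report one result while keeping the kselftest pass/fail bookkeeping. */
#define REPORT_RESULT(cond, what)					\
	do {								\
		if (cond)						\
			ksft_test_result_pass("%s\n", (what));		\
		else							\
			ksft_test_result_fail("%s at %s:%d\n",		\
					      (what), __FILE__, __LINE__); \
	} while (0)

/* Both polarities spelled out, so call sites read naturally. */
#define EXPECT_TRUE(cond)	REPORT_RESULT((cond), #cond)
#define EXPECT_FALSE(cond)	REPORT_RESULT(!(cond), "!(" #cond ")")

/* A named constant instead of a bare magic number at call sites. */
#define NR_SEAL_TEST_PAGES	4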
Instead of listening, you have been needlessly difficult in a way others are not. I can't spend chunks of my working day going point-by-point knowing you will ultimately say something like 'well I just don't want to' or simply ignore points.
It is a waste of both of our time, but it is what you do.
[1] https://www.txst.edu/philosophy/resources/fallacy-definitions/ad-hominem.htm....
Additionally, the mseal_test was reviewed by Muhammad Usama Anjum during the original RFC discussion. IIUC, Muhammad Usama Anjum has domain knowledge of the selftest infra, and I relied on Muhammad's comments and implemented all of them.
If I can't read the tests, I'm going to NACK the series, sorry.
I'm not saying there is no room for improvement, but it should happen in more respectful and constructive ways. In any case, I hope we share a common interest and passion for getting more test coverage to avoid future regressions. Given that we have had two regressions in the past during code reviews and a pending regression to fix at this moment in the memory sealing area, the benefit of additional test coverage is obvious.
You are repeatedly ignoring what you are being told by me and others. You have done this consistently to the point that you are taking up quite a bit of our time.
You do this on pretty much all code reviews. There is not just 'room for improvement', you are bordering on being impossible to deal with.
The benefit of additional test coverage is indeed obvious, but not if the tests are largely redundant, misleading or confused, which yours in fact are.
This is why I proposed that we sit down and figure out exactly what it is you want to test ahead of time, and NACKed the series (I'm not even quite sure why we are discussing this here still).
The fact you're still debating the use of ASSERT() and EXPECT() on the same series months later is not encouraging.
Specifically on the FAIL_TEST_IF_FALSE macro: during the course of this discussion, your comments have started with, and I quote:
“Why do we not have a FAIL_TEST_IF_TRUE()? This is crazy. Would be nice to have something human-readable like ASSERT_EQ() or ASSERT_TRUE() or ASSERT_FALSE().”
“This is beyond horrible. You really have to add more asserts.”
To which my response was:
“ASSERT_EQ and ASSERT_TRUE are not recommended by the selftests. The FAIL_TEST_IF_FALSE wrapper will take care of some of the admin tasks related to the selftest infra, such as counting how many tests are failing.”
And your question: “why do we have to assert negations only, when other selftests do not do this?”
And my response: "My most test-infra related comments comes from Muhammad Usama Anjum" (added into this email), e.g. assert is not recommended.[1] , [1] https://lore.kernel.org/all/148fc789-3c03-4490-a653-6a4e58f336b6@collabora.c..."
This is a PERFECT example of the problem.
You are excluding my response to this - that he said NO SUCH THING, but rather was talking about having an EXPECT()-like pattern!
You repeatedly do this - you ignore responses that contradict you. This is not a functional way to do code review.
And my additional attempt to clarify your question about negations: “So it is not a problem with the macro, but with where it is used? ret = sys_mseal(ptr, size); FAIL_TEST_IF_FALSE(!ret); Take this example: it would be assert(!ret). A syscall usually returns 0 to indicate success and a negative value on failure, and developers should be used to (!ret); it is a common pattern for checking syscall return codes, e.g. assert(!ret).”
And I offered an alternative approach for macro naming: “ret = sys_mseal(ptr, size); TEST_EXPECT_TRUE(!ret);”
Which you didn’t respond to directly.
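To make the readability point concrete, compare the two call-site styles (a sketch only; EXPECT_EQ() stands in for whichever direct-assertion macro ends up chosen):

	ret = sys_mseal(ptr, size);

	FAIL_TEST_IF_FALSE(!ret);	/* inverted: assert-false of a negation */
	EXPECT_EQ(ret, 0);		/* direct: states the expectation as written */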
Because we are going round in circles on 2 very very simple points and I don't have infinite time.
Simply introduce ASSERT_TRUE(), ASSERT_FALSE(), EXPECT_TRUE(), EXPECT_FALSE(), perhaps with different spellings (no idea why you would want to do that), or use the test harness.
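For reference, a minimal sketch of the harness route, assuming the stock tools/testing/selftests/kselftest_harness.h macros; the sys_mseal() wrapper and the test body are illustrative, not the actual mseal tests:

#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include "../kselftest_harness.h"

/*
 * Assumed thin wrapper over the raw syscall, as selftests commonly do;
 * __NR_mseal requires recent kernel headers.
 */
static int sys_mseal(void *ptr, size_t size)
{
	return syscall(__NR_mseal, ptr, size, 0);
}

TEST(mseal_blocks_munmap)
{
	const size_t size = getpagesize();
	void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	ASSERT_NE(MAP_FAILED, ptr);	/* ASSERT_*() aborts the test on failure */
	ASSERT_EQ(0, sys_mseal(ptr, size));
	EXPECT_EQ(-1, munmap(ptr, size));	/* EXPECT_*() logs a failure and continues */
}

TEST_HARNESS_MAIN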
Instead you try to debate 'oh this is kind of like assert' or implying somebody said something or etc. etc. over dozens of emails.
Given the situation, I think it might be best to let domain experts from the test-infra team, such as Muhammad, suggest a better replacement for this macro.
I'm not sure what kind of 'domain expertise' is required for you to not use magic numbers and to not create unreadable expressions. This is basic stuff.
The 'situation' is that I've asked you for 2 extremely simple things that pretty well all other tests do and you are arguing about it 2 months later on a NACKed series.
Inviting others into the thread is not going to help that.
Best regards, -Jeff
As you know I've gone out of my way and dedicated quite a bit of time to trying to find various different solutions to this.
It is not far off 10pm here and I am sat here replying to this. I would please ask you to take a step back, stop being so defensive, and just listen.
Go read some other code reviews in mm and see how successful code reviews progress. Concede some points. Defer to domain expertise and to maintainers at least on occasion.
The reason I keep on bringing up what you characterise as 'ad hominem' (incorrectly, though it's a very common misunderstanding of what that means), is because the problem in these reviews isn't technical - it is your unwillingness to listen and engage with reviewers.
If that isn't fixed, I don't see how any of your series can progress.
And in general I don't think I can sink more time into giving as detailed feedback on the review process as this. I have tried to do what I can here.
Anyway, I hope that we can find a positive resolution to this as I've said before.
Thanks, Lorenzo