FYI
https://bugzilla.kernel.org/show_bug.cgi?id=220745
-------- Forwarded Message --------
Subject: Re: Compile Error fs/nfsd/nfs4state.o - clamp() low limit slotsize greater than high limit total_avail/scale_factor
Date: Thu, 06 Nov 2025 07:29:25 -0500
From: Jeff Layton <jlayton@kernel.org>
To: Mike-SPC via Bugspray Bot <bugbot@kernel.org>, cel@kernel.org, neilb@ownmail.net, trondmy@kernel.org, linux-nfs@vger.kernel.org, anna@kernel.org, neilb@brown.name
On Thu, 2025-11-06 at 11:30 +0000, Mike-SPC via Bugspray Bot wrote:
Mike-SPC writes via Kernel.org Bugzilla:
(In reply to Bugspray Bot from comment #5)
Chuck Lever cel@kernel.org replies to comment #4:
On 11/5/25 7:25 AM, Mike-SPC via Bugspray Bot wrote:
Mike-SPC writes via Kernel.org Bugzilla:
Have you found a 6.1.y kernel for which the build doesn't fail?
Yes. Compiling version 6.1.155 works without problems; versions >= 6.1.156 don't.
My analysis yesterday suggests that, because the nfs4state.c code hasn't changed, it's probably something elsewhere that introduced this problem. As we can't reproduce the issue, can you use "git bisect" between v6.1.155 and v6.1.156 to find the culprit commit?
(via https://msgid.link/ab235dbe-7949-4208-a21a-2cdd50347152@kernel.org)
Yes, your analysis is right (thanks for it). After some investigation, the issue appears to be caused by changes introduced in include/linux/minmax.h.
I verified this by replacing minmax.h in 6.1.156 with the version from 6.1.155, and the kernel then compiles successfully.
The relevant section in the 6.1.156 changelog (https://cdn.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.1.156) shows several modifications to minmax.h (notably around __clamp_once() and the use of BUILD_BUG_ON_MSG(statically_true(ulo > uhi), ...)), which seem to trigger a compile-time assertion when building NFSD.
Replacing the updated header with the previous one resolves the issue, so this appears to be a regression introduced by the new clamp() logic.
Could you please advise who is the right person or mailing list to report this issue to (minmax.h maintainers, kernel core, or stable tree)?
I'd let all 3 know, and I'd include the author of the patches that you suspect are the problem. They'll probably want to revise the one that's a problem.
Cheers,
On Thu, 6 Nov 2025 09:33:28 -0500 Chuck Lever cel@kernel.org wrote:
FYI
Ugh - that code is horrid. It seems to have been deleted since, but it is:
	u32 slotsize = slot_bytes(ca);
	u32 num = ca->maxreqs;
	unsigned long avail, total_avail;
	unsigned int scale_factor;
	spin_lock(&nfsd_drc_lock);
	if (nfsd_drc_max_mem > nfsd_drc_mem_used)
		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
	else
		/* We have handed out more space than we chose in
		 * set_max_drc() to allow. That isn't really a
		 * problem as long as that doesn't make us think we
		 * have lots more due to integer overflow.
		 */
		total_avail = 0;
	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
	/*
	 * Never use more than a fraction of the remaining memory,
	 * unless it's the only way to give this client a slot.
	 * The chosen fraction is either 1/8 or 1/number of threads,
	 * whichever is smaller. This ensures there are adequate
	 * slots to support multiple clients per thread.
	 * Give the client one slot even if that would require
	 * over-allocation--it is better than failure.
	 */
	scale_factor = max_t(unsigned int, 8, nn->nfsd_serv->sv_nrthreads);
	avail = clamp_t(unsigned long, avail, slotsize, total_avail/scale_factor);
	num = min_t(int, num, avail / slotsize);
	num = max_t(int, num, 1);
Let's rework it a bit...

	if (nfsd_drc_max_mem > nfsd_drc_mem_used) {
		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
		avail = min(NFSD_MAX_MEM_PER_SESSION, total_avail);
		avail = clamp(avail, n + sizeof(xxx), total_avail/8);
	} else {
		total_avail = 0;
		avail = 0;
		avail = clamp(0, n + sizeof(xxx), 0);
	}
Neither of those clamp() calls is sane at all - it should be clamp(val, lo, hi) with 'lo <= hi'; otherwise the result is dependent on the order of the comparisons. The compiler sees the second one and rightly bleats. I can't even guess what the code is actually trying to calculate!
Maybe looking at where the code came from, or the current version might help.
It MIGHT be that the 'lo' of slotsize was an attempt to ensure that the following 'avail / slotsize' was at least one. Some software archaeology might show that 'num = max(num, 1)' was added because the code above didn't work. In that case the clamp can be clamp(avail, 0, total_avail/scale_factor), which is just min(avail, total_avail/scale_factor).
The person who rewrote it between 6.1 and 6.18 might know more.
David
On 11/6/25 2:22 PM, David Laight wrote:
Neither of those clamp() calls is sane at all - it should be clamp(val, lo, hi) with 'lo <= hi'; otherwise the result is dependent on the order of the comparisons. The compiler sees the second one and rightly bleats. I can't even guess what the code is actually trying to calculate!
Maybe looking at where the code came from, or the current version might help.
The current upstream code is part of a new feature that is not appropriate to backport to LTS kernels. I consider that code out of play.
The compiler error showed up in 6.1.y with the recent minmax.h changes -- there have been no reported problems in any of the LTS kernels until now, including with 32-bit builds.
The usual guidelines about regressions suggest that the most recent backports (i.e., minmax.h) are the ones that should be removed or reworked to address the compile breakage. I don't think we should address this by writing special clean-ups for code that wasn't broken before the minmax.h changes; cleaning that code up is more likely to introduce bugs than reverting the minmax.h changes.
On Thu, 6 Nov 2025 14:32:34 -0500 Chuck Lever cel@kernel.org wrote:
The usual guidelines about regressions suggest that the most recent backports (i.e., minmax.h) are the ones that should be removed or reworked to address the compile breakage. I don't think we should address this by writing special clean-ups for code that wasn't broken before the minmax.h changes; cleaning that code up is more likely to introduce bugs than reverting the minmax.h changes.
No, that code needs fixing. It is broken... The compiler warning/error is completely valid. The result of that clamp() has never been well defined. It is likely that it always generated the wrong result. It might be that a much older version of the function exists from before someone changed a pair of conditionals into a call to clamp(). That old version may well be fine.
David
On Fri, 07 Nov 2025, David Laight wrote:
Neither of those clamp() calls is sane at all - it should be clamp(val, lo, hi) with 'lo <= hi'; otherwise the result is dependent on the order of the comparisons. The compiler sees the second one and rightly bleats.
In fact only gcc-9 bleats.
gcc-7 gcc-10 gcc-13 gcc-15 all seem to think it is fine.
NeilBrown
On Fri, 07 Nov 2025 22:17:20 +1100 NeilBrown neilb@ownmail.net wrote:
In fact only gcc-9 bleats.
That is probably why it didn't get picked up earlier.
gcc-7 gcc-10 gcc-13 gcc-15 all seem to think it is fine.
Which, of course, it isn't...
David
On Fri, 07 Nov 2025, David Laight wrote:
gcc-7 gcc-10 gcc-13 gcc-15 all seem to think it is fine.
Which, of course, it isn't...
I've now had a proper look at your analysis of the code - thanks.
I agree that the code is unclear (at best) and that, if it were still upstream, I would want to fix it. However, it does function correctly.
As you say, when min > max, the result of clamp(val, min, max) depends on the order of comparison, and we know what the order of comparison is because we can look at the code for clamp().
Currently it is
((val) >= (hi) ? (hi) : ((val) <= (lo) ? (lo) : (val)))
which will use max when max is below val and min. Previously it was min((typeof(val))max(val, lo), hi), which also will use max when it is below val and min.
Before that it was

#define clamp_t(type, val, min, max) ({			\
	type __val = (val);				\
	type __min = (min);				\
	type __max = (max);				\
	__val = __val < __min ? __min : __val;		\
	__val > __max ? __max : __val; })
which also uses max when that is less than val and min.
So I think the nfsd code has always worked correctly. That is not sufficient for mainline - there we want it to also be robust and maintainable. But for stable kernels it should be sufficient. Adding a patch to "stable" kernels which causes working code to fail to compile does not seem, to me, to be in the spirit of "stability". (Having the "clamp" checking in mainline, finding problems there, and backporting the fixes to stable seems to me the best way to use these checking improvements to improve "stable".)
Thanks, NeilBrown