On Thu, Nov 20, 2025 at 09:05:57PM +0800, Sun Shaojie <sunshaojie@kylinos.cn> wrote:
> > Do you actually want to achieve this or is it an implementation side-effect of the Case 1 scenario that you want to achieve?
>
> Yes, this is indeed the functionality I intended to achieve, as it follows the same logic as Case 1.
So you want a stable [1] set of CPUs for a cgroup, one that cannot be taken away by any sibling, correct? My reasoning is that the siblings should be under one management entity, and therefore such overcommitment should already be avoided in the configuration. Invalidating all conflicting siblings is then the fairest result achievable. Is B1 a second-class partition _only_ because it starts later, or why is it OK not to fulfill its requirement?
[1] Note that A1 should still watch its cpuset.cpus.partition if it takes exclusivity seriously because its cpus may be taken away by hot(un)plug or ancestry reconfiguration.
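E.g. a minimal check could look like this (a sketch only; the /sys/fs/cgroup/A1 path is an assumed layout, and the "root invalid" reading is the documented cgroup v2 behavior):

  # Sketch: detect that partition A1 was invalidated, e.g. by CPU
  # hot(un)plug; assumes cgroup2 is mounted at /sys/fs/cgroup.
  state=$(cat /sys/fs/cgroup/A1/cpuset.cpus.partition)
  case "$state" in
  "root invalid"*)
          # the kernel appends a human-readable reason in parentheses
          echo "A1 lost its partition: $state" ;;
  esac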
> As for your point that "the effective config cannot be derived just from the applied values", even before this patch we couldn't derive the final effective configuration solely from the applied values.
> For example, consider the following scenario (without this patch applied):
>
> Table 1:
>   Step                                        | A1's prstate | B1's prstate |
>   #1> echo "0-1" > A1/cpuset.cpus             | member       | member       |
>   #2> echo "root" > A1/cpuset.cpus.partition  | root         | member       |
>   #3> echo "1-2" > B1/cpuset.cpus             | root invalid | member       |
> Table 2:
>   Step                                        | A1's prstate | B1's prstate |
>   #1> echo "1-2" > B1/cpuset.cpus             | member       | member       |
>   #2> echo "root" > A1/cpuset.cpus.partition  | root invalid | member       |
>   #3> echo "0-1" > A1/cpuset.cpus             | root         | member       |
>
> After step #3, the applied settings in Table 1 and Table 2 are identical, yet A1's partition state differs between them.
Aha, I must admit I didn't expect that. IMO, nothing (documented) prevents the latter (Table 2) behavior (here I'm referring to cpuset.cpus; I'm not sure about cpuset.cpus.exclusive). Which of Table 1 or Table 2 do you prefer?
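FWIW, both orderings can be replayed with something like the following sketch (assuming cgroup2 mounted at /sys/fs/cgroup and A1/B1 created fresh for each run; adjust paths as needed):

  #!/bin/sh
  cg=/sys/fs/cgroup
  echo "+cpuset" > "$cg/cgroup.subtree_control"
  mkdir -p "$cg/A1" "$cg/B1"

  # Table 1 ordering: A1 becomes a partition root first; B1's cpus
  # then overlap it and A1 ends up "root invalid".
  echo "0-1"  > "$cg/A1/cpuset.cpus"
  echo "root" > "$cg/A1/cpuset.cpus.partition"
  echo "1-2"  > "$cg/B1/cpuset.cpus"
  cat "$cg/A1/cpuset.cpus.partition"    # "root invalid (...)"

  # Table 2 ordering (run on a clean hierarchy): the same final
  # values, written in a different order, leave A1 a valid "root".
  echo "1-2"  > "$cg/B1/cpuset.cpus"
  echo "root" > "$cg/A1/cpuset.cpus.partition"   # "root invalid" here, per Table 2
  echo "0-1"  > "$cg/A1/cpuset.cpus"
  cat "$cg/A1/cpuset.cpus.partition"    # "root"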
Thanks,
Michal