On 3/5/24 17:18, Christophe Leroy wrote:
On 05/03/2024 at 19:14, Sean Anderson wrote:
Hi,
On 2/23/24 11:02, Sean Anderson wrote:
On 2/23/24 00:38, Christophe Leroy wrote:
On 22/02/2024 at 18:07, Sean Anderson wrote:
cgr_lock may be locked with interrupts already disabled by smp_call_function_single. As such, we must use a raw spinlock to avoid problems on PREEMPT_RT kernels.

Although this bug has existed for a while, it was not apparent until commit ef2a8d5478b9 ("net: dpaa: Adjust queue depth on rate change") which invokes smp_call_function_single via qman_update_cgr_safe every time a link goes up or down.
Why a raw spinlock to avoid problems on PREEMPT_RT, can you elaborate ?
smp_call_function always runs its callback in hard IRQ context, even on PREEMPT_RT, where spinlocks can sleep. So we need to use raw spinlocks to ensure we aren't waiting on a sleeping task. See the first bug report for more discussion.
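To make that concrete, here is a minimal sketch of the pattern (the struct and function names are hypothetical stand-ins, loosely modeled on the qbman driver, not the actual code):

	#include <linux/smp.h>
	#include <linux/spinlock.h>

	struct example_portal {			/* hypothetical stand-in for the portal struct */
		raw_spinlock_t cgr_lock;	/* was: spinlock_t cgr_lock; */
	};

	/* smp_call_function_single() runs this in hard IRQ context on the
	 * target CPU, even on PREEMPT_RT. On PREEMPT_RT a spinlock_t is a
	 * sleeping lock and must not be taken here; a raw_spinlock_t is a
	 * true spinning lock and is safe.
	 */
	static void example_cgr_callback(void *info)	/* hypothetical */
	{
		struct example_portal *p = info;
		unsigned long irqflags;

		raw_spin_lock_irqsave(&p->cgr_lock, irqflags);
		/* ... update congestion-group state ... */
		raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags);
	}

	static void example_update_cgr_safe(struct example_portal *p, int cpu)
	{
		/* Runs example_cgr_callback() on @cpu in hard IRQ context */
		smp_call_function_single(cpu, example_cgr_callback, p, true);
	}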
In the longer term it would be better to switch to some other abstraction.
Does this make sense to you?
Yes, that's fine, thanks for the clarification. Maybe you can explain that in the patch description in case you send a v5.
Hm, I thought I put this description in the commit message already. Maybe something like
| smp_call_function always runs its callback in hard IRQ context, even on
| PREEMPT_RT, where spinlocks can sleep. So we need to use a raw spinlock
| for cgr_lock to ensure we aren't waiting on a sleeping task.
|
| Although this bug has existed for a while, it was not apparent until
| commit ef2a8d5478b9 ("net: dpaa: Adjust queue depth on rate change")
| which invokes smp_call_function_single via qman_update_cgr_safe every
| time a link goes up or down.
would be clearer.
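For reference, the mechanical part of the change is roughly the following (a sketch of the conversion, not the exact diff; the irqflags variable name is an assumption):

	-	spinlock_t cgr_lock;
	+	raw_spinlock_t cgr_lock;

and at each locking site:

	-	spin_lock_irqsave(&p->cgr_lock, irqflags);
	+	raw_spin_lock_irqsave(&p->cgr_lock, irqflags);
	-	spin_unlock_irqrestore(&p->cgr_lock, irqflags);
	+	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags);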
--Sean