On 11/29, Bernd Edlinger wrote:
> On 11/23/25 19:32, Oleg Nesterov wrote:
>> I don't follow. Do you mean PREEMPT_RT?
>>
>> If yes. In this case spin_lock_irq() is rt_spin_lock(), which doesn't
>> disable irqs; it does rtlock_lock() (takes the rt_mutex) + migrate_disable().
>>
>> I do think that spin/mutex/whatever_unlock() is always safe. In any
>> order, and regardless of RT.
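(Hand-over-hand "chain" locking depends on exactly that. With two
hypothetical locks a and b:

	spin_lock(&a);
	spin_lock(&b);
	spin_unlock(&a);	/* not the reverse of the lock order, still fine */
	spin_unlock(&b);

no unlock primitive cares which lock was taken first, on RT or not.)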
> It is hard to follow how Linux implements spin_lock_irq exactly,
Yes ;)
> but to me it looks like it is done this way:
>
> include/linux/spinlock_api_smp.h:
>
> static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
> {
> 	local_irq_disable();
> 	preempt_disable();
> 	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
> 	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
> }
Again, I will assume you mean RT.
In this case spinlock_t and raw_spinlock_t are not the same thing.
include/linux/spinlock_types.h:
typedef struct spinlock {
	struct rt_mutex_base	lock;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
#endif
} spinlock_t;
include/linux/spinlock_rt.h:
static __always_inline void spin_lock_irq(spinlock_t *lock)
{
	rt_spin_lock(lock);
}
rt_spin_lock() doesn't disable irqs, it takes "rt_mutex_base lock" and disables migration.
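From memory, kernel/locking/spinlock_rt.c does roughly this, modulo
version details:

	static __always_inline void __rt_spin_lock(spinlock_t *lock)
	{
		rtlock_might_resched();
		rtlock_lock(&lock->lock);	/* rt_mutex fast/slow path, can sleep */
		rcu_read_lock();
		migrate_disable();		/* note: no irq/preempt disabling */
	}

	void __sched rt_spin_lock(spinlock_t *lock)
	{
		spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
		__rt_spin_lock(lock);
	}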
> so an explicit task switch while local_irq_disable() is in effect looks very dangerous to me.
raw_spin_lock_irq() disables irqs and preemption regardless of RT, so a task switch is not possible.
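To put the two side by side under PREEMPT_RT, a sketch (the function and
lock names here are made up, the comments are the point):

	static DEFINE_SPINLOCK(s);	/* sleeping lock on RT */
	static DEFINE_RAW_SPINLOCK(r);	/* real spinning lock, RT or not */

	static void sketch(void)
	{
		spin_lock_irq(&s);	/* rt_spin_lock(): can sleep on
					   contention, irqs stay enabled,
					   the section is preemptible */
		spin_unlock_irq(&s);

		raw_spin_lock_irq(&r);	/* local_irq_disable() +
					   preempt_disable(); no task
					   switch can happen here */
		raw_spin_unlock_irq(&r);
	}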
> Do you know other places where such a code pattern is used?
For example, double_lock_irq(). See task_numa_group():

	double_lock_irq(&my_grp->lock, &grp->lock);
	....
	spin_unlock(&my_grp->lock);
	spin_unlock_irq(&grp->lock);
depending on the addresses of the two locks, this can release them in the same order they were taken, i.e. not nested.
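For context, double_lock_irq() orders the two locks by address before
taking them; from memory, kernel/sched/sched.h has roughly:

	static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
	{
		if (l1 > l2)
			swap(l1, l2);

		spin_lock_irq(l1);
		spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
	}

so whether the spin_unlock(&my_grp->lock) above reverses the lock order
depends on nothing but the addresses of the two groups.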
I am sure there are more examples.
> I just ask because a close look at those might reveal some serious bugs, WDYT?
See above, I don't understand your concerns...
Oleg.