On Fri, Apr 11, 2025 at 11:37:25PM +0300, Jarkko Sakkinen wrote:
On Fri, Apr 11, 2025 at 04:59:11PM +0100, David Howells wrote:
> Jarkko Sakkinen <jarkko@kernel.org> wrote:
>
> > -	spin_lock_irqsave(&key_graveyard_lock, flags);
> > -	list_splice_init(&key_graveyard, &graveyard);
> > -	spin_unlock_irqrestore(&key_graveyard_lock, flags);
>
> I would wrap this bit in a check to see if key_graveyard is empty so
> that we can avoid disabling irqs and taking the lock if the graveyard
> is empty.
Can do, and does make sense.
> > 	if (!refcount_inc_not_zero(&key->usage)) {
>
> Sorry, but eww.  You're going to wangle the refcount twice on every
> key on the system every time the gc does a pass.  Further, in some
> cases inc_not_zero is not the fastest op in the world.
One could alternatively test "test_bit(KEY_FLAG_FINAL_PUT,
&key->flags) && !refcount_inc_not_zero(&key->usage)" without mb() on
either side, and thereby avoid touching the refcount for keys that
have already seen their final put.
Refactoring key_put() along these lines would look like this (draft):
void key_put(struct key *key)
{
	unsigned long flags;

	if (!key)
		return;

	key_check(key);

	if (!refcount_dec_and_test(&key->usage))
		return;

	local_irq_save(flags);
	set_bit(KEY_FLAG_FINAL_PUT, &key->flags);

	/* Remove user's key tracking and quota: */
	if (test_bit(KEY_FLAG_IN_QUOTA, &key->flags)) {
		spin_lock(&key->user->lock);
		key->user->qnkeys--;
		key->user->qnbytes -= key->quotalen;
		spin_unlock(&key->user->lock);
	}

	spin_lock(&key_graveyard_lock);
	list_add_tail(&key->graveyard_link, &key_graveyard);
	spin_unlock(&key_graveyard_lock);

	schedule_work(&key_gc_work);
	local_irq_restore(flags);
}
And:
static bool key_get_gc(struct key *key)
{
	return !test_bit(KEY_FLAG_FINAL_PUT, &key->flags) &&	/* fast path */
	       refcount_inc_not_zero(&key->usage);		/* slow path */
}
In the gc loop:

	if (!key_get_gc(key)) {
		key = NULL;
		gc_state |= KEY_GC_REAP_AGAIN;
		goto skip_dead_key;
	}
[none yet compiled]
BR, Jarkko