From: Brandon Kammerdiener <brandon.kammerdiener@intel.com>
[ Upstream commit 75673fda0c557ae26078177dd14d4857afbf128d ]
The _safe variant used here gets the next element before running the callback, avoiding the endless loop condition.
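For illustration only (not part of the upstream patch): a minimal userspace C sketch of the pattern this fix relies on, caching the successor pointer before invoking the callback so that a callback which unlinks and reinserts the current element cannot redirect the walk back over itself. The names here (struct node, walk_safe, print_key) are hypothetical stand-ins for the kernel's hlist_nulls iterators and hashtab callback machinery.

	#include <stdio.h>

	struct node {
		int key;
		struct node *next;
	};

	/*
	 * The essence of the _safe iterator: read cur->next *before*
	 * handing cur to the callback.  Even if the callback unlinks
	 * cur and splices it back in elsewhere, traversal continues
	 * from the cached pointer and terminates.  A naive loop that
	 * reads cur->next *after* the callback can revisit elements
	 * forever if cur gets reinserted ahead of the cursor.
	 */
	static void walk_safe(struct node *head, void (*cb)(struct node *))
	{
		struct node *cur, *next;

		for (cur = head; cur; cur = next) {
			next = cur->next;	/* cache successor first */
			cb(cur);		/* cb may unlink or move cur */
		}
	}

	static void print_key(struct node *n)
	{
		printf("key=%d\n", n->key);
	}

	int main(void)
	{
		struct node c = { .key = 3, .next = NULL };
		struct node b = { .key = 2, .next = &c };
		struct node a = { .key = 1, .next = &b };

		walk_safe(&a, print_key);
		return 0;
	}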
Signed-off-by: Brandon Kammerdiener <brandon.kammerdiener@intel.com>
Link: https://lore.kernel.org/r/20250424153246.141677-2-brandon.kammerdiener@intel...
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/bpf/hashtab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 3ec941a0ea41c..ff5cec99293a3 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2223,7 +2223,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
 		b = &htab->buckets[i];
 		rcu_read_lock();
 		head = &b->head;
-		hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) {
+		hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) {
 			key = elem->key;
 			if (is_percpu) {
 				/* current cpu value for percpu map */