The intermediate product value_size * num_possible_cpus() is evaluated in 32-bit arithmetic and only then promoted to 64 bits. On systems with large value_size and many possible CPUs this can overflow and lead to an underestimated memory usage.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: 304849a27b34 ("bpf: hashtab memory usage")
Cc: stable@vger.kernel.org
Suggested-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Alexei Safin <a.safin@rosa.ru>
---
v2: Promote value_size to u64 at declaration to avoid 32-bit overflow
    in all arithmetic using this variable (suggested by Yafang Shao)

 kernel/bpf/hashtab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 570e2f723144..1f0add26ba3f 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2252,7 +2252,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
 static u64 htab_map_mem_usage(const struct bpf_map *map)
 {
 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
-	u32 value_size = round_up(htab->map.value_size, 8);
+	u64 value_size = round_up(htab->map.value_size, 8);
 	bool prealloc = htab_is_prealloc(htab);
 	bool percpu = htab_is_percpu(htab);
 	bool lru = htab_is_lru(htab);
On Fri, Nov 7, 2025 at 6:03 PM Alexei Safin <a.safin@rosa.ru> wrote:
>
> The intermediate product value_size * num_possible_cpus() is evaluated
> in 32-bit arithmetic and only then promoted to 64 bits. On systems with
> large value_size and many possible CPUs this can overflow and lead to an
> underestimated memory usage.
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE.
>
> Fixes: 304849a27b34 ("bpf: hashtab memory usage")
> Cc: stable@vger.kernel.org
> Suggested-by: Yafang Shao <laoar.shao@gmail.com>
> Signed-off-by: Alexei Safin <a.safin@rosa.ru>

Acked-by: Yafang Shao <laoar.shao@gmail.com>

> ---
> v2: Promote value_size to u64 at declaration to avoid 32-bit overflow
>     in all arithmetic using this variable (suggested by Yafang Shao)
>
>  kernel/bpf/hashtab.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 570e2f723144..1f0add26ba3f 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -2252,7 +2252,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
>  static u64 htab_map_mem_usage(const struct bpf_map *map)
>  {
>  	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
> -	u32 value_size = round_up(htab->map.value_size, 8);
> +	u64 value_size = round_up(htab->map.value_size, 8);
>  	bool prealloc = htab_is_prealloc(htab);
>  	bool percpu = htab_is_percpu(htab);
>  	bool lru = htab_is_lru(htab);
> --
> 2.50.1 (Apple Git-155)
On Fri, 7 Nov 2025 13:03:05 +0300 Alexei Safin <a.safin@rosa.ru> wrote:

> The intermediate product value_size * num_possible_cpus() is evaluated
> in 32-bit arithmetic and only then promoted to 64 bits. On systems with
> large value_size and many possible CPUs this can overflow and lead to an
> underestimated memory usage.
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE.

That code is insane. The size being calculated looks like a kernel
memory size. You really don't want to be allocating single structures
that exceed 4GB.

	David

> Fixes: 304849a27b34 ("bpf: hashtab memory usage")
> Cc: stable@vger.kernel.org
> Suggested-by: Yafang Shao <laoar.shao@gmail.com>
> Signed-off-by: Alexei Safin <a.safin@rosa.ru>
>
> ---
> v2: Promote value_size to u64 at declaration to avoid 32-bit overflow
>     in all arithmetic using this variable (suggested by Yafang Shao)
>
>  kernel/bpf/hashtab.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 570e2f723144..1f0add26ba3f 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -2252,7 +2252,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
>  static u64 htab_map_mem_usage(const struct bpf_map *map)
>  {
>  	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
> -	u32 value_size = round_up(htab->map.value_size, 8);
> +	u64 value_size = round_up(htab->map.value_size, 8);
>  	bool prealloc = htab_is_prealloc(htab);
>  	bool percpu = htab_is_percpu(htab);
>  	bool lru = htab_is_lru(htab);