Hi Greg,
Please cherry-pick this patch series into 5.10.y stable. It
includes a feature that fixes CVE-2022-0500, which allows a user with
CAP_BPF privileges to escalate to root. The patch that fixes
the bug is
patch 6/8: bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM
The rest are the dependencies required by the fix patch.
This patchset was merged in mainline v5.17 and has been backported to
v5.16 [1] and v5.15 [2].
Tested by compiling and building, and by running it through the bpf
selftest test_progs.
Before:
./test_progs -t ksyms_btf/write_check
test_ksyms_btf:PASS:btf_exists 0 nsec
test_write_check:FAIL:skel_open unexpected load of a prog writing to ksym memory
#44/3 write_check:FAIL
#44 ksyms_btf:FAIL
Summary: 0/0 PASSED, 0 SKIPPED, 2 FAILED
After:
./test_progs -t ksyms_btf/write_check
#44/3 write_check:OK
#44 ksyms_btf:OK
Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED
[1] https://lore.kernel.org/all/Yg6cixLJFoxDmp+I@kroah.com/
[2] https://lore.kernel.org/all/Ymupcl2JshcWjmMD@kroah.com/
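For context, the selftest added in patch 8/8 checks that the verifier
rejects writes through pointers returned by bpf_per_cpu_ptr(). A minimal
sketch of that pattern (illustrative, not the exact selftest source; the
ksym, section and program name are assumptions):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern const int bpf_prog_active __ksym;	/* any per-cpu ksym will do */

SEC("raw_tp/sys_enter")
int write_check(const void *ctx)
{
	int *active;

	active = (int *)bpf_per_cpu_ptr(&bpf_prog_active,
					bpf_get_smp_processor_id());
	if (active)
		*active = -1;	/* store to rdonly PTR_TO_MEM: must fail to load */
	return 0;
}

char _license[] SEC("license") = "GPL";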
Hao Luo (8):
bpf: Introduce composable reg, ret and arg types.
bpf: Replace ARG_XXX_OR_NULL with ARG_XXX | PTR_MAYBE_NULL
bpf: Replace RET_XXX_OR_NULL with RET_XXX | PTR_MAYBE_NULL
bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX | PTR_MAYBE_NULL
bpf: Introduce MEM_RDONLY flag
bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM.
bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
bpf/selftests: Test PTR_TO_RDONLY_MEM
include/linux/bpf.h | 98 +++-
include/linux/bpf_verifier.h | 18 +
kernel/bpf/btf.c | 8 +-
kernel/bpf/cgroup.c | 2 +-
kernel/bpf/helpers.c | 10 +-
kernel/bpf/map_iter.c | 4 +-
kernel/bpf/ringbuf.c | 2 +-
kernel/bpf/verifier.c | 477 +++++++++---------
kernel/trace/bpf_trace.c | 22 +-
net/core/bpf_sk_storage.c | 2 +-
net/core/filter.c | 62 +--
net/core/sock_map.c | 2 +-
.../selftests/bpf/prog_tests/ksyms_btf.c | 14 +
.../bpf/progs/test_ksyms_btf_write_check.c | 29 ++
14 files changed, 441 insertions(+), 309 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
--
2.47.1
Hello,
this is a follow-up to
https://lore.kernel.org/stable/cover.1749223334.git.u.kleine-koenig@baylibr…
that handled backporting the two patches by Alexandre to the active
stable kernels between 6.15 and 5.15. Here comes a backport to 5.10.y;
git am handles application to 5.4.y just fine.
Compared to the backport for later kernels, I included a major rework
of rtc_time64_to_tm() by Cassio Neri. (FTR: I checked that the commit
by Cassio Neri isn't the reason we need to fix rtc_time64_to_tm(); the
actual problem is older.)
Now that I have completed the backport and done some final checks on
it, I noticed that the problem fixed here is (TTBOMK) a theoretical one,
because only drivers with .start_secs < 0 are known to have issues, and
in 5.10 and before there is no such driver. I'm uncertain whether this
should mean the changes are not backported; I would tend to pick them
anyhow, but I won't argue against a veto.
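To illustrate what the first patch fixes: rtc_time64_to_tm() used to
produce wrong results for times before the epoch. A minimal sketch of
the expected behaviour (illustrative, not the exact KUnit test source):

	struct rtc_time tm;

	/* One second before the epoch is 1969-12-31 23:59:59 UTC. */
	rtc_time64_to_tm(-1, &tm);
	/* Expect tm_year == 69 (years since 1900), tm_mon == 11 (0-based),
	 * tm_mday == 31, tm_hour == 23, tm_min == 59, tm_sec == 59.
	 */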
Best regards
Uwe
Alexandre Mergnat (2):
rtc: Make rtc_time64_to_tm() support dates before 1970
rtc: Fix offset calculation for .start_secs < 0
Cassio Neri (1):
rtc: Improve performance of rtc_time64_to_tm(). Add tests.
drivers/rtc/Kconfig | 10 ++++
drivers/rtc/Makefile | 1 +
drivers/rtc/class.c | 2 +-
drivers/rtc/lib.c | 121 ++++++++++++++++++++++++++++++++---------
drivers/rtc/lib_test.c | 79 +++++++++++++++++++++++++++
5 files changed, 185 insertions(+), 28 deletions(-)
create mode 100644 drivers/rtc/lib_test.c
base-commit: 01e7e36b8606e5d4fddf795938010f7bfa3aa277
--
2.49.0
From: Jakub Kicinski <kuba(a)kernel.org>
commit f22b4b55edb507a2b30981e133b66b642be4d13f upstream.
I find the behavior of xa_for_each_start() slightly counter-intuitive.
It doesn't end the iteration by making the index point after the last
element. IOW calling xa_for_each_start() again after it "finished"
will run the body of the loop for the last valid element, instead
of doing nothing.
This works fine for netlink dumps if they terminate correctly
(i.e. coalesce or carefully handle NLM_DONE), but as we keep getting
reminded, legacy dumps are unlikely to go away.
Fixing this generically at the xa_for_each_start() level seems hard -
there is no index reserved for "end of iteration".
ifindexes are 31 bits wide, though, and the iterator is an unsigned long,
so for for_each_netdev_dump() it's safe to go to the next element.
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel(a)intel.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Jeremy Kerr <jk(a)codeconstruct.com.au>
---
The mctp RTM_GETADDR rework backport of acab78ae12c7 ("net: mctp: Don't
access ifa_index when missing") pulled 2d45eeb7d5d7 ("mctp: no longer
rely on net->dev_index_head[]") as a dependency. However, that change
relies on this backport for correct behaviour of for_each_netdev_dump().
Jakub mentions[1] that nothing should be relying on the old behaviour of
for_each_netdev_dump(), hence the backport.
[1]: https://lore.kernel.org/netdev/20250609083749.741c27f5@kernel.org/
This backport is only applicable to 6.6.y; the change hit upstream in
6.10.
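For illustration, a legacy-style resumable dump built on the macro looks
roughly like this (a sketch; nl_put_dev() is a hypothetical fill helper
and locking is elided):

static int some_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
	struct net *net = sock_net(skb->sk);
	unsigned long ifindex = cb->args[0];
	struct net_device *dev;

	for_each_netdev_dump(net, dev, ifindex) {
		if (nl_put_dev(skb, dev) < 0)
			break;
	}
	/* With the old macro a dump that ran to completion left ifindex
	 * at the last device, so the next invocation replayed it; with
	 * this fix ifindex points past the last device and the loop body
	 * does not run again.
	 */
	cb->args[0] = ifindex;
	return skb->len;
}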
---
include/linux/netdevice.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0b0a172337dbac5716e5e5556befd95b4c201f5b..030d9de2ba2d23aa80b4b02182883f022f553964 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3036,7 +3036,8 @@ extern rwlock_t dev_base_lock; /* Device list lock */
#define net_device_entry(lh) list_entry(lh, struct net_device, dev_list)
#define for_each_netdev_dump(net, d, ifindex) \
- xa_for_each_start(&(net)->dev_by_index, (ifindex), (d), (ifindex))
+ for (; (d = xa_find(&(net)->dev_by_index, &ifindex, \
+ ULONG_MAX, XA_PRESENT)); ifindex++)
static inline struct net_device *next_net_device(struct net_device *dev)
{
---
base-commit: c2603c511feb427b2b09f74b57816a81272932a1
change-id: 20250610-nl-dump-618700905d4f
Best regards,
--
Jeremy Kerr <jk(a)codeconstruct.com.au>
Fix the following compilation warning:
In file included from ./include/linux/kernel.h:15,
from ./include/linux/list.h:9,
from ./include/linux/module.h:12,
from net/ipv4/inet_hashtables.c:12:
net/ipv4/inet_hashtables.c: In function ‘inet_ehash_locks_alloc’:
./include/linux/minmax.h:20:35: warning: comparison of distinct pointer types lacks a cast
20 | (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
| ^~
./include/linux/minmax.h:26:18: note: in expansion of macro ‘__typecheck’
26 | (__typecheck(x, y) && __no_side_effects(x, y))
| ^~~~~~~~~~~
./include/linux/minmax.h:36:31: note: in expansion of macro ‘__safe_cmp’
36 | __builtin_choose_expr(__safe_cmp(x, y), \
| ^~~~~~~~~~
./include/linux/minmax.h:52:25: note: in expansion of macro ‘__careful_cmp’
52 | #define max(x, y) __careful_cmp(x, y, >)
| ^~~~~~~~~~~~~
net/ipv4/inet_hashtables.c:946:19: note: in expansion of macro ‘max’
946 | nblocks = max(nblocks, num_online_nodes() * PAGE_SIZE / locksz);
| ^~~
When warnings are treated as errors, this causes the build to fail.
The issue is a type mismatch between the operands passed to the max()
macro. Here, nblocks is an unsigned int, while the expression
num_online_nodes() * PAGE_SIZE / locksz is promoted to unsigned long.
This happens because:
- num_online_nodes() returns int
- PAGE_SIZE is typically defined as an unsigned long (depending on the
architecture)
- locksz is unsigned int
The resulting arithmetic expression is promoted to unsigned long.
Thus, the max() macro compares values of different types: unsigned int
vs unsigned long.
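The promotion is easy to reproduce in isolation (a userspace sketch of
the usual arithmetic conversions, not kernel code):

#include <stdio.h>

int main(void)
{
	int nodes = 2;			/* num_online_nodes() returns int */
	unsigned long page_size = 4096;	/* PAGE_SIZE is unsigned long here */
	unsigned int locksz = 64;

	/* int * unsigned long yields unsigned long, and dividing by an
	 * unsigned int keeps unsigned long, so the old __typecheck()
	 * sees unsigned int vs unsigned long and warns.
	 */
	_Static_assert(_Generic(nodes * page_size / locksz,
				unsigned long: 1, default: 0),
		       "promotes to unsigned long");
	printf("%lu\n", nodes * page_size / locksz);
	return 0;
}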
This issue was introduced in commit f8ece40786c9 ("tcp: bring back NUMA
dispersion in inet_ehash_locks_alloc()") during the update from kernel
v5.10.237 to v5.10.238.
It does not exist in newer kernel branches (e.g., v5.15.185 and all 6.x
branches), because they include commit d03eba99f5bf ("minmax: allow
min()/max()/clamp() if the arguments have the same signedness.").
Fix the issue by using max_t(unsigned int, ...) to explicitly cast both
operands to the same type, avoiding the type mismatch and ensuring
correctness.
Fixes: f8ece40786c9 ("tcp: bring back NUMA dispersion in inet_ehash_locks_alloc()")
Signed-off-by: Eliav Farber <farbere(a)amazon.com>
---
V1 -> V2: Use upstream commit SHA1 in reference
net/ipv4/inet_hashtables.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index fea74ab2a4be..ac2d185c04ef 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -943,7 +943,7 @@ int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
nblocks = max(2U * L1_CACHE_BYTES / locksz, 1U) * num_possible_cpus();
/* At least one page per NUMA node. */
- nblocks = max(nblocks, num_online_nodes() * PAGE_SIZE / locksz);
+ nblocks = max_t(unsigned int, nblocks, num_online_nodes() * PAGE_SIZE / locksz);
nblocks = roundup_pow_of_two(nblocks);
--
2.47.1
Use common wrappers operating directly on the struct sg_table objects to
fix incorrect use of the scatterlist sync calls. The dma_sync_sg_for_*()
functions have to be called with the number of elements originally passed
to the dma_map_sg_*() function, not the count returned in the sg_table's
nents.
Fixes: 1ffe09590121 ("udmabuf: fix dma-buf cpu access")
CC: stable(a)vger.kernel.org
Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
Acked-by: Vivek Kasireddy <vivek.kasireddy(a)intel.com>
---
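For reference, the sgtable wrappers in include/linux/dma-mapping.h pass
the sg_table's orig_nents through to the scatterlist sync calls, roughly:

static inline void dma_sync_sgtable_for_cpu(struct device *dev,
					    struct sg_table *sgt,
					    enum dma_data_direction dir)
{
	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, dir);
}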
drivers/dma-buf/udmabuf.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 7eee3eb47a8e..c9d0c68d2fcb 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -264,8 +264,7 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
ubuf->sg = NULL;
}
} else {
- dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
- direction);
+ dma_sync_sgtable_for_cpu(dev, ubuf->sg, direction);
}
return ret;
@@ -280,7 +279,7 @@ static int end_cpu_udmabuf(struct dma_buf *buf,
if (!ubuf->sg)
return -EINVAL;
- dma_sync_sg_for_device(dev, ubuf->sg->sgl, ubuf->sg->nents, direction);
+ dma_sync_sgtable_for_device(dev, ubuf->sg, direction);
return 0;
}
--
2.34.1
Running as a Xen PV guest uncovered some bugs when the ITS mitigation
is active.
Juergen Gross (3):
x86/execmem: don't use PAGE_KERNEL protection for code pages
x86/mm/pat: don't collapse pages without PSE set
x86/alternative: make kernel ITS thunks read-only
arch/x86/kernel/alternative.c | 16 ++++++++++++++++
arch/x86/mm/init.c | 2 +-
arch/x86/mm/pat/set_memory.c | 3 +++
3 files changed, 20 insertions(+), 1 deletion(-)
--
2.43.0