This adds support for receiving KeyUpdate messages (RFC 8446, 4.6.3
[1]). A sender transmits a KeyUpdate message and then changes its TX
key. The receiver should react by updating its RX key before
processing the next message.
This patchset implements key updates by:
1. pausing decryption when a KeyUpdate message is received, to avoid
attempting to use the old key to decrypt a record encrypted with
the new key
2. returning -EKEYEXPIRED to syscalls that cannot receive the
KeyUpdate message, until the rekey has been performed by userspace
3. passing the KeyUpdate message to userspace as a control message
4. allowing updates of the crypto_info via the TLS_TX/TLS_RX
setsockopts
This API has been tested with gnutls to make sure that it allows
userspace libraries to implement key updates [2]. Thanks to Frantisek
Krenzelok <fkrenzel@redhat.com> for providing the implementation in
gnutls and testing the kernel patches.
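To illustrate the API, here is a rough sketch of the RX flow as a
userspace library could implement it. This is illustrative only, not
the gnutls code: error handling is trimmed, deriving the
next-generation key (RFC 8446, section 7.2) is elided, and new_rx_info
stands for a caller-filled crypto_info carrying the updated key.

#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282	/* from <linux/socket.h> */
#endif

/* Sketch: receive one record, rekeying RX when a KeyUpdate arrives. */
static ssize_t tls_recv_one(int sk, char *buf, size_t len,
			    const struct tls12_crypto_info_aes_gcm_256 *new_rx_info)
{
	char cbuf[CMSG_SPACE(sizeof(unsigned char))];
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg;
	ssize_t ret;

	ret = recvmsg(sk, &msg, 0);
	if (ret < 0)
		return ret; /* a plain read() would see EKEYEXPIRED here */

	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_level == SOL_TLS &&
	    cmsg->cmsg_type == TLS_GET_RECORD_TYPE &&
	    CMSG_DATA(cmsg)[0] == 22 &&	/* handshake record */
	    ret > 0 && buf[0] == 24) {	/* KeyUpdate message */
		/* decryption is paused now: derive the new key (not
		 * shown), then install it to resume
		 */
		if (setsockopt(sk, SOL_TLS, TLS_RX, new_rx_info,
			       sizeof(*new_rx_info)) < 0)
			return -1;
	}
	return ret;
}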
=======================================================================
Discussions around v2 of this patchset focused on how HW offload would
interact with rekey.
RX
- The existing SW path will handle all records between the KeyUpdate
message signaling the change of key and the new key becoming known
to the kernel -- those will be queued encrypted, and decrypted in
SW as they are read by userspace (once the key is provided, i.e.
the same as with this patchset)
- Call ->tls_dev_del + ->tls_dev_add immediately during
setsockopt(TLS_RX)
TX
- After setsockopt(TLS_TX), switch to the existing SW path (not the
current device_fallback) until we're able to re-enable HW offload
- tls_device_sendmsg will call into tls_sw_sendmsg under lock_sock
to avoid changing socket ops during the rekey while another
thread might be waiting on the lock
- We only re-enable HW offload (call ->tls_dev_add to install the new
key in HW) once all records sent with the old key have been
ACKed. At this point, all unacked records are SW-encrypted with the
new key, and the old key is unused by both HW and retransmissions.
- If there are no unacked records when userspace does
setsockopt(TLS_TX), we can (try to) install the new key in HW
immediately.
- If yet another key has been provided via setsockopt(TLS_TX), we
don't install intermediate keys, only the latest.
- TCP notifies ktls of ACKs via the icsk_clean_acked callback. In
case of a rekey, tls_icsk_clean_acked will record when all data
sent with the most recent past key has been acked; the next call
to sendmsg will then install the new key in HW. (The ACK tracking
is sketched below.)
- We close and push the current SW record before re-enabling
offload.
If ->tls_dev_add fails to install the new key in HW, we stay in SW
mode. We can add a counter to keep track of this.
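The ACK-tracking part could look something like this sketch
(illustrative only: the icsk_clean_acked callback and before() exist
today, but the ctx fields are hypothetical):

static void tls_icsk_clean_acked(struct sock *sk, u32 acked_seq)
{
	struct tls_context *ctx = tls_get_ctx(sk);

	/* rekey_end_seq (hypothetical) is snapshotted at
	 * setsockopt(TLS_TX) time: the last byte sent under the old key
	 */
	if (ctx->rekey_end_seq && !before(acked_seq, ctx->rekey_end_seq))
		ctx->rekey_hw_ready = true; /* next sendmsg installs the key in HW */
}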
In addition:
Because we can't change socket ops during a rekey, we'll also have to
modify do_tls_setsockopt_conf to check ctx->tx_conf and call either
tls_set_device_offload or tls_set_sw_offload accordingly. RX already
uses the same ops for both TLS_HW and TLS_SW, so we could switch
between HW and SW mode on rekey.
An alternative would be to have a common sendmsg which locks
the socket and then calls the correct implementation. We'll need that
anyway for the offload under rekey case, so that would only add a test
to the SW path's ops (compared to the current code). That should allow
us to simplify build_protos a bit, but might have a performance
impact - we'll need to check it if we want to go that route.
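Roughly, that common sendmsg would look like this (illustrative only:
the _locked helpers and the rekey flag are made up for the sketch):

static int tls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
{
	struct tls_context *ctx = tls_get_ctx(sk);
	int ret;

	lock_sock(sk);
	/* resolve the implementation under the socket lock, so a rekey
	 * can flip between the HW and SW paths without swapping ops
	 */
	if (ctx->tx_conf == TLS_HW && !ctx->tx_rekey_pending)
		ret = tls_device_sendmsg_locked(sk, msg, size);
	else
		ret = tls_sw_sendmsg_locked(sk, msg, size);
	release_sock(sk);
	return ret;
}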
=======================================================================
Changes since v3:
- rebase on top of net-next
- rework tls_check_pending_rekey according to Jakub's feedback
- add statistics for rekey: {RX,TX}REKEY{OK,ERROR}
- some coding style clean ups
Link: https://lore.kernel.org/netdev/cover.1691584074.git.sd@queasysnail.net/ [v3]
Link: https://lore.kernel.org/netdev/cover.1676052788.git.sd@queasysnail.net/ [v2]
Link: https://lore.kernel.org/netdev/cover.1673952268.git.sd@queasysnail.net/ [v1]
Link: https://www.rfc-editor.org/rfc/rfc8446#section-4.6.3 [1]
Link: https://gitlab.com/gnutls/gnutls/-/merge_requests/1625 [2]
Sabrina Dubroca (6):
tls: block decryption when a rekey is pending
tls: implement rekey for TLS1.3
tls: add counters for rekey
docs: tls: document TLS1.3 key updates
selftests: tls: add key_generation argument to tls_crypto_info_init
selftests: tls: add rekey tests
Documentation/networking/tls.rst | 31 ++
include/net/tls.h | 3 +
include/uapi/linux/snmp.h | 4 +
net/tls/tls.h | 3 +-
net/tls/tls_device.c | 2 +-
net/tls/tls_main.c | 71 ++++-
net/tls/tls_proc.c | 4 +
net/tls/tls_sw.c | 138 +++++++--
tools/testing/selftests/net/tls.c | 480 +++++++++++++++++++++++++++++-
9 files changed, 676 insertions(+), 60 deletions(-)
--
2.47.0
Currently the rseq constructor, rseq_init(), assumes that glibc always
provides the rseq symbols (__rseq_size, for instance). However, glibc
only supports rseq from version 2.35 onwards. As a result, on systems
running a glibc older than 2.35, the global rseq_size remains
initialized to -1U. When a thread then tries to register for rseq,
get_rseq_min_alloc_size() ends up returning -1U, which is incorrect.
Hence, initialize rseq_size for the case where glibc doesn't provide
the rseq symbols.
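For context, the detection boils down to something like this sketch
(assuming a dlsym()-based lookup, as in the selftest harness; glibc
only exports __rseq_size from 2.35 onwards; rseq_size and
set_default_rseq_size() are the selftest's own):

#include <dlfcn.h>

static void detect_glibc_rseq(void)
{
	unsigned int *libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size");

	if (!libc_rseq_size_p) {
		/* glibc < 2.35, no rseq symbols: pick a default instead
		 * of leaving rseq_size at its -1U initializer
		 */
		set_default_rseq_size();
		return;
	}
	rseq_size = *libc_rseq_size_p;
}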
Cc: stable(a)vger.kernel.org
Fixes: 73a4f5a704a2 ("selftests/rseq: Fix mm_cid test failure")
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
tools/testing/selftests/rseq/rseq.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
index 5b9772cdf265..9eb5356f25fa 100644
--- a/tools/testing/selftests/rseq/rseq.c
+++ b/tools/testing/selftests/rseq/rseq.c
@@ -142,6 +142,16 @@ unsigned int get_rseq_kernel_feature_size(void)
return ORIG_RSEQ_FEATURE_SIZE;
}
+static void set_default_rseq_size(void)
+{
+ unsigned int rseq_kernel_feature_size = get_rseq_kernel_feature_size();
+
+ if (rseq_kernel_feature_size < ORIG_RSEQ_ALLOC_SIZE)
+ rseq_size = rseq_kernel_feature_size;
+ else
+ rseq_size = ORIG_RSEQ_ALLOC_SIZE;
+}
+
int rseq_register_current_thread(void)
{
int rc;
@@ -219,12 +229,7 @@ void rseq_init(void)
fallthrough;
case ORIG_RSEQ_ALLOC_SIZE:
{
- unsigned int rseq_kernel_feature_size = get_rseq_kernel_feature_size();
-
- if (rseq_kernel_feature_size < ORIG_RSEQ_ALLOC_SIZE)
- rseq_size = rseq_kernel_feature_size;
- else
- rseq_size = ORIG_RSEQ_ALLOC_SIZE;
+ set_default_rseq_size();
break;
}
default:
@@ -239,8 +244,10 @@ void rseq_init(void)
rseq_size = 0;
return;
}
+
rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer();
rseq_flags = 0;
+ set_default_rseq_size();
}
static __attribute__((destructor))
base-commit: 40384c840ea1944d7c5a392e8975ed088ecf0b37
--
2.47.0.338.g60cca15819-goog
This series takes care of two issues with sockmap update: inconsistent
behaviour after an update with the same element, and a race/refcount
imbalance on element replace.
I am not sure whether patch 3/3 ("bpf, sockmap: Fix race between element
replace and close()") is the right approach. I might have missed some
detail of the current __sock_map_delete() implementation. I'd be
grateful for comments, thanks.
Signed-off-by: Michal Luczaj <mhal@rbox.co>
---
Michal Luczaj (3):
bpf, sockmap: Fix update element with same
selftest/bpf: Extend test for sockmap update with same
bpf, sockmap: Fix race between element replace and close()
net/core/sock_map.c | 6 +++---
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c | 8 +++++---
2 files changed, 8 insertions(+), 6 deletions(-)
---
base-commit: 537a2525eaf76ea9b0dca62b994500d8670b39d5
change-id: 20241201-sockmap-replace-67c7077f3a31
Best regards,
--
Michal Luczaj <mhal@rbox.co>
Context
=======
We've observed within Red Hat that isolated, NOHZ_FULL CPUs running a
pure-userspace application get regularly interrupted by IPIs sent from
housekeeping CPUs. Those IPIs are caused by activity on the housekeeping CPUs
leading to various on_each_cpu() calls, e.g.:
64359.052209596 NetworkManager 0 1405 smp_call_function_many_cond (cpu=0, func=do_kernel_range_flush)
    smp_call_function_many_cond+0x1
    smp_call_function+0x39
    on_each_cpu+0x2a
    flush_tlb_kernel_range+0x7b
    __purge_vmap_area_lazy+0x70
    _vm_unmap_aliases.part.42+0xdf
    change_page_attr_set_clr+0x16a
    set_memory_ro+0x26
    bpf_int_jit_compile+0x2f9
    bpf_prog_select_runtime+0xc6
    bpf_prepare_filter+0x523
    sk_attach_filter+0x13
    sock_setsockopt+0x92c
    __sys_setsockopt+0x16a
    __x64_sys_setsockopt+0x20
    do_syscall_64+0x87
    entry_SYSCALL_64_after_hwframe+0x65
The heart of this series is the idea that while we cannot remove
NOHZ_FULL CPUs from the list of CPUs targeted by these IPIs, they may
not have to execute the callbacks immediately. Anything that only
affects kernelspace can wait until the next user->kernel transition,
provided it can be executed "early enough" in the entry code.
The original implementation is from Peter [1]. Nicolas then added kernel TLB
invalidation deferral to that [2], and I picked it up from there.
Deferral approach
=================
Storing each and every callback, like a secondary call_single_queue,
turned out to be a no-go: the whole point of deferral is to keep
NOHZ_FULL CPUs in userspace for as long as possible - no signal of any
form would be sent when deferring an IPI. This means that any form of
queuing for deferred callbacks would end up as a convoluted memory
leak.
Deferred IPIs must thus be coalesced, which this series achieves by assigning
IPIs a "type" and having a mapping of IPI type to callback, leveraged upon
kernel entry.
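In sketch form (illustrative, not the exact kernel interface):

/* Each deferrable IPI gets a type bit; senders OR the bit into the
 * target CPU's context-tracking state instead of sending an IPI, and
 * kernel entry flushes whatever accumulated while in userspace.
 */
void do_sync_core_local(void);		/* hypothetical per-type callbacks */
void do_flush_tlb_all_local(void);

enum ct_work {
	CT_WORK_SYNC_CORE	= BIT(0),
	CT_WORK_TLB_FLUSH	= BIT(1),
};

static void (*const ct_work_fn[])(void) = {
	do_sync_core_local,
	do_flush_tlb_all_local,
};

/* Entry side, called early in the user->kernel transition: */
static void ct_work_flush(unsigned long work)
{
	unsigned int bit;

	for_each_set_bit(bit, &work, ARRAY_SIZE(ct_work_fn))
		ct_work_fn[bit]();
}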
What about IPIs whose callbacks take a parameter, you may ask?
Peter suggested during OSPM23 [3] that since on_each_cpu() targets
housekeeping CPUs *and* isolated CPUs, isolated CPUs can access either global or
housekeeping-CPU-local state to "reconstruct" the data that would have been sent
via the IPI.
This series does not affect any IPI callback that requires an argument, but the
approach would remain the same (one coalescable callback executed on kernel
entry).
Kernel entry vs execution of the deferred operation
===================================================
This is what I've referred to as the "Danger Zone" during my LPC24 talk [4].
There is a non-zero length of code that is executed upon kernel entry
before the deferred operation can itself be executed (i.e. before we
start getting into context_tracking.c proper):
idtentry_func_foo()                    <--- we're in the kernel
    irqentry_enter()
        enter_from_user_mode()
            __ct_user_exit()
                ct_kernel_enter_state()
                    ct_work_flush()    <--- deferred operation is executed here
This means one must take extra care about what can happen in the early
entry code, and ensure that <bad things> cannot happen. For instance,
we really don't want to hit instructions that have been modified by a
remote text_poke() while we're on our way to execute a deferred
sync_core(). Patches doing the actual deferral have more detail on
this.
Patches
=======
o Patches 1-3 are standalone cleanups.
o Patches 4-5 add an RCU testing feature.
o Patches 6-8 add a new type of jump label for static keys that will not have
their IPI be deferred.
o Patch 9 adds objtool verification of static keys vs their text_poke
  IPI deferral.
o Patches 10-14 add the actual IPI deferrals.
o Patch 15 is a freebie to enable the deferral feature for NO_HZ_IDLE.
Patches are also available at:
https://gitlab.com/vschneid/linux.git -b redhat/isolirq/defer/v3
RFC status
==========
Things I'd like to get comments on and/or that are a bit WIPish; they're called
out in the individual changelogs:
o "forceful" jump label naming which I don't particularly like
o objtool usage of 'offset_of(static_key.type)' and JUMP_TYPE_FORCEFUL. I've
hardcoded them but it could do with being shoved in a kernel header objtool
can include directly
o The noinstr variant of __flush_tlb_all() doesn't have a paravirt variant, does
it need one?
Testing
=======
Xeon E5-2699 system with SMT off, NOHZ_FULL, isolated CPUs.
RHEL9 userspace.
Workload is using rteval (kernel compilation + hackbench) on housekeeping CPUs
and a dummy stay-in-userspace loop on the isolated CPUs. The main invocation is:
$ trace-cmd record -e "csd_queue_cpu" -f "cpu & CPUS{$ISOL_CPUS}" \
-e "ipi_send_cpumask" -f "cpumask & CPUS{$ISOL_CPUS}" \
-e "ipi_send_cpu" -f "cpu & CPUS{$ISOL_CPUS}" \
rteval --onlyload --loads-cpulist=$HK_CPUS \
--hackbench-runlowmem=True --duration=$DURATION
This only records IPIs sent to isolated CPUs, so any event there is interference
(with a bit of fuzz at the start/end of the workload when spawning the
processes). All tests were done with a duration of 1hr.
v6.12-rc4
# This is the actual IPI count
$ trace-cmd report trace-base.dat | grep callback | awk '{ print $(NF) }' | sort | uniq -c | sort -nr
1782 callback=generic_smp_call_function_single_interrupt+0x0
73 callback=0x0
# These are the different CSDs that caused IPIs
$ trace-cmd report | grep csd_queue | awk '{ print $(NF-1) }' | sort | uniq -c | sort -nr
22048 func=tlb_remove_table_smp_sync
16536 func=do_sync_core
2262 func=do_flush_tlb_all
182 func=do_kernel_range_flush
144 func=rcu_exp_handler
60 func=sched_ttwu_pending
v6.12-rc4 + patches:
# This is the actual IPI count
$ trace-cmd report | grep callback | awk '{ print $(NF) }' | sort | uniq -c | sort -nr
1168 callback=generic_smp_call_function_single_interrupt+0x0
74 callback=0x0
# These are the different CSDs that caused IPIs
$ trace-cmd report | grep csd_queue | awk '{ print $(NF-1) }' | sort | uniq -c | sort -nr
23686 func=tlb_remove_table_smp_sync
192 func=rcu_exp_handler
65 func=sched_ttwu_pending
Interestingly, tlb_remove_table_smp_sync() started showing up on this
machine, while it didn't during testing for v2, even though it's the
same machine. Yair had a series addressing this [5] which, per these
results, would be worth revisiting.
Acknowledgements
================
Special thanks to:
o Clark Williams for listening to my ramblings about this and throwing ideas my way
o Josh Poimboeuf for his guidance regarding objtool and hinting at the
.data..ro_after_init section.
o All of the folks who attended various talks about this and provided precious
feedback.
Links
=====
[1]: https://lore.kernel.org/all/20210929151723.162004989@infradead.org/
[2]: https://github.com/vianpl/linux.git -b ct-work-defer-wip
[3]: https://youtu.be/0vjE6fjoVVE
[4]: https://lpc.events/event/18/contributions/1889/
[5]: https://lore.kernel.org/lkml/20230620144618.125703-1-ypodemsk@redhat.com/
Revisions
=========
RFCv2 -> RFCv3
++++++++++++++
o Rebased onto v6.12-rc7
o Added objtool documentation for the new warning (Josh)
o Added low-size RCU watching counter to TREE04 torture scenario (Paul)
o Added FORCEFUL jump label and static key types
o Added noinstr-compliant helpers for tlb flush deferral
o Overall changelog & comments cleanup
RFCv1 -> RFCv2
++++++++++++++
o Rebased onto v6.5-rc1
o Updated the trace filter patches (Steven)
o Fixed __ro_after_init keys used in modules (Peter)
o Dropped the extra context_tracking atomic, squashed the new bits in the
existing .state field (Peter, Frederic)
o Added an RCU_EXPERT config for the RCU dynticks counter size, and added an
rcutorture case for a low-size counter (Paul)
o Fixed flush_tlb_kernel_range_deferrable() definition
Valentin Schneider (15):
objtool: Make validate_call() recognize indirect calls to pv_ops[]
objtool: Flesh out warning related to pv_ops[] calls
sched/clock: Make sched_clock_running __ro_after_init
rcu: Add a small-width RCU watching counter debug option
rcutorture: Make TREE04 use CONFIG_RCU_DYNTICKS_TORTURE
jump_label: Add forceful jump label type
x86/speculation/mds: Make mds_idle_clear forceful
sched/clock, x86: Make __sched_clock_stable forceful
objtool: Warn about non __ro_after_init static key usage in .noinstr
x86/alternatives: Record text_poke's of JUMP_TYPE_FORCEFUL labels
context-tracking: Introduce work deferral infrastructure
context_tracking,x86: Defer kernel text patching IPIs
context_tracking,x86: Add infrastructure to defer kernel TLBI
x86/mm, mm/vmalloc: Defer flush_tlb_kernel_range() targeting NOHZ_FULL
CPUs
context-tracking: Add a Kconfig to enable IPI deferral for NO_HZ_IDLE
arch/Kconfig | 9 +++
arch/x86/Kconfig | 1 +
arch/x86/include/asm/context_tracking_work.h | 20 +++++++
arch/x86/include/asm/special_insns.h | 1 +
arch/x86/include/asm/text-patching.h | 13 ++++-
arch/x86/include/asm/tlbflush.h | 17 +++++-
arch/x86/kernel/alternative.c | 49 ++++++++++++----
arch/x86/kernel/cpu/bugs.c | 2 +-
arch/x86/kernel/cpu/common.c | 6 +-
arch/x86/kernel/jump_label.c | 7 ++-
arch/x86/kernel/kprobes/core.c | 4 +-
arch/x86/kernel/kprobes/opt.c | 4 +-
arch/x86/kernel/module.c | 2 +-
arch/x86/mm/tlb.c | 49 ++++++++++++++--
include/linux/context_tracking.h | 21 +++++++
include/linux/context_tracking_state.h | 54 ++++++++++++++---
include/linux/context_tracking_work.h | 28 +++++++++
include/linux/jump_label.h | 26 ++++++---
kernel/context_tracking.c | 46 ++++++++++++++-
kernel/rcu/Kconfig.debug | 14 +++++
kernel/sched/clock.c | 4 +-
kernel/time/Kconfig | 19 ++++++
mm/vmalloc.c | 35 +++++++++--
tools/objtool/Documentation/objtool.txt | 13 +++++
tools/objtool/check.c | 58 ++++++++++++++++---
tools/objtool/include/objtool/check.h | 1 +
tools/objtool/include/objtool/special.h | 2 +
tools/objtool/special.c | 3 +
.../selftests/rcutorture/configs/rcu/TREE04 | 1 +
29 files changed, 450 insertions(+), 59 deletions(-)
create mode 100644 arch/x86/include/asm/context_tracking_work.h
create mode 100644 include/linux/context_tracking_work.h
--
2.43.0
When compiling these selftests, the host-tools directory is generated.
Add it to the .gitignore so git doesn't report these files as
untracked.
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
tools/testing/selftests/hid/.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/hid/.gitignore b/tools/testing/selftests/hid/.gitignore
index 746c62361f77..933f483815b2 100644
--- a/tools/testing/selftests/hid/.gitignore
+++ b/tools/testing/selftests/hid/.gitignore
@@ -1,5 +1,6 @@
bpftool
*.skel.h
+/host-tools
/tools
hid_bpf
hidraw
---
base-commit: 40384c840ea1944d7c5a392e8975ed088ecf0b37
change-id: 20241206-host_tools_gitignore-8a89f8820a61
--
- Charlie
This patch series adds more test cases issuing ioctls to ucontrol VMs
and their floating interrupt controller.
The test cases trigger three possible NULL pointer dereferences within
the handling of the KVM_DEV_FLIC_APF_ENABLE,
KVM_DEV_FLIC_APF_DISABLE_WAIT and KVM_SET_GSI_ROUTING ioctls.
All of these issues only exist on ucontrol VMs. Fixes for the issues
are included within the patch series.
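For reference, the trigger boils down to a sequence like this sketch
(not the selftest code itself; vm_fd is assumed to come from
KVM_CREATE_VM with the KVM_VM_S390_UCONTROL type):

#include <sys/ioctl.h>
#include <linux/kvm.h>

static void probe_flic_apf(int vm_fd)
{
	struct kvm_create_device cd = { .type = KVM_DEV_TYPE_FLIC };
	struct kvm_device_attr attr = { .group = KVM_DEV_FLIC_APF_ENABLE };

	ioctl(vm_fd, KVM_CREATE_DEVICE, &cd);
	/* without the fix this oopses on a ucontrol VM; with it, the
	 * attribute is rejected instead
	 */
	ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
}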
Christoph Schlameuss (6):
kvm: s390: Reject setting flic pfault attributes on ucontrol VMs
selftests: kvm: s390: Add ucontrol flic attr selftests
kvm: s390: Reject KVM_SET_GSI_ROUTING on ucontrol VMs
selftests: kvm: s390: Add ucontrol gis routing test
selftests: kvm: s390: Streamline uc_skey test to issue iske after sske
selftests: kvm: s390: Add has device attr check to uc_attr_mem_limit
selftest
arch/s390/kvm/interrupt.c | 6 +
.../selftests/kvm/s390x/ucontrol_test.c | 196 ++++++++++++++++--
2 files changed, 184 insertions(+), 18 deletions(-)
--
2.47.1
With CONFIG_KPROBES_ON_FTRACE enabled on powerpc, ftrace_location_range
returns the ftrace location for bpf_fentry_test1 at an offset of
4 bytes from the function entry. This is because the branch to the
_mcount function is at an offset of 4 bytes in the function profile
sequence.
To fix this, add an entry_offset of 4 bytes while verifying the address
for the kprobe entry of bpf_fentry_test1 in verify_perf_link_info in
the selftest, when CONFIG_KPROBES_ON_FTRACE is enabled.
Disassembly of bpf_fentry_test1:
c000000000e4b080 <bpf_fentry_test1>:
c000000000e4b080: a6 02 08 7c mflr r0
c000000000e4b084: b9 e2 22 4b bl c00000000007933c <_mcount>
c000000000e4b088: 01 00 63 38 addi r3,r3,1
c000000000e4b08c: b4 07 63 7c extsw r3,r3
c000000000e4b090: 20 00 80 4e blr
When CONFIG_PPC_FTRACE_OUT_OF_LINE [1] is enabled, this function
profile sequence is moved out of line, with an unconditional branch at
offset 0. So the test works without altering the offset for the
'CONFIG_KPROBES_ON_FTRACE && CONFIG_PPC_FTRACE_OUT_OF_LINE' case.
Disassembly of bpf_fentry_test1:
c000000000f95190 <bpf_fentry_test1>:
c000000000f95190: 00 00 00 60 nop
c000000000f95194: 01 00 63 38 addi r3,r3,1
c000000000f95198: b4 07 63 7c extsw r3,r3
c000000000f9519c: 20 00 80 4e blr
[1] https://lore.kernel.org/all/20241030070850.1361304-13-hbathini@linux.ibm.co…
Fixes: 23cf7aa539dc ("selftests/bpf: Add selftest for fill_link_info")
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
.../selftests/bpf/prog_tests/fill_link_info.c | 4 ++++
.../selftests/bpf/progs/test_fill_link_info.c | 13 ++++++++++---
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
index d50cbd804..e59af2aa6 100644
--- a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+++ b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
@@ -171,6 +171,10 @@ static void test_kprobe_fill_link_info(struct test_fill_link_info *skel,
/* See also arch_adjust_kprobe_addr(). */
if (skel->kconfig->CONFIG_X86_KERNEL_IBT)
entry_offset = 4;
+ if (skel->kconfig->CONFIG_PPC64 &&
+ skel->kconfig->CONFIG_KPROBES_ON_FTRACE &&
+ !skel->kconfig->CONFIG_PPC_FTRACE_OUT_OF_LINE)
+ entry_offset = 4;
err = verify_perf_link_info(link_fd, type, kprobe_addr, 0, entry_offset);
ASSERT_OK(err, "verify_perf_link_info");
} else {
diff --git a/tools/testing/selftests/bpf/progs/test_fill_link_info.c b/tools/testing/selftests/bpf/progs/test_fill_link_info.c
index 6afa83475..fac33a14f 100644
--- a/tools/testing/selftests/bpf/progs/test_fill_link_info.c
+++ b/tools/testing/selftests/bpf/progs/test_fill_link_info.c
@@ -6,13 +6,20 @@
#include <stdbool.h>
extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
+extern bool CONFIG_PPC_FTRACE_OUT_OF_LINE __kconfig __weak;
+extern bool CONFIG_KPROBES_ON_FTRACE __kconfig __weak;
+extern bool CONFIG_PPC64 __kconfig __weak;
-/* This function is here to have CONFIG_X86_KERNEL_IBT
- * used and added to object BTF.
+/* This function is here to have CONFIG_X86_KERNEL_IBT,
+ * CONFIG_PPC_FTRACE_OUT_OF_LINE, CONFIG_KPROBES_ON_FTRACE,
+ * CONFIG_PPC64 used and added to object BTF.
*/
int unused(void)
{
- return CONFIG_X86_KERNEL_IBT ? 0 : 1;
+ return CONFIG_X86_KERNEL_IBT ||
+ CONFIG_PPC_FTRACE_OUT_OF_LINE ||
+ CONFIG_KPROBES_ON_FTRACE ||
+ CONFIG_PPC64 ? 0 : 1;
}
SEC("kprobe")
--
2.45.2