Hi, Willy
This is the v3 generic part1 for rv32. All of the issues found in v2
part1 [1] have been fixed, several generic patches have been fixed up
and merged from v2 part2 [2] into this series, and the standalone
test_fork patch [4] has been merged into this series too, with a
Reviewed-by line added.
This series is based on the 20230528-nolibc-rv32+stkp5 branch of [5].
Changes from v2 -> v3:
* selftests/nolibc: fix up compile warning with glibc on x86_64
Use simpler 'long long' conversion instead of old #ifdef ...
(Suggestion from Willy)
* tools/nolibc: add missing nanoseconds support for __NR_statx
Split the compound assignment into two single assignments
(Suggestion from Thomas)
* selftests/nolibc: add new gettimeofday test cases
Removed the gettimeofday(NULL, &tz)
(Suggestion from Thomas)
All of the commit messages have been re-checked, and some missing
Suggested-by lines have been added.
The whole patchset has been tested on arm, aarch64, rv32 and rv64 with
no regressions (the upcoming compile patchset is required for the rv32
test).
nolibc-test has been run with glibc on x86_64 too.
Btw, we have found the following poll failures on arm (not introduced
by this patchset); they will be fixed in our coming ppoll_time64
patchset:
48 poll_null = -1 ENOSYS [FAIL]
49 poll_stdout = -1 ENOSYS [FAIL]
50 poll_fault = -1 ENOSYS != (-1 EFAULT) [FAIL]
And the gettimeofday_null removal patch from Thomas [3] may conflict
with the gettimeofday removal and addition patches here, but that is
not hard to fix.
Best regards,
Zhangjin
---
[1]: https://lore.kernel.org/linux-riscv/cover.1685362482.git.falcon@tinylab.org…
[2]: https://lore.kernel.org/linux-riscv/cover.1685387484.git.falcon@tinylab.org…
[3]: https://lore.kernel.org/lkml/20230530-nolibc-gettimeofday-v1-1-7307441a002b…
[4]: https://lore.kernel.org/lkml/61bdfe7bacebdef8aa9195f6f2550a5b0d33aab3.16854…
[5]: https://git.kernel.org/pub/scm/linux/kernel/git/wtarreau/nolibc.git
Zhangjin Wu (12):
selftests/nolibc: syscall_args: use generic __NR_statx
tools/nolibc: add missing nanoseconds support for __NR_statx
selftests/nolibc: allow specify extra arguments for qemu
selftests/nolibc: fix up compile warning with glibc on x86_64
selftests/nolibc: not include limits.h for nolibc
selftests/nolibc: use INT_MAX instead of __INT_MAX__
tools/nolibc: arm: add missing my_syscall6
tools/nolibc: open: fix up compile warning for arm
selftests/nolibc: support two errnos with EXPECT_SYSER2()
selftests/nolibc: remove gettimeofday_bad1/2 completely
selftests/nolibc: add new gettimeofday test cases
selftests/nolibc: test_fork: fix up duplicated print
tools/include/nolibc/arch-arm.h | 23 +++++++++++
tools/include/nolibc/stdint.h | 14 +++++++
tools/include/nolibc/sys.h | 39 +++++++++---------
tools/testing/selftests/nolibc/Makefile | 2 +-
tools/testing/selftests/nolibc/nolibc-test.c | 42 ++++++++++++--------
5 files changed, 85 insertions(+), 35 deletions(-)
--
2.25.1
Running nolibc-test with glibc on x86_64 shows the following print
issue:
29 execve_root = -1 EACCES [OK]
30 fork30 fork = 0 [OK]
31 getdents64_root = 712 [OK]
The fork test case has three printf calls:
(1) llen += printf("%d %s", test, #name);
(2) llen += printf(" = %d %s ", expr, errorname(errno));
(3) llen += pad_spc(llen, 64, "[FAIL]\n"); --> vfprintf()
The issue happens in the following scenario:
(a) The parent calls (1)
(b) The parent calls fork()
(c) The child inherits the unflushed print buffer of (1)
(d) The child exits, flushes the print buffer and closes its own stdout/stderr
    * "30 fork" is printed the first time.
(e) The parent calls (2) and (3); with the "\n" in (3), it flushes the whole buffer
    * "30 fork = 0 ..." is printed
Therefore, "30 fork" appears twice in the stdout.
If we flush stdout (and stderr) between (a) and (b), the child in stage
(c) will no longer 'see' the buffered output of (1).
Signed-off-by: Zhangjin Wu <falcon@tinylab.org>
---
tools/testing/selftests/nolibc/nolibc-test.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
index 7de46305f419..88323a60aa4a 100644
--- a/tools/testing/selftests/nolibc/nolibc-test.c
+++ b/tools/testing/selftests/nolibc/nolibc-test.c
@@ -486,7 +486,13 @@ static int test_getpagesize(void)
static int test_fork(void)
{
int status;
- pid_t pid = fork();
+ pid_t pid;
+
+	/* flush the printf buffer to avoid the child flushing it again */
+ fflush(stdout);
+ fflush(stderr);
+
+ pid = fork();
switch (pid) {
case -1:
--
2.25.1
Hi,
This is somewhat related to the 11-patch series that I just posted [1],
but I hadn't originally planned to go this far with it. Since David
Hildenbrand asked if we could warn in this case [2], here it is.
It turns out that automatically doing the "make headers" correctly is
much harder than just warning, so I stopped at that point. It works well,
though.
[1] https://lore.kernel.org/all/20230603021558.95299-1-jhubbard@nvidia.com/
[2] https://lore.kernel.org/all/a4fbc191-9acb-5db8-a375-96c0c1ba3fcd@redhat.com/
thanks,
John Hubbard
Cc: David Hildenbrand <david@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Muhammad Usama Anjum <usama.anjum@collabora.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
John Hubbard (1):
selftests: error out if kernel header files are not yet built
tools/testing/selftests/lib.mk | 36 +++++++++++++++++++++++++++++++---
1 file changed, 33 insertions(+), 3 deletions(-)
base-commit: e5282a7d8f6b604f2bb6a06457734b8cf1e2f8f2
--
2.40.1
Adamos found an issue with the cached XFD state [1]. Although the XFD
state is reset on CPU hotplug, the per-CPU XFD cache misses that reset.
Then, when an AMX thread runs there, the stale value leads to the
kernel killing the thread.
This is reproducible by moving an AMX thread to a hot-plugged CPU.
So, add a test case to ensure there is no issue with that.
The test is repeated to catch possible inconsistencies. Together with
the hotplug cost, this brings a noticeable runtime increase, but the
overall test still has a quick turnaround time.
Link: https://lore.kernel.org/lkml/20230519112315.30616-1-attofari@amazon.de/ [1]
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Adamos Ttofari <attofari@amazon.de>
Cc: Shuah Khan <shuah@kernel.org>
Cc: linux-kselftest@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
The overall x86 selftest via "$ make TARGETS='x86' kselftest" goes
from about 3.5 to 5.5 seconds. 'amx_64' itself takes about 1.5 seconds
more, up from under a second.
But the overall runtime is still a matter of a few seconds, which
should be fine I think.
---
tools/testing/selftests/x86/amx.c | 133 ++++++++++++++++++++++++++++--
1 file changed, 126 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/x86/amx.c b/tools/testing/selftests/x86/amx.c
index d884fd69dd51..6f2f0598c706 100644
--- a/tools/testing/selftests/x86/amx.c
+++ b/tools/testing/selftests/x86/amx.c
@@ -3,6 +3,7 @@
#define _GNU_SOURCE
#include <err.h>
#include <errno.h>
+#include <fcntl.h>
#include <pthread.h>
#include <setjmp.h>
#include <stdio.h>
@@ -25,6 +26,8 @@
# error This test is 64-bit only
#endif
+#define BUF_LEN 1000
+
#define XSAVE_HDR_OFFSET 512
#define XSAVE_HDR_SIZE 64
@@ -239,11 +242,10 @@ static inline uint64_t get_fpx_sw_bytes_features(void *buffer)
}
/* Work around printf() being unsafe in signals: */
-#define SIGNAL_BUF_LEN 1000
-char signal_message_buffer[SIGNAL_BUF_LEN];
+char signal_message_buffer[BUF_LEN];
void sig_print(char *msg)
{
- int left = SIGNAL_BUF_LEN - strlen(signal_message_buffer) - 1;
+ int left = BUF_LEN - strlen(signal_message_buffer) - 1;
strncat(signal_message_buffer, msg, left);
}
@@ -767,15 +769,15 @@ static int create_threads(int num, struct futex_info *finfo)
return 0;
}
-static void affinitize_cpu0(void)
+static inline void affinitize_cpu(int cpu)
{
cpu_set_t cpuset;
CPU_ZERO(&cpuset);
- CPU_SET(0, &cpuset);
+ CPU_SET(cpu, &cpuset);
if (sched_setaffinity(0, sizeof(cpuset), &cpuset) != 0)
- fatal_error("sched_setaffinity to CPU 0");
+ fatal_error("sched_setaffinity to CPU %d", cpu);
}
static void test_context_switch(void)
@@ -784,7 +786,7 @@ static void test_context_switch(void)
int i;
/* Affinitize to one CPU to force context switches */
- affinitize_cpu0();
+ affinitize_cpu(0);
req_xtiledata_perm();
@@ -926,6 +928,121 @@ static void test_ptrace(void)
err(1, "ptrace test");
}
+/* CPU Hotplug test */
+
+static void __hotplug_cpu(int online, int cpu)
+{
+ char buf[BUF_LEN] = {};
+ int fd, rc;
+
+ strncat(buf, "/sys/devices/system/cpu/cpu", BUF_LEN);
+ snprintf(buf + strlen(buf), BUF_LEN - strlen(buf), "%d", cpu);
+ strncat(buf, "/online", BUF_LEN - strlen(buf));
+
+ fd = open(buf, O_RDWR);
+ if (fd == -1)
+ fatal_error("open()");
+
+ snprintf(buf, BUF_LEN, "%d", online);
+ rc = write(fd, buf, strlen(buf));
+ if (rc == -1)
+ fatal_error("write()");
+
+ rc = close(fd);
+ if (rc == -1)
+ fatal_error("close()");
+}
+
+static void offline_cpu(int cpu)
+{
+ __hotplug_cpu(0, cpu);
+}
+
+static void online_cpu(int cpu)
+{
+ __hotplug_cpu(1, cpu);
+}
+
+static jmp_buf jmpbuf;
+
+static void handle_sigsegv(int sig, siginfo_t *si, void *ctx_void)
+{
+ siglongjmp(jmpbuf, 1);
+}
+
+#define RETRY 5
+
+/*
+ * Sanity-check the hotplug CPU for its (re-)initialization.
+ *
+ * Create an AMX thread on a CPU while the hotplug CPU is offline.
+ * Then, bring the offlined CPU back online and move the thread to run on it.
+ *
+ * Repeat this multiple times to catch inconsistent failures.
+ * If something goes wrong, the thread will get a signal or be killed.
+ */
+static void *switch_cpus(void *arg)
+{
+ int *result = (int *)arg;
+ int i = 0;
+
+ affinitize_cpu(0);
+ offline_cpu(1);
+ load_rand_tiledata(stashed_xsave);
+
+ sethandler(SIGSEGV, handle_sigsegv, SA_ONSTACK);
+ for (i = 0; i < RETRY; i++) {
+ if (i > 0) {
+ affinitize_cpu(0);
+ offline_cpu(1);
+ }
+ if (sigsetjmp(jmpbuf, 1) == 0) {
+ online_cpu(1);
+ affinitize_cpu(1);
+ } else {
+ *result = 1;
+ goto out;
+ }
+ }
+ *result = 0;
+out:
+ clearhandler(SIGSEGV);
+ return result;
+}
+
+static void test_cpuhp(void)
+{
+ int max_cpu_num = sysconf(_SC_NPROCESSORS_ONLN) - 1;
+ void *thread_retval;
+ pthread_t thread;
+ int result, rc;
+
+ if (!max_cpu_num) {
+ printf("[SKIP]\tThe running system has no more CPU for the hotplug test.\n");
+ return;
+ }
+
+ printf("[RUN]\tTest AMX use with the CPU hotplug.\n");
+
+ if (pthread_create(&thread, NULL, switch_cpus, &result))
+ fatal_error("pthread_create()");
+
+ rc = pthread_join(thread, &thread_retval);
+
+ if (rc)
+ fatal_error("pthread_join()");
+
+ /*
+ * Either an invalid retval or a failed result indicates
+ * the test failure.
+ */
+ if (thread_retval != &result || result != 0)
+ printf("[FAIL]\tThe AMX thread had an issue (%s).\n",
+ thread_retval != &result ? "killed" : "signaled");
+ else
+ printf("[OK]\tThe AMX thread had no issue.\n");
+}
+
int main(void)
{
/* Check hardware availability at first */
@@ -948,6 +1065,8 @@ int main(void)
test_ptrace();
+ test_cpuhp();
+
clearhandler(SIGILL);
free_stashed_xsave();
base-commit: 7877cb91f1081754a1487c144d85dc0d2e2e7fc4
--
2.17.1
Hi,
This follows the discussion here:
https://lore.kernel.org/linux-kselftest/20230324123157.bbwvfq4gsxnlnfwb@hou…
This series shows a couple of inconsistencies with regard to how
device-managed resources are cleaned up. Basically, devm resources will
only be cleaned up if the device is attached to a bus and bound to a
driver. Failing either of these conditions, a call to device_unregister
will not result in the devm resources being released.
We had to work around this in DRM by providing helpers to create a
device for kunit tests, but the current discussion around creating
similar, generic helpers for kunit renewed interest in fixing this.
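A minimal kernel-side sketch of the behaviour in question (illustrative
only, not code from this series; the device name is made up and error
handling is trimmed):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

static void devm_root_device_example(void)
{
	struct device *dev;
	void *buf;

	/* A root device: not on any bus and never bound to a driver */
	dev = root_device_register("devm-example");
	if (IS_ERR(dev))
		return;

	/* Managed allocation hanging off the device's devres list */
	buf = devm_kzalloc(dev, 64, GFP_KERNEL);

	/*
	 * Per the inconsistency described above, unregistering a device
	 * that was never bound to a driver does not release its devm
	 * resources at this point; the kunit tests added here pin down
	 * the expected behaviour for root and platform devices.
	 */
	root_device_unregister(dev);
}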
This can be tested using the command:
./tools/testing/kunit/kunit.py run --kunitconfig=drivers/base/test/
Let me know what you think,
Maxime
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
---
Maxime Ripard (2):
drivers: base: Add basic devm tests for root devices
drivers: base: Add basic devm tests for platform devices
drivers/base/test/.kunitconfig | 2 +
drivers/base/test/Kconfig | 4 +
drivers/base/test/Makefile | 3 +
drivers/base/test/platform-device-test.c | 278 +++++++++++++++++++++++++++++++
drivers/base/test/root-device-test.c | 120 +++++++++++++
5 files changed, 407 insertions(+)
---
base-commit: a6faf7ea9fcb7267d06116d4188947f26e00e57e
change-id: 20230329-kunit-devm-inconsistencies-test-5e5a7d01e60d
Best regards,
--
Maxime Ripard <maxime@cerno.tech>
Commit d937bc3449fa ("bpf: make uniform use of array->elem_size
everywhere in arraymap.c") changed array_map_gen_lookup to use
array->elem_size instead of round_up(map->value_size, 8) as the element
size when generating code to access a value in an array map.
array->elem_size, however, is not set by bpf_map_meta_alloc when
initializing a BPF_MAP_TYPE_ARRAY_OF_MAPS or BPF_MAP_TYPE_HASH_OF_MAPS.
This results in array_map_gen_lookup incorrectly outputting code that
always accesses index 0 in the array (as the index will be calculated
via a multiplication with the element size, which is incorrectly set to
0).
This patchset sets elem_size on the bpf_array object when allocating an
array or hash of maps to fix this and adds a selftest that accesses an
array map nested within a hash of maps at a nonzero index to prevent
regressions.
v1: https://lore.kernel.org/bpf/95b5da7c-ee52-3ecb-0a4e-f6a7a114f269@linux.dev/
Changelog:
v1 -> v2:
Address comments by Martin KaFai Lau:
- Directly use inner_array->elem_size instead of using round_up
- Move selftests to a new patch
- Use ASSERT_* macros instead of CHECK and remove duration
- Remove unnecessary usleep
- Shorten selftest name
Rhys Rustad-Elliott (2):
bpf: Fix elem_size not being set for inner maps
selftests/bpf: Add access_inner_map selftest
kernel/bpf/map_in_map.c | 8 +++-
.../bpf/prog_tests/inner_array_lookup.c | 31 +++++++++++++
.../bpf/progs/test_inner_array_lookup.c | 45 +++++++++++++++++++
3 files changed, 82 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/inner_array_lookup.c
create mode 100644 tools/testing/selftests/bpf/progs/test_inner_array_lookup.c
--
2.40.1
*Changes in v16*
- Fix a corner case
- Add exclusive PM_SCAN_OP_WP back
*Changes in v15*
- Build fix (Add missed build fix in RESEND)
*Changes in v14*
- Fix build error caused by #ifdef added at last minute in some configs
*Changes in v13*
- Rebase on top of next-20230414
- Give-up on using uffd_wp_range() and write new helpers, flush tlb only
once
*Changes in v12*
- Update and other memory types to UFFD_FEATURE_WP_ASYNC
- Rebase on top of next-20230406
- Review updates
*Changes in v11*
- Rebase on top of next-20230307
- Base patches on UFFD_FEATURE_WP_UNPOPULATED
- Do a lot of cosmetic changes and review updates
- Remove ENGAGE_WP + !GET operation as it can be performed with
UFFDIO_WRITEPROTECT
*Changes in v10*
- Add specific condition to return error if hugetlb is used with wp
async
- Move changes in tools/include/uapi/linux/fs.h to separate patch
- Add documentation
*Changes in v9:*
- Correct fault resolution for userfaultfd wp async
- Fix build warnings and errors which were happening on some configs
- Simplify pagemap ioctl's code
*Changes in v8:*
- Update uffd async wp implementation
- Improve PAGEMAP_IOCTL implementation
*Changes in v7:*
- Add uffd wp async
- Update the IOCTL to use uffd under the hood instead of soft-dirty
flags
*Motivation*
The real motivation for adding the PAGEMAP_SCAN IOCTL is to emulate the
Windows GetWriteWatch() syscall [1]. GetWriteWatch() retrieves the
addresses of the pages that are written to in a region of virtual
memory.
This syscall is used in Windows applications and games etc. It is
currently emulated in userspace in a pretty slow manner. Our purpose is
to enhance the kernel so that it can be translated efficiently.
Currently some out-of-tree hack patches are used to emulate it
efficiently in some kernels. We intend to replace those with these
patches, so gaming on Linux as a whole can effectively benefit from
this. It means there would be tons of users of this code.
CRIU use case [2] was mentioned by Andrei and Danylo:
> Use cases for migrating sparse VMAs are binaries sanitized with ASAN,
> MSAN or TSAN [3]. All of these sanitizers produce sparse mappings of
> shadow memory [4]. Being able to migrate such binaries allows to highly
> reduce the amount of work needed to identify and fix post-migration
> crashes, which happen constantly.
Andrei defines the following uses of this code:
* it is more granular and allows us to track changed pages more
effectively. The current interface can clear dirty bits for the entire
process only. In addition, reading info about pages is a separate
operation. It means we must freeze the process to read information
about all its pages, reset dirty bits, only then we can start dumping
pages. The information about pages becomes more and more outdated,
while we are processing pages. The new interface solves both these
downsides. First, it allows us to read pte bits and clear the
soft-dirty bit atomically. It means that CRIU will not need to freeze
processes to pre-dump their memory. Second, it clears soft-dirty bits
for a specified region of memory. It means CRIU will have actual info
about pages to the moment of dumping them.
* The new interface has to be much faster because basic page filtering
is happening in the kernel. With the old interface, we have to read
pagemap for each page.
*Implementation Evolution (Short Summary)*
From the definition of GetWriteWatch(), we felt that the kernel's
soft-dirty feature could be used under the hood with some additions
like:
* reset soft-dirty flag for only a specific region of memory instead of
clearing the flag for the entire process
* get and clear soft-dirty flag for a specific region atomically
So we decided to use an ioctl on the pagemap file to read and/or reset
the soft-dirty flag. But using the soft-dirty flag, sometimes we get
extra pages which weren't even written to. They had become soft-dirty
because of VMA merging and the VM_SOFTDIRTY flag. This breaks the
definition of GetWriteWatch(). We were able to bypass this shortcoming
by ignoring VM_SOFTDIRTY, until David reported that mprotect etc.
messes up the soft-dirty flag while ignoring VM_SOFTDIRTY [5]. This
wasn't happening until [6] got introduced. We discussed whether we
could revert these patches, but could not reach any conclusion. So at
this point, I made a couple of attempts to solve this whole
VM_SOFTDIRTY issue by correcting the soft-dirty implementation:
* [7] Correct the bug that was fixed wrongly back in 2014. It had the
potential to cause a regression. We left it behind.
* [8] Keep a list of the soft-dirty parts of a VMA across splits and
merges. I got the reply not to increase the size of the VMA by 8 bytes.
At this point, we left soft-dirty behind, considering it too delicate,
and userfaultfd [9] seemed like the only way forward. From there
onward, we have been basing the soft-dirty emulation on the userfaultfd
WP feature, where the kernel resolves the faults itself when the
WP_ASYNC feature is used. It was straightforward to add the WP_ASYNC
feature to userfaultfd. Now we get only those pages reported as dirty
or written-to which are really written to in reality. (PS: another
userfaultfd feature, WP_UNPOPULATED, is also required to avoid
pre-faulting memory before write-protecting [9].)
All the different masks were added at the request of the CRIU devs to
make the interface more generic and better.
[1] https://learn.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-…
[2] https://lore.kernel.org/all/20221014134802.1361436-1-mdanylo@google.com
[3] https://github.com/google/sanitizers
[4] https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#64-bit
[5] https://lore.kernel.org/all/bfcae708-db21-04b4-0bbe-712badd03071@redhat.com
[6] https://lore.kernel.org/all/20220725142048.30450-1-peterx@redhat.com/
[7] https://lore.kernel.org/all/20221122115007.2787017-1-usama.anjum@collabora.…
[8] https://lore.kernel.org/all/20221220162606.1595355-1-usama.anjum@collabora.…
[9] https://lore.kernel.org/all/20230306213925.617814-1-peterx@redhat.com
[10] https://lore.kernel.org/all/20230125144529.1630917-1-mdanylo@google.com
* Original Cover letter from v8*
Hello,
Note:
Soft-dirty pages and pages which have been written to are synonyms
here. As the kernel already has a soft-dirty feature, which we have
given up on using, we use the written-to terminology when using UFFD
async WP under the hood.
This IOCTL, PAGEMAP_SCAN, on the pagemap file can be used to get
and/or clear the info about page table entries. The following
operations are supported in this ioctl:
- Get the information if the pages have been written-to (PAGE_IS_WRITTEN),
file mapped (PAGE_IS_FILE), present (PAGE_IS_PRESENT) or swapped
(PAGE_IS_SWAPPED).
- Write-protect the pages (PAGEMAP_WP_ENGAGE) to start finding which
pages have been written-to.
- Find pages which have been written-to and write protect the pages
(atomic PAGE_IS_WRITTEN + PAGEMAP_WP_ENGAGE)
It is possible to find and clear soft-dirty pages entirely in userspace.
But it isn't efficient:
- The mprotect and SIGSEGV handler for bookkeeping
- The userfaultfd wp (synchronous) with the handler for bookkeeping
Some benchmarks can be seen here [1]. This series adds features that
weren't present earlier:
- There was no atomic way in the kernel to get the soft-dirty /
written-to status and clear it.
- The pages which have been written to could not be found accurately.
(The kernel's soft-dirty PTE bit plus the soft-dirty VMA bit show more
soft-dirty pages than there actually are.)
Historically, soft-dirty PTE bit tracking has been used in the CRIU
project. The procfs interface is enough for finding the soft-dirty bit
status and clearing the soft-dirty bit of all the pages of a process.
We have the use case where we need to track the soft-dirty PTE bit for
only specific pages on-demand. We need this tracking and clear mechanism
of a region of memory while the process is running to emulate the
getWriteWatch() syscall of Windows.
*(Moved to using UFFD instead of the soft-dirty feature to find pages
which have been written to, from the v7 patch series onward)*:
Stop using the soft-dirty flags for finding which pages have been
written to. It is too delicate and wrong, as it shows more soft-dirty
pages than the actual soft-dirty pages. There is no interest in
correcting it [2][3] as this is how the feature was written years ago;
it shouldn't be updated to changed behaviour. Peter Xu has suggested
using the async version of the UFFD WP [4] as it is based inherently
on the PTEs.
So in this patch series, I've added a new mode to the UFFD which is
asynchronous version of the write protect. When this variant of the
UFFD WP is used, the page faults are resolved automatically by the
kernel. The pages which have been written-to can be found by reading
pagemap file (!PM_UFFD_WP). This feature can be used successfully to
find which pages have been written to from the time the pages were
write protected. This works just like the soft-dirty flag without
showing any extra pages which aren't soft-dirty in reality.
The information about whether a page is file-mapped, present or
swapped is required for the CRIU project [5][6]. The addition of the
required mask, any mask, excluded mask and return mask is also
required for the CRIU project [5].
The IOCTL returns the addresses of the pages which match the specified
masks. The page addresses are returned in struct page_region in a
compact form. max_pages is needed to support a use case where the user
only wants a specific number of pages, so there is no need to find all
the pages of interest in the range when max_pages is specified; the
IOCTL returns when the maximum number of pages has been found.
max_pages is optional. If max_pages is specified, it must be equal to
or greater than vec_size. This restriction is needed to handle the
worst case where one page_region only contains info about one page and
cannot be compacted.
This is needed to emulate the Windows getWriteWatch() syscall.
The patch series includes a detailed selftest which can be used as an
example for the uffd async wp test and PAGEMAP_IOCTL. It shows the
interface usage as well.
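As a rough sketch of the intended usage pattern only: the structure
layout, field names, flag value and ioctl number below are placeholders
invented for illustration and do not match the actual UAPI added in
include/uapi/linux/fs.h; see the patches and the pagemap_ioctl.c
selftest for the real definitions.

#include <fcntl.h>
#include <linux/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Placeholder stand-ins for the real UAPI definitions */
#define PAGE_IS_WRITTEN		(1ULL << 0)	/* illustrative value */

struct page_region {				/* compact run of pages */
	uint64_t start;				/* hypothetical fields */
	uint64_t len;
	uint64_t flags;
};

struct pagemap_scan_arg {			/* hypothetical layout */
	uint64_t start, len;
	uint64_t vec, vec_len;			/* output array and its size */
	uint64_t max_pages;			/* optional upper bound */
	uint64_t required_mask, anyof_mask, excluded_mask, return_mask;
};

#define PAGEMAP_SCAN	_IOWR('f', 16, struct pagemap_scan_arg)	/* placeholder */

int main(void)
{
	struct page_region vec[256];
	struct pagemap_scan_arg arg = {
		.start = 0x7f0000000000ull,	/* region to scan (example) */
		.len = 1 << 20,
		.vec = (uintptr_t)vec,
		.vec_len = 256,
		.required_mask = PAGE_IS_WRITTEN,	/* only written-to pages */
		.return_mask = PAGE_IS_WRITTEN,
	};
	int fd = open("/proc/self/pagemap", O_RDONLY);
	int n = ioctl(fd, PAGEMAP_SCAN, &arg);	/* #regions filled on success */

	for (int i = 0; i < n; i++)
		printf("written: 0x%llx, %llu pages\n",
		       (unsigned long long)vec[i].start,
		       (unsigned long long)vec[i].len);
	return 0;
}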
[1] https://lore.kernel.org/lkml/54d4c322-cd6e-eefd-b161-2af2b56aae24@collabora…
[2] https://lore.kernel.org/all/20221220162606.1595355-1-usama.anjum@collabora.…
[3] https://lore.kernel.org/all/20221122115007.2787017-1-usama.anjum@collabora.…
[4] https://lore.kernel.org/all/Y6Hc2d+7eTKs7AiH@x1n
[5] https://lore.kernel.org/all/YyiDg79flhWoMDZB@gmail.com/
[6] https://lore.kernel.org/all/20221014134802.1361436-1-mdanylo@google.com/
Regards,
Muhammad Usama Anjum
Muhammad Usama Anjum (4):
fs/proc/task_mmu: Implement IOCTL to get and optionally clear info
about PTEs
tools headers UAPI: Update linux/fs.h with the kernel sources
mm/pagemap: add documentation of PAGEMAP_SCAN IOCTL
selftests: mm: add pagemap ioctl tests
Peter Xu (1):
userfaultfd: UFFD_FEATURE_WP_ASYNC
Documentation/admin-guide/mm/pagemap.rst | 58 +
Documentation/admin-guide/mm/userfaultfd.rst | 35 +
fs/proc/task_mmu.c | 503 ++++++
fs/userfaultfd.c | 26 +-
include/linux/userfaultfd_k.h | 21 +-
include/uapi/linux/fs.h | 53 +
include/uapi/linux/userfaultfd.h | 9 +-
mm/hugetlb.c | 32 +-
mm/memory.c | 27 +-
tools/include/uapi/linux/fs.h | 53 +
tools/testing/selftests/mm/.gitignore | 1 +
tools/testing/selftests/mm/Makefile | 3 +-
tools/testing/selftests/mm/config | 1 +
tools/testing/selftests/mm/pagemap_ioctl.c | 1459 ++++++++++++++++++
tools/testing/selftests/mm/run_vmtests.sh | 4 +
15 files changed, 2262 insertions(+), 23 deletions(-)
create mode 100644 tools/testing/selftests/mm/pagemap_ioctl.c
mode change 100644 => 100755 tools/testing/selftests/mm/run_vmtests.sh
--
2.39.2
This is part of the effort to remove the empty element of the
ctl_table structures (used to calculate size) and replace it with an
ARRAY_SIZE call. By replacing the child element in struct ctl_table
with a flags element, we make sure that there are no forward recursions
into child nodes and therefore set ourselves up for just using
ARRAY_SIZE. We also added some selftests to make sure that we do not
break anything.
The patchset is separated into 4 parts: parport fixes, selftests fixes,
selftests additions and replacement of the child element. I tested
everything with the sysctl selftests and everything seems "ok".
1. parport fixes: @mcgrof: this is related to my previous series and
it plugs a sysctl table leak in the parport driver. Please tell me if
you want me to repost the parport series with this one stitched in.
2. Selftests fixes: Remove the prefixed zeros when passing an awk field
to the awk print command because they caused $0009 to be interpreted
as $0. Replaced continue with return in sysctl.sh (test_case) so the
test actually gets skipped. The skip decision is now in sysctl.sh
(skip_test).
3. Selftest additions: A new test to confirm that unregister actually
removes targets. A new test to confirm that permanently empty targets
are indeed created and that no other targets can be created "on top".
4. Replaced the child pointer in struct ctl_table with a u8 flag. The
flag is used to differentiate between permanently empty targets and
non-empty ones.
Comments/feedback greatly appreciated
Best
Joel
Joel Granados (8):
parport: plug a sysctl register leak
test_sysctl: Fix test metadata getters
test_sysctl: Group node sysctl test under one func
test_sysctl: Add an unregister sysctl test
test_sysctl: Add an option to prevent test skip
test_sysclt: Test for registering a mount point
sysctl: Remove debugging dump_stack
sysctl: replace child with a flags var
drivers/parport/procfs.c | 23 ++---
fs/proc/proc_sysctl.c | 82 ++++------------
include/linux/sysctl.h | 4 +-
lib/test_sysctl.c | 91 ++++++++++++++++--
tools/testing/selftests/sysctl/sysctl.sh | 115 +++++++++++++++++------
5 files changed, 204 insertions(+), 111 deletions(-)
--
2.30.2
From: Jeff Xu <jeffxu@google.com>
This is the first set of Memory mapping (VMA) protection patches using PKU.
* * *
Background:
As discussed previously in the kernel mailing list [1], V8 CFI [2] uses
PKU to protect memory, and Stephen Röttger proposes to extend the PKU to
memory mapping [3].
We're using PKU for in-process isolation to enforce control-flow integrity
for a JIT compiler. In our threat model, an attacker exploits a
vulnerability and has arbitrary read/write access to the whole process
space concurrently to other threads being executed. This attacker can
manipulate some arguments to syscalls from some threads.
Under such a powerful attack, we want to create a “safe/isolated”
thread environment. We assign dedicated PKUs to this thread,
and use those PKUs to protect the threads’ runtime environment.
The thread has exclusive access to its run-time memory. This
includes modifying the protection of the memory mapping, or
unmapping the memory mapping after use. The other threads
won’t be able to access the memory or modify the memory mapping
(VMA) belonging to the thread.
* * *
Proposed changes:
This patch introduces a new flag, PKEY_ENFORCE_API, to the pkey_alloc()
function. When a PKEY is created with this flag, it is enforced that any
thread that wants to make changes to the memory mapping (such as mprotect)
of the memory must have write access to the PKEY. PKEYs created without
this flag will continue to work as they do now, for backwards
compatibility.
Only PKEYs created from user space can have the new flag set; the
PKEYs allocated by the kernel internally will not have it. In other
words, ARCH_DEFAULT_PKEY(0) and execute_only_pkey won’t have this flag
set, and continue to work as they do today.
This flag is checked only at syscall entry, such as mprotect/munmap in
this set of patches. It will not apply to other call paths. In other
words, if the kernel wants to change the attributes of a VMA for some
reason, it is free to do that and is not affected by this new flag.
This set of patches covers mprotect/munmap; I plan to work on other
syscalls after this.
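A userspace sketch of the intended usage (illustrative only:
PKEY_ENFORCE_API is the flag proposed by this series and the fallback
value below is a placeholder; pkey_alloc()/pkey_mprotect()/pkey_set()
are the existing glibc wrappers, and most error handling is omitted):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef PKEY_ENFORCE_API
#define PKEY_ENFORCE_API	(1u << 0)	/* placeholder value */
#endif

int main(void)
{
	size_t len = 4096;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Allocate a pkey that also guards mapping-changing syscalls */
	int pkey = pkey_alloc(PKEY_ENFORCE_API, 0);
	if (pkey < 0) {
		perror("pkey_alloc");	/* no PKU, or kernel without this series */
		return 1;
	}
	pkey_mprotect(p, len, PROT_READ | PROT_WRITE, pkey);

	/*
	 * Once write access to the pkey is disabled in this thread's PKRU,
	 * the series makes mprotect()/munmap() on this VMA fail as well,
	 * not just the loads and stores.
	 */
	pkey_set(pkey, PKEY_DISABLE_WRITE);
	if (mprotect(p, len, PROT_READ) < 0)
		perror("mprotect blocked by PKEY_ENFORCE_API");

	pkey_set(pkey, 0);	/* restore access before cleaning up */
	munmap(p, len);
	pkey_free(pkey);
	return 0;
}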
* * *
Testing:
I have tested this patch on Linux kernels 5.15, 6.1, and 6.4-rc1.
A new selftest is added in: pkey_enforce_api.c
* * *
Discussion:
We believe that this patch provides a valuable security feature.
It allows us to create “safe/isolated” thread environments that are
protected from attackers with arbitrary read/write access to
the process space.
We believe that the interface change and the patch don't
introduce a backwards compatibility risk.
We would like to discuss this patch with the Linux kernel community
for feedback and support.
* * *
Reference:
[1]https://lore.kernel.org/all/202208221331.71C50A6F@keescook/
[2]https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyX…
[3]https://docs.google.com/document/d/1qqVoVfRiF2nRylL3yjZyCQvzQaej1HRPh3f5w…
Best Regards,
-Jeff Xu
Jeff Xu (6):
PKEY: Introduce PKEY_ENFORCE_API flag
PKEY: Add arch_check_pkey_enforce_api()
PKEY: Apply PKEY_ENFORCE_API to mprotect
PKEY:selftest pkey_enforce_api for mprotect
KEY: Apply PKEY_ENFORCE_API to munmap
PKEY:selftest pkey_enforce_api for munmap
arch/powerpc/include/asm/pkeys.h | 19 +-
arch/x86/include/asm/mmu.h | 7 +
arch/x86/include/asm/pkeys.h | 92 +-
arch/x86/mm/pkeys.c | 2 +-
include/linux/mm.h | 2 +-
include/linux/pkeys.h | 18 +-
include/uapi/linux/mman.h | 5 +
mm/mmap.c | 34 +-
mm/mprotect.c | 31 +-
mm/mremap.c | 6 +-
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/pkey_enforce_api.c | 1312 +++++++++++++++++
12 files changed, 1507 insertions(+), 22 deletions(-)
create mode 100644 tools/testing/selftests/mm/pkey_enforce_api.c
base-commit: ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
--
2.40.1.606.ga4b1b128d6-goog
This patch updates the cgroup-v2.rst file to include information about
the new "cpuset.cpus.reserve" control file as well as the new remote
partition.
Signed-off-by: Waiman Long <longman@redhat.com>
---
Documentation/admin-guide/cgroup-v2.rst | 92 +++++++++++++++++++++----
1 file changed, 79 insertions(+), 13 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index f67c0829350b..3e9351c2cd27 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2215,6 +2215,38 @@ Cpuset Interface Files
Its value will be affected by memory nodes hotplug events.
+ cpuset.cpus.reserve
+ A read-write multiple values file which exists only on root
+ cgroup.
+
+ It lists all the CPUs that are reserved for adjacent and remote
+ partitions created in the system. See the next section for
+ more information on what an adjacent or remote partition is.
+
+ Creation of adjacent partition does not require touching this
+ control file as CPU reservation will be done automatically.
+ In order to create a remote partition, the CPUs needed by the
+ remote partition have to be written to this file first.
+
+ Due to the fact that "cpuset.cpus.reserve" holds reserve CPUs
+ that can be used by multiple partitions and automatic reservation
+ may also race with manual reservation, the extension prefixes of
+ "+" and "-" are allowed for this file to reduce races.
+
+ A "+" prefix can be used to indicate a list of additional
+ CPUs that are to be added without disturbing the CPUs that are
+ originally there. For example, if its current value is "3-4",
+ echoing "+5" to it will change it to "3-5".
+
+ Once a remote partition is destroyed, its CPUs have to be
+ removed from this file or no other process can use them. A "-"
+ prefix can be used to remove a list of CPUs from it. However,
+ removing CPUs that are currently used in existing partitions
+ may cause those partitions to become invalid. A single "-"
+ character without any number can be used to indicate removal
+ of all the free CPUs not yet allocated to any partitions to
+ avoid accidental partition invalidation.
+
cpuset.cpus.partition
A read-write single value file which exists on non-root
cpuset-enabled cgroups. This flag is owned by the parent cgroup
@@ -2228,25 +2260,49 @@ Cpuset Interface Files
"isolated" Partition root without load balancing
========== =====================================
- The root cgroup is always a partition root and its state
- cannot be changed. All other non-root cgroups start out as
- "member".
+ A cpuset partition is a collection of cgroups with a partition
+ root at the top of the hierarchy and its descendants except
+ those that are separate partition roots themselves and their
+ descendants. A partition has exclusive access to the set of
+ CPUs allocated to it. Other cgroups outside of that partition
+ cannot use any CPUs in that set.
+
+ There are two types of partitions - adjacent and remote. The
+ parent of an adjacent partition must be a valid partition root.
+ Partition roots of adjacent partitions are all clustered around
+ the root cgroup. Creation of adjacent partition is done by
+ writing the desired partition type into "cpuset.cpus.partition".
+
+ A remote partition does not require a partition root parent.
+ So a remote partition can be formed far from the root cgroup.
+ However, its creation is a 2-step process. The CPUs needed
+ by a remote partition ("cpuset.cpus" of the partition root)
+ have to be written into "cpuset.cpus.reserve" of the root
+ cgroup first. After that, "isolated" can be written into
+ "cpuset.cpus.partition" of the partition root to form a remote
+ isolated partition which is the only supported remote partition
+ type for now.
+
+ All remote partitions are terminal as adjacent partition cannot
+ be created underneath it. With the way remote partition is
+ formed, it is not possible to create another valid remote
+ partition underneath it.
+
+ The root cgroup is always a partition root and its state cannot
+ be changed. All other non-root cgroups start out as "member".
When set to "root", the current cgroup is the root of a new
- partition or scheduling domain that comprises itself and all
- its descendants except those that are separate partition roots
- themselves and their descendants.
+ partition or scheduling domain.
- When set to "isolated", the CPUs in that partition root will
+ When set to "isolated", the CPUs in that partition will
be in an isolated state without any load balancing from the
scheduler. Tasks placed in such a partition with multiple
CPUs should be carefully distributed and bound to each of the
individual CPUs for optimal performance.
- The value shown in "cpuset.cpus.effective" of a partition root
- is the CPUs that the partition root can dedicate to a potential
- new child partition root. The new child subtracts available
- CPUs from its parent "cpuset.cpus.effective".
+ The value shown in "cpuset.cpus.effective" of a partition root is
+ the CPUs that are dedicated to that partition and not available
+ to cgroups outside of that partition.
A partition root ("root" or "isolated") can be in one of the
two possible states - valid or invalid. An invalid partition
@@ -2270,8 +2326,8 @@ Cpuset Interface Files
In the case of an invalid partition root, a descriptive string on
why the partition is invalid is included within parentheses.
- For a partition root to become valid, the following conditions
- must be met.
+ For an adjacent partition root to be valid, the following
+ conditions must be met.
1) The "cpuset.cpus" is exclusive with its siblings , i.e. they
are not shared by any of its siblings (exclusivity rule).
@@ -2281,6 +2337,16 @@ Cpuset Interface Files
4) The "cpuset.cpus.effective" cannot be empty unless there is
no task associated with this partition.
+ For a remote partition root to be valid, the following conditions
+ must be met.
+
+ 1) The same exclusivity rule as adjacent partition root.
+ 2) The "cpuset.cpus" is not empty and all the CPUs must be
+ present in "cpuset.cpus.reserve" of the root cgroup and none
+ of them are allocated to another partition.
+ 3) The "cpuset.cpus" value must be present in all its ancestors
+ to ensure proper hierarchical cpu distribution.
+
External events like hotplug or changes to "cpuset.cpus" can
cause a valid partition root to become invalid and vice versa.
Note that a task cannot be moved to a cgroup with empty
--
2.31.1
From: Mark Brown <broonie@kernel.org>
[ Upstream commit dbcf76390eb9a65d5d0c37b0cd57335218564e37 ]
The ftrace selftests do not currently produce KTAP output, they produce a
custom format much nicer for human consumption. This means that when run in
automated test systems we just get a single result for the suite as a whole
rather than recording results for individual test cases, making it harder
to look at the test data and masking things like inappropriate skips.
Address this by adding support for KTAP output to the ftracetest script and
providing a trivial wrapper which will be invoked by the kselftest runner
to generate output in this format by default, users using ftracetest
directly will continue to get the existing output.
This is not the most elegant solution but it is simple and effective. I
did consider implementing this by post processing the existing output
format but that felt more complex and likely to result in all output being
lost if something goes seriously wrong during the run which would not be
helpful. I did also consider just writing a separate runner script but
there's enough going on with things like the signal handling for that to
seem like it would be duplicating too much.
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
tools/testing/selftests/ftrace/Makefile | 3 +-
tools/testing/selftests/ftrace/ftracetest | 63 ++++++++++++++++++-
.../testing/selftests/ftrace/ftracetest-ktap | 8 +++
3 files changed, 70 insertions(+), 4 deletions(-)
create mode 100755 tools/testing/selftests/ftrace/ftracetest-ktap
diff --git a/tools/testing/selftests/ftrace/Makefile b/tools/testing/selftests/ftrace/Makefile
index d6e106fbce11c..a1e955d2de4cc 100644
--- a/tools/testing/selftests/ftrace/Makefile
+++ b/tools/testing/selftests/ftrace/Makefile
@@ -1,7 +1,8 @@
# SPDX-License-Identifier: GPL-2.0
all:
-TEST_PROGS := ftracetest
+TEST_PROGS_EXTENDED := ftracetest
+TEST_PROGS := ftracetest-ktap
TEST_FILES := test.d settings
EXTRA_CLEAN := $(OUTPUT)/logs/*
diff --git a/tools/testing/selftests/ftrace/ftracetest b/tools/testing/selftests/ftrace/ftracetest
index c3311c8c40890..2506621e75dfb 100755
--- a/tools/testing/selftests/ftrace/ftracetest
+++ b/tools/testing/selftests/ftrace/ftracetest
@@ -13,6 +13,7 @@ echo "Usage: ftracetest [options] [testcase(s)] [testcase-directory(s)]"
echo " Options:"
echo " -h|--help Show help message"
echo " -k|--keep Keep passed test logs"
+echo " -K|--ktap Output in KTAP format"
echo " -v|--verbose Increase verbosity of test messages"
echo " -vv Alias of -v -v (Show all results in stdout)"
echo " -vvv Alias of -v -v -v (Show all commands immediately)"
@@ -85,6 +86,10 @@ parse_opts() { # opts
KEEP_LOG=1
shift 1
;;
+ --ktap|-K)
+ KTAP=1
+ shift 1
+ ;;
--verbose|-v|-vv|-vvv)
if [ $VERBOSE -eq -1 ]; then
usage "--console can not use with --verbose"
@@ -178,6 +183,7 @@ TEST_DIR=$TOP_DIR/test.d
TEST_CASES=`find_testcases $TEST_DIR`
LOG_DIR=$TOP_DIR/logs/`date +%Y%m%d-%H%M%S`/
KEEP_LOG=0
+KTAP=0
DEBUG=0
VERBOSE=0
UNSUPPORTED_RESULT=0
@@ -229,7 +235,7 @@ prlog() { # messages
newline=
shift
fi
- printf "$*$newline"
+ [ "$KTAP" != "1" ] && printf "$*$newline"
[ "$LOG_FILE" ] && printf "$*$newline" | strip_esc >> $LOG_FILE
}
catlog() { #file
@@ -260,11 +266,11 @@ TOTAL_RESULT=0
INSTANCE=
CASENO=0
+CASENAME=
testcase() { # testfile
CASENO=$((CASENO+1))
- desc=`grep "^#[ \t]*description:" $1 | cut -f2- -d:`
- prlog -n "[$CASENO]$INSTANCE$desc"
+ CASENAME=`grep "^#[ \t]*description:" $1 | cut -f2- -d:`
}
checkreq() { # testfile
@@ -277,40 +283,68 @@ test_on_instance() { # testfile
grep -q "^#[ \t]*flags:.*instance" $1
}
+ktaptest() { # result comment
+ if [ "$KTAP" != "1" ]; then
+ return
+ fi
+
+ local result=
+ if [ "$1" = "1" ]; then
+ result="ok"
+ else
+ result="not ok"
+ fi
+ shift
+
+ local comment=$*
+ if [ "$comment" != "" ]; then
+ comment="# $comment"
+ fi
+
+ echo $CASENO $result $INSTANCE$CASENAME $comment
+}
+
eval_result() { # sigval
case $1 in
$PASS)
prlog " [${color_green}PASS${color_reset}]"
+ ktaptest 1
PASSED_CASES="$PASSED_CASES $CASENO"
return 0
;;
$FAIL)
prlog " [${color_red}FAIL${color_reset}]"
+ ktaptest 0
FAILED_CASES="$FAILED_CASES $CASENO"
return 1 # this is a bug.
;;
$UNRESOLVED)
prlog " [${color_blue}UNRESOLVED${color_reset}]"
+ ktaptest 0 UNRESOLVED
UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
return $UNRESOLVED_RESULT # depends on use case
;;
$UNTESTED)
prlog " [${color_blue}UNTESTED${color_reset}]"
+ ktaptest 1 SKIP
UNTESTED_CASES="$UNTESTED_CASES $CASENO"
return 0
;;
$UNSUPPORTED)
prlog " [${color_blue}UNSUPPORTED${color_reset}]"
+ ktaptest 1 SKIP
UNSUPPORTED_CASES="$UNSUPPORTED_CASES $CASENO"
return $UNSUPPORTED_RESULT # depends on use case
;;
$XFAIL)
prlog " [${color_green}XFAIL${color_reset}]"
+ ktaptest 1 XFAIL
XFAILED_CASES="$XFAILED_CASES $CASENO"
return 0
;;
*)
prlog " [${color_blue}UNDEFINED${color_reset}]"
+ ktaptest 0 error
UNDEFINED_CASES="$UNDEFINED_CASES $CASENO"
return 1 # this must be a test bug
;;
@@ -371,6 +405,7 @@ __run_test() { # testfile
run_test() { # testfile
local testname=`basename $1`
testcase $1
+ prlog -n "[$CASENO]$INSTANCE$CASENAME"
if [ ! -z "$LOG_FILE" ] ; then
local testlog=`mktemp $LOG_DIR/${CASENO}-${testname}-log.XXXXXX`
else
@@ -405,6 +440,17 @@ run_test() { # testfile
# load in the helper functions
. $TEST_DIR/functions
+if [ "$KTAP" = "1" ]; then
+ echo "TAP version 13"
+
+ casecount=`echo $TEST_CASES | wc -w`
+ for t in $TEST_CASES; do
+ test_on_instance $t || continue
+ casecount=$((casecount+1))
+ done
+ echo "1..${casecount}"
+fi
+
# Main loop
for t in $TEST_CASES; do
run_test $t
@@ -439,6 +485,17 @@ prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`
+if [ "$KTAP" = "1" ]; then
+ echo -n "# Totals:"
+ echo -n " pass:"`echo $PASSED_CASES | wc -w`
+ echo -n " faii:"`echo $FAILED_CASES | wc -w`
+ echo -n " xfail:"`echo $XFAILED_CASES | wc -w`
+ echo -n " xpass:0"
+ echo -n " skip:"`echo $UNTESTED_CASES $UNSUPPORTED_CASES | wc -w`
+ echo -n " error:"`echo $UNRESOLVED_CASES $UNDEFINED_CASES | wc -w`
+ echo
+fi
+
cleanup
# if no error, return 0
diff --git a/tools/testing/selftests/ftrace/ftracetest-ktap b/tools/testing/selftests/ftrace/ftracetest-ktap
new file mode 100755
index 0000000000000..b3284679ef3af
--- /dev/null
+++ b/tools/testing/selftests/ftrace/ftracetest-ktap
@@ -0,0 +1,8 @@
+#!/bin/sh -e
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# ftracetest-ktap: Wrapper to integrate ftracetest with the kselftest runner
+#
+# Copyright (C) Arm Ltd., 2023
+
+./ftracetest -K
--
2.39.2
One can use "cpuset.cpus.partition" to create multiple scheduling domains
or to produce a set of isolated CPUs where load balancing is disabled.
The former use case is less common but the latter one is frequently
used, especially for Telco use cases like DPDK.
The existing "isolated" partition can be used to produce isolated
CPUs if the applications have full control of a system. However, in a
containerized environment where all the apps are run in a container,
it is hard to distribute out isolated CPUs from the root down given
the unified hierarchy nature of cgroup v2.
The container running on isolated CPUs can be several layers down from
the root. The current partition feature requires that all the ancestors
of a leaf partition root must be parititon roots themselves. This can
be hard to configure.
This patch introduces a new type of partition called remote partition.
A remote partition is a partition whose parent is not a partition root
itself and its CPUs are acquired directly from available CPUs in the
top cpuset's cpuset.cpus.reserve. In contrast, the existing type of
partitions, whose parents have to be valid partition roots, are
referred to as adjacent partitions as they have to be clustered around
the cgroup root.
This patch enables only the creation of remote isolated partitions
for now.
The creation of a remote isolated partition is a 2-step process.
1) Reserve the CPUs needed by the remote partition by adding CPUs to
cpuset.cpus.reserve of the top cpuset.
2) Enable an isolated partition by
# echo isolated > cpuset.cpus.partition
Such a remote isolated partition P will only be valid if the following
conditions are true.
1) P/cpuset.cpus is a subset of top cpuset's cpuset.cpus.reserve.
2) All the CPUs in P/cpuset.cpus are present in the cpuset.cpus of
all its ancestors to ensure that those CPUs are properly granted
to P in a hierarchical manner.
3) None of the CPUs in P/cpuset.cpus have been acquired by other valid
partitions.
Like adjacent partitions, a remote partition has exclusive access to the
CPUs allocated to that partition. Because of the exclusive nature, none
of the cpuset.cpus of its sibling cpusets can contain any CPUs allocated
to the remote partition or the partition creation process will fail.
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 306 +++++++++++++++++++++++++++++++++++++++--
1 file changed, 291 insertions(+), 15 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 69abe95a9969..280018cddaba 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -98,6 +98,7 @@ enum prs_errcode {
PERR_NOCPUS,
PERR_HOTPLUG,
PERR_CPUSEMPTY,
+ PERR_RMTPARENT,
};
static const char * const perr_strings[] = {
@@ -108,6 +109,7 @@ static const char * const perr_strings[] = {
[PERR_NOCPUS] = "Parent unable to distribute cpu downstream",
[PERR_HOTPLUG] = "No cpu available due to hotplug",
[PERR_CPUSEMPTY] = "cpuset.cpus is empty",
+ [PERR_RMTPARENT] = "New partition not allowed under remote partition",
};
struct cpuset {
@@ -206,6 +208,9 @@ struct cpuset {
/* Handle for cpuset.cpus.partition */
struct cgroup_file partition_file;
+
+ /* Remote partition sibling list anchored at remote_children */
+ struct list_head remote_sibling;
};
/*
@@ -236,6 +241,9 @@ static cpumask_var_t cs_reserve_cpus; /* Reserved CPUs */
static cpumask_var_t cs_free_reserve_cpus; /* Unallocated reserved CPUs */
static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */
+/* List of remote partition root children */
+static struct list_head remote_children;
+
/*
* Partition root states:
*
@@ -385,6 +393,8 @@ static struct cpuset top_cpuset = {
.flags = ((1 << CS_ONLINE) | (1 << CS_CPU_EXCLUSIVE) |
(1 << CS_MEM_EXCLUSIVE)),
.partition_root_state = PRS_ROOT,
+ .remote_sibling = LIST_HEAD_INIT(top_cpuset.remote_sibling),
+
};
/**
@@ -1385,6 +1395,209 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
rebuild_sched_domains_locked();
}
+static inline bool is_remote_partition(struct cpuset *cs)
+{
+ return !list_empty(&cs->remote_sibling);
+}
+
+/*
+ * update_isolated_cpumasks_hier - Update effective cpumasks and tasks
+ * @cs: the cpuset to consider
+ * @lb: load balance flag
+ *
+ * This is called for descendant cpusets when a cpuset switches to or
+ * from an isolated remote partition. There can't be any remote partitions
+ * underneath it.
+ */
+static void update_isolated_cpumasks_hier(struct cpuset *cs, bool lb)
+{
+ struct cpuset *cp;
+ struct cgroup_subsys_state *pos_css;
+
+ rcu_read_lock();
+ cpuset_for_each_descendant_pre(cp, pos_css, cs) {
+ struct cpuset *parent = parent_cs(cp);
+
+ if (cp == cs)
+ continue; /* Skip partition root */
+
+ WARN_ON_ONCE(is_partition_valid(cp));
+ spin_lock_irq(&callback_lock);
+
+ if (cpumask_and(cp->effective_cpus, cp->cpus_allowed,
+ parent->effective_cpus)) {
+ if (cp->use_parent_ecpus) {
+ WARN_ON_ONCE(--parent->child_ecpus_count < 0);
+ cp->use_parent_ecpus = false;
+ }
+ } else {
+ cpumask_copy(cp->effective_cpus, parent->effective_cpus);
+ if (!cp->use_parent_ecpus) {
+ parent->child_ecpus_count++;
+ cp->use_parent_ecpus = true;
+ }
+ }
+ if (lb)
+ set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
+ else
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
+
+ spin_unlock_irq(&callback_lock);
+ }
+ rcu_read_unlock();
+}
+
+/*
+ * isolated_cpus_acquire - Acquire isolated CPUs from cpuset.cpus.reserve
+ * @cs: the cpuset to update
+ * Return: 1 if successful, 0 if error
+ *
+ * Acquire isolated CPUs from cpuset.cpus.reserve and become an isolated
+ * partition root. cpuset_mutex must be held by the caller.
+ *
+ * Note that freely available reserve CPUs have already been isolated, so
+ * we don't need to rebuild sched domains. Since the cpuset is likely
+ * using effective_cpus from its parent before the conversion, we have to
+ * update parent's child_ecpus_count accordingly.
+ */
+static int isolated_cpus_acquire(struct cpuset *cs)
+{
+ struct cpuset *ancestor, *parent;
+
+ ancestor = parent = parent_cs(cs);
+
+ /*
+ * To enable acquiring of isolated CPUs from cpuset.cpus.reserve,
+ * cpus_allowed must be a subset of both its ancestor's cpus_allowed
+ * and cs_free_reserve_cpus and the user must have sysadmin privilege.
+ */
+ if (!capable(CAP_SYS_ADMIN) ||
+ !cpumask_subset(cs->cpus_allowed, cs_free_reserve_cpus))
+ return 0;
+
+ /*
+ * Check cpus_allowed of all its ancestors, except top_cpuset.
+ */
+ while (ancestor != &top_cpuset) {
+ if (!cpumask_subset(cs->cpus_allowed, ancestor->cpus_allowed))
+ return 0;
+ ancestor = parent_cs(ancestor);
+ }
+
+ spin_lock_irq(&callback_lock);
+ cpumask_andnot(cs_free_reserve_cpus,
+ cs_free_reserve_cpus, cs->cpus_allowed);
+ cpumask_and(cs->effective_cpus, cs->cpus_allowed, cpu_active_mask);
+
+ if (cs->use_parent_ecpus) {
+ cs->use_parent_ecpus = false;
+ parent->child_ecpus_count--;
+ }
+ list_add(&cs->remote_sibling, &remote_children);
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+ spin_unlock_irq(&callback_lock);
+
+ if (!list_empty(&cs->css.children))
+ update_isolated_cpumasks_hier(cs, false);
+
+ return 1;
+}
+
+/*
+ * isolated_cpus_release - Release isolated CPUs back to cpuset.cpus.reserve
+ * @cs: the cpuset to update
+ *
+ * Release isolated CPUs back to cpuset.cpus.reserve.
+ * cpuset_mutex must be held by the caller.
+ */
+static void isolated_cpus_release(struct cpuset *cs)
+{
+ struct cpuset *parent = parent_cs(cs);
+
+ if (!is_remote_partition(cs))
+ return;
+
+ /*
+ * This can be called when the cpu list in cs_reserve_cpus
+ * is reduced. So not all the cpus should be returned back to
+ * cs_free_reserve_cpus.
+ */
+ WARN_ON_ONCE(cs->partition_root_state != PRS_ISOLATED);
+ WARN_ON_ONCE(!cpumask_subset(cs->cpus_allowed, cs_reserve_cpus));
+ spin_lock_irq(&callback_lock);
+ if (!cpumask_and(cs->effective_cpus,
+ parent->effective_cpus, cs->cpus_allowed)) {
+ cs->use_parent_ecpus = true;
+ parent->child_ecpus_count++;
+ cpumask_copy(cs->effective_cpus, parent->effective_cpus);
+ }
+ list_del_init(&cs->remote_sibling);
+ cs->partition_root_state = PRS_INVALID_ISOLATED;
+ if (!cs->prs_err)
+ cs->prs_err = PERR_INVCPUS;
+
+ /* Add the CPUs back to cs_free_reserve_cpus */
+ cpumask_or(cs_free_reserve_cpus,
+ cs_free_reserve_cpus, cs->cpus_allowed);
+
+ /*
+ * There is no change in the CPU load balance state that requires
+ * rebuilding sched domains. So the flags bits can be set directly.
+ */
+ set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+ clear_bit(CS_CPU_EXCLUSIVE, &cs->flags);
+ spin_unlock_irq(&callback_lock);
+
+ if (!list_empty(&cs->css.children))
+ update_isolated_cpumasks_hier(cs, true);
+}
+
+/*
+ * isolated_cpus_update - cpuset.cpus change in a remote isolated partition
+ *
+ * Return: 1 if successful, 0 if it needs to become invalid.
+ */
+static int isolated_cpus_update(struct cpuset *cs, struct cpumask *newmask,
+ struct tmpmasks *tmp)
+{
+ bool adding, deleting;
+
+ if (WARN_ON_ONCE((cs->partition_root_state != PRS_ISOLATED) ||
+ !is_remote_partition(cs)))
+ return 0;
+
+ if (cpumask_empty(newmask))
+ goto invalidate;
+
+ adding = cpumask_andnot(tmp->addmask, newmask, cs->cpus_allowed);
+ deleting = cpumask_andnot(tmp->delmask, cs->cpus_allowed, newmask);
+
+ /*
+ * Additions of isolation CPUs is only allowed if those CPUs are
+ * in cs_free_reserve_cpus and the caller has sysadmin privilege.
+ */
+ if (adding && (!capable(CAP_SYS_ADMIN) ||
+ !cpumask_subset(tmp->addmask, cs_free_reserve_cpus)))
+ goto invalidate;
+
+ spin_lock_irq(&callback_lock);
+ if (adding)
+ cpumask_andnot(cs_free_reserve_cpus,
+ cs_free_reserve_cpus, tmp->addmask);
+ if (deleting)
+ cpumask_or(cs_free_reserve_cpus,
+ cs_free_reserve_cpus, tmp->delmask);
+ cpumask_copy(cs->cpus_allowed, newmask);
+ cpumask_andnot(cs->effective_cpus, newmask, cs->subparts_cpus);
+ cpumask_and(cs->effective_cpus, cs->effective_cpus, cpu_active_mask);
+ spin_unlock_irq(&callback_lock);
+ return 1;
+
+invalidate:
+ isolated_cpus_release(cs);
+ return 0;
+}
+
/**
* update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset
* @cs: The cpuset that requests change in partition root state
@@ -1457,9 +1670,12 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
if (cmd == partcmd_enable) {
/*
* Enabling partition root is not allowed if cpus_allowed
- * doesn't overlap parent's cpus_allowed.
+ * doesn't overlap parent's cpus_allowed or if it intersects
+ * cs_free_reserve_cpus since it needs to be a remote partition
+ * in this case.
*/
- if (!cpumask_intersects(cs->cpus_allowed, parent->cpus_allowed))
+ if (!cpumask_intersects(cs->cpus_allowed, parent->cpus_allowed) ||
+ cpumask_intersects(cs->cpus_allowed, cs_free_reserve_cpus))
return PERR_INVCPUS;
/*
@@ -1694,6 +1910,15 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
struct cpuset *parent = parent_cs(cp);
bool update_parent = false;
+ /*
+ * Skip remote partition that acquires isolated CPUs directly
+ * from cs_reserve_cpus.
+ */
+ if (is_remote_partition(cp)) {
+ pos_css = css_rightmost_descendant(pos_css);
+ continue;
+ }
+
compute_effective_cpumask(tmp->new_cpus, cp, parent);
/*
@@ -1804,7 +2029,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
WARN_ON(!is_in_v2_mode() &&
!cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
- update_tasks_cpumask(cp, tmp->new_cpus);
+ update_tasks_cpumask(cp, cp->effective_cpus);
/*
* On legacy hierarchy, if the effective cpumask of any non-
@@ -1946,6 +2171,14 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
return retval;
if (cs->partition_root_state) {
+ /*
+ * Call isolated_cpus_update() to handle valid remote partition
+ */
+ if (is_remote_partition(cs)) {
+ isolated_cpus_update(cs, cs_tmp_cpus, &tmp);
+ goto update_hier;
+ }
+
if (invalidate)
update_parent_subparts_cpumask(cs, partcmd_invalidate,
NULL, &tmp);
@@ -1980,10 +2213,11 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
}
spin_unlock_irq(&callback_lock);
+update_hier:
/* effective_cpus will be updated here */
update_cpumasks_hier(cs, &tmp, false);
- if (cs->partition_root_state) {
+ if (cs->partition_root_state && !is_remote_partition(cs)) {
struct cpuset *parent = parent_cs(cs);
/*
@@ -2072,7 +2306,13 @@ static int update_reserve_cpumask(struct cpuset *trialcs, const char *buf)
* Invalidate remote partitions if necessary
*/
if (deleting) {
- /* TODO */
+ struct cpuset *child, *next;
+
+ list_for_each_entry_safe(child, next, &remote_children,
+ remote_sibling) {
+ if (cpumask_intersects(child->cpus_allowed, tmp.delmask))
+ isolated_cpus_release(child);
+ }
}
/*
@@ -2539,21 +2779,32 @@ static int update_prstate(struct cpuset *cs, int new_prs)
return 0;
/*
- * For a previously invalid partition root, leave it at being
- * invalid if new_prs is not "member".
+ * For a previously invalid partition root, treat it like a "member".
*/
- if (new_prs && is_prs_invalid(old_prs)) {
- cs->partition_root_state = -new_prs;
- return 0;
- }
+ if (new_prs && is_prs_invalid(old_prs))
+ old_prs = PRS_MEMBER;
if (alloc_cpumasks(NULL, &tmpmask))
return -ENOMEM;
+ if ((old_prs == PRS_ISOLATED) && is_remote_partition(cs)) {
+ /* Pre-invalidate a remote isolated partition */
+ isolated_cpus_release(cs);
+ old_prs = PRS_MEMBER;
+ }
+
err = update_partition_exclusive(cs, new_prs);
if (err)
goto out;
+ /*
+ * New partition is not allowed under a remote partition
+ */
+ if (new_prs && is_remote_partition(parent)) {
+ err = PERR_RMTPARENT;
+ goto out;
+ }
+
if (!old_prs) {
/*
* cpus_allowed cannot be empty.
@@ -2565,6 +2816,12 @@ static int update_prstate(struct cpuset *cs, int new_prs)
err = update_parent_subparts_cpumask(cs, partcmd_enable,
NULL, &tmpmask);
+ /*
+ * If an attempt to become an adjacent isolated partition fails,
+ * try to become a remote isolated partition instead.
+ */
+ if (err && (new_prs == PRS_ISOLATED) && isolated_cpus_acquire(cs))
+ err = 0; /* Become remote isolated partition */
} else if (old_prs && new_prs) {
/*
* A change in load balance state only, no change in cpumasks.
@@ -3462,6 +3719,7 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
nodes_clear(cs->effective_mems);
fmeter_init(&cs->fmeter);
cs->relax_domain_level = -1;
+ INIT_LIST_HEAD(&cs->remote_sibling);
/* Set CS_MEMORY_MIGRATE for default hierarchy */
if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
@@ -3497,6 +3755,11 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
cs->effective_mems = parent->effective_mems;
cs->use_parent_ecpus = true;
parent->child_ecpus_count++;
+ /*
+ * Clear CS_SCHED_LOAD_BALANCE if parent is isolated
+ */
+ if (!is_sched_load_balance(parent))
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
}
spin_unlock_irq(&callback_lock);
@@ -3741,6 +4004,7 @@ int __init cpuset_init(void)
fmeter_init(&top_cpuset.fmeter);
set_bit(CS_SCHED_LOAD_BALANCE, &top_cpuset.flags);
top_cpuset.relax_domain_level = -1;
+ INIT_LIST_HEAD(&remote_children);
BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
@@ -3873,9 +4137,20 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
}
parent = parent_cs(cs);
- compute_effective_cpumask(&new_cpus, cs, parent);
nodes_and(new_mems, cs->mems_allowed, parent->effective_mems);
+ /*
+ * In the special case of a valid remote isolated partition,
+ * we just need to mask offline cpus from cpus_allowed unless
+ * all the isolated cpus are gone.
+ */
+ if (is_remote_partition(cs)) {
+ if (!cpumask_and(&new_cpus, cs->cpus_allowed, cpu_active_mask))
+ isolated_cpus_release(cs);
+ } else {
+ compute_effective_cpumask(&new_cpus, cs, parent);
+ }
+
if (cs->nr_subparts_cpus)
/*
* Make sure that CPUs allocated to child partitions
@@ -3906,10 +4181,11 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
* the following conditions hold:
* 1) empty effective cpus but not valid empty partition.
* 2) parent is invalid or doesn't grant any cpus to child
- * partitions.
+ * partitions and not a remote partition.
*/
- if (is_partition_valid(cs) && (!parent->nr_subparts_cpus ||
- (cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)))) {
+ if (is_partition_valid(cs) &&
+ ((!parent->nr_subparts_cpus && !is_remote_partition(cs)) ||
+ (cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)))) {
int old_prs, parent_prs;
update_parent_subparts_cpumask(cs, partcmd_disable, NULL, tmp);
--
2.31.1
A cpuset partition is a collection of cpusets with a partition root
and its descendants from that root downward excluding any cpusets that
are part of other partitions. A partition has exclusive access to a set
of CPUs granted to it. Other cpusets outside of a partition cannot use
any CPUs in that set.
Currently, creation of partitions requires a hierarchical CPUs
distribution model where the parent of a partition root has to be
a partition root itself. Hence all the partition roots have to be
clustered around the cgroup root.
To enable the creation of a remote partition down in the hierarchy
without a parental partition root, we need a way to reserve the CPUs
that will be used in a remote partition. Introduce a new root-only
"cpuset.cpus.reserve" control file in the top cpuset for this particular
purpose.
By default, the new "cpuset.cpus.reserve" control file will track
the subparts_cpus cpumask in the top cpuset. By writing into this new
control file, however, we can reserve additional CPUs that can be used
in a remote partition. Any CPUs that are in "cpuset.cpus.reserve" will
have to be removed from the effective_cpus of all the cpusets that are
not part of a valid partition.
The prefixes "+" and "-" can be used to indicate addition to or
subtraction from the existing CPUs in "cpuset.cpus.reserve". A single
"-" character indicates the deletion of all the free reserve CPUs not
allocated to any existing partition.
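To make the prefix semantics above concrete, here is a minimal user-space
sketch (not part of the patch): it assumes cgroup v2 is mounted at
/sys/fs/cgroup, that the caller is root, and that the kernel carries this
series so the cpuset.cpus.reserve file exists.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val));
	close(fd);
	return (ret < 0) ? -1 : 0;
}

int main(void)
{
	const char *rsv = "/sys/fs/cgroup/cpuset.cpus.reserve";

	write_str(rsv, "+2-3");	/* add CPUs 2-3 to the reserve */
	write_str(rsv, "-3");	/* remove CPU 3 from the reserve */
	write_str(rsv, "-");	/* drop all free (unallocated) reserve CPUs */
	return 0;
}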
Signed-off-by: Waiman Long <longman(a)redhat.com>
---
kernel/cgroup/cpuset.c | 253 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 239 insertions(+), 14 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 8604c919e1e4..69abe95a9969 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -208,7 +208,33 @@ struct cpuset {
struct cgroup_file partition_file;
};
-static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */
+/*
+ * Reserved CPUs for partitions.
+ *
+ * By default, CPUs used in partitions are tracked in the parent's
+ * subparts_cpus mask following a hierarchical CPUs distribution model.
+ * To enable the creation of a remote partition down in the hierarchy
+ * without a parental partition root, one can write directly to
+ * cpuset.cpus.reserve in the root cgroup to allocate more CPUs that can
+ * be used by remote partitions. Removal of existing reserved CPUs may
+ * also cause some existing partitions to become invalid.
+ *
+ * All the cpumasks below should only be used with cpuset_mutex held.
+ * Modification of cs_reserve_cpus & cs_free_reserve_cpus also requires
+ * holding the callback_lock.
+ *
+ * The relationships among cs_reserve_cpus, cs_free_reserve_cpus and
+ * top_cpuset.subparts_cpus are:
+ *
+ * top_cpuset.subparts_cpus ⊆ cs_reserve_cpus
+ * cs_free_reserve_cpus ⊆ cs_reserve_cpus
+ * top_cpuset.subparts_cpus ∩ cs_free_reserve_cpus = ∅
+ * cs_reserve_cpus - cs_free_reserve_cpus - top_cpuset.subparts_cpus
+ * = CPUs dedicated to remote partitions
+ */
+static cpumask_var_t cs_reserve_cpus; /* Reserved CPUs */
+static cpumask_var_t cs_free_reserve_cpus; /* Unallocated reserved CPUs */
+static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */
/*
* Partition root states:
@@ -1202,13 +1228,13 @@ static void rebuild_sched_domains_locked(void)
* should be the same as the active CPUs, so checking only top_cpuset
* is enough to detect racing CPU offlines.
*/
- if (!top_cpuset.nr_subparts_cpus &&
+ if (cpumask_empty(cs_reserve_cpus) &&
!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
return;
/*
* With subpartition CPUs, however, the effective CPUs of a partition
- * root should be only a subset of the active CPUs. Since a CPU in any
+ * root should only be a subset of the active CPUs. Since a CPU in any
* partition root could be offlined, all must be checked.
*/
if (top_cpuset.nr_subparts_cpus) {
@@ -1275,7 +1301,7 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
*/
if ((task->flags & PF_KTHREAD) && kthread_is_per_cpu(task))
continue;
- cpumask_andnot(new_cpus, possible_mask, cs->subparts_cpus);
+ cpumask_andnot(new_cpus, possible_mask, cs_reserve_cpus);
} else {
cpumask_and(new_cpus, possible_mask, cs->effective_cpus);
}
@@ -1406,6 +1432,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
int deleting; /* Moving cpus from subparts_cpus to effective_cpus */
int old_prs, new_prs;
int part_error = PERR_NONE; /* Partition error? */
+ bool update_reserve = (parent == &top_cpuset);
lockdep_assert_held(&cpuset_mutex);
@@ -1576,7 +1603,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
}
/*
- * Change the parent's subparts_cpus.
+ * Change the parent's subparts_cpus and maybe cs_reserve_cpus.
* Newly added CPUs will be removed from effective_cpus and
* newly deleted ones will be added back to effective_cpus.
*/
@@ -1586,10 +1613,25 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
parent->subparts_cpus, tmp->addmask);
cpumask_andnot(parent->effective_cpus,
parent->effective_cpus, tmp->addmask);
+ if (update_reserve) {
+ cpumask_or(cs_reserve_cpus,
+ cs_reserve_cpus, tmp->addmask);
+ cpumask_andnot(cs_free_reserve_cpus,
+ cs_free_reserve_cpus, tmp->addmask);
+ }
}
if (deleting) {
cpumask_andnot(parent->subparts_cpus,
parent->subparts_cpus, tmp->delmask);
+ /*
+ * The automatic cpu reservation of an adjacent partition
+ * won't add the deleted CPUs back to cs_free_reserve_cpus.
+ * Instead, they are returned to the effective_cpus of the top
+ * cpuset.
+ */
+ if (update_reserve)
+ cpumask_andnot(cs_reserve_cpus,
+ cs_reserve_cpus, tmp->delmask);
/*
* Some of the CPUs in subparts_cpus might have been offlined.
*/
@@ -1783,6 +1825,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
if (need_rebuild_sched_domains)
rebuild_sched_domains_locked();
+
+ return;
}
/**
@@ -1955,6 +1999,167 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
return 0;
}
+/**
+ * update_reserve_cpumask - update cs_reserve_cpus
+ * @trialcs: trial cpuset
+ * @buf: buffer of cpu numbers written to this cpuset
+ * Return: 0 if successful, < 0 if error
+ */
+static int update_reserve_cpumask(struct cpuset *trialcs, const char *buf)
+{
+ struct cgroup_subsys_state *css;
+ struct cpuset *cs;
+ bool adding, deleting;
+ struct tmpmasks tmp;
+
+ adding = deleting = false;
+ if (*buf == '+') {
+ adding = true;
+ buf++;
+ } else if (*buf == '-') {
+ deleting = true;
+ buf++;
+ }
+
+ if (!*buf) {
+ if (adding)
+ return -EINVAL;
+
+ if (deleting) {
+ if (cpumask_empty(cs_free_reserve_cpus))
+ return 0;
+ cpumask_copy(trialcs->cpus_allowed, cs_free_reserve_cpus);
+ } else {
+ cpumask_clear(trialcs->cpus_allowed);
+ }
+ } else {
+ int retval = cpulist_parse(buf, trialcs->cpus_allowed);
+
+ if (retval < 0)
+ return retval;
+ }
+
+ if (!adding && !deleting &&
+ cpumask_equal(trialcs->cpus_allowed, cs_reserve_cpus))
+ return 0;
+
+ /* Preserve trialcs->cpus_allowed for now */
+ init_tmpmasks(&tmp, NULL, trialcs->subparts_cpus,
+ trialcs->effective_cpus);
+
+ /*
+ * Compute the addition and removal of CPUs to/from cs_reserve_cpus
+ */
+ if (!adding && !deleting) {
+ adding = cpumask_andnot(tmp.addmask, trialcs->cpus_allowed,
+ cs_reserve_cpus);
+ deleting = cpumask_andnot(tmp.delmask, cs_reserve_cpus,
+ trialcs->cpus_allowed);
+ } else if (adding) {
+ adding = cpumask_andnot(tmp.addmask,
+ trialcs->cpus_allowed, cs_reserve_cpus);
+ cpumask_or(trialcs->cpus_allowed, cs_reserve_cpus, tmp.addmask);
+ } else { /* deleting */
+ deleting = cpumask_and(tmp.delmask,
+ trialcs->cpus_allowed, cs_reserve_cpus);
+ cpumask_andnot(trialcs->cpus_allowed, cs_reserve_cpus, tmp.delmask);
+ }
+
+ if (!adding && !deleting)
+ return 0;
+
+ /*
+ * Invalidate remote partitions if necessary
+ */
+ if (deleting) {
+ /* TODO */
+ }
+
+ /*
+ * Cannot use up all the CPUs in top_cpuset.effective_cpus
+ */
+ if (!deleting && adding &&
+ cpumask_subset(top_cpuset.effective_cpus, tmp.addmask))
+ return -EINVAL;
+
+ spin_lock_irq(&callback_lock);
+ /*
+ * Update top_cpuset.effective_cpus, cs_reserve_cpus &
+ * cs_free_reserve_cpus.
+ */
+ if (adding)
+ cpumask_or(cs_free_reserve_cpus, cs_free_reserve_cpus,
+ tmp.addmask);
+ cpumask_copy(cs_reserve_cpus, trialcs->cpus_allowed);
+ cpumask_andnot(top_cpuset.effective_cpus,
+ cpu_active_mask, cs_reserve_cpus);
+
+ /*
+ * Remove CPUs from cs_free_reserve_cpus first. Anything left
+ * means some partitions have to be made invalid.
+ */
+ if (deleting && cpumask_and(cs_tmp_cpus, cs_free_reserve_cpus,
+ tmp.delmask)) {
+ cpumask_andnot(cs_free_reserve_cpus, cs_free_reserve_cpus,
+ cs_tmp_cpus);
+ deleting = cpumask_andnot(tmp.delmask, tmp.delmask,
+ cs_tmp_cpus);
+ }
+ spin_unlock_irq(&callback_lock);
+
+ /*
+ * Invalidate some adjacent partitions under top cpuset, if necessary
+ */
+ if (deleting && cpumask_and(cs_tmp_cpus, tmp.delmask,
+ top_cpuset.subparts_cpus)) {
+ struct cgroup_subsys_state *css;
+ struct cpuset *cp;
+
+ /*
+ * Temporarily save the remaining CPUs to be deleted in
+ * trialcs->cpus_allowed to be restored back to tmp.delmask
+ * later.
+ */
+ deleting = cpumask_andnot(trialcs->cpus_allowed, tmp.delmask,
+ cs_tmp_cpus);
+ rcu_read_lock();
+ cpuset_for_each_child(cp, css, &top_cpuset)
+ if (is_partition_valid(cp) &&
+ cpumask_intersects(cs_tmp_cpus, cp->cpus_allowed)) {
+ rcu_read_unlock();
+ update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp);
+ rcu_read_lock();
+ }
+ rcu_read_unlock();
+ if (deleting)
+ cpumask_copy(tmp.delmask, trialcs->cpus_allowed);
+ }
+
+ /* Can now use all of trialcs */
+ init_tmpmasks(&tmp, trialcs->cpus_allowed, trialcs->subparts_cpus,
+ trialcs->effective_cpus);
+
+ /*
+ * Update effective_cpus of all descendants that are not in
+ * partitions and rebuild sched domains.
+ */
+ rcu_read_lock();
+ cpuset_for_each_child(cs, css, &top_cpuset) {
+ compute_effective_cpumask(tmp.new_cpus, cs, &top_cpuset);
+ if (cpumask_equal(tmp.new_cpus, cs->effective_cpus))
+ continue;
+ if (!css_tryget_online(&cs->css))
+ continue;
+ rcu_read_unlock();
+ update_cpumasks_hier(cs, &tmp, false);
+ rcu_read_lock();
+ css_put(&cs->css);
+ }
+ rcu_read_unlock();
+ rebuild_sched_domains_locked();
+ return 0;
+}
+
/*
* Migrate memory region from one set of nodes to another. This is
* performed asynchronously as it can be called from process migration path
@@ -2743,6 +2948,7 @@ typedef enum {
FILE_EFFECTIVE_CPULIST,
FILE_EFFECTIVE_MEMLIST,
FILE_SUBPARTS_CPULIST,
+ FILE_RESERVE_CPULIST,
FILE_CPU_EXCLUSIVE,
FILE_MEM_EXCLUSIVE,
FILE_MEM_HARDWALL,
@@ -2880,6 +3086,9 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
case FILE_CPULIST:
retval = update_cpumask(cs, trialcs, buf);
break;
+ case FILE_RESERVE_CPULIST:
+ retval = update_reserve_cpumask(trialcs, buf);
+ break;
case FILE_MEMLIST:
retval = update_nodemask(cs, trialcs, buf);
break;
@@ -2927,6 +3136,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
case FILE_EFFECTIVE_MEMLIST:
seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
break;
+ case FILE_RESERVE_CPULIST:
+ seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs_reserve_cpus));
+ break;
case FILE_SUBPARTS_CPULIST:
seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->subparts_cpus));
break;
@@ -3200,6 +3412,14 @@ static struct cftype dfl_files[] = {
.file_offset = offsetof(struct cpuset, partition_file),
},
+ {
+ .name = "cpus.reserve",
+ .seq_show = cpuset_common_seq_show,
+ .write = cpuset_write_resmask,
+ .private = FILE_RESERVE_CPULIST,
+ .flags = CFTYPE_ONLY_ON_ROOT,
+ },
+
{
.name = "cpus.subpartitions",
.seq_show = cpuset_common_seq_show,
@@ -3510,6 +3730,8 @@ int __init cpuset_init(void)
BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL));
BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL));
BUG_ON(!zalloc_cpumask_var(&cs_tmp_cpus, GFP_KERNEL));
+ BUG_ON(!zalloc_cpumask_var(&cs_reserve_cpus, GFP_KERNEL));
+ BUG_ON(!zalloc_cpumask_var(&cs_free_reserve_cpus, GFP_KERNEL));
cpumask_setall(top_cpuset.cpus_allowed);
nodes_setall(top_cpuset.mems_allowed);
@@ -3788,10 +4010,10 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
/*
- * In the rare case that hotplug removes all the cpus in subparts_cpus,
+ * In the rare case that hotplug removes all the reserve cpus,
* we assumed that cpus are updated.
*/
- if (!cpus_updated && top_cpuset.nr_subparts_cpus)
+ if (!cpus_updated && !cpumask_empty(cs_reserve_cpus))
cpus_updated = true;
/* synchronize cpus_allowed to cpu_active_mask */
@@ -3801,18 +4023,21 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
/*
* Make sure that CPUs allocated to child partitions
- * do not show up in effective_cpus. If no CPU is left,
- * we clear the subparts_cpus & let the child partitions
- * fight for the CPUs again.
+ * do not show up in top_cpuset's effective_cpus. In the
+ * unlikely event that no effective CPU is left in top_cpuset,
+ * we clear all the reserve cpus and let the non-remote child
+ * partitions fight for the CPUs again.
*/
- if (top_cpuset.nr_subparts_cpus) {
- if (cpumask_subset(&new_cpus,
- top_cpuset.subparts_cpus)) {
+ if (!cpumask_empty(cs_reserve_cpus)) {
+
+ if (cpumask_subset(&new_cpus, cs_reserve_cpus)) {
top_cpuset.nr_subparts_cpus = 0;
cpumask_clear(top_cpuset.subparts_cpus);
+ cpumask_clear(cs_free_reserve_cpus);
+ cpumask_clear(cs_reserve_cpus);
} else {
cpumask_andnot(&new_cpus, &new_cpus,
- top_cpuset.subparts_cpus);
+ cs_reserve_cpus);
}
}
cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
--
2.31.1
v2:
- [v1] https://lore.kernel.org/lkml/20230412153758.3088111-1-longman@redhat.com/
- Dropped the special "isolcpus" partition in v1
- Add the root only "cpuset.cpus.reserve" control file for reserving
CPUs used for remote isolated partitions.
- Update the test_cpuset_prs.sh test script and documentation
accordingly.
This patch series introduces a new category of cpuset partition called
remote partitions. The existing partition category where the partition
roots have to be clustered around the root cgroup in a hierarchical way
is now referred to as adjacent partitions.
A remote partition can be formed far from the root cgroup with no
partition root parent. The only commonality is that the CPUs that are
used in the partition as specified in "cpuset.cpus" have to be present
in the "cpuset.cpus" of all its ancestors.
It is relatively rare to have applications that require creation of
a separate scheduling domain (root). However, it is more common to
have applications that require the use of isolated CPUs (isolated),
e.g. DPDK. One can use the "isolcpus" or "nohz_full" boot command options
to get that statically. Of course, the "isolated" partition is another
way to achieve that dynamically.
Modern container orchestration tools like Kubernetes use the cgroup
hierarchy to manage different containers, and they rely on other
middleware like systemd to help manage it. If a container needs to
use isolated CPUs, it is hard to get them with adjacent partitions,
as that requires the administrative parent cgroup to be a partition
root too, which tools like systemd may not be ready to manage.
With this patch series, a new root cgroup only "cpuset.cpus.reserve"
file is added to specify the set of CPUs that can be used in partitions
(whether remote or adjacent). To create a remote partition, the set
of CPUs to be used in that partition (the "cpuset.cpus" file of the
partition root) has to be reserved by manually adding them to that
control file first. Then that partition can be activated by writing
"isolated" into its "cpuset.cpus.partition". CPU reservation of adjacent
partitions is done automatically without touching "cpuset.cpus.reserve"
at all.
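As a rough sketch of that flow (not from the series itself): the cgroup
names "kube" and "pod1" below are hypothetical, cgroup v2 is assumed to be
mounted at /sys/fs/cgroup, the cpuset controller is assumed to be enabled
in the ancestors' subtree_control, and CPUs 4-5 are assumed to be present
in the ancestors' "cpuset.cpus" as required by the series.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void put(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return;
	if (write(fd, val, strlen(val)) < 0)
		perror(path);
	close(fd);
}

int main(void)
{
	/* 1. Reserve the CPUs in the root cgroup (root-only file). */
	put("/sys/fs/cgroup/cpuset.cpus.reserve", "+4-5");

	/* 2. Create a cgroup deep in the hierarchy, with no partition
	 *    root parent (the parent "kube" is assumed to exist). */
	mkdir("/sys/fs/cgroup/kube/pod1", 0755);

	/* 3. Give it the reserved CPUs and activate the remote partition. */
	put("/sys/fs/cgroup/kube/pod1/cpuset.cpus", "4-5");
	put("/sys/fs/cgroup/kube/pod1/cpuset.cpus.partition", "isolated");
	return 0;
}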
Currently only remote isolated partitions are supported; we could
support a scheduling partition ("root") in the future if the need arises.
Additional isolation attributes like those with the "isolcpus" or "nohz"
boot command line options may be supported in the isolated partitions
in the future.
Waiman Long (6):
cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE & CS_SCHED_LOAD_BALANCE
handling
cgroup/cpuset: Improve temporary cpumasks handling
cgroup/cpuset: Add cpuset.cpus.reserve for top cpuset
cgroup/cpuset: Introduce remote isolated partition
cgroup/cpuset: Documentation update for partition
cgroup/cpuset: Extend test_cpuset_prs.sh to test remote partition
Documentation/admin-guide/cgroup-v2.rst | 92 ++-
kernel/cgroup/cpuset.c | 749 +++++++++++++++---
.../selftests/cgroup/test_cpuset_prs.sh | 403 ++++++----
3 files changed, 988 insertions(+), 256 deletions(-)
--
2.31.1
From: Mirsad Todorovac <mirsad.todorovac(a)alu.unizg.hr>
[ Upstream commit 976d3c6778e99390c6d854d140b746d12ea18a51 ]
According to Mirsad the gpio-sim.sh test appears to FAIL in a wrong way
due to missing initialisation of shell variables:
4.2. Bias settings work correctly
cat: /sys/devices/platform/gpio-sim.0/gpiochip18/sim_gpio0/value: No such file or directory
./gpio-sim.sh: line 393: test: =: unary operator expected
bias setting does not work
GPIO gpio-sim test FAIL
After this change the test passed:
4.2. Bias settings work correctly
GPIO gpio-sim test PASS
His testing environment is AlmaLinux 8.7 on a Lenovo desktop box with
the latest Linux kernel based on v6.2:
Linux 6.2.0-mglru-kmlk-andy-09238-gd2980d8d8265 x86_64
Suggested-by: Mirsad Todorovac <mirsad.todorovac(a)alu.unizg.hr>
Signed-off-by: Andy Shevchenko <andriy.shevchenko(a)linux.intel.com>
Tested-by: Mirsad Goran Todorovac <mirsad.todorovac(a)alu.unizg.hr>
Signed-off-by: Mirsad Goran Todorovac <mirsad.todorovac(a)alu.unizg.hr>
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski(a)linaro.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/gpio/gpio-sim.sh | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/gpio/gpio-sim.sh b/tools/testing/selftests/gpio/gpio-sim.sh
index 341e3de008968..bf67b23ed29ac 100755
--- a/tools/testing/selftests/gpio/gpio-sim.sh
+++ b/tools/testing/selftests/gpio/gpio-sim.sh
@@ -389,6 +389,9 @@ create_chip chip
create_bank chip bank
set_num_lines chip bank 8
enable_chip chip
+DEVNAME=`configfs_dev_name chip`
+CHIPNAME=`configfs_chip_name chip bank`
+SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
$BASE_DIR/gpio-mockup-cdev -b pull-up /dev/`configfs_chip_name chip bank` 0
test `cat $SYSFS_PATH` = "1" || fail "bias setting does not work"
remove_chip chip
--
2.39.2
From: Mirsad Todorovac <mirsad.todorovac(a)alu.unizg.hr>
[ Upstream commit 976d3c6778e99390c6d854d140b746d12ea18a51 ]
According to Mirsad the gpio-sim.sh test appears to FAIL in a wrong way
due to missing initialisation of shell variables:
4.2. Bias settings work correctly
cat: /sys/devices/platform/gpio-sim.0/gpiochip18/sim_gpio0/value: No such file or directory
./gpio-sim.sh: line 393: test: =: unary operator expected
bias setting does not work
GPIO gpio-sim test FAIL
After this change the test passed:
4.2. Bias settings work correctly
GPIO gpio-sim test PASS
His testing environment is AlmaLinux 8.7 on a Lenovo desktop box with
the latest Linux kernel based on v6.2:
Linux 6.2.0-mglru-kmlk-andy-09238-gd2980d8d8265 x86_64
Suggested-by: Mirsad Todorovac <mirsad.todorovac(a)alu.unizg.hr>
Signed-off-by: Andy Shevchenko <andriy.shevchenko(a)linux.intel.com>
Tested-by: Mirsad Goran Todorovac <mirsad.todorovac(a)alu.unizg.hr>
Signed-off-by: Mirsad Goran Todorovac <mirsad.todorovac(a)alu.unizg.hr>
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski(a)linaro.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/gpio/gpio-sim.sh | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/gpio/gpio-sim.sh b/tools/testing/selftests/gpio/gpio-sim.sh
index 9f539d454ee4d..fa2ce2b9dd5fc 100755
--- a/tools/testing/selftests/gpio/gpio-sim.sh
+++ b/tools/testing/selftests/gpio/gpio-sim.sh
@@ -389,6 +389,9 @@ create_chip chip
create_bank chip bank
set_num_lines chip bank 8
enable_chip chip
+DEVNAME=`configfs_dev_name chip`
+CHIPNAME=`configfs_chip_name chip bank`
+SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
$BASE_DIR/gpio-mockup-cdev -b pull-up /dev/`configfs_chip_name chip bank` 0
test `cat $SYSFS_PATH` = "1" || fail "bias setting does not work"
remove_chip chip
--
2.39.2
The kunit_add_action() and related functions named the kunit_action_t
parameter 'func' in early drafts; it was later renamed to 'action'.
However, the doc comments were not updated accordingly.
Fix these to avoid confusion and 'make htmldocs' warnings.
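For readers unfamiliar with the API, a minimal usage sketch of the 'action'
parameter follows (not from this patch; the test and helper names are made
up, and the usual kunit_test_suite() boilerplate is omitted):

#include <kunit/test.h>
#include <kunit/resource.h>
#include <linux/slab.h>

/* The deferred 'action': runs when the test exits, normally or not. */
static void free_buffer(void *ctx)
{
	kfree(ctx);
}

static void example_deferred_free_test(struct kunit *test)
{
	char *buf = kmalloc(16, GFP_KERNEL);

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
	/* Register free_buffer() as the action, with buf as its context. */
	KUNIT_ASSERT_EQ(test, 0, kunit_add_action(test, free_buffer, buf));
}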
Fixes: b9dce8a1ed3e ("kunit: Add kunit_add_action() to defer a call until test exit")
Reported-by: Stephen Rothwell <sfr(a)canb.auug.org.au>
Closes: https://lore.kernel.org/lkml/20230530151840.16a56460@canb.auug.org.au/
Signed-off-by: David Gow <davidgow(a)google.com>
---
include/kunit/resource.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/kunit/resource.h b/include/kunit/resource.h
index b64eb783b1bc..c7383e90f5c9 100644
--- a/include/kunit/resource.h
+++ b/include/kunit/resource.h
@@ -393,7 +393,7 @@ typedef void (kunit_action_t)(void *);
/**
* kunit_add_action() - Call a function when the test ends.
* @test: Test case to associate the action with.
- * @func: The function to run on test exit
+ * @action: The function to run on test exit
* @ctx: Data passed into @func
*
* Defer the execution of a function until the test exits, either normally or
@@ -415,7 +415,7 @@ int kunit_add_action(struct kunit *test, kunit_action_t *action, void *ctx);
/**
* kunit_add_action_or_reset() - Call a function when the test ends.
* @test: Test case to associate the action with.
- * @func: The function to run on test exit
+ * @action: The function to run on test exit
* @ctx: Data passed into @func
*
* Defer the execution of a function until the test exits, either normally or
@@ -441,7 +441,7 @@ int kunit_add_action_or_reset(struct kunit *test, kunit_action_t *action,
/**
* kunit_remove_action() - Cancel a matching deferred action.
* @test: Test case the action is associated with.
- * @func: The deferred function to cancel.
+ * @action: The deferred function to cancel.
* @ctx: The context passed to the deferred function to trigger.
*
* Prevent an action deferred via kunit_add_action() from executing when the
@@ -459,7 +459,7 @@ void kunit_remove_action(struct kunit *test,
/**
* kunit_release_action() - Run a matching action call immediately.
* @test: Test case the action is associated with.
- * @func: The deferred function to trigger.
+ * @action: The deferred function to trigger.
* @ctx: The context passed to the deferred function to trigger.
*
* Execute a function deferred via kunit_add_action()) immediately, rather than
--
2.41.0.rc0.172.g3f132b7071-goog
The sample code has a Kconfig entry for tristate configuration. In that
case, it is friendlier to developers if the code has MODULE_LICENSE,
since a missing MODULE_LICENSE causes a modpost error when the code is
built as a loadable kernel module.
Signed-off-by: Takashi Sakamoto <o-takashi(a)sakamocchi.jp>
---
Documentation/dev-tools/kunit/start.rst | 2 ++
1 file changed, 2 insertions(+)
diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
index c736613c9b19..d4f99ef94f71 100644
--- a/Documentation/dev-tools/kunit/start.rst
+++ b/Documentation/dev-tools/kunit/start.rst
@@ -250,6 +250,8 @@ Now we are ready to write the test cases.
};
kunit_test_suite(misc_example_test_suite);
+ MODULE_LICENSE("GPL");
+
2. Add the following lines to ``drivers/misc/Kconfig``:
.. code-block:: kconfig
--
2.39.2
User processes register name_args for events. If an event with the same name
but different args is registered, the trace output of the second event is
printed as the first event, which is incorrect.
Return EADDRINUSE to the user process if an event with the same name but
different args has already been registered.
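A hedged user-space sketch of the behaviour being enforced, loosely
following the selftest hunk below; it assumes the v6.4-era user_events ABI
(struct user_reg and DIAG_IOCSREG from <linux/user_events.h>) and tracefs
mounted at /sys/kernel/tracing:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

static int enabled_a, enabled_b;

int main(void)
{
	int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);
	struct user_reg reg = {0};

	reg.size = sizeof(reg);
	reg.enable_bit = 31;
	reg.enable_size = sizeof(enabled_a);
	reg.enable_addr = (__u64)&enabled_a;
	reg.name_args = (__u64)"__test_event u32 field1; u32 field2";

	/* First registration succeeds. */
	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1)
		perror("first register");

	/* Same name, different args: expected to fail with EADDRINUSE. */
	reg.enable_bit = 30;
	reg.enable_size = sizeof(enabled_b);
	reg.enable_addr = (__u64)&enabled_b;
	reg.name_args = (__u64)"__test_event u32 field1;";
	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1)
		perror("second register");	/* EADDRINUSE after this patch */

	return 0;
}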
Signed-off-by: sunliming <sunliming(a)kylinos.cn>
---
kernel/trace/trace_events_user.c | 36 +++++++++++++++----
.../selftests/user_events/ftrace_test.c | 6 ++++
2 files changed, 36 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index b1ecd7677642..e90161294698 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -1753,6 +1753,8 @@ static int user_event_parse(struct user_event_group *group, char *name,
int ret;
u32 key;
struct user_event *user;
+ int argc = 0;
+ char **argv;
/* Prevent dyn_event from racing */
mutex_lock(&event_mutex);
@@ -1760,13 +1762,35 @@ static int user_event_parse(struct user_event_group *group, char *name,
mutex_unlock(&event_mutex);
if (user) {
- *newuser = user;
- /*
- * Name is allocated by caller, free it since it already exists.
- * Caller only worries about failure cases for freeing.
- */
- kfree(name);
+ if (args) {
+ argv = argv_split(GFP_KERNEL, args, &argc);
+ if (!argv) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ ret = user_fields_match(user, argc, (const char **)argv);
+ argv_free(argv);
+
+ } else
+ ret = list_empty(&user->fields);
+
+ if (ret) {
+ *newuser = user;
+ /*
+ * Name is allocated by caller, free it since it already exists.
+ * Caller only worries about failure cases for freeing.
+ */
+ kfree(name);
+ } else {
+ ret = -EADDRINUSE;
+ goto error;
+ }
+
return 0;
+error:
+ refcount_dec(&user->refcnt);
+ return ret;
}
user = kzalloc(sizeof(*user), GFP_KERNEL_ACCOUNT);
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index 7c99cef94a65..6e8c4b47281c 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -228,6 +228,12 @@ TEST_F(user, register_events) {
ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®));
ASSERT_EQ(0, reg.write_index);
+ /* Multiple registers to same name but different args should fail */
+ reg.enable_bit = 29;
+ reg.name_args = (__u64)"__test_event u32 field1;";
+ ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSREG, ®));
+ ASSERT_EQ(EADDRINUSE, errno);
+
/* Ensure disabled */
self->enable_fd = open(enable_file, O_RDWR);
ASSERT_NE(-1, self->enable_fd);
--
2.25.1
Hello, I am so sorry to intrude on your privacy. As the saying goes,
"A picture is worth a thousand words, but when I saw yours, it was more
than words could explain." The charming profile is irresistible; although
this is only a small personal message, your appearance says a lot about a
nice person ... So I had to leave a message for the charming person with
this great profile. I believe it is curiosity that leads me to you at such
a time. I must say again that I am sorry if writing to you goes against
your moral ethics. I would simply like to get to know you better and be a
friend, or more. I hope to hear from you at some point.
After a few years of increasing test coverage in the MPTCP selftests, we
realised [1] the last version of the selftests is supposed to run on old
kernels without issues.
Supporting older versions is not that easy for this MPTCP case: these
selftests are often validating the internals by checking packets that
are exchanged, when some MIB counters are incremented after some
actions, how connections are getting opened and closed in some cases,
etc. In other words, it is not limited to the socket interface between
the userspace and the kernelspace. In addition, the current selftests
run a lot of different sub-tests but the TAP13 protocol used in the
selftests don't support sub-tests: in other words, one failure in
sub-tests implies that the whole selftest is seen as failed at the end
because sub-tests are not tracked. It is then important to skip
sub-tests not supported by old kernels.
To minimise the modifications and reduce the complexity to support old
versions, the idea is to look at external signs and skip the whole
selftests or just some sub-tests before starting them.
This first part focuses on marking the different selftests as skipped
if MPTCP is not even supported. That's what is done in patches 2 to 8.
Patch 2/8 introduces a new file (mptcp_lib.sh) to be able to re-use some
helpers in the different selftests. The first MPTCP selftest has been
introduced in v5.6.
Patch 1/8 is a bit different but still linked: it modifies mptcp_join.sh
selftest not to use 'cmp --bytes' which is not supported by the BusyBox
implementation. It is apparently quite common to use BusyBox in CI
environments. This tool is needed for a subtest introduced in v6.1.
Link: https://lore.kernel.org/stable/CA+G9fYtDGpgT4dckXD-y-N92nqUxuvue_7AtDdBcHrb… [1]
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Signed-off-by: Matthieu Baerts <matthieu.baerts(a)tessares.net>
---
Matthieu Baerts (8):
selftests: mptcp: join: avoid using 'cmp --bytes'
selftests: mptcp: connect: skip if MPTCP is not supported
selftests: mptcp: pm nl: skip if MPTCP is not supported
selftests: mptcp: join: skip if MPTCP is not supported
selftests: mptcp: diag: skip if MPTCP is not supported
selftests: mptcp: simult flows: skip if MPTCP is not supported
selftests: mptcp: sockopt: skip if MPTCP is not supported
selftests: mptcp: userspace pm: skip if MPTCP is not supported
tools/testing/selftests/net/mptcp/Makefile | 2 +-
tools/testing/selftests/net/mptcp/diag.sh | 4 +++
tools/testing/selftests/net/mptcp/mptcp_connect.sh | 4 +++
tools/testing/selftests/net/mptcp/mptcp_join.sh | 17 +++++++--
tools/testing/selftests/net/mptcp/mptcp_lib.sh | 40 ++++++++++++++++++++++
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh | 4 +++
tools/testing/selftests/net/mptcp/pm_netlink.sh | 4 +++
tools/testing/selftests/net/mptcp/simult_flows.sh | 4 +++
tools/testing/selftests/net/mptcp/userspace_pm.sh | 4 +++
9 files changed, 80 insertions(+), 3 deletions(-)
---
base-commit: 9b9e46aa07273ceb96866b2e812b46f1ee0b8d2f
change-id: 20230528-upstream-net-20230528-mptcp-selftests-support-old-kernels-part-1-305638f4dbc0
Best regards,
--
Matthieu Baerts <matthieu.baerts(a)tessares.net>
Hi, Willy
Thanks very much for your kind review, discussion and suggestions; now we
have full rv32 support ;-)
In the first series [1], we have fixed up the compile errors about
_start and __NR_llseek for rv32, but left compile errors about tons of
time32 syscalls (removed after kernel commit d4c08b9776b3 ("riscv: Use
latest system call ABI")) and the missing fstat in nolibc-test.c [2];
now we have fixed up all of them.
Introduction
============
This series is based on the 20230524-nolibc-rv32+stkp4 branch of [3]. It
includes 3 parts, which work together to add full rv32 support:
* Reverts two old out-of-day patches
* Revert "tools/nolibc: riscv: Support __NR_llseek for rv32"
* Revert "selftests/nolibc: Fix up compile error for rv32"
(these two and the reverted ones:
* commit 606343b7478c ("selftests/nolibc: Fix up compile error for rv32")
* commit d2c3acba6d66 ("tools/nolibc: riscv: Support __NR_llseek for rv32")
can be removed from the git repo completely, there are two new ones to replace
them)
* Compile and test support patches
* selftests/nolibc: print name instead of number for EOVERFLOW
* selftests/nolibc: syscall_args: use __NR_statx for rv32
* --> replace the old one 606343b7478, use statx instead of read
* selftests/nolibc: riscv: customize makefile for rv32
* selftests/nolibc: allow specify a bios for qemu
* selftests/nolibc: remove the duplicated gettimeofday_bad2
* Fix up some missing syscalls, mainly time32 syscalls (a sketch of the
gettimeofday approach follows this list)
* tools/nolibc: sys_lseek: riscv: use __NR_llseek for rv32
* --> replace the old one d2c3acba6d66, cleaned up
* tools/nolibc: sys_poll: riscv: use __NR_ppoll_time64 for rv32
* tools/nolibc: ppoll/ppoll_time64: Add a missing argument
* tools/nolibc: sys_select: riscv: use __NR_pselect6_time64 for rv32
* tools/nolibc: sys_wait4: riscv: use __NR_waitid for rv32
* tools/nolibc: sys_gettimeofday: riscv: use __NR_clock_gettime64 for rv32
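As promised above, here is a rough sketch (not the actual patch) of how
sys_gettimeofday() can be emulated on rv32, where only the 64-bit time
syscalls exist. my_syscall2() is the nolibc syscall helper; the exact
guards, struct names and conversion in the real patch may differ.

#if defined(__NR_clock_gettime64) && !defined(__NR_gettimeofday)
static int sys_gettimeofday(struct timeval *tv, struct timezone *tz)
{
	/* mirrors the layout of the kernel's 64-bit timespec */
	struct { long long tv_sec; long long tv_nsec; } ts;
	int ret = my_syscall2(__NR_clock_gettime64, CLOCK_REALTIME, &ts);

	if (ret < 0)
		return ret;
	if (tv) {
		tv->tv_sec  = ts.tv_sec;
		tv->tv_usec = ts.tv_nsec / 1000;
	}
	(void)tz; /* the kernel ignores the timezone argument anyway */
	return 0;
}
#endif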
Compile
=======
For rv64:
$ make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- nolibc-test
$ file nolibc-test
nolibc-test: ELF 64-bit LSB executable, UCB RISC-V ...
$ make ARCH=riscv64 CROSS_COMPILE=riscv64-linux-gnu- nolibc-test
$ file nolibc-test
nolibc-test: ELF 64-bit LSB executable, UCB RISC-V ...
For rv32:
$ make ARCH=riscv CONFIG_32BIT=1 CROSS_COMPILE=riscv64-linux-gnu- nolibc-test
$ file nolibc-test
nolibc-test: ELF 32-bit LSB executable, UCB RISC-V ...
$ make ARCH=riscv32 CROSS_COMPILE=riscv64-linux-gnu- nolibc-test
$ file nolibc-test
nolibc-test: ELF 32-bit LSB executable, UCB RISC-V ...
Testing
=======
Environment:
// gcc toolchain
$ riscv64-linux-gnu-gcc --version
riscv64-linux-gnu-gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
// glibc >= 2.33 required, for older glibc, must upgrade include/bits/wordsize.h
$ dpkg -l | grep libc6-dev | grep riscv
ii libc6-dev-riscv64-cross 2.31-0ubuntu7cross1
// glibc include/bits/wordsize.h: manually upgraded to >= 2.33
// without this, can not build tools/testing/selftests/nolibc/nolibc-test.c
$ cat /usr/riscv64-linux-gnu/include/bits/wordsize.h
#if __riscv_xlen == (__SIZEOF_POINTER__ * 8)
# define __WORDSIZE __riscv_xlen
#else
# error unsupported ABI
#endif
# define __WORDSIZE_TIME64_COMPAT32 1
#if __WORDSIZE == 32
# define __WORDSIZE32_SIZE_ULONG 0
# define __WORDSIZE32_PTRDIFF_LONG 0
#endif
// higher qemu version is better, latest version is v8.0.0+
$ qemu-system-riscv64 --version
QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.18)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
// opensbi version, higher is better, must match kernel version and qemu version
// rv64: used version is 1.2, latest is 1.2
$ head -2 /labs/linux-lab/src/linux-stable/tools/testing/selftests/nolibc/run.out | tail -1
OpenSBI v1.2-116-g7919530
// rv32: used version is v0.9, latest is 1.2
$ head -2 /labs/linux-lab/src/linux-stable/tools/testing/selftests/nolibc/run.out | tail -1
OpenSBI v0.9-152-g754d511
For rv64:
$ pwd
/labs/linux-lab/src/linux-stable/tools/testing/selftests/nolibc
$ make ARCH=riscv64 CROSS_COMPILE=riscv64-linux-gnu- defconfig
$ make ARCH=riscv64 CROSS_COMPILE=riscv64-linux-gnu- BIOS=/labs/linux-lab/boards/riscv64/virt/bsp/bios/opensbi/generic/fw_jump.elf run
MKDIR sysroot/riscv/include
make[1]: Entering directory '/labs/linux-lab/src/linux-stable/tools/include/nolibc'
make[2]: Entering directory '/labs/linux-lab/src/linux-stable'
make[2]: Leaving directory '/labs/linux-lab/src/linux-stable'
make[2]: Entering directory '/labs/linux-lab/src/linux-stable'
INSTALL /labs/linux-lab/src/linux-stable/tools/testing/selftests/nolibc/sysroot/sysroot/include
make[2]: Leaving directory '/labs/linux-lab/src/linux-stable'
make[1]: Leaving directory '/labs/linux-lab/src/linux-stable/tools/include/nolibc'
CC nolibc-test
MKDIR initramfs
INSTALL initramfs/init
make[1]: Entering directory '/labs/linux-lab/src/linux-stable'
...
LD vmlinux
NM System.map
SORTTAB vmlinux
OBJCOPY arch/riscv/boot/Image
Kernel: arch/riscv/boot/Image is ready
make[1]: Leaving directory '/labs/linux-lab/src/linux-stable'
135 test(s) passed.
$ file ../../../../vmlinux
../../../../vmlinux: ELF 64-bit LSB executable, UCB RISC-V, version 1 (SYSV), statically linked, BuildID[sha1]=b8e1cea5122b04bce540b4022f0d6f171ffe615a, not stripped
For rv32:
$ pwd
/labs/linux-lab/src/linux-stable/tools/testing/selftests/nolibc
$ make ARCH=riscv32 CROSS_COMPILE=riscv64-linux-gnu- defconfig
$ make ARCH=riscv32 CROSS_COMPILE=riscv64-linux-gnu- BIOS=/labs/linux-lab/boards/riscv32/virt/bsp/bios/opensbi/generic/fw_jump.elf run
MKDIR sysroot/riscv/include
make[1]: Entering directory '/labs/linux-lab/src/linux-stable/tools/include/nolibc'
make[2]: Entering directory '/labs/linux-lab/src/linux-stable'
make[2]: Leaving directory '/labs/linux-lab/src/linux-stable'
make[2]: Entering directory '/labs/linux-lab/src/linux-stable'
INSTALL /labs/linux-lab/src/linux-stable/tools/testing/selftests/nolibc/sysroot/sysroot/include
make[2]: Leaving directory '/labs/linux-lab/src/linux-stable'
make[1]: Leaving directory '/labs/linux-lab/src/linux-stable/tools/include/nolibc'
CC nolibc-test
MKDIR initramfs
INSTALL initramfs/init
make[1]: Entering directory '/labs/linux-lab/src/linux-stable'
CALL scripts/checksyscalls.sh
GEN usr/initramfs_data.cpio
COPY usr/initramfs_inc_data
AS usr/initramfs_data.o
AR usr/built-in.a
GEN security/selinux/flask.h security/selinux/av_permissions.h
CC security/selinux/avc.o
CC security/selinux/hooks.o
CC security/selinux/selinuxfs.o
CC security/selinux/nlmsgtab.o
CC security/selinux/netif.o
CC security/selinux/netnode.o
CC security/selinux/netport.o
CC security/selinux/status.o
CC security/selinux/ss/services.o
AR security/selinux/built-in.a
AR security/built-in.a
AR built-in.a
AR vmlinux.a
LD vmlinux.o
OBJCOPY modules.builtin.modinfo
GEN modules.builtin
MODPOST vmlinux.symvers
UPD include/generated/utsversion.h
CC init/version-timestamp.o
LD .tmp_vmlinux.kallsyms1
NM .tmp_vmlinux.kallsyms1.syms
KSYMS .tmp_vmlinux.kallsyms1.S
AS .tmp_vmlinux.kallsyms1.S
LD .tmp_vmlinux.kallsyms2
NM .tmp_vmlinux.kallsyms2.syms
KSYMS .tmp_vmlinux.kallsyms2.S
AS .tmp_vmlinux.kallsyms2.S
LD vmlinux
NM System.map
SORTTAB vmlinux
OBJCOPY arch/riscv/boot/Image
Kernel: arch/riscv/boot/Image is ready
make[1]: Leaving directory '/labs/linux-lab/src/linux-stable'
135 test(s) passed.
$ file ../../../../vmlinux
../../../../vmlinux: ELF 32-bit LSB executable, UCB RISC-V, version 1 (SYSV), statically linked, BuildID[sha1]=bad4c1f3899f47355d2a2010bade56972fd94b9d, not stripped
The full rv64 testing result (run.out) is uploaded at [4].
The full rv32 testing result (run.out) is uploaded at [5].
That's all, thanks!
Best regards,
Zhangjin Wu
---
[1]: https://lore.kernel.org/linux-riscv/20230520143154.68663-1-falcon@tinylab.o…
[2]: https://lore.kernel.org/linux-riscv/20230520135235.68155-1-falcon@tinylab.o…
[3]: https://git.kernel.org/pub/scm/linux/kernel/git/wtarreau/nolibc.git
[4]: https://pastebin.com/3L0nV78u
[5]: https://pastebin.com/RadrXdta
Zhangjin Wu (13):
Revert "tools/nolibc: riscv: Support __NR_llseek for rv32"
Revert "selftests/nolibc: Fix up compile error for rv32"
selftests/nolibc: print name instead of number for EOVERFLOW
selftests/nolibc: syscall_args: use __NR_statx for rv32
selftests/nolibc: riscv: customize makefile for rv32
selftests/nolibc: allow specify a bios for qemu
selftests/nolibc: remove the duplicated gettimeofday_bad2
tools/nolibc: sys_lseek: riscv: use __NR_llseek for rv32
tools/nolibc: sys_poll: riscv: use __NR_ppoll_time64 for rv32
tools/nolibc: ppoll/ppoll_time64: Add a missing argument
tools/nolibc: sys_select: riscv: use __NR_pselect6_time64 for rv32
tools/nolibc: sys_wait4: riscv: use __NR_waitid for rv32
tools/nolibc: sys_gettimeofday: riscv: use __NR_clock_gettime64 for
rv32
tools/include/nolibc/std.h | 1 +
tools/include/nolibc/sys.h | 135 +++++++++++++++++--
tools/include/nolibc/types.h | 21 ++-
tools/testing/selftests/nolibc/Makefile | 14 +-
tools/testing/selftests/nolibc/nolibc-test.c | 15 ++-
5 files changed, 167 insertions(+), 19 deletions(-)
--
2.25.1
From: Jeff Xu <jeffxu(a)google.com>
This is the first set of Memory mapping (VMA) protection patches using PKU.
* * *
Background:
As discussed previously in the kernel mailing list [1], V8 CFI [2] uses
PKU to protect memory, and Stephen Röttger proposes to extend the PKU to
memory mapping [3].
We're using PKU for in-process isolation to enforce control-flow integrity
for a JIT compiler. In our threat model, an attacker exploits a
vulnerability and has arbitrary read/write access to the whole process
space concurrently to other threads being executed. This attacker can
manipulate some arguments to syscalls from some threads.
Under such a powerful attack, we want to create a “safe/isolated”
thread environment. We assign dedicated PKUs to this thread,
and use those PKUs to protect the threads’ runtime environment.
The thread has exclusive access to its run-time memory. This
includes modifying the protection of the memory mapping, or
munmap the memory mapping after use. And the other threads
won’t be able to access the memory or modify the memory mapping
(VMA) belonging to the thread.
* * *
Proposed changes:
This patch introduces a new flag, PKEY_ENFORCE_API, to the pkey_alloc()
function. When a PKEY is created with this flag, it is enforced that any
thread that wants to make changes to the memory mapping (such as mprotect)
of the memory must have write access to the PKEY. PKEYs created without
this flag will continue to work as they do now, for backwards
compatibility.
Only PKEYs created from user space can have the new flag set; PKEYs
allocated by the kernel internally will not have it. In other words,
ARCH_DEFAULT_PKEY(0) and execute_only_pkey won't have this flag set
and continue to work as they do today.
This flag is checked only at syscall entry, such as mprotect/munmap in
this set of patches. It will not apply to other call paths. In other
words, if the kernel wants to change the attributes of a VMA for some
reason, it is free to do that and is not affected by this new flag.
This set of patches covers mprotect/munmap; I plan to work on other
syscalls after this.
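A hedged sketch of the intended usage from user space follows. Note that
PKEY_ENFORCE_API is the new flag proposed by this series (its value comes
from the patched uapi headers, not from current mainline), and the exact
failure mode of the blocked mprotect() is whatever the series defines.

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int pkey = pkey_alloc(PKEY_ENFORCE_API, 0);	/* proposed flag */

	pkey_mprotect(p, len, PROT_READ | PROT_WRITE, pkey);

	/* Drop write access to the pkey for this thread ... */
	pkey_set(pkey, PKEY_DISABLE_WRITE);

	/* ... so mprotect()/munmap() on this mapping are expected to be
	 * rejected at syscall entry until write access is restored. */
	mprotect(p, len, PROT_READ);

	pkey_set(pkey, 0);	/* restore access, then tear down */
	munmap(p, len);
	return 0;
}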
* * *
Testing:
I have tested this patch on Linux kernels 5.15, 6.1, and 6.4-rc1;
a new selftest is added in pkey_enforce_api.c
* * *
Discussion:
We believe that this patch provides a valuable security feature.
It allows us to create “safe/isolated” thread environments that are
protected from attackers with arbitrary read/write access to
the process space.
We believe that the interface change and the patch don't
introduce a backwards compatibility risk.
We would like to discuss this patch in the Linux kernel community
for feedback and support.
* * *
Reference:
[1] https://lore.kernel.org/all/202208221331.71C50A6F@keescook/
[2] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyX…
[3] https://docs.google.com/document/d/1qqVoVfRiF2nRylL3yjZyCQvzQaej1HRPh3f5w…
* * *
Current status:
There are ongoing discussions related to the threat model and io_uring; we will continue the discussion in the v0 thread.
* * *
PATCH history:
v1: update code related review comments:
mprotect.c:
remove syscall from do_mprotect_pkey()
remove pr_warn_ratelimited
munmap.c:
change syscall to enum caller_origin
remove pr_warn_ratelimited
v0:
https://lore.kernel.org/linux-mm/20230515130553.2311248-1-jeffxu@chromium.o…
Best Regards,
-Jeff Xu
Jeff Xu (6):
PKEY: Introduce PKEY_ENFORCE_API flag
PKEY: Add arch_check_pkey_enforce_api()
PKEY: Apply PKEY_ENFORCE_API to mprotect
PKEY:selftest pkey_enforce_api for mprotect
PKEY: Apply PKEY_ENFORCE_API to munmap
PKEY:selftest pkey_enforce_api for munmap
arch/powerpc/include/asm/pkeys.h | 19 +-
arch/x86/include/asm/mmu.h | 7 +
arch/x86/include/asm/pkeys.h | 92 +-
arch/x86/mm/pkeys.c | 2 +-
include/linux/mm.h | 8 +-
include/linux/pkeys.h | 18 +-
include/uapi/linux/mman.h | 5 +
mm/mmap.c | 31 +-
mm/mprotect.c | 17 +-
mm/mremap.c | 6 +-
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/pkey_enforce_api.c | 1312 +++++++++++++++++
12 files changed, 1499 insertions(+), 19 deletions(-)
create mode 100644 tools/testing/selftests/mm/pkey_enforce_api.c
base-commit: ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
--
2.40.1.606.ga4b1b128d6-goog
Hi, All
Thanks very much for your review suggestions of the v1 series [1], this
is the generic part1 of the v2 revision.
* selftests/nolibc: syscall_args: use generic __NR_statx
A more generic statx is used instead of fstat
(Review suggestions from Willy, Arnd)
* selftests/nolibc: allow specify extra arguments for qemu
Besides BIOS, QEMU_ARGS_EXTRA is better for more requirements
(Review suggestions from Thomas, Willy)
* selftests/nolibc: fix up compile warning with glibc on x86_64
The definition of uint64_t differs between glibc and nolibc; use the right
print format here (see the sketch after this list)
* selftests/nolibc: not include limits.h for nolibc
Removing the requirement of limits.h for nolibc lets us use an older
glibc for rv32
(Review suggestions from Thomas)
* selftests/nolibc: use INT_MAX instead of __INT_MAX__
A trivial cleanup, based on the previous patch
* tools/nolibc: arm: add missing my_syscall6
Required by future forced pselect6/pselect6_time64, tested on arm/vexpress-a9
(Review suggestions from Arnd)
* tools/nolibc: open: fix up compile warning for arm
A trivial fixup based on compiler's suggestion and glibc code
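The uint64_t format issue mentioned in the third item resolves to something
like the following sketch (not the exact hunk): casting to 'unsigned long
long' keeps the format string valid whichever libc defines uint64_t.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t val = 0x0123456789abcdefULL;

	/* uint64_t may be 'unsigned long' (glibc on x86_64) or
	 * 'unsigned long long' (nolibc); the explicit cast makes
	 * %llu/%llx correct in both cases. */
	printf("val = %llu (0x%llx)\n",
	       (unsigned long long)val, (unsigned long long)val);
	return 0;
}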
Best regards,
Zhangjin
----
[1]: https://lore.kernel.org/linux-riscv/20230529113143.GB2762@1wt.eu/T/#t
Zhangjin Wu (7):
selftests/nolibc: syscall_args: use __NR_statx for rv32
selftests/nolibc: allow specify extra arguments for qemu
selftests/nolibc: fix up compile warning with glibc on x86_64
selftests/nolibc: not include limits.h for nolibc
selftests/nolibc: use INT_MAX instead of __INT_MAX__
tools/nolibc: arm: add missing my_syscall6
tools/nolibc: open: fix up compile warning for arm
tools/include/nolibc/arch-arm.h | 23 ++++++++++++++++++++
tools/include/nolibc/stdint.h | 14 ++++++++++++
tools/include/nolibc/sys.h | 2 +-
tools/testing/selftests/nolibc/Makefile | 2 +-
tools/testing/selftests/nolibc/nolibc-test.c | 14 +++++++-----
5 files changed, 47 insertions(+), 8 deletions(-)
--
2.25.1
When a user event registered from dyn_events has no arguments, it passes the
matching check regardless of whether there is an existing user event with the
same name but different arguments. Add a matching check for the case where the
arguments of the registering user event are null.
Signed-off-by: sunliming <sunliming(a)kylinos.cn>
---
kernel/trace/trace_events_user.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e90161294698..0d91dac206ff 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -1712,6 +1712,8 @@ static bool user_event_match(const char *system, const char *event,
if (match && argc > 0)
match = user_fields_match(user, argc, argv);
+ else if (match && argc == 0)
+ match = list_empty(&user->fields);
return match;
}
--
2.25.1
Partially backport v6.3 commit 11f75a01448f ("selftests/memfd: add
tests for MFD_NOEXEC_SEAL MFD_EXEC") to fix an unknown type name
build error.
On some systems, the __u64 typedef is not present due to differences
in system headers, causing compilation errors like this one:
fuse_test.c:64:8: error: unknown type name '__u64'
64 | static __u64 mfd_assert_get_seals(int fd)
This header provides the __u64 typedef, which increases the
likelihood of successful compilation on a wider variety of systems.
Signed-off-by: Hardik Garg <hargar(a)linux.microsoft.com>
---
tools/testing/selftests/memfd/fuse_test.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/memfd/fuse_test.c b/tools/testing/selftests/memfd/fuse_test.c
index be675002f918..93798c8c5d54 100644
--- a/tools/testing/selftests/memfd/fuse_test.c
+++ b/tools/testing/selftests/memfd/fuse_test.c
@@ -22,6 +22,7 @@
#include <linux/falloc.h>
#include <fcntl.h>
#include <linux/memfd.h>
+#include <linux/types.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
--
2.25.1
Small optimization to avoid coredump writing during the stack protector
tests.
Adds prctl() as prerequisite.
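As a rough illustration of the idea (the actual patch may differ in
details), something like this before the stack-protector test prevents the
deliberate crash from producing a core dump; prctl() support itself is what
the first patch adds to nolibc:

#include <sys/prctl.h>

int main(void)
{
	/* The stack-protector test crashes on purpose; making the process
	 * non-dumpable keeps that crash from writing a core file. */
	prctl(PR_SET_DUMPABLE, 0, 0, 0, 0);

	/* ... run the self-tests here ... */
	return 0;
}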
This series is based on nolibc/20230524-nolibc-rv32+stkp4
Signed-off-by: Thomas Weißschuh <linux(a)weissschuh.net>
---
Changes in v2:
- Fix compilation warning in prctl() testcase
- Link to v1: https://lore.kernel.org/r/20230526-nolibc-test-no-dump-v1-0-62e724a96db2@we…
---
Thomas Weißschuh (2):
tools/nolibc: add support for prctl()
selftests/nolibc: prevent coredumps during test execution
tools/include/nolibc/sys.h | 27 +++++++++++++++++++++++++++
tools/testing/selftests/nolibc/nolibc-test.c | 3 +++
2 files changed, 30 insertions(+)
---
base-commit: 1974a2b5fd434812b32952b09df7b79fdee8104d
change-id: 20230526-nolibc-test-no-dump-a1b1d9557df8
Best regards,
--
Thomas Weißschuh <linux(a)weissschuh.net>
Small optimization to avoid coredump writing during the stack protector
tests.
Adds prctl() as prerequisite.
This series is based on nolibc/20230524-nolibc-rv32+stkp4
Signed-off-by: Thomas Weißschuh <linux(a)weissschuh.net>
---
Thomas Weißschuh (2):
tools/nolibc: add support for prctl()
selftests/nolibc: prevent coredumps during test execution
tools/include/nolibc/sys.h | 27 +++++++++++++++++++++++++++
tools/testing/selftests/nolibc/nolibc-test.c | 3 +++
2 files changed, 30 insertions(+)
---
base-commit: 1974a2b5fd434812b32952b09df7b79fdee8104d
change-id: 20230526-nolibc-test-no-dump-a1b1d9557df8
Best regards,
--
Thomas Weißschuh <linux(a)weissschuh.net>
Dan Carpenter spotted a race condition in a couple of situations like
these in the test_firmware driver:
static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
{
u8 val;
int ret;
ret = kstrtou8(buf, 10, &val);
if (ret)
return ret;
mutex_lock(&test_fw_mutex);
*(u8 *)cfg = val;
mutex_unlock(&test_fw_mutex);
/* Always return full write size even if we didn't consume all */
return size;
}
static ssize_t config_num_requests_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int rc;
mutex_lock(&test_fw_mutex);
if (test_fw_config->reqs) {
pr_err("Must call release_all_firmware prior to changing config\n");
rc = -EINVAL;
mutex_unlock(&test_fw_mutex);
goto out;
}
mutex_unlock(&test_fw_mutex);
rc = test_dev_config_update_u8(buf, count,
&test_fw_config->num_requests);
out:
return rc;
}
static ssize_t config_read_fw_idx_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
return test_dev_config_update_u8(buf, count,
&test_fw_config->read_fw_idx);
}
The function test_dev_config_update_u8() is called from both a locked
and an unlocked context: config_num_requests_store() and
config_read_fw_idx_store() can both be called asynchronously as
they are driver methods, while test_dev_config_update_u8() and its siblings
change the argument pointed to by u8 *cfg or a similar pointer.
To avoid deadlock on test_fw_mutex, the lock is dropped before calling
test_dev_config_update_u8() and re-acquired within test_dev_config_update_u8()
itself, but alas this creates a race condition.
Having two locks wouldn't assure a race-proof mutual exclusion.
This situation is best avoided by introducing a new, unlocked helper
__test_dev_config_update_u8() that can be called from an already-locked
context, and by reducing test_dev_config_update_u8() to:
static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
{
        int ret;

        mutex_lock(&test_fw_mutex);
        ret = __test_dev_config_update_u8(buf, size, cfg);
        mutex_unlock(&test_fw_mutex);

        return ret;
}
i.e. to a thin wrapper that takes the lock and calls the unlocked
primitive, so both locked and unlocked versions are available without
duplicating code.
The same approach was applied to every function that is called from both
the locked and the unlocked context, which safely mitigates both the
deadlocks and the race conditions in the driver.
The unlocked variants __test_dev_config_update_bool(),
__test_dev_config_update_u8() and __test_dev_config_update_size_t() are
introduced so that already-locked contexts can perform the update without
releasing the main driver lock and thereby opening a race window.
The locked variants test_dev_config_update_bool(), test_dev_config_update_u8()
and test_dev_config_update_size_t() are still called from the driver methods,
avoiding both the duplicated lock/unlock code in every method and the need
to carry the return value across the lock.
Fixes: 7feebfa487b92 ("test_firmware: add support for request_firmware_into_buf")
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Russ Weight <russell.h.weight@intel.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Tianfei Zhang <tianfei.zhang@intel.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Colin Ian King <colin.i.king@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-kselftest@vger.kernel.org
Cc: stable@vger.kernel.org # v5.4
Suggested-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
---
lib/test_firmware.c | 52 ++++++++++++++++++++++++++++++---------------
1 file changed, 35 insertions(+), 17 deletions(-)
diff --git a/lib/test_firmware.c b/lib/test_firmware.c
index 05ed84c2fc4c..35417e0af3f4 100644
--- a/lib/test_firmware.c
+++ b/lib/test_firmware.c
@@ -353,16 +353,26 @@ static ssize_t config_test_show_str(char *dst,
return len;
}
-static int test_dev_config_update_bool(const char *buf, size_t size,
+static inline int __test_dev_config_update_bool(const char *buf, size_t size,
bool *cfg)
{
int ret;
- mutex_lock(&test_fw_mutex);
if (kstrtobool(buf, cfg) < 0)
ret = -EINVAL;
else
ret = size;
+
+ return ret;
+}
+
+static int test_dev_config_update_bool(const char *buf, size_t size,
+ bool *cfg)
+{
+ int ret;
+
+ mutex_lock(&test_fw_mutex);
+ ret = __test_dev_config_update_bool(buf, size, cfg);
mutex_unlock(&test_fw_mutex);
return ret;
@@ -373,7 +383,8 @@ static ssize_t test_dev_config_show_bool(char *buf, bool val)
return snprintf(buf, PAGE_SIZE, "%d\n", val);
}
-static int test_dev_config_update_size_t(const char *buf,
+static int __test_dev_config_update_size_t(
+ const char *buf,
size_t size,
size_t *cfg)
{
@@ -384,9 +395,7 @@ static int test_dev_config_update_size_t(const char *buf,
if (ret)
return ret;
- mutex_lock(&test_fw_mutex);
*(size_t *)cfg = new;
- mutex_unlock(&test_fw_mutex);
/* Always return full write size even if we didn't consume all */
return size;
@@ -402,7 +411,7 @@ static ssize_t test_dev_config_show_int(char *buf, int val)
return snprintf(buf, PAGE_SIZE, "%d\n", val);
}
-static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
+static int __test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
{
u8 val;
int ret;
@@ -411,14 +420,23 @@ static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
if (ret)
return ret;
- mutex_lock(&test_fw_mutex);
*(u8 *)cfg = val;
- mutex_unlock(&test_fw_mutex);
/* Always return full write size even if we didn't consume all */
return size;
}
+static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
+{
+ int ret;
+
+ mutex_lock(&test_fw_mutex);
+ ret = __test_dev_config_update_u8(buf, size, cfg);
+ mutex_unlock(&test_fw_mutex);
+
+ return ret;
+}
+
static ssize_t test_dev_config_show_u8(char *buf, u8 val)
{
return snprintf(buf, PAGE_SIZE, "%u\n", val);
@@ -471,10 +489,10 @@ static ssize_t config_num_requests_store(struct device *dev,
mutex_unlock(&test_fw_mutex);
goto out;
}
- mutex_unlock(&test_fw_mutex);
- rc = test_dev_config_update_u8(buf, count,
- &test_fw_config->num_requests);
+ rc = __test_dev_config_update_u8(buf, count,
+ &test_fw_config->num_requests);
+ mutex_unlock(&test_fw_mutex);
out:
return rc;
@@ -518,10 +536,10 @@ static ssize_t config_buf_size_store(struct device *dev,
mutex_unlock(&test_fw_mutex);
goto out;
}
- mutex_unlock(&test_fw_mutex);
- rc = test_dev_config_update_size_t(buf, count,
- &test_fw_config->buf_size);
+ rc = __test_dev_config_update_size_t(buf, count,
+ &test_fw_config->buf_size);
+ mutex_unlock(&test_fw_mutex);
out:
return rc;
@@ -548,10 +566,10 @@ static ssize_t config_file_offset_store(struct device *dev,
mutex_unlock(&test_fw_mutex);
goto out;
}
- mutex_unlock(&test_fw_mutex);
- rc = test_dev_config_update_size_t(buf, count,
- &test_fw_config->file_offset);
+ rc = __test_dev_config_update_size_t(buf, count,
+ &test_fw_config->file_offset);
+ mutex_unlock(&test_fw_mutex);
out:
return rc;
--
2.30.2
KUnit aborts the current thread when an assertion fails. Currently, this
is done conditionally as part of the kunit_do_failed_assertion()
function, but this hides the kunit_abort() call from the compiler
(particularly if it's in another module). This, in turn, can lead both to
suboptimal code generation (the compiler can't know whether
kunit_do_failed_assertion() will return) and to false positives from static
analysis tools like smatch.
Moving the kunit_abort() call into the macro should give the compiler
and tools a better chance at understanding what's going on. Doing so
requires exporting kunit_abort(), though it's recommended to continue to
use assertions in lieu of aborting directly.
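As a standalone illustration (not KUnit code; abort_test() and
CHECK_VISIBLE() are invented names), the point is that a conditional call
to a noreturn function expanded at the use site is visible to the compiler
and to tools like smatch, whereas the same call hidden behind an opaque
function boundary is not:

#include <stdlib.h>

/* Stands in for kunit_abort(): never returns. */
static void __attribute__((noreturn)) abort_test(void)
{
        exit(1);
}

/* Opaque variant: defined in another translation unit, so at every call
 * site the compiler must assume it can return even on failure. */
void check_hidden(int ok);

/* Visible variant: the conditional noreturn call is expanded inline, so
 * code following a failed check is provably unreachable. */
#define CHECK_VISIBLE(ok) do {          \
                if (!(ok))              \
                        abort_test();   \
        } while (0)

int main(void)
{
        int *p = NULL;

        CHECK_VISIBLE(p != NULL);
        return *p;      /* unreachable: the check above cannot pass */
}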
Suggested-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: David Gow <davidgow@google.com>
---
include/kunit/test.h | 4 ++++
lib/kunit/test.c | 5 +----
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 2f23d6efa505..6a35e3e2a1e5 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -481,6 +481,8 @@ void __printf(2, 3) kunit_log_append(char *log, const char *fmt, ...);
*/
#define KUNIT_SUCCEED(test) do {} while (0)
+void __noreturn kunit_abort(struct kunit *test);
+
void kunit_do_failed_assertion(struct kunit *test,
const struct kunit_loc *loc,
enum kunit_assert_type type,
@@ -498,6 +500,8 @@ void kunit_do_failed_assertion(struct kunit *test,
assert_format, \
fmt, \
##__VA_ARGS__); \
+ if (assert_type == KUNIT_ASSERTION) \
+ kunit_abort(test); \
} while (0)
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index d3fb93a23ccc..3b350e50cab9 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -310,7 +310,7 @@ static void kunit_fail(struct kunit *test, const struct kunit_loc *loc,
string_stream_destroy(stream);
}
-static void __noreturn kunit_abort(struct kunit *test)
+void __noreturn kunit_abort(struct kunit *test)
{
kunit_try_catch_throw(&test->try_catch); /* Does not return. */
@@ -340,9 +340,6 @@ void kunit_do_failed_assertion(struct kunit *test,
kunit_fail(test, loc, type, assert, assert_format, &message);
va_end(args);
-
- if (type == KUNIT_ASSERTION)
- kunit_abort(test);
}
EXPORT_SYMBOL_GPL(kunit_do_failed_assertion);
--
2.41.0.rc0.172.g3f132b7071-goog