Good morning,
I have read through your offer and am pleased to say that it catches the eye and encourages further conversation.
I thought I might be able to contribute to your growth and help this offer reach a wider audience. I do SEO for websites, which helps them generate excellent traffic online.
Could we talk sometime soon?
Best regards,
Adam Charachuta
commit b5ba705c2608 ("selftests/vm: enable running select groups of tests")
unintentionally reversed the ordering of some of the lines of
run_vmtests.sh that calculate values based on system configuration.
Importantly, $hpgsize_MB is calculated from $hpgsize_KB, but the latter
value is not read from /proc/meminfo until later in the script, causing
the userfaultfd tests to incorrectly fail since $half_ufd_size_MB will
always be 0.
Switch these statements around into the proper order to fix the invocation
of the userfaultfd tests that use $half_ufd_size_MB.
Suggested-by: Nico Pache <npache(a)redhat.com>
Signed-off-by: Joel Savitz <jsavitz(a)redhat.com>
---
tools/testing/selftests/vm/run_vmtests.sh | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/tools/testing/selftests/vm/run_vmtests.sh b/tools/testing/selftests/vm/run_vmtests.sh
index fff00bb77086..ce52e4f5ff21 100755
--- a/tools/testing/selftests/vm/run_vmtests.sh
+++ b/tools/testing/selftests/vm/run_vmtests.sh
@@ -82,16 +82,6 @@ test_selected() {
fi
}
-# Simple hugetlbfs tests have a hardcoded minimum requirement of
-# huge pages totaling 256MB (262144KB) in size. The userfaultfd
-# hugetlb test requires a minimum of 2 * nr_cpus huge pages. Take
-# both of these requirements into account and attempt to increase
-# number of huge pages available.
-nr_cpus=$(nproc)
-hpgsize_MB=$((hpgsize_KB / 1024))
-half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
-needmem_KB=$((half_ufd_size_MB * 2 * 1024))
-
# get huge pagesize and freepages from /proc/meminfo
while read -r name size unit; do
if [ "$name" = "HugePages_Free:" ]; then
@@ -102,6 +92,16 @@ while read -r name size unit; do
fi
done < /proc/meminfo
+# Simple hugetlbfs tests have a hardcoded minimum requirement of
+# huge pages totaling 256MB (262144KB) in size. The userfaultfd
+# hugetlb test requires a minimum of 2 * nr_cpus huge pages. Take
+# both of these requirements into account and attempt to increase
+# number of huge pages available.
+nr_cpus=$(nproc)
+hpgsize_MB=$((hpgsize_KB / 1024))
+half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
+needmem_KB=$((half_ufd_size_MB * 2 * 1024))
+
# set proper nr_hugepages
if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then
nr_hugepgs=$(cat /proc/sys/vm/nr_hugepages)
--
2.31.1
Hello,
This patch series implements an IOCTL on the pagemap procfs file to get
information about page table entries (PTEs). The following operations
are supported by this ioctl:
- Get information on whether the pages are soft-dirty, file-mapped, present
  or swapped.
- Clear the soft-dirty PTE bit of the pages.
- Get and clear the soft-dirty PTE bit of the pages atomically.
The soft-dirty PTE bit of memory pages can be read using the pagemap
procfs file, and the soft-dirty PTE bit for the whole memory range of a
process can be cleared by writing to the clear_refs file. This
information can also be mimicked entirely in userspace, with poor
performance, by using:
- The mprotect syscall and a SIGSEGV handler for bookkeeping
- The userfaultfd syscall with a handler for bookkeeping
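For reference, a minimal sketch of that existing procfs interface
(error handling trimmed): bit 55 of a 64-bit pagemap entry is the
soft-dirty bit, and writing "4" to clear_refs clears it for all pages
of the process:

#include <fcntl.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Read the soft-dirty state (bit 55) of the pagemap entry for one page. */
static int page_is_soft_dirty(void *addr)
{
        uint64_t ent;
        long pagesize = sysconf(_SC_PAGESIZE);
        off_t off = ((uintptr_t)addr / pagesize) * sizeof(ent);
        int fd = open("/proc/self/pagemap", O_RDONLY);
        int ret = -1;

        if (fd < 0)
                return -1;
        if (pread(fd, &ent, sizeof(ent), off) == sizeof(ent))
                ret = (ent >> 55) & 1;
        close(fd);
        return ret;
}

/* Clear the soft-dirty bits of all pages of the process. */
static int clear_soft_dirty(void)
{
        int fd = open("/proc/self/clear_refs", O_WRONLY);
        int ret = (fd >= 0 && write(fd, "4", 1) == 1) ? 0 : -1;

        if (fd >= 0)
                close(fd);
        return ret;
}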
Some benchmarks can be seen here [1]. This series adds capabilities that
weren't available earlier:
- An atomic operation to get the soft-dirty PTE bit status and clear it.
- Clearing the soft-dirty PTE bit of only a part of memory.
Historically, soft-dirty PTE bit tracking has been used by the CRIU
project. The procfs interface is enough for finding the soft-dirty bit
status and clearing the soft-dirty bit of all the pages of a process.
We have a use case where we need to track the soft-dirty PTE bit of
only specific pages on demand, and to clear it for a region of memory
while the process is running, in order to emulate the getWriteWatch()
syscall of Windows. That syscall is used by games to keep track of
dirty pages so that only the dirty pages need to be processed.
Information about whether a page is file-mapped, present or swapped is
required by the CRIU project [2][3]. The addition of the required mask,
any mask, excluded mask and return mask is also needed by the CRIU
project [2].
The IOCTL returns the addresses of the pages which match the specified
masks. The page addresses are returned in struct page_region in a compact
form. max_pages supports the use case where the user only wants a specific
number of pages: when it is specified, there is no need to find all the
pages of interest in the range, and the IOCTL returns as soon as the
maximum number of pages has been found. max_pages is optional; if it is
specified, it must be greater than or equal to vec_size. This restriction
is needed to handle the worst case, where each page_region contains info
for only one page and cannot be compacted. This is needed to emulate the
Windows getWriteWatch() syscall.
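To make the shape of the interface concrete, here is a rough userspace
sketch; the struct layout, field names and ioctl name below are
illustrative placeholders for the pieces described above, not the exact
UAPI added by this series:

#include <linux/types.h>
#include <sys/ioctl.h>

/* Illustrative only: placeholder layout, not the actual UAPI. */
struct pm_scan_arg {
        __u64 start, end;       /* virtual address range to scan */
        __u64 vec;              /* pointer to a struct page_region array */
        __u64 vec_len;          /* number of entries in that array */
        __u64 max_pages;        /* optional cap on matched pages, 0 = all */
        __u64 required_mask;    /* all of these bits must be set */
        __u64 anyof_mask;       /* at least one of these bits must be set */
        __u64 excluded_mask;    /* none of these bits may be set */
        __u64 return_mask;      /* which bits are reported back */
        __u64 flags;            /* get / clear / atomic get-and-clear */
};

/* A caller would fill the argument and issue something like:
 *
 *      n = ioctl(pagemap_fd, PAGEMAP_SCAN_PLACEHOLDER, &arg);
 *
 * with n reporting how many page_region entries were filled in. */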
Some non-dirty pages get marked as dirty because of the kernel's
internal activity (such as VMA merging, since a difference in the
soft-dirty bit isn't considered when deciding whether to merge VMAs).
The soft-dirty state is stored both in the VMA flags and in the per-page
flags; if either of these two bits is set, the page is considered soft
dirty. Suppose you clear the soft-dirty bit of half of a VMA, which is
done by splitting the VMA and clearing the soft-dirty flag in that half
and in its pages. The kernel may later decide to merge the VMAs again,
at which point the half VMA becomes dirty again. This splitting/merging
costs performance, and the application receives many pages which are
marked dirty but aren't dirty in reality, losing performance again.
Sometimes the user also doesn't want newly allocated memory to be
marked as dirty. The PAGEMAP_NO_REUSED_REGIONS flag solves both
problems: it ignores the soft-dirty flag in the VMA flags, so no VMA
splitting and merging happens, and only the soft-dirty bit of the
individual pages is consulted. As a result, with this flag a newly
created memory region may not look dirty when seen through the IOCTL
even though it looks dirty when seen through procfs. This seems okay,
as users of this flag know the implications of using it.
[1] https://lore.kernel.org/lkml/54d4c322-cd6e-eefd-b161-2af2b56aae24@collabora…
[2] https://lore.kernel.org/all/YyiDg79flhWoMDZB@gmail.com/
[3] https://lore.kernel.org/all/20221014134802.1361436-1-mdanylo@google.com/
Regards,
Muhammad Usama Anjum
Muhammad Usama Anjum (3):
fs/proc/task_mmu: update functions to clear the soft-dirty PTE bit
fs/proc/task_mmu: Implement IOCTL to get and/or clear info about
    PTEs
selftests: vm: add pagemap ioctl tests
fs/proc/task_mmu.c | 396 +++++++++++-
include/uapi/linux/fs.h | 53 ++
tools/include/uapi/linux/fs.h | 53 ++
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 5 +-
tools/testing/selftests/vm/pagemap_ioctl.c | 681 +++++++++++++++++++++
6 files changed, 1156 insertions(+), 33 deletions(-)
create mode 100644 tools/testing/selftests/vm/pagemap_ioctl.c
--
2.30.2
When fixing up support for extra_context in the signal handling tests I
didn't notice that there is a TODO file in the directory which lists this
as a thing to be done. Since it has now been done, remove it from the list.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/arm64/signal/testcases/TODO | 1 -
1 file changed, 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/signal/testcases/TODO b/tools/testing/selftests/arm64/signal/testcases/TODO
index 110ff9fd195d..1f7fba8194fe 100644
--- a/tools/testing/selftests/arm64/signal/testcases/TODO
+++ b/tools/testing/selftests/arm64/signal/testcases/TODO
@@ -1,2 +1 @@
- Validate that register contents are saved and restored as expected.
-- Support and validate extra_context.
base-commit: 9abf2313adc1ca1b6180c508c25f22f9395cc780
--
2.30.2
The signal magic values are supposed to be allocated as somewhat meaningful
ASCII, so if we encounter a bad magic value print any alphanumeric
characters we find in it, as well as the hex value, to aid debuggability.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
.../arm64/signal/testcases/testcases.c | 21 +++++++++++++++----
1 file changed, 17 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/signal/testcases/testcases.c b/tools/testing/selftests/arm64/signal/testcases/testcases.c
index e1c625b20ac4..d2eda7b5de26 100644
--- a/tools/testing/selftests/arm64/signal/testcases/testcases.c
+++ b/tools/testing/selftests/arm64/signal/testcases/testcases.c
@@ -1,5 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2019 ARM Limited */
+
+#include <ctype.h>
+#include <string.h>
+
#include "testcases.h"
struct _aarch64_ctx *get_header(struct _aarch64_ctx *head, uint32_t magic,
@@ -109,7 +113,7 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
bool terminated = false;
size_t offs = 0;
int flags = 0;
- int new_flags;
+ int new_flags, i;
struct extra_context *extra = NULL;
struct sve_context *sve = NULL;
struct za_context *za = NULL;
@@ -117,6 +121,7 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
(struct _aarch64_ctx *)uc->uc_mcontext.__reserved;
void *extra_data = NULL;
size_t extra_sz = 0;
+ char magic[4];
if (!err)
return false;
@@ -194,11 +199,19 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
/*
* A still unknown Magic: potentially freshly added
* to the Kernel code and still unknown to the
- * tests.
+ * tests. Magic numbers are supposed to be allocated
+ * as somewhat meaningful ASCII strings so try to
+ * print as such as well as the raw number.
*/
+ memcpy(magic, &head->magic, sizeof(magic));
+ for (i = 0; i < sizeof(magic); i++)
+ if (!isalnum(magic[i]))
+ magic[i] = '?';
+
fprintf(stdout,
- "SKIP Unknown MAGIC: 0x%X - Is KSFT arm64/signal up to date ?\n",
- head->magic);
+ "SKIP Unknown MAGIC: 0x%X (%c%c%c%c) - Is KSFT arm64/signal up to date ?\n",
+ head->magic,
+ magic[3], magic[2], magic[1], magic[0]);
break;
}
base-commit: 30a0b95b1335e12efef89dd78518ed3e4a71a763
--
2.30.2
This series provides a couple of improvements to the output of
fp-stress, making it easier to follow what's going on and making our
application of the timeout a bit more even.
Mark Brown (2):
kselftest/arm64: Check that all children are producing output in
fp-stress
kselftest/arm64: Provide progress messages when signalling children
tools/testing/selftests/arm64/fp/fp-stress.c | 26 ++++++++++++++++++++
1 file changed, 26 insertions(+)
base-commit: 9abf2313adc1ca1b6180c508c25f22f9395cc780
--
2.30.2
On Tue, Nov 08, 2022 at 12:59:14PM +0100, Jaroslav Kysela wrote:
> This initial code does a simple sample transfer tests. By default,
> all PCM devices are detected and tested with short and long
> buffering parameters for 4 seconds. If the sample transfer timing
> is not in a +-100ms boundary, the test fails. Only the interleaved
> buffering scheme is supported in this version.
Oh, thanks for picking this up - something like this has been on my mind
for ages! This should probably be copied to Shuah and the kselftest
list as well; I've added them. This looks basically good to me. I've
got a bunch of comments below, but I'm not sure any of them, except
possibly the one about not putting values in the configuration file by
default, should block getting this merged, so:
Reviewed-by: Mark Brown <broonie(a)kernel.org>
> The configuration may be modified with the configuration files.
> A specific hardware configuration is detected and activated
> using the sysfs regex matching. This allows to use the DMI string
> (/sys/class/dmi/id/* tree) or any other system parameters
> exposed in sysfs for the matching for the CI automation.
> The configuration file may also specify the PCM device list to detect
> the missing PCM devices.
> create mode 100644 tools/testing/selftests/alsa/alsa-local.h
> create mode 100644 tools/testing/selftests/alsa/conf.c
> create mode 100644 tools/testing/selftests/alsa/conf.d/Lenovo_ThinkPad_P1_Gen2.conf
> create mode 100644 tools/testing/selftests/alsa/pcm-test.c
This is a bit unusual for kselftest and might create a bit of churn, but
it does seem sensible and reasonable to me; it's on the edge of what
kselftest usually covers but seems close enough in scope. I worry
a bit about ending up needing to add a config fragment as a result, but
perhaps we can get away without one.
> index 000000000000..0a83f35d43eb
> --- /dev/null
> +++ b/tools/testing/selftests/alsa/conf.d/Lenovo_ThinkPad_P1_Gen2.conf
> + pcm.0.0 {
> + PLAYBACK {
> + test.time1 {
> + access RW_INTERLEAVED # can be omitted - default
> + format S16_LE # can be omitted - default
> + rate 48000 # can be omitted - default
> + channels 2 # can be omitted - default
> + period_size 512
> + buffer_size 4096
I think it'd be better to leave these commented out by default, especially
if/once we improve the enumeration. That way the coverage will default
to whatever the tool does by default on the system (including any
checking of constraints for example). I guess we might want to add a
way of saying "here's what I expect the constraints to be" but that's
very much future work.
> +#ifdef SND_LIB_VER
> +#if SND_LIB_VERSION >= SND_LIB_VER(1, 2, 6)
> +#define LIB_HAS_LOAD_STRING
> +#endif
> +#endif
> +
> +#ifndef LIB_HAS_LOAD_STRING
> +static int snd_config_load_string(snd_config_t **config, const char *s,
> + size_t size)
> +{
This is also in mixer-test; we should pull it into a helper library too.
That's something that could be done separately/incrementally.
> + for (i = 0; i < 4; i++) {
> +
> + snd_pcm_drain(handle);
> + ms = timestamp_diff_ms(&tstamp);
> + if (ms < 3900 || ms > 4100) {
It feels like the runtime might be usefully parameterised here - there's
a tradeoff between detecting inaccurate clocks and overall runtime that
people might want to make.
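Just to make the suggestion concrete, something along these lines (the
override name is purely illustrative, not something in the patch):

#include <stdlib.h>

/* Illustrative only: derive the test length from an override (or from
 * the configuration file) instead of hardcoding 4 seconds; the knob
 * name here is invented. */
static long test_duration_ms(void)
{
        const char *env = getenv("PCM_TEST_DURATION_S");
        long secs = env ? atol(env) : 4;    /* default stays at 4 seconds */

        return (secs > 0 ? secs : 4) * 1000;
}

The timing check could then compare against test_duration_ms() - 100 and
test_duration_ms() + 100, or scale the tolerance with the duration.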
> + ksft_set_plan(num_missing + num_pcms * TESTS_PER_PCM);
> + for (pcm = pcm_missing; pcm != NULL; pcm = pcm->next) {
> + ksft_test_result(false, "test.missing.%d.%d.%d.%s\n",
> + pcm->card, pcm->device, pcm->subdevice,
> + snd_pcm_stream_name(pcm->stream));
> + }
We don't seem to report a successful test.missing anywhere (like
find_pcms()), so if we ever hit a test.missing it'll look like a new
test; old test runs won't have logged the failure. That can change how
people look at any failures that crop up: "it's new and never worked" is
different to "this used to work", and people are likely to just be
running kselftest rather than specifically knowing this test. It'd be
better if we counted the cards in the config and used that for our
expected number of test.missings, logging the cards that we do find here
as well.
> + for (pcm = pcm_list; pcm != NULL; pcm = pcm->next) {
> + test_pcm_time1(pcm, "test.time1", "S16_LE", 48000, 2, 512, 4096);
> + test_pcm_time1(pcm, "test.time2", "S16_LE", 48000, 2, 24000, 192000);
> + }
It does feel like, especially in the case where no configuration is
specified, we should be enumerating what the card can do and both
potentially doing more tests (though there's obviously an execution time
tradeoff with going overboard there) and skipping configurations that
the card never claimed to support in the first place. In particular I'm
expecting we'll see some cards that only do either 44.1kHz or 48kHz and
will get spurious failures by default, and I'd like to see coverage of
mono playback on cards that claim to support it, because I suspect
there's a bunch of them that don't actually do the right thing.
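For what it's worth, a minimal sketch of the kind of constraint query I
mean, using standard alsa-lib hw_params calls (the device name and
values are only examples):

#include <alsa/asoundlib.h>

/* Illustrative only: probe whether the hardware claims to support a
 * rate and channel count before running a test with it. */
static int pcm_supports(const char *name, unsigned int rate,
                        unsigned int channels)
{
        snd_pcm_t *pcm;
        snd_pcm_hw_params_t *params;
        int ok = 0;

        if (snd_pcm_open(&pcm, name, SND_PCM_STREAM_PLAYBACK,
                         SND_PCM_NONBLOCK) < 0)
                return 0;

        snd_pcm_hw_params_alloca(&params);
        if (snd_pcm_hw_params_any(pcm, params) >= 0 &&
            snd_pcm_hw_params_test_rate(pcm, params, rate, 0) >= 0 &&
            snd_pcm_hw_params_test_channels(pcm, params, channels) >= 0)
                ok = 1;

        snd_pcm_close(pcm);
        return ok;
}

Something like pcm_supports("hw:0,0", 44100, 1) could then gate the mono
and 44.1kHz cases rather than letting them fail.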
Like I say, most of this could be done incrementally if we decide it
needs to get done at all; we shouldn't let perfect be the enemy
of good.