According to the error message, the first argument of ptrace() should be
PTRACE_SINGLESTEP instead of PTRACE_CONT when single-stepping the child.
Fixes: f43365ee17f8 ("selftests: arm64: add test for unaligned/inexact watchpoint handling")
Signed-off-by: Tiezhu Yang <yangtiezhu(a)loongson.cn>
---
tools/testing/selftests/breakpoints/breakpoint_test_arm64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/breakpoints/breakpoint_test_arm64.c b/tools/testing/selftests/breakpoints/breakpoint_test_arm64.c
index ad41ea6..2f4d4d6 100644
--- a/tools/testing/selftests/breakpoints/breakpoint_test_arm64.c
+++ b/tools/testing/selftests/breakpoints/breakpoint_test_arm64.c
@@ -143,7 +143,7 @@ static bool run_test(int wr_size, int wp_size, int wr, int wp)
if (!set_watchpoint(pid, wp_size, wp))
return false;
- if (ptrace(PTRACE_CONT, pid, NULL, NULL) < 0) {
+ if (ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL) < 0) {
ksft_print_msg(
"ptrace(PTRACE_SINGLESTEP) failed: %s\n",
strerror(errno));
--
2.1.0
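(Not part of the patch: for context, a minimal userspace sketch of the
ptrace single-step protocol the test relies on. All names below are
illustrative and not taken from the selftest.)

	#include <signal.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		int status;
		pid_t pid = fork();

		if (pid == 0) {
			/* Child: request tracing, stop so the parent can sync up. */
			ptrace(PTRACE_TRACEME, 0, NULL, NULL);
			raise(SIGSTOP);
			_exit(0);
		}

		waitpid(pid, &status, 0);	/* wait for the initial SIGSTOP */

		/*
		 * PTRACE_SINGLESTEP resumes the child for exactly one
		 * instruction and stops it again with SIGTRAP; PTRACE_CONT
		 * would let it run until the next signal or exit.
		 */
		while (ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL) == 0) {
			waitpid(pid, &status, 0);
			if (WIFEXITED(status))
				break;
		}
		return 0;
	}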
Changelog
---------
v9
- Renamed gpf_to_alloc_flags() to gfp_to_alloc_flags_cma(); thanks to
  Lecopzer Chen for noticing.
- Fixed a warning reported by scripts/checkpatch.pl:
  "Logical continuations should be on the previous line"
v8
- Added Reviewed-by's from John Hubbard
- Fixed subjects for selftests patches
- Moved the zero page check inside is_pinnable_page(), as requested by
  Jason Gunthorpe.
v7
- Added reviewed-by's
- Fixed a compile bug on non-mmu builds reported by robot
v6
A small update, but I wanted to send it out sooner, as it removes a
controversial patch and replaces it with something sane.
- Removed forcing FOLL_WRITE for longterm gup; instead, added a patch
  to skip zero pages during migration.
- Added reviewed-by's and minor log changes.
v5
- Added the following patches to the beginning of the series; they fix
  other existing problems in the CMA migration code:
mm/gup: check every subpage of a compound page during isolation
mm/gup: return an error on migration failure
mm/gup: check for isolation errors also at the beginning of series
mm/gup: do not allow zero page for pinned pages
- Removed the .gfp_mask/.reclaim_idx changes from mm/vmscan.c
- Updated the movable zone header comment in patch 8 instead of patch 3,
  and fixed the comment
- Added acks and sign-offs
- Updated commit logs based on feedback
- Addressed issues reported by Michal and Jason.
- Remove:
    #define PINNABLE_MIGRATE_MAX 10
    #define PINNABLE_ISOLATE_MAX 100
  Instead: fail on the first migration failure, and retry isolation
  forever, as isolation failures are transient.
- In the selftests, addressed some of the comments from John Hubbard,
  updated commit logs, and added comments. Renamed gup->flags to
  gup->test_flags.
v4
- Addressed page migration comments. New patch:
    mm/gup: limit number of gup migration failures, honor failures
  It limits the number of retries for migration failures, and also
  checks for isolation failures.
  Added a test case to gup_test to verify that pages are never
  long-term pinned in a movable zone, and also added tests that fault
  both in the kernel and in userland.
v3
- Merged with linux-next, which contains a clean-up patch from Jason;
  this series is therefore reduced by two patches that did the same
  thing.
v2
- Addressed all review comments
- Added Reviewed-by's.
- Renamed PF_MEMALLOC_NOMOVABLE to PF_MEMALLOC_PIN
- Added is_pinnable_page() to check if a page can be long-term pinned
- Fixed the gup fast path by checking is_in_pinnable_zone()
- Renamed cma_page_list to movable_page_list
- Added an admin-guide note about handling pinned pages in ZONE_MOVABLE,
  and updated the caveat about pinned pages in linux/mmzone.h
- Moved current_gfp_context() to the fast path
---------
When a page is pinned it cannot be moved, and its physical address
stays the same until the page is unpinned.
This functionality is useful because it allows userland to implement
DMA access. For example, it is used by vfio in vfio_pin_pages().
However, it breaks the memory hotplug/hotremove assumption that pages
in ZONE_MOVABLE can always be migrated.
This patch series fixes the issue by forcing new allocations during
page pinning to omit ZONE_MOVABLE, and by migrating any existing pages
out of ZONE_MOVABLE during pinning.
It uses the same logic that is currently used by CMA, and extends it
to all allocations.
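For orientation, here is a sketch of the two central checks described
above. It is simplified to the pinning case (the real
current_gfp_context() also handles other PF_MEMALLOC_* flags), and
helper names may differ in the actual patches:

	/*
	 * Sketch: a task that is pinning pages runs with PF_MEMALLOC_PIN
	 * set, and the allocator strips __GFP_MOVABLE so new allocations
	 * cannot land in ZONE_MOVABLE.
	 */
	static inline gfp_t current_gfp_context(gfp_t flags)
	{
		if (current->flags & PF_MEMALLOC_PIN)
			flags &= ~__GFP_MOVABLE;
		return flags;
	}

	/*
	 * Sketch: a page that already sits in ZONE_MOVABLE or a CMA
	 * region is not pinnable and must be migrated out first.
	 */
	static inline bool is_pinnable_page(struct page *page)
	{
		return zone_idx(page_zone(page)) != ZONE_MOVABLE &&
		       !is_migrate_cma_page(page);
	}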
For more information read the discussion [1] about this problem.
[1] https://lore.kernel.org/lkml/CA+CK2bBffHBxjmb9jmSKacm0fJMinyt3Nhk8Nx6iudcQS…
Previous versions:
v1
https://lore.kernel.org/lkml/20201202052330.474592-1-pasha.tatashin@soleen.…
v2
https://lore.kernel.org/lkml/20201210004335.64634-1-pasha.tatashin@soleen.c…
v3
https://lore.kernel.org/lkml/20201211202140.396852-1-pasha.tatashin@soleen.…
v4
https://lore.kernel.org/lkml/20201217185243.3288048-1-pasha.tatashin@soleen…
v5
https://lore.kernel.org/lkml/20210119043920.155044-1-pasha.tatashin@soleen.…
v6
https://lore.kernel.org/lkml/20210120014333.222547-1-pasha.tatashin@soleen.…
v7
https://lore.kernel.org/lkml/20210122033748.924330-1-pasha.tatashin@soleen.…
v8
https://lore.kernel.org/lkml/20210125194751.1275316-1-pasha.tatashin@soleen…
Pavel Tatashin (14):
mm/gup: don't pin migrated cma pages in movable zone
mm/gup: check every subpage of a compound page during isolation
mm/gup: return an error on migration failure
mm/gup: check for isolation errors
mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
mm: apply per-task gfp constraints in fast path
mm: honor PF_MEMALLOC_PIN for all movable pages
mm/gup: do not migrate zero page
mm/gup: migrate pinned pages out of movable zone
memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning
mm/gup: change index type to long as it counts pages
mm/gup: longterm pin migration cleanup
selftests/vm: gup_test: fix test flag
selftests/vm: gup_test: test faulting in kernel, and verify pinnable
pages
.../admin-guide/mm/memory-hotplug.rst | 9 +
include/linux/migrate.h | 1 +
include/linux/mm.h | 12 ++
include/linux/mmzone.h | 13 +-
include/linux/pgtable.h | 3 +-
include/linux/sched.h | 2 +-
include/linux/sched/mm.h | 27 +--
include/trace/events/migrate.h | 3 +-
mm/gup.c | 174 ++++++++----------
mm/gup_test.c | 29 +--
mm/gup_test.h | 3 +-
mm/hugetlb.c | 4 +-
mm/page_alloc.c | 33 ++--
tools/testing/selftests/vm/gup_test.c | 36 +++-
14 files changed, 190 insertions(+), 159 deletions(-)
--
2.25.1
What the short summary says now is that this commit would make the
existing code use the vDSO base address. It's already doing that.
You could instead just say "Use getauxval() to simplify the code".
Also, I'd prefer proper use of upper and lower case letters, e.g. vDSO
instead of vdso.
On Sun, Jan 24, 2021 at 02:29:03PM +0800, Tianjia Zhang wrote:
> This patch uses the library function `getauxval(AT_SYSINFO_EHDR)`
> instead of the custom function `vdso_get_base_addr` to obtain the
Use either double or single quotation marks instead of backticks.
> base address of vDSO, which will simplify the code implementation.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang(a)linux.alibaba.com>
This needs to be in imperative form, e.g. "Simplify the code
implementation by using getauxval() instead of a custom function."
> ---
> tools/testing/selftests/sgx/main.c | 24 ++++--------------------
> 1 file changed, 4 insertions(+), 20 deletions(-)
>
> diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
> index 724cec700926..365d01dea67b 100644
> --- a/tools/testing/selftests/sgx/main.c
> +++ b/tools/testing/selftests/sgx/main.c
> @@ -15,6 +15,7 @@
> #include <sys/stat.h>
> #include <sys/time.h>
> #include <sys/types.h>
> +#include <sys/auxv.h>
> #include "defines.h"
> #include "main.h"
> #include "../kselftest.h"
> @@ -28,24 +29,6 @@ struct vdso_symtab {
> Elf64_Word *elf_hashtab;
> };
>
> -static void *vdso_get_base_addr(char *envp[])
> -{
> - Elf64_auxv_t *auxv;
> - int i;
> -
> - for (i = 0; envp[i]; i++)
> - ;
> -
> - auxv = (Elf64_auxv_t *)&envp[i + 1];
> -
> - for (i = 0; auxv[i].a_type != AT_NULL; i++) {
> - if (auxv[i].a_type == AT_SYSINFO_EHDR)
> - return (void *)auxv[i].a_un.a_val;
> - }
> -
> - return NULL;
> -}
> -
> static Elf64_Dyn *vdso_get_dyntab(void *addr)
> {
> Elf64_Ehdr *ehdr = addr;
> @@ -162,7 +145,7 @@ static int user_handler(long rdi, long rsi, long rdx, long ursp, long r8, long r
> return 0;
> }
>
> -int main(int argc, char *argv[], char *envp[])
> +int main(int argc, char *argv[])
> {
> struct sgx_enclave_run run;
> struct vdso_symtab symtab;
> @@ -203,7 +186,8 @@ int main(int argc, char *argv[], char *envp[])
> memset(&run, 0, sizeof(run));
> run.tcs = encl.encl_base;
>
> - addr = vdso_get_base_addr(envp);
> + /* Get vDSO base address */
> + addr = (void *)(uintptr_t)getauxval(AT_SYSINFO_EHDR);
You could just cast the result directly to void *.
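For example (untested), the call then collapses to a single cast:

	addr = (void *)getauxval(AT_SYSINFO_EHDR);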
> if (!addr)
> goto err;
>
> --
> 2.19.1.3.ge56e4f7
>
>
/Jarkko