On Mon, 2024-11-11 at 12:12 +0000, Vlastimil Babka wrote:
> On 10/31/24 10:57, David Hildenbrand wrote:
>> On 30.10.24 14:49, Patrick Roy wrote:
From: "Mike Rapoport (Microsoft)" rppt@kernel.org
>>>
>>> Add an API that will allow updates of the direct/linear map for a set
>>> of physically contiguous pages.
>>>
>>> It will be used in the following patches.
>>>
>>> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>>> Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
>>
>> [...]
>>
>>>  #ifdef CONFIG_DEBUG_PAGEALLOC
>>>  void __kernel_map_pages(struct page *page, int numpages, int enable)
>>>  {
>>> diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
>>> index e7aec20fb44f1..3030d9245f5ac 100644
>>> --- a/include/linux/set_memory.h
>>> +++ b/include/linux/set_memory.h
>>> @@ -34,6 +34,12 @@ static inline int set_direct_map_default_noflush(struct page *page)
>>>  	return 0;
>>>  }
>>> +static inline int set_direct_map_valid_noflush(struct page *page,
>>> +					       unsigned nr, bool valid)
>> I recall that "unsigned" is frowned upon; "unsigned int".
>>> +{
>>> +	return 0;
>>> +}
>> Can we add some kernel doc for this?
>>
>> In particular
>>
>> (a) What does it mean when we return 0? That it worked? Then, this
> Seems so.
>> dummy function looks wrong. Or does it return the
> That's !CONFIG_ARCH_HAS_SET_DIRECT_MAP and other functions around do it
> the same way. Looks like the current callers can only exist with the
> CONFIG_ enabled in the first place.
Yeah, it looks a bit weird, but these functions seem to generally return
0 if the operation is not supported. ARM specifically has
	if (!can_set_direct_map())
		return 0;
inside `set_direct_map_{invalid,default}_noflush`. Documenting this
definitely cannot hurt, I'll keep it on my todo list for the next
iteration :)
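To make the return semantics explicit, maybe something like the below
for the !CONFIG_ARCH_HAS_SET_DIRECT_MAP stub (just a rough sketch of
what I have in mind, exact kernel-doc wording still to be figured out,
with the "unsigned int" fix folded in):

/**
 * set_direct_map_valid_noflush - set the validity of a physically
 *                                contiguous range of direct map pages
 * @page: first page of the range
 * @nr: number of pages to update
 * @valid: whether to mark the range present (true) or not (false)
 *
 * Stub for !CONFIG_ARCH_HAS_SET_DIRECT_MAP. Returns 0 meaning "success"
 * (there is nothing to do to the direct map), mirroring the existing
 * set_direct_map_{invalid,default}_noflush() stubs; it is not a count
 * of processed entries.
 *
 * Return: 0 on success, a negative error code on failure.
 */
static inline int set_direct_map_valid_noflush(struct page *page,
					       unsigned int nr, bool valid)
{
	return 0;
}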
>> number of processed entries? Then we'd have a possible "int" vs.
>> "unsigned int" inconsistency.
>>
>> (b) What are the semantics when we fail halfway through the operation
>> when processing nr > 1? Is it "all or nothing"?
> Looking at the x86 implementation, it seems like it can just bail out
> in the middle, but then I'm not sure if it can really fail in the
> middle, hmm...
If I understood Mike correctly when talking about this at LPC, it can
only fail if, during break-up of huge mappings, it fails to allocate the
page tables needed to hold the lower-granularity mappings (and that
allocation happens before any present bits are modified).
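So my mental model of the ordering is roughly the below (hypothetical
helper names purely for illustration, not the actual x86 code):

static int sketch_set_direct_map_valid(unsigned long addr,
					unsigned int nr, bool valid)
{
	int err;

	/* Splitting huge mappings may need to allocate page tables... */
	err = sketch_split_huge_mappings(addr, nr); /* can fail, e.g. -ENOMEM */
	if (err)
		return err; /* no present bits have been touched yet */

	/* ...and only afterwards are the present bits actually flipped. */
	sketch_update_present_bits(addr, nr, valid); /* cannot fail */
	return 0;
}

If that understanding is right, failures are effectively "all or
nothing": either the whole range gets updated, or the direct map is
left untouched.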
Best,
Patrick