On Wed 08-12-21 09:24:39, David Hildenbrand wrote:
On 08.12.21 09:12, Michal Hocko wrote:
On Tue 07-12-21 19:03:28, David Hildenbrand wrote:
On 07.12.21 18:17, Alexey Makhalov wrote:
On Dec 7, 2021, at 9:13 AM, David Hildenbrand david@redhat.com wrote:
On 07.12.21 18:02, Alexey Makhalov wrote:
On Dec 7, 2021, at 8:36 AM, Michal Hocko mhocko@suse.com wrote:
On Tue 07-12-21 17:27:29, Michal Hocko wrote:
[...]
So your proposal is to drop set_node_online from the patch and add it as a separate one which handles
- the sysfs part (i.e. do not register a node which doesn't span a physical address space)
- the hotplug side of it (drop the pgdat allocation, register the node lazily when the first memblocks are registered)

In other words, the first stage:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..f9024ba09c53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6382,7 +6382,11 @@ static void __build_all_zonelists(void *data)
 	if (self && !node_online(self->node_id)) {
 		build_zonelists(self);
 	} else {
-		for_each_online_node(nid) {
+		/*
+		 * All possible nodes have pgdat preallocated
+		 * in free_area_init()
+		 */
+		for_each_node(nid) {
 			pg_data_t *pgdat = NODE_DATA(nid);
 
 			build_zonelists(pgdat);
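For reference, the behavioral difference between the two iterators in that hunk comes from include/linux/nodemask.h (shown abridged; details can vary across kernel versions):

/* include/linux/nodemask.h (abridged) */
#define for_each_node_state(__node, __state)			\
	for ((__node) = first_node(node_states[__state]);	\
	     (__node) < MAX_NUMNODES;				\
	     (__node) = next_node((__node), node_states[__state]))

/* every node that could ever exist on this system */
#define for_each_node(node)	   for_each_node_state(node, N_POSSIBLE)
/* only nodes that are currently online */
#define for_each_online_node(node) for_each_node_state(node, N_ONLINE)

So the patch builds zonelists for every possible node, not just the online ones, which is what raises the question below about memory usage for never-onlined nodes.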
Will it blow up memory usage for the nodes which might never be onlined? I prefer the idea of init on demand.
Even now there is an existing problem. In my experiments, I observed a _huge_ memory consumption increase when increasing the number of possible NUMA nodes. I’m going to report it in a separate mail thread.
I already raised that PPC might be problematic in that regard. Which architecture / setup do you have in mind that can have a lot of possible nodes?
It is an x86_64 VMware VM, not a regular one, but specially configured: 1 vCPU per node, with hot-plug support and 128 possible nodes.
I thought the pgdat would be smaller but I just gave it a test:
Yes, pgdat is quite large! Just the embedded zones can eat a lot.
On my system, pg_data_t is 173824 bytes. So 128 nodes would correspond to 21 MiB, which is indeed a lot. I assume it's due to "struct zonelist", which has MAX_ZONES_PER_ZONELIST == (MAX_NUMNODES * MAX_NR_ZONES) zone references ...
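For reference, the sizing David refers to comes from these definitions (an abridged sketch of include/linux/mmzone.h; the exact layout may differ between kernel versions):

/* include/linux/mmzone.h (abridged) */
#define MAX_ZONES_PER_ZONELIST (MAX_NUMNODES * MAX_NR_ZONES)

struct zoneref {
	struct zone *zone;	/* pointer to the actual zone */
	int zone_idx;		/* zone_idx(zoneref->zone) */
};

struct zonelist {
	/* one zoneref per (node, zone) pair, plus a NULL delimiter */
	struct zoneref _zonerefs[MAX_ZONES_PER_ZONELIST + 1];
};

Every pgdat embeds its zonelists, so the per-node cost scales with the compile-time MAX_NUMNODES regardless of how many nodes are actually populated.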
This is what pahole tells me:

struct pglist_data {
	struct zone     node_zones[4] __attribute__((__aligned__(64))); /*     0  5632 */
	/* --- cacheline 88 boundary (5632 bytes) --- */
	struct zonelist node_zonelists[1];                              /*  5632    80 */
	[...]

	/* size: 6400, cachelines: 100, members: 27 */
	/* sum members: 6369, holes: 5, sum holes: 31 */
with my particular config (which is !NUMA). I haven't really checked whether there are other places that might scale with MAX_NUMNODES or something like that.
Anyway, is 21MB of wasted space for a 128-node machine really noteworthy?
I think we might soon see setups (again, CXL is an example, but also when providing a dynamic amount of performance-differentiated memory via virtio-mem) where this will most probably matter. With performance-differentiated memory we'll see a lot more nodes getting used in general, and a lot more nodes eventually getting hotplugged.
There are certainly machines with many nodes. E.g. SLES kernels are built with CONFIG_NODES_SHIFT=10, which is a lot of potential nodes. And I have seen really large machines with many nodes, but those usually come with a lot of memory and do not tend to have unpopulated nodes AFAIR.
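To put CONFIG_NODES_SHIFT=10 into rough numbers (assuming MAX_NR_ZONES == 4 and a 16-byte struct zoneref on 64-bit, both of which are config-dependent):

/*
 * MAX_NUMNODES            = 1 << CONFIG_NODES_SHIFT = 1024
 * MAX_ZONES_PER_ZONELIST  = 1024 * 4                = 4096
 * sizeof(struct zonelist) ~= (4096 + 1) * 16        ~= 64 KiB
 *
 * per zonelist, embedded in every possible node's pgdat,
 * whether or not that node is ever populated.
 */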
Whether 128 nodes is realistic, I cannot tell.
We could optimize by allocating some members dynamically. For example, we'll never need MAX_NUMNODES entries, but only the number of possible nodes.
Yes, agreed. Scaling with MAX_NUMNODES is almost always wasteful.
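A minimal sketch of that idea, sizing the zoneref array from the runtime node count instead of MAX_NUMNODES (the names dyn_zonelist and dyn_zonelist_alloc are hypothetical, not actual kernel code; nr_node_ids, kcalloc, and MAX_NR_ZONES do exist in the kernel):

/*
 * Hypothetical sketch: allocate zonerefs for the number of
 * possible nodes (nr_node_ids) known at boot, rather than
 * embedding a MAX_NUMNODES-sized array in every pgdat.
 */
struct dyn_zonelist {
	unsigned int nr_zonerefs;
	struct zoneref *_zonerefs;	/* allocated once at boot */
};

static int __init dyn_zonelist_alloc(struct dyn_zonelist *zl)
{
	/* +1 for the NULL-delimiting entry, as in the static variant */
	zl->nr_zonerefs = nr_node_ids * MAX_NR_ZONES + 1;
	zl->_zonerefs = kcalloc(zl->nr_zonerefs, sizeof(*zl->_zonerefs),
				GFP_KERNEL);
	return zl->_zonerefs ? 0 : -ENOMEM;
}

With CONFIG_NODES_SHIFT=10 but only a handful of possible nodes, this would shrink each zonelist from tens of KiB to a few hundred bytes.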