Hi Daniel,

On 3/14/2018 6:42 AM, Daniel Vacek wrote:
On some architectures (reported on arm64) commit 864b75f9d6b01 ("mm/page_alloc: fix memmap_init_zone pageblock alignment") causes a boot hang. This patch fixes the hang by making sure the alignment never steps back.
Link: http://lkml.kernel.org/r/0485727b2e82da7efbce5f6ba42524b429d0391a.1520011945...
Fixes: 864b75f9d6b01 ("mm/page_alloc: fix memmap_init_zone pageblock alignment")
Signed-off-by: Daniel Vacek <neelx@redhat.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
 mm/page_alloc.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3d974cb2a1a1..e033a6895c6f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5364,9 +5364,14 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			 * is not. move_freepages_block() can shift ahead of
 			 * the valid region but still depends on correct page
 			 * metadata.
+			 * Also make sure we never step back.
 			 */
-			pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
+			unsigned long next_pfn;
+
+			next_pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
 					~(pageblock_nr_pages-1)) - 1;
+			if (next_pfn > pfn)
+				pfn = next_pfn;
It didn't resolve the boot hang issue on my arm64 server. What if memblock_next_valid_pfn(pfn, end_pfn) returns 32 and pageblock_nr_pages is 8192? Then next_pfn will be (unsigned long)-1, which is larger than pfn, so pfn is set to (unsigned long)-1 and there is still an infinite loop here.
Cheers,
Jia He
 #endif
 			continue;
 		}