Hari Bathini <hbathini@linux.vnet.ibm.com> writes:
On Monday 06 August 2018 09:52 AM, Mahesh Jagannath Salgaonkar wrote:
On 07/31/2018 07:26 PM, Hari Bathini wrote:
Crash memory ranges is an array of memory ranges of the crashing kernel, to be exported as a dump via the /proc/vmcore file. The size of the array is set based on INIT_MEMBLOCK_REGIONS, which works fine in most cases, where the memblock memory regions count is less than the INIT_MEMBLOCK_REGIONS value. But this count can grow beyond the INIT_MEMBLOCK_REGIONS value since commit 142b45a72e22 ("memblock: Add array resizing support").
...
Fixes: 2df173d9e85d ("fadump: Initialize elfcore header and add PT_LOAD program headers.")
Cc: stable@vger.kernel.org
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/include/asm/fadump.h |    2 +
 arch/powerpc/kernel/fadump.c      |   63 ++++++++++++++++++++++++++++++++++---
 2 files changed, 59 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
index 5a23010..ff708b3 100644
--- a/arch/powerpc/include/asm/fadump.h
+++ b/arch/powerpc/include/asm/fadump.h
@@ -196,7 +196,7 @@ struct fadump_crash_info_header {
 };
 
 /* Crash memory ranges */
-#define INIT_CRASHMEM_RANGES	(INIT_MEMBLOCK_REGIONS + 2)
+#define INIT_CRASHMEM_RANGES	INIT_MEMBLOCK_REGIONS
 
 struct fad_crash_memory_ranges {
 	unsigned long long	base;
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 07e8396..1c1df4f 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
...
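
For context, here is a minimal sketch of the kind of on-demand growth the elided fadump.c changes describe; the helper names, the doubling policy and the bookkeeping variables are illustrative and not the patch's actual code:

#include <linux/slab.h>
#include <linux/string.h>
#include <asm/fadump.h>	/* struct fad_crash_memory_ranges, INIT_CRASHMEM_RANGES */

/* Static array serves the common case; a heap copy takes over if it fills up. */
static struct fad_crash_memory_ranges static_ranges[INIT_CRASHMEM_RANGES];
static struct fad_crash_memory_ranges *crash_memory_ranges = static_ranges;
static int max_crash_ranges = INIT_CRASHMEM_RANGES;
static int crash_range_count;

static int fadump_grow_crash_ranges(void)
{
	struct fad_crash_memory_ranges *new_ranges;
	int new_max = max_crash_ranges * 2;

	new_ranges = kmalloc_array(new_max, sizeof(*new_ranges), GFP_KERNEL);
	if (!new_ranges)
		return -ENOMEM;

	/* Carry over the ranges collected so far, then switch to the new buffer. */
	memcpy(new_ranges, crash_memory_ranges,
	       crash_range_count * sizeof(*new_ranges));
	if (crash_memory_ranges != static_ranges)
		kfree(crash_memory_ranges);

	crash_memory_ranges = new_ranges;
	max_crash_ranges = new_max;
	return 0;
}

static int fadump_add_crash_range(unsigned long long base,
				  unsigned long long size)
{
	if (crash_range_count == max_crash_ranges &&
	    fadump_grow_crash_ranges())
		return -ENOMEM;

	crash_memory_ranges[crash_range_count].base = base;
	crash_memory_ranges[crash_range_count].size = size;
	crash_range_count++;
	return 0;
}

The point is simply that the static array keeps serving the common case, and dynamic allocation kicks in only once the memblock region count outgrows it.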
Also, along with this change, should we also double the initial array size (e.g. INIT_CRASHMEM_RANGES * 2) to reduce the chances of having to fall back to memory allocation?
Agreed that doubling the static array size reduces the likelihood of needing dynamic array resizing. Will do that.
Nonetheless, if we ever get to the point where a 2K memory allocation fails on a system with that many memory ranges, the kernel likely has more fundamental problems to deal with first :)
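
In other words, something along these lines (a sketch of the suggestion only; the multiplier actually used is up to the respin):

/* Illustrative: start with twice the memblock region count so the
 * common case never needs to fall back to a runtime allocation.
 */
#define INIT_CRASHMEM_RANGES	(INIT_MEMBLOCK_REGIONS * 2)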
Yes, this all seems a bit silly.
Why not just allocate a 64K page and be done with it?
AFAICS we're not being called too early to do that, and if you can't allocate a single page then the system is going to OOM anyway.
cheers
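
A rough sketch of that single-allocation idea, with made-up names and a plain kmalloc() of 64K (one page on a 64K-page ppc64 kernel):

#include <linux/sizes.h>
#include <linux/slab.h>
#include <asm/fadump.h>	/* struct fad_crash_memory_ranges */

static struct fad_crash_memory_ranges *crash_memory_ranges;
static int max_crash_ranges;

static int fadump_alloc_crash_ranges(void)
{
	/* One 64K buffer up front; no resizing path to maintain. */
	crash_memory_ranges = kmalloc(SZ_64K, GFP_KERNEL);
	if (!crash_memory_ranges)
		return -ENOMEM;

	max_crash_ranges = SZ_64K / sizeof(struct fad_crash_memory_ranges);
	return 0;
}

At 16 bytes per entry that is room for about 4096 ranges, far more than memblock will realistically ever track, with no resizing logic to get wrong.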