From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index cbcac03c0e0d..a6034645d6f7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -392,15 +392,18 @@ void __init arm64_memblock_init(void)
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+		int parange = cpuid_feature_extract_unsigned_field(
+					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+		s64 range = linear_region_size -
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
 
 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
+		 * margin, the size of the region that the physical memory can
+		 * span, randomize the linear region as well.
 		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
 			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
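For readers without the surrounding kernel sources handy, here is a minimal stand-alone sketch of the arithmetic performed by the hunk above. It is not kernel code: ARM64_MEMSTART_ALIGN, the linear region size, the seed value and the PARange decode table below are illustrative assumptions (the real values depend on the kernel's page size and VA configuration); only the shape of the computation follows the patch.

/*
 * Stand-alone sketch of the randomization arithmetic; not kernel code.
 * ARM64_MEMSTART_ALIGN, the linear region size, the seed and the PARange
 * decode table are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define ARM64_MEMSTART_ALIGN	(1ULL << 30)	/* placeholder: 1 GiB */

/* ID_AA64MMFR0_EL1.PARange encoding -> physical address bits (Arm ARM). */
static int parange_to_phys_shift(int parange)
{
	static const int shifts[] = { 32, 36, 40, 42, 44, 48, 52 };

	return (parange >= 0 && parange < 7) ? shifts[parange] : 48;
}

int main(void)
{
	uint64_t linear_region_size = 1ULL << 47;	/* assume 48-bit VA: 128 TiB linear region */
	uint16_t memstart_offset_seed = 0xabcd;		/* assume a non-zero 16-bit KASLR seed */
	int parange = 2;				/* 0b0010 -> 40-bit PA */
	uint64_t offset = 0;

	/* After the patch: leave room for everything the CPU could ever address. */
	int64_t range = (int64_t)linear_region_size -
			(1LL << parange_to_phys_shift(parange));

	if (memstart_offset_seed > 0 && range >= (int64_t)ARM64_MEMSTART_ALIGN) {
		range /= ARM64_MEMSTART_ALIGN;
		offset = ARM64_MEMSTART_ALIGN *
			 (((uint64_t)range * memstart_offset_seed) >> 16);
	}

	printf("linear map start pulled down by %llu GiB\n",
	       (unsigned long long)(offset >> 30));
	return 0;
}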
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 80cc79760e8e..09c219aa9d78 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -401,15 +401,18 @@ void __init arm64_memblock_init(void)
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+		int parange = cpuid_feature_extract_unsigned_field(
+					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+		s64 range = linear_region_size -
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
 
 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
+		 * margin, the size of the region that the physical memory can
+		 * span, randomize the linear region as well.
 		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
 			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
On 1/9/25 08:54, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
Forgot to update the patch subject, but this one is for 5.10.
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
thanks,
greg k-h
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
thanks,
greg k-h
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
On Mon, Jan 20, 2025 at 08:33:12AM -0800, Florian Fainelli wrote:
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
Why shouldn't the maintainers review a proposed backport patch for core kernel code that affects everyone who uses that arch?
thanks,
greg k-h
On 1/29/25 01:17, Greg KH wrote:
On Mon, Jan 20, 2025 at 08:33:12AM -0800, Florian Fainelli wrote:
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
Why shouldn't the maintainers review a proposed backport patch for core kernel code that affects everyone who uses that arch?
They should, but they are not, we can keep sending messages like those in the hope that someone does, but clearly that is not working at the moment.
This patch cherry-picked cleanly into 5.4 and 5.10; maybe they just trust whoever submits stable bugfixes to have done their due diligence, too. I don't know how to get that moving now, but it fixes a real problem we observed.
On Wed, 29 Jan 2025 at 18:45, Florian Fainelli florian.fainelli@broadcom.com wrote:
On 1/29/25 01:17, Greg KH wrote:
On Mon, Jan 20, 2025 at 08:33:12AM -0800, Florian Fainelli wrote:
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
Why shouldn't the maintainers review a proposed backport patch for core kernel code that affects everyone who uses that arch?
They should, but they are not, we can keep sending messages like those in the hope that someone does, but clearly that is not working at the moment.
This patch cherry-picked cleanly into 5.4 and 5.10; maybe they just trust whoever submits stable bugfixes to have done their due diligence, too. I don't know how to get that moving now, but it fixes a real problem we observed.
FWIW, I understand why this might be useful when running under a non-KVM hypervisor that relies on memory hotplug to perform resource balancing between VMs. But the upshot of this change is that existing systems that do not rely on memory hotplug at all will suddenly lose any randomization of the linear map if its CPU happens to be able to address more than ~40 bits of physical memory. So I'm not convinced this is a change we should make for these older kernels.
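To put rough numbers on the trade-off described above, here is a stand-alone illustration (not kernel code). It assumes a 39-bit VA kernel, where the linear region is 2^38 bytes, and uses 1 GiB as a stand-in for ARM64_MEMSTART_ALIGN; with a larger VA space the cutoff moves up accordingly.

/*
 * Rough illustration of why a wide PARange can defeat linear map
 * randomization after this patch. The sizes are assumptions: a 39-bit VA
 * kernel (256 GiB linear region) and 1 GiB for ARM64_MEMSTART_ALIGN.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const int64_t linear_region_size = 1LL << 38;	/* 39-bit VA: 256 GiB */
	const int64_t memstart_align = 1LL << 30;	/* stand-in for ARM64_MEMSTART_ALIGN */
	const int pa_bits[] = { 36, 40, 44, 48 };

	for (unsigned int i = 0; i < sizeof(pa_bits) / sizeof(pa_bits[0]); i++) {
		/* Slack left in the linear region once the full PA range must fit. */
		int64_t range = linear_region_size - (1LL << pa_bits[i]);

		printf("PA bits %2d: slack %6lld GiB -> %s\n", pa_bits[i],
		       (long long)(range / memstart_align),
		       range >= memstart_align ? "can randomize" : "no randomization");
	}
	return 0;
}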
On 1/29/25 14:15, Ard Biesheuvel wrote:
On Wed, 29 Jan 2025 at 18:45, Florian Fainelli florian.fainelli@broadcom.com wrote:
On 1/29/25 01:17, Greg KH wrote:
On Mon, Jan 20, 2025 at 08:33:12AM -0800, Florian Fainelli wrote:
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
Why shouldn't the maintainers review a proposed backport patch for core kernel code that affects everyone who uses that arch?
They should, but they are not, we can keep sending messages like those in the hope that someone does, but clearly that is not working at the moment.
This patch cherry-picked cleanly into 5.4 and 5.10; maybe they just trust whoever submits stable bugfixes to have done their due diligence, too. I don't know how to get that moving now, but it fixes a real problem we observed.
FWIW, I understand why this might be useful when running under a non-KVM hypervisor that relies on memory hotplug to perform resource balancing between VMs. But the upshot of this change is that existing systems that do not rely on memory hotplug at all will suddenly lose any randomization of the linear map if its CPU happens to be able to address more than ~40 bits of physical memory. So I'm not convinced this is a change we should make for these older kernels.
Are there other patches that we could backport in order not to lose the randomization in the linear range?
On Thu, 30 Jan 2025 at 00:31, Florian Fainelli florian.fainelli@broadcom.com wrote:
On 1/29/25 14:15, Ard Biesheuvel wrote:
On Wed, 29 Jan 2025 at 18:45, Florian Fainelli florian.fainelli@broadcom.com wrote:
On 1/29/25 01:17, Greg KH wrote:
On Mon, Jan 20, 2025 at 08:33:12AM -0800, Florian Fainelli wrote:
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
Why shouldn't the maintainers review a proposed backport patch for core kernel code that affects everyone who uses that arch?
They should, but they are not, we can keep sending messages like those in the hope that someone does, but clearly that is not working at the moment.
This patch cherry-picked cleanly into 5.4 and 5.10; maybe they just trust whoever submits stable bugfixes to have done their due diligence, too. I don't know how to get that moving now, but it fixes a real problem we observed.
FWIW, I understand why this might be useful when running under a non-KVM hypervisor that relies on memory hotplug to perform resource balancing between VMs. But the upshot of this change is that existing systems that do not rely on memory hotplug at all will suddenly lose any randomization of the linear map if its CPU happens to be able to address more than ~40 bits of physical memory. So I'm not convinced this is a change we should make for these older kernels.
Are there other patches that we could backport in order not to lose the randomization in the linear range?
No, this never got fixed. Only recently, I proposed some patches that allow the PARange field in the CPU id registers to be overridden, and this would also bring back the ability to randomize the linear map on CPUs with a wide PARange.
Android also enables memory hotplug, and so I didn't bother with preserving the old behavior when memory hotplug is disabled, and so linear map randomization has basically been disabled ever since (unless you are using an older core with only 40 physical address bits).
Nobody ever complained about losing this linear map randomization, but taking it away at this point from 5.4 and 5.10 goes a bit too far imo.
On 1/30/25 02:05, Ard Biesheuvel wrote:
On Thu, 30 Jan 2025 at 00:31, Florian Fainelli florian.fainelli@broadcom.com wrote:
On 1/29/25 14:15, Ard Biesheuvel wrote:
On Wed, 29 Jan 2025 at 18:45, Florian Fainelli florian.fainelli@broadcom.com wrote:
On 1/29/25 01:17, Greg KH wrote:
On Mon, Jan 20, 2025 at 08:33:12AM -0800, Florian Fainelli wrote:
On 1/20/2025 5:59 AM, Greg KH wrote:
On Mon, Jan 13, 2025 at 07:44:50AM -0800, Florian Fainelli wrote:
On 1/12/2025 3:54 AM, Greg KH wrote:
On Thu, Jan 09, 2025 at 09:01:13AM -0800, Florian Fainelli wrote:
On 1/9/25 08:54, Florian Fainelli wrote:
Forgot to update the patch subject, but this one is for 5.10.
You also forgot to tell us _why_ this is needed :(
This is explained in the second part of the first paragraph:
The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
We use both memory hotplug and KASLR on our systems and that's how we eventually found out about the bug.
And you still have 5.10.y ARM64 systems that need this? Why not move to a newer kernel version already?
We still have ARM64 systems running 5.4 that need this, and the same bug applies to 5.10 that we used to support but dropped in favor of 5.15/6.1. Those are the kernel versions used by Android, and Android TV in particular, so it's kind of the way it goes for us.
Anyway, I need an ack from the ARM64 maintainers that this is ok to apply here before I can take it.
Just out of curiosity, the change is pretty innocuous and simple to review, why the extra scrutiny needed here?
Why shouldn't the maintainers review a proposed backport patch for core kernel code that affects everyone who uses that arch?
They should, but they are not, we can keep sending messages like those in the hope that someone does, but clearly that is not working at the moment.
This patch cherry-picked cleanly into 5.4 and 5.10; maybe they just trust whoever submits stable bugfixes to have done their due diligence, too. I don't know how to get that moving now, but it fixes a real problem we observed.
FWIW, I understand why this might be useful when running under a non-KVM hypervisor that relies on memory hotplug to perform resource balancing between VMs. But the upshot of this change is that existing systems that do not rely on memory hotplug at all will suddenly lose any randomization of the linear map if its CPU happens to be able to address more than ~40 bits of physical memory. So I'm not convinced this is a change we should make for these older kernels.
Are there other patches that we could backport in order not to lose the randomization in the linear range?
No, this never got fixed. Only recently, I proposed some patches that allow the PARange field in the CPU id registers to be overridden, and this would also bring back the ability to randomize the linear map on CPUs with a wide PARange.
Android also enables memory hotplug, and so I didn't bother with preserving the old behavior when memory hotplug is disabled, and so linear map randomization has basically been disabled ever since (unless you are using an older core with only 40 physical address bits).
We are using Brahma-B53 cores with 5.4 primarily, which are architecturally equivalent to a Cortex-A53, where ID_AA64MMFR0_EL1.PARange = 0b0010 -> 40 bits only. The other platform that we use has a Cortex-A72 that supports up to 44 bits of PA, but that one could probably get a custom kernel with memory hotplug disabled.
Nobody ever complained about losing this linear map randomization, but taking it away at this point from 5.4 and 5.10 goes a bit too far imo.
Fair enough, thanks for the background!
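For reference, the PARange field quoted above lives in the bottom four bits of ID_AA64MMFR0_EL1. The stand-alone sketch below decodes it using the encoding table from the Arm ARM; it is an illustration only, not code taken from the kernel or from this thread.

/*
 * Illustration only: decode ID_AA64MMFR0_EL1.PARange (bits [3:0]) into a
 * physical address width, using the encodings defined by the Arm ARM.
 */
#include <stdint.h>
#include <stdio.h>

static unsigned int parange_to_pa_bits(uint64_t mmfr0)
{
	static const unsigned int bits[] = { 32, 36, 40, 42, 44, 48, 52 };
	unsigned int parange = mmfr0 & 0xf;	/* PARange occupies bits [3:0] */

	return parange < 7 ? bits[parange] : 0;	/* 0 = reserved/unknown encoding */
}

int main(void)
{
	/* 0b0010: Cortex-A53-class cores such as Brahma-B53 (40-bit PA). */
	printf("PARange 0b0010 -> %u-bit PA\n", parange_to_pa_bits(0x2));
	/* 0b0100: e.g. Cortex-A72 as mentioned above (44-bit PA). */
	printf("PARange 0b0100 -> %u-bit PA\n", parange_to_pa_bits(0x4));
	return 0;
}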
[ Sasha's backport helper bot ]
Hi,
The upstream commit SHA1 provided is correct: 97d6786e0669daa5c2f2d07a057f574e849dfd3e
WARNING: Author mismatch between patch and upstream commit:
Backport author: Florian Fainelli florian.fainelli@broadcom.com
Commit author: Ard Biesheuvel ardb@kernel.org

Status in newer kernel trees:
6.12.y | Present (exact SHA1)

Note: The patch differs from the upstream commit:
---
Failed to apply patch cleanly, falling back to interdiff...
---
Results of testing on various branches:
| Branch              | Patch Apply | Build Test |
|---------------------|-------------|------------|
| stable/linux-6.12.y | Failed      | N/A        |
| stable/linux-6.6.y  | Failed      | N/A        |
| stable/linux-6.1.y  | Failed      | N/A        |
| stable/linux-5.15.y | Failed      | N/A        |
| stable/linux-5.10.y | Success     | Success    |
| stable/linux-5.4.y  | Success     | Success    |
[ Sasha's backport helper bot ]
Hi,
The upstream commit SHA1 provided is correct: 97d6786e0669daa5c2f2d07a057f574e849dfd3e
WARNING: Author mismatch between patch and upstream commit:
Backport author: Florian Fainelli florian.fainelli@broadcom.com
Commit author: Ard Biesheuvel ardb@kernel.org

Status in newer kernel trees:
6.12.y | Present (exact SHA1)
6.6.y  | Present (exact SHA1)
6.1.y  | Present (exact SHA1)
5.15.y | Present (exact SHA1)
5.10.y | Not found
5.4.y  | Not found
Note: The patch differs from the upstream commit:
---
1:  97d6786e0669 ! 1:  fa6d576248a0 arm64: mm: account for hotplug memory when randomizing the linear region
    @@ Metadata
     ## Commit message ##
        arm64: mm: account for hotplug memory when randomizing the linear region

    +   commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
    +
        As a hardening measure, we currently randomize the placement of
        physical memory inside the linear region when KASLR is in effect.
        Since the random offset at which to place the available physical
    @@ Commit message
        Cc: Robin Murphy robin.murphy@arm.com
        Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
        Signed-off-by: Catalin Marinas catalin.marinas@arm.com
    +   Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com

     ## arch/arm64/mm/init.c ##
    @@ arch/arm64/mm/init.c: void __init arm64_memblock_init(void)
---
Results of testing on various branches:
| Branch              | Patch Apply | Build Test |
|---------------------|-------------|------------|
| stable/linux-5.4.y  | Success     | Success    |
On Thu, Jan 09, 2025 at 08:54:16AM -0800, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index cbcac03c0e0d..a6034645d6f7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -392,15 +392,18 @@ void __init arm64_memblock_init(void)
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+		int parange = cpuid_feature_extract_unsigned_field(
+					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+		s64 range = linear_region_size -
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
 
 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
+		 * margin, the size of the region that the physical memory can
+		 * span, randomize the linear region as well.
 		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
 			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
--
2.43.0
You are not providing any information as to WHY this is needed in stable kernels at all. It just looks like an unsolicited backport with no changes from upstream, yet no hint as to any bug it fixes.
And you all really have hotpluggable memory on systems that are running this old kernel? Why are they not using newer kernels if they need this? Surely lots of other bugs they need are resolved there, right?
thanks,
greg k-h
On 1/12/25 03:53, Greg KH wrote:
On Thu, Jan 09, 2025 at 08:54:16AM -0800, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index cbcac03c0e0d..a6034645d6f7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -392,15 +392,18 @@ void __init arm64_memblock_init(void)
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+		int parange = cpuid_feature_extract_unsigned_field(
+					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+		s64 range = linear_region_size -
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
 
 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
+		 * margin, the size of the region that the physical memory can
+		 * span, randomize the linear region as well.
 		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
 			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
--
2.43.0
You are not providing any information as to WHY this is needed in stable kernels at all. It just looks like an unsolicited backport with no changes from upstream, yet no hint as to any bug it fixes.
See the response in the other thread.
And you all really have hotpluggable memory on systems that are running this old kernel? Why are they not using newer kernels if they need this? Surely lots of other bugs they need are resolved there, right?
Believe it or not, memory hotplug works really well for us, in a somewhat limited configuration on the 5.4 kernel whereby we simply plug memory and never unplug it thereafter; still, we have not had to carry hotplug-related patches other than this one.
Trying to be a good citizen here: one of my colleagues identified an upstream fix that works, that we got tested, and that cherry-picked cleanly into both 5.4 and 5.10, so it's not even like there was any fuzz.
I was sort of hoping that, given my history of regularly testing stable kernels for the past years, as well as submitting a fair amount of targeted bug fixes to the stable branches, there would be some level of trust here.
Thanks
On Wed, Jan 29, 2025 at 10:05:29AM -0800, Florian Fainelli wrote:
On 1/12/25 03:53, Greg KH wrote:
On Thu, Jan 09, 2025 at 08:54:16AM -0800, Florian Fainelli wrote:
From: Ard Biesheuvel ardb@kernel.org
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed.
So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region.
Signed-off-by: Ard Biesheuvel ardb@kernel.org
Cc: Anshuman Khandual anshuman.khandual@arm.com
Cc: Will Deacon will@kernel.org
Cc: Steven Price steven.price@arm.com
Cc: Robin Murphy robin.murphy@arm.com
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Florian Fainelli florian.fainelli@broadcom.com
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index cbcac03c0e0d..a6034645d6f7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -392,15 +392,18 @@ void __init arm64_memblock_init(void)
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+		int parange = cpuid_feature_extract_unsigned_field(
+					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+		s64 range = linear_region_size -
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
 
 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
+		 * margin, the size of the region that the physical memory can
+		 * span, randomize the linear region as well.
 		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
 			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
--
2.43.0
You are not providing any information as to WHY this is needed in stable kernels at all. It just looks like an unsolicited backport with no changes from upstream, yet no hint as to any bug it fixes.
See the response in the other thread.
And you all really have hotpluggable memory on systems that are running this old kernel? Why are they not using newer kernels if they need this? Surely lots of other bugs they need are resolved there, right?
Believe it or not, memory hotplug works really well for us, in a somewhat limited configuration on the 5.4 kernel whereby we simply plug memory and never unplug it thereafter; still, we have not had to carry hotplug-related patches other than this one.
Trying to be a good citizen here: one of my colleagues identified an upstream fix that works, that we got tested, and that cherry-picked cleanly into both 5.4 and 5.10, so it's not even like there was any fuzz.
I was sort of hoping that, given my history of regularly testing stable kernels for the past years, as well as submitting a fair amount of targeted bug fixes to the stable branches, there would be some level of trust here.
Of course your history matters here; I'm not trying to dissuade that at all. All I am saying is "this touches core arm64 code, so I would like an arm64 maintainer to at least glance at it to say it's ok to do this."
And it looks like it now has happened, and it is good that I asked :)
This is all normal, and good, I'm not singling you out here at all. We push back on backports all the time when we don't understand why they are being asked for and ask for a second review. You want us to do this in order to keep these trees working well.
thanks,
greg k-h