Partially revert commit 5f501d555653 ("binfmt_elf: reintroduce using MAP_FIXED_NOREPLACE").
At least ia64 has ET_EXEC PT_LOAD segments that are not virtual-address contiguous (but _are_ file-offset contiguous). This would result in giant mapping attempts to cover the entire span, including the virtual address range hole. Disable total_mapping_size for ET_EXEC, which reduces the MAP_FIXED_NOREPLACE coverage to only the first PT_LOAD:
$ readelf -lW /usr/bin/gcc
...
Program Headers:
  Type Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   ...
...
  LOAD 0x000000 0x4000000000000000 0x4000000000000000 0x00b5a0 0x00b5a0 ...
  LOAD 0x00b5a0 0x600000000000b5a0 0x600000000000b5a0 0x0005ac 0x000710 ...
...
       ^^^^^^^^ ^^^^^^^^^^^^^^^^^^                    ^^^^^^^^ ^^^^^^^^

File offset range     : 0x000000-0x00bb4c                     0x00bb4c bytes
Virtual address range : 0x4000000000000000-0x600000000000bcb0 0x200000000000bcb0 bytes
Ironically, this is the reverse of the problem that originally caused trouble with ET_EXEC and MAP_FIXED_NOREPLACE: that problem was overlapping LOADs, this one is holes between them. Future work could restore full coverage if load_elf_binary() were to perform the mappings in a phase separate from the loading (where it could resolve both overlaps and holes).
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org
Reported-by: matoro <matoro_mailinglist_kernel@matoro.tk>
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Fixes: 5f501d555653 ("binfmt_elf: reintroduce using MAP_FIXED_NOREPLACE")
Link: https://lore.kernel.org/r/a3edd529-c42d-3b09-135c-7e98a15b150f@leemhuis.info
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
matoro (or anyone else) can you please test this?
---
 fs/binfmt_elf.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 9bea703ed1c2..474b44032c65 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1136,14 +1136,25 @@ static int load_elf_binary(struct linux_binprm *bprm)
 			 * is then page aligned.
 			 */
 			load_bias = ELF_PAGESTART(load_bias - vaddr);
-		}
 
-		/*
-		 * Calculate the entire size of the ELF mapping (total_size).
-		 * (Note that first_pt_load is set to false later once the
-		 * initial mapping is performed.)
-		 */
-		if (first_pt_load) {
+			/*
+			 * Calculate the entire size of the ELF mapping
+			 * (total_size), used for the initial mapping,
+			 * due to first_pt_load which is set to false later
+			 * once the initial mapping is performed.
+			 *
+			 * Note that this is only sensible when the LOAD
+			 * segments are contiguous (or overlapping). If
+			 * used for LOADs that are far apart, this would
+			 * cause the holes between LOADs to be mapped,
+			 * running the risk of having the mapping fail,
+			 * as it would be larger than the ELF file itself.
+			 *
+			 * As a result, only ET_DYN does this, since
+			 * some ET_EXEC (e.g. ia64) may have virtual
+			 * memory holes between LOADs.
+			 *
+			 */
 			total_size = total_mapping_size(elf_phdata,
 							elf_ex->e_phnum);
 			if (!total_size) {
On 2022-02-28 14:46, Kees Cook wrote:
> Partially revert commit 5f501d555653 ("binfmt_elf: reintroduce using
> MAP_FIXED_NOREPLACE").
> [...]
> matoro (or anyone else) can you please test this?
This does not apply for me, I'm looking around and can't find any reference to the first_pt_load variable you're removing there? What commit/tag are you applying this on top of?
On Mon, Feb 28, 2022 at 03:31:00PM -0500, matoro wrote:
> This does not apply for me, I'm looking around and can't find any
> reference to the first_pt_load variable you're removing there?  What
> commit/tag are you applying this on top of?
Ah, yeah, this is against linux-next. Let me send a backport, one sec...