The following patches bring the ARM vDSO read-only changes from Linus's master branch into the 4.1 stable kernel (4.1.20). They applied with only trivial context fixups.
The easiest way to test this is to enable -DDEBUG on arch/arm/kernel/vdso.o and note the kernel address of the vDSO page reported at boot. Then, using CONFIG_ARM_PTDUMP, inspect the kernel mappings and confirm that this page is read-only after applying these patches.
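For anyone reproducing this, the reason -DDEBUG matters is that pr_debug() calls in that file only produce output when DEBUG is defined for the object (or dynamic debug is enabled). A minimal, hypothetical sketch of that kind of print follows; it is not the actual arch/arm/kernel/vdso.c code, and the function and parameter names are made up:

	/*
	 * Illustrative only.  Building the object with -DDEBUG (for example
	 * via "CFLAGS_vdso.o += -DDEBUG" in the Makefile) turns pr_debug()
	 * into a real KERN_DEBUG printk; without it, and without dynamic
	 * debug, the call produces no output.
	 */
	#define pr_fmt(fmt) "vdso: " fmt

	#include <linux/init.h>
	#include <linux/printk.h>

	static void __init report_vdso_text(void *vdso_text_page)
	{
		pr_debug("text page at %p\n", vdso_text_page);
	}

The address printed this way can then be looked up in the CONFIG_ARM_PTDUMP output (/sys/kernel/debug/kernel_page_tables) to confirm the page shows up as ro once the series is applied.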
There is a demonstrated x86 exploit that uses the kernel-writable vDSO to gain root, and the same could be done in a similar manner on ARM.
I'll follow the patches with a pull request.
David Brown (1):
      ARM/vdso: Mark the vDSO code read-only after init

Kees Cook (6):
      asm-generic: Consolidate mark_rodata_ro()
      mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings
      x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option
      arch: Introduce post-init read-only memory
      lkdtm: Verify that '__ro_after_init' works correctly
      x86/vdso: Mark the vDSO code read-only after init

 Documentation/kernel-parameters.txt  |  4 ++++
 arch/arm/include/asm/cacheflush.h    |  1 -
 arch/arm/vdso/vdso.S                 |  3 +--
 arch/arm64/include/asm/cacheflush.h  |  4 ----
 arch/parisc/include/asm/cache.h      |  3 +++
 arch/parisc/include/asm/cacheflush.h |  4 ----
 arch/x86/Kconfig                     |  3 +++
 arch/x86/Kconfig.debug               | 17 +++--------------
 arch/x86/include/asm/cacheflush.h    |  8 ++------
 arch/x86/include/asm/kvm_para.h      |  7 -------
 arch/x86/include/asm/sections.h      |  2 +-
 arch/x86/kernel/ftrace.c             |  6 +++---
 arch/x86/kernel/kgdb.c               |  8 ++------
 arch/x86/kernel/test_nx.c            |  2 --
 arch/x86/kernel/test_rodata.c        |  2 +-
 arch/x86/kernel/vmlinux.lds.S        | 25 +++++++++++--------------
 arch/x86/mm/init_32.c                |  3 ---
 arch/x86/mm/init_64.c                |  3 ---
 arch/x86/mm/pageattr.c               |  2 +-
 arch/x86/vdso/vdso2c.h               |  2 +-
 drivers/misc/lkdtm.c                 | 29 ++++++++++++++++++++++++++---
 include/asm-generic/vmlinux.lds.h    |  1 +
 include/linux/cache.h                | 14 ++++++++++++++
 include/linux/init.h                 |  4 ++++
 init/main.c                          | 27 +++++++++++++++++++++++----
 kernel/debug/kdb/kdb_bp.c            |  4 +---
 26 files changed, 105 insertions(+), 83 deletions(-)
From: Kees Cook keescook@chromium.org
commit e267d97b83d9cecc16c54825f9f3ac7f72dc1e1e upstream.
Instead of defining mark_rodata_ro() in each architecture, consolidate it.
Signed-off-by: Kees Cook keescook@chromium.org Acked-by: Will Deacon will.deacon@arm.com Cc: Andrew Morton akpm@linux-foundation.org Cc: Andy Gross agross@codeaurora.org Cc: Andy Lutomirski luto@amacapital.net Cc: Ard Biesheuvel ard.biesheuvel@linaro.org Cc: Arnd Bergmann arnd@arndb.de Cc: Ashok Kumar ashoks@broadcom.com Cc: Borislav Petkov bp@alien8.de Cc: Borislav Petkov bp@suse.de Cc: Brian Gerst brgerst@gmail.com Cc: Catalin Marinas catalin.marinas@arm.com Cc: Dan Williams dan.j.williams@intel.com Cc: David Brown david.brown@linaro.org Cc: David Hildenbrand dahi@linux.vnet.ibm.com Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: H. Peter Anvin hpa@zytor.com Cc: Helge Deller deller@gmx.de Cc: James E.J. Bottomley jejb@parisc-linux.org Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Luis R. Rodriguez mcgrof@suse.com Cc: Marc Zyngier marc.zyngier@arm.com Cc: Mark Rutland mark.rutland@arm.com Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: Nicolas Pitre nicolas.pitre@linaro.org Cc: PaX Team pageexec@freemail.hu Cc: Paul Gortmaker paul.gortmaker@windriver.com Cc: Peter Zijlstra peterz@infradead.org Cc: Ross Zwisler ross.zwisler@linux.intel.com Cc: Russell King linux@arm.linux.org.uk Cc: Rusty Russell rusty@rustcorp.com.au Cc: Stephen Boyd sboyd@codeaurora.org Cc: Thomas Gleixner tglx@linutronix.de Cc: Toshi Kani toshi.kani@hp.com Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: linux-parisc@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: David Brown david.brown@linaro.org --- arch/arm/include/asm/cacheflush.h | 1 - arch/arm64/include/asm/cacheflush.h | 4 ---- arch/parisc/include/asm/cacheflush.h | 4 ---- arch/x86/include/asm/cacheflush.h | 1 - include/linux/init.h | 4 ++++ 5 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index 2d46862..5797815 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -488,7 +488,6 @@ int set_memory_x(unsigned long addr, int numpages); int set_memory_nx(unsigned long addr, int numpages);
#ifdef CONFIG_DEBUG_RODATA -void mark_rodata_ro(void); void set_kernel_text_rw(void); void set_kernel_text_ro(void); #else diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 67d309c..0cd01d5 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -153,8 +153,4 @@ int set_memory_rw(unsigned long addr, int numpages); int set_memory_x(unsigned long addr, int numpages); int set_memory_nx(unsigned long addr, int numpages);
-#ifdef CONFIG_DEBUG_RODATA -void mark_rodata_ro(void); -#endif - #endif diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h index de65f66..b188d1f 100644 --- a/arch/parisc/include/asm/cacheflush.h +++ b/arch/parisc/include/asm/cacheflush.h @@ -121,10 +121,6 @@ flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vma } }
-#ifdef CONFIG_DEBUG_RODATA -void mark_rodata_ro(void); -#endif - #include <asm/kmap_types.h>
#define ARCH_HAS_KMAP diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h index 47c8e32..0561b19 100644 --- a/arch/x86/include/asm/cacheflush.h +++ b/arch/x86/include/asm/cacheflush.h @@ -85,7 +85,6 @@ int set_pages_rw(struct page *page, int numpages); void clflush_cache_range(void *addr, unsigned int size);
#ifdef CONFIG_DEBUG_RODATA -void mark_rodata_ro(void); extern const int rodata_test_data; extern int kernel_set_to_readonly; void set_kernel_text_rw(void); diff --git a/include/linux/init.h b/include/linux/init.h index 21b6d76..5e38644 100644 --- a/include/linux/init.h +++ b/include/linux/init.h @@ -153,6 +153,10 @@ void prepare_namespace(void); void __init load_default_modules(void); int __init init_rootfs(void);
+#ifdef CONFIG_DEBUG_RODATA +void mark_rodata_ro(void); +#endif + extern void (*late_time_init)(void);
extern bool initcall_debug;
From: Kees Cook keescook@chromium.org
commit d2aa1acad22f1bdd0cfa67b3861800e392254454 upstream.
It may be useful to debug writes to the readonly sections of memory, so provide a cmdline "rodata=off" to allow for this. This can be expanded in the future to support "log" and "write" modes, but that will need to be architecture-specific.
This also makes KDB software breakpoints more usable, as read-only mappings can now be disabled on any kernel.
Suggested-by: H. Peter Anvin hpa@zytor.com Signed-off-by: Kees Cook keescook@chromium.org Cc: Andy Lutomirski luto@amacapital.net Cc: Arnd Bergmann arnd@arndb.de Cc: Borislav Petkov bp@alien8.de Cc: Brian Gerst brgerst@gmail.com Cc: David Brown david.brown@linaro.org Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: PaX Team pageexec@freemail.hu Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: David Brown david.brown@linaro.org --- Documentation/kernel-parameters.txt | 4 ++++ init/main.c | 27 +++++++++++++++++++++++---- kernel/debug/kdb/kdb_bp.c | 4 +--- 3 files changed, 28 insertions(+), 7 deletions(-)
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index cd03a0f..51bbc77 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -3289,6 +3289,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
ro [KNL] Mount root device read-only on boot
+ rodata= [KNL] + on Mark read-only kernel memory as read-only (default). + off Leave read-only kernel memory writable for debugging. + root= [KNL] Root filesystem See name_to_dev_t comment in init/do_mounts.c.
diff --git a/init/main.c b/init/main.c index 2a89545..14f36d1 100644 --- a/init/main.c +++ b/init/main.c @@ -93,9 +93,6 @@ static int kernel_init(void *); extern void init_IRQ(void); extern void fork_init(void); extern void radix_tree_init(void); -#ifndef CONFIG_DEBUG_RODATA -static inline void mark_rodata_ro(void) { } -#endif
/* * Debug helper: via this flag we know that we are in 'early bootup code' @@ -925,6 +922,28 @@ static int try_to_run_init_process(const char *init_filename)
static noinline void __init kernel_init_freeable(void);
+#ifdef CONFIG_DEBUG_RODATA +static bool rodata_enabled = true; +static int __init set_debug_rodata(char *str) +{ + return strtobool(str, &rodata_enabled); +} +__setup("rodata=", set_debug_rodata); + +static void mark_readonly(void) +{ + if (rodata_enabled) + mark_rodata_ro(); + else + pr_info("Kernel memory protection disabled.\n"); +} +#else +static inline void mark_readonly(void) +{ + pr_warn("This architecture does not have kernel memory protection.\n"); +} +#endif + static int __ref kernel_init(void *unused) { int ret; @@ -933,7 +952,7 @@ static int __ref kernel_init(void *unused) /* need to finish all async __init code before freeing the memory */ async_synchronize_full(); free_initmem(); - mark_rodata_ro(); + mark_readonly(); system_state = SYSTEM_RUNNING; numa_default_policy();
diff --git a/kernel/debug/kdb/kdb_bp.c b/kernel/debug/kdb/kdb_bp.c index e1dbf4a..90ff129 100644 --- a/kernel/debug/kdb/kdb_bp.c +++ b/kernel/debug/kdb/kdb_bp.c @@ -153,13 +153,11 @@ static int _kdb_bp_install(struct pt_regs *regs, kdb_bp_t *bp) } else { kdb_printf("%s: failed to set breakpoint at 0x%lx\n", __func__, bp->bp_addr); -#ifdef CONFIG_DEBUG_RODATA if (!bp->bp_type) { kdb_printf("Software breakpoints are unavailable.\n" - " Change the kernel CONFIG_DEBUG_RODATA=n\n" + " Boot the kernel with rodata=off\n" " OR use hw breaks: help bph\n"); } -#endif return 1; } return 0;
From: Kees Cook keescook@chromium.org
commit 9ccaf77cf05915f51231d158abfd5448aedde758 upstream.
This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.
This simplifies the code and also makes it clearer that read-only mapped memory is just as fundamental a security feature in kernel-space as it is in user-space.
Suggested-by: Ingo Molnar mingo@kernel.org Signed-off-by: Kees Cook keescook@chromium.org Cc: Andy Lutomirski luto@amacapital.net Cc: Arnd Bergmann arnd@arndb.de Cc: Borislav Petkov bp@alien8.de Cc: Brian Gerst brgerst@gmail.com Cc: David Brown david.brown@linaro.org Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: PaX Team pageexec@freemail.hu Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: David Brown david.brown@linaro.org --- arch/x86/Kconfig | 3 +++ arch/x86/Kconfig.debug | 17 +++-------------- arch/x86/include/asm/cacheflush.h | 7 ++----- arch/x86/include/asm/kvm_para.h | 7 ------- arch/x86/include/asm/sections.h | 2 +- arch/x86/kernel/ftrace.c | 6 +++--- arch/x86/kernel/kgdb.c | 8 ++------ arch/x86/kernel/test_nx.c | 2 -- arch/x86/kernel/test_rodata.c | 2 +- arch/x86/kernel/vmlinux.lds.S | 25 +++++++++++-------------- arch/x86/mm/init_32.c | 3 --- arch/x86/mm/init_64.c | 3 --- arch/x86/mm/pageattr.c | 2 +- 13 files changed, 27 insertions(+), 60 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 226d569..fdb2b1b 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -279,6 +279,9 @@ config ARCH_SUPPORTS_UPROBES config FIX_EARLYCON_MEM def_bool y
+config DEBUG_RODATA + def_bool y + config PGTABLE_LEVELS int default 4 if X86_64 diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug index 72484a6..467ccb4 100644 --- a/arch/x86/Kconfig.debug +++ b/arch/x86/Kconfig.debug @@ -86,23 +86,12 @@ config EFI_PGT_DUMP issues with the mapping of the EFI runtime regions into that table.
-config DEBUG_RODATA - bool "Write protect kernel read-only data structures" - default y - depends on DEBUG_KERNEL - ---help--- - Mark the kernel read-only data as write-protected in the pagetables, - in order to catch accidental (and incorrect) writes to such const - data. This is recommended so that we can catch kernel bugs sooner. - If in doubt, say "Y". - config DEBUG_RODATA_TEST - bool "Testcase for the DEBUG_RODATA feature" - depends on DEBUG_RODATA + bool "Testcase for the marking rodata read-only" default y ---help--- - This option enables a testcase for the DEBUG_RODATA - feature as well as for the change_page_attr() infrastructure. + This option enables a testcase for the setting rodata read-only + as well as for the change_page_attr() infrastructure. If in doubt, say "N"
config DEBUG_SET_MODULE_RONX diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h index 0561b19..5aa976c 100644 --- a/arch/x86/include/asm/cacheflush.h +++ b/arch/x86/include/asm/cacheflush.h @@ -84,15 +84,12 @@ int set_pages_rw(struct page *page, int numpages);
void clflush_cache_range(void *addr, unsigned int size);
-#ifdef CONFIG_DEBUG_RODATA +#define mmio_flush_range(addr, size) clflush_cache_range(addr, size) + extern const int rodata_test_data; extern int kernel_set_to_readonly; void set_kernel_text_rw(void); void set_kernel_text_ro(void); -#else -static inline void set_kernel_text_rw(void) { } -static inline void set_kernel_text_ro(void) { } -#endif
#ifdef CONFIG_DEBUG_RODATA_TEST int rodata_test(void); diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h index c1adf33..bc62e7c 100644 --- a/arch/x86/include/asm/kvm_para.h +++ b/arch/x86/include/asm/kvm_para.h @@ -17,15 +17,8 @@ static inline bool kvm_check_and_clear_guest_paused(void) } #endif /* CONFIG_KVM_GUEST */
-#ifdef CONFIG_DEBUG_RODATA #define KVM_HYPERCALL \ ALTERNATIVE(".byte 0x0f,0x01,0xc1", ".byte 0x0f,0x01,0xd9", X86_FEATURE_VMMCALL) -#else -/* On AMD processors, vmcall will generate a trap that we will - * then rewrite to the appropriate instruction. - */ -#define KVM_HYPERCALL ".byte 0x0f,0x01,0xc1" -#endif
/* For KVM hypercalls, a three-byte sequence of either the vmcall or the vmmcall * instruction. The hypervisor may replace it with something else but only the diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h index 0a52424..13b6cdd 100644 --- a/arch/x86/include/asm/sections.h +++ b/arch/x86/include/asm/sections.h @@ -7,7 +7,7 @@ extern char __brk_base[], __brk_limit[]; extern struct exception_table_entry __stop___ex_table[];
-#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA) +#if defined(CONFIG_X86_64) extern char __end_rodata_hpage_align[]; #endif
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 8b7b0a5..f74416d 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -81,9 +81,9 @@ within(unsigned long addr, unsigned long start, unsigned long end) static unsigned long text_ip_addr(unsigned long ip) { /* - * On x86_64, kernel text mappings are mapped read-only with - * CONFIG_DEBUG_RODATA. So we use the kernel identity mapping instead - * of the kernel text mapping to modify the kernel text. + * On x86_64, kernel text mappings are mapped read-only, so we use + * the kernel identity mapping instead of the kernel text mapping + * to modify the kernel text. * * For 32bit kernels, these mappings are same and we can use * kernel identity mapping to modify code. diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c index d6178d9..543dcb8 100644 --- a/arch/x86/kernel/kgdb.c +++ b/arch/x86/kernel/kgdb.c @@ -745,9 +745,7 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip) int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt) { int err; -#ifdef CONFIG_DEBUG_RODATA char opc[BREAK_INSTR_SIZE]; -#endif /* CONFIG_DEBUG_RODATA */
bpt->type = BP_BREAKPOINT; err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr, @@ -756,7 +754,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt) return err; err = probe_kernel_write((char *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE); -#ifdef CONFIG_DEBUG_RODATA if (!err) return err; /* @@ -773,13 +770,12 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt) if (memcmp(opc, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE)) return -EINVAL; bpt->type = BP_POKE_BREAKPOINT; -#endif /* CONFIG_DEBUG_RODATA */ + return err; }
int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt) { -#ifdef CONFIG_DEBUG_RODATA int err; char opc[BREAK_INSTR_SIZE];
@@ -796,8 +792,8 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt) if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE)) goto knl_write; return err; + knl_write: -#endif /* CONFIG_DEBUG_RODATA */ return probe_kernel_write((char *)bpt->bpt_addr, (char *)bpt->saved_instr, BREAK_INSTR_SIZE); } diff --git a/arch/x86/kernel/test_nx.c b/arch/x86/kernel/test_nx.c index 3f92ce0..27538f1 100644 --- a/arch/x86/kernel/test_nx.c +++ b/arch/x86/kernel/test_nx.c @@ -142,7 +142,6 @@ static int test_NX(void) * by the error message */
-#ifdef CONFIG_DEBUG_RODATA /* Test 3: Check if the .rodata section is executable */ if (rodata_test_data != 0xC3) { printk(KERN_ERR "test_nx: .rodata marker has invalid value\n"); @@ -151,7 +150,6 @@ static int test_NX(void) printk(KERN_ERR "test_nx: .rodata section is executable\n"); ret = -ENODEV; } -#endif
#if 0 /* Test 4: Check if the .data section of a module is executable */ diff --git a/arch/x86/kernel/test_rodata.c b/arch/x86/kernel/test_rodata.c index 5ecbfe5..cb4a01b 100644 --- a/arch/x86/kernel/test_rodata.c +++ b/arch/x86/kernel/test_rodata.c @@ -76,5 +76,5 @@ int rodata_test(void) }
MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Testcase for the DEBUG_RODATA infrastructure"); +MODULE_DESCRIPTION("Testcase for marking rodata as read-only"); MODULE_AUTHOR("Arjan van de Ven arjan@linux.intel.com"); diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 00bf300..0384a73 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -41,29 +41,28 @@ ENTRY(phys_startup_64) jiffies_64 = jiffies; #endif
-#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA) +#if defined(CONFIG_X86_64) /* - * On 64-bit, align RODATA to 2MB so that even with CONFIG_DEBUG_RODATA - * we retain large page mappings for boundaries spanning kernel text, rodata - * and data sections. + * On 64-bit, align RODATA to 2MB so we retain large page mappings for + * boundaries spanning kernel text, rodata and data sections. * * However, kernel identity mappings will have different RWX permissions * to the pages mapping to text and to the pages padding (which are freed) the * text section. Hence kernel identity mappings will be broken to smaller * pages. For 64-bit, kernel text and kernel identity mappings are different, - * so we can enable protection checks that come with CONFIG_DEBUG_RODATA, - * as well as retain 2MB large page mappings for kernel text. + * so we can enable protection checks as well as retain 2MB large page + * mappings for kernel text. */ -#define X64_ALIGN_DEBUG_RODATA_BEGIN . = ALIGN(HPAGE_SIZE); +#define X64_ALIGN_RODATA_BEGIN . = ALIGN(HPAGE_SIZE);
-#define X64_ALIGN_DEBUG_RODATA_END \ +#define X64_ALIGN_RODATA_END \ . = ALIGN(HPAGE_SIZE); \ __end_rodata_hpage_align = .;
#else
-#define X64_ALIGN_DEBUG_RODATA_BEGIN -#define X64_ALIGN_DEBUG_RODATA_END +#define X64_ALIGN_RODATA_BEGIN +#define X64_ALIGN_RODATA_END
#endif
@@ -112,13 +111,11 @@ SECTIONS
EXCEPTION_TABLE(16) :text = 0x9090
-#if defined(CONFIG_DEBUG_RODATA) /* .text should occupy whole number of pages */ . = ALIGN(PAGE_SIZE); -#endif - X64_ALIGN_DEBUG_RODATA_BEGIN + X64_ALIGN_RODATA_BEGIN RO_DATA(PAGE_SIZE) - X64_ALIGN_DEBUG_RODATA_END + X64_ALIGN_RODATA_END
/* Data */ .data : AT(ADDR(.data) - LOAD_OFFSET) { diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index c23ab1e..0f98553 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -871,7 +871,6 @@ static noinline int do_test_wp_bit(void) return flag; }
-#ifdef CONFIG_DEBUG_RODATA const int rodata_test_data = 0xC3; EXPORT_SYMBOL_GPL(rodata_test_data);
@@ -958,5 +957,3 @@ void mark_rodata_ro(void) #endif mark_nxdata_nx(); } -#endif - diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index f9977a7..e10635a 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1062,7 +1062,6 @@ void __init mem_init(void) mem_init_print_info(NULL); }
-#ifdef CONFIG_DEBUG_RODATA const int rodata_test_data = 0xC3; EXPORT_SYMBOL_GPL(rodata_test_data);
@@ -1152,8 +1151,6 @@ void mark_rodata_ro(void) (unsigned long) __va(__pa_symbol(_sdata))); }
-#endif - int kern_addr_valid(unsigned long addr) { unsigned long above = ((long)addr) >> __VIRTUAL_MASK_SHIFT; diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 2dd9b3a..ccb4b86 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -279,7 +279,7 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address, __pa_symbol(__end_rodata) >> PAGE_SHIFT)) pgprot_val(forbidden) |= _PAGE_RW;
-#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA) +#if defined(CONFIG_X86_64) /* * Once the kernel maps the text as RO (kernel_set_to_readonly is set), * kernel text mappings for the large page aligned text, rodata sections
From: Kees Cook keescook@chromium.org
commit c74ba8b3480da6ddaea17df2263ec09b869ac496 upstream.
One of the easiest ways to protect the kernel from attack is to reduce the internal attack surface exposed when a "write" flaw is available. By making as much of the kernel read-only as possible, we reduce the attack surface.
Many things are written to only during __init, and never changed again. These cannot be made "const" since the compiler will do the wrong thing (we do actually need to write to them). Instead, move these items into a memory region that will be made read-only during mark_rodata_ro() which happens after all kernel __init code has finished.
This introduces __ro_after_init as a way to mark such memory, and adds some documentation about the existing __read_mostly marking.
This improves the security of the Linux kernel by marking formerly read-write memory regions as read-only on a fully booted up system.
Based on work by PaX Team and Brad Spengler.
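As a usage illustration (hypothetical example code, not part of this series; the names feature_mask and feature_mask_init are made up), a value that is computed once during __init and must never change afterwards can simply carry the new annotation:

	#include <linux/cache.h>
	#include <linux/init.h>

	/* Hypothetical example: written exactly once, during init. */
	static unsigned long feature_mask __ro_after_init;

	static int __init feature_mask_init(void)
	{
		/* Still writable here: mark_rodata_ro() has not run yet. */
		feature_mask = 0xff;
		return 0;
	}
	core_initcall(feature_mask_init);

Once init completes and mark_rodata_ro() write-protects the .data..ro_after_init section, a later store to feature_mask faults instead of silently corrupting the value.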
Signed-off-by: Kees Cook keescook@chromium.org Cc: Andy Lutomirski luto@amacapital.net Cc: Arnd Bergmann arnd@arndb.de Cc: Borislav Petkov bp@alien8.de Cc: Brad Spengler spender@grsecurity.net Cc: Brian Gerst brgerst@gmail.com Cc: David Brown david.brown@linaro.org Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: PaX Team pageexec@freemail.hu Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: David Brown david.brown@linaro.org --- arch/parisc/include/asm/cache.h | 3 +++ include/asm-generic/vmlinux.lds.h | 1 + include/linux/cache.h | 14 ++++++++++++++ 3 files changed, 18 insertions(+)
diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h index 47f11c7..03d4d5a 100644 --- a/arch/parisc/include/asm/cache.h +++ b/arch/parisc/include/asm/cache.h @@ -30,6 +30,9 @@
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
+/* Read-only memory is marked before mark_rodata_ro() is called. */ +#define __ro_after_init __read_mostly + void parisc_cache_init(void); /* initializes cache-flushing */ void disable_sr_hashing_asm(int); /* low level support for above */ void disable_sr_hashing(void); /* turns off space register hashing */ diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 8bd374d..6a3ea79 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -246,6 +246,7 @@ .rodata : AT(ADDR(.rodata) - LOAD_OFFSET) { \ VMLINUX_SYMBOL(__start_rodata) = .; \ *(.rodata) *(.rodata.*) \ + *(.data..ro_after_init) /* Read only after init */ \ *(__vermagic) /* Kernel version magic */ \ . = ALIGN(8); \ VMLINUX_SYMBOL(__start___tracepoints_ptrs) = .; \ diff --git a/include/linux/cache.h b/include/linux/cache.h index 17e7e82..1be04f8 100644 --- a/include/linux/cache.h +++ b/include/linux/cache.h @@ -12,10 +12,24 @@ #define SMP_CACHE_BYTES L1_CACHE_BYTES #endif
+/* + * __read_mostly is used to keep rarely changing variables out of frequently + * updated cachelines. If an architecture doesn't support it, ignore the + * hint. + */ #ifndef __read_mostly #define __read_mostly #endif
+/* + * __ro_after_init is used to mark things that are read-only after init (i.e. + * after mark_rodata_ro() has been called). These are effectively read-only, + * but may get written to during init, so can't live in .rodata (via "const"). + */ +#ifndef __ro_after_init +#define __ro_after_init __attribute__((__section__(".data..ro_after_init"))) +#endif + #ifndef ____cacheline_aligned #define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES))) #endif
From: Kees Cook keescook@chromium.org
commit 7cca071ccbd2a293ea69168ace6abbcdce53098e upstream.
The new __ro_after_init section should be writable before init, but not after. Validate that it gets updated at init and can't be written to afterwards.
Signed-off-by: Kees Cook keescook@chromium.org Cc: Andy Lutomirski luto@amacapital.net Cc: Arnd Bergmann arnd@arndb.de Cc: Borislav Petkov bp@alien8.de Cc: Brian Gerst brgerst@gmail.com Cc: David Brown david.brown@linaro.org Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: PaX Team pageexec@freemail.hu Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-6-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: David Brown david.brown@linaro.org --- drivers/misc/lkdtm.c | 29 ++++++++++++++++++++++++++--- 1 file changed, 26 insertions(+), 3 deletions(-)
diff --git a/drivers/misc/lkdtm.c b/drivers/misc/lkdtm.c index b5abe34..da9fb01 100644 --- a/drivers/misc/lkdtm.c +++ b/drivers/misc/lkdtm.c @@ -103,6 +103,7 @@ enum ctype { CT_EXEC_USERSPACE, CT_ACCESS_USERSPACE, CT_WRITE_RO, + CT_WRITE_RO_AFTER_INIT, CT_WRITE_KERN, };
@@ -140,6 +141,7 @@ static char* cp_type[] = { "EXEC_USERSPACE", "ACCESS_USERSPACE", "WRITE_RO", + "WRITE_RO_AFTER_INIT", "WRITE_KERN", };
@@ -162,6 +164,7 @@ static DEFINE_SPINLOCK(lock_me_up); static u8 data_area[EXEC_SIZE];
static const unsigned long rodata = 0xAA55AA55; +static unsigned long ro_after_init __ro_after_init = 0x55AA5500;
module_param(recur_count, int, 0644); MODULE_PARM_DESC(recur_count, " Recursion level for the stack overflow test"); @@ -497,11 +500,28 @@ static void lkdtm_do_action(enum ctype which) break; } case CT_WRITE_RO: { - unsigned long *ptr; + /* Explicitly cast away "const" for the test. */ + unsigned long *ptr = (unsigned long *)&rodata;
- ptr = (unsigned long *)&rodata; + pr_info("attempting bad rodata write at %p\n", ptr); + *ptr ^= 0xabcd1234;
- pr_info("attempting bad write at %p\n", ptr); + break; + } + case CT_WRITE_RO_AFTER_INIT: { + unsigned long *ptr = &ro_after_init; + + /* + * Verify we were written to during init. Since an Oops + * is considered a "success", a failure is to just skip the + * real test. + */ + if ((*ptr & 0xAA) != 0xAA) { + pr_info("%p was NOT written during init!?\n", ptr); + break; + } + + pr_info("attempting bad ro_after_init write at %p\n", ptr); *ptr ^= 0xabcd1234;
break; @@ -811,6 +831,9 @@ static int __init lkdtm_module_init(void) int n_debugfs_entries = 1; /* Assume only the direct entry */ int i;
+ /* Make sure we can write to __ro_after_init values during __init */ + ro_after_init |= 0xAA; + /* Register debugfs interface */ lkdtm_debugfs_root = debugfs_create_dir("provoke-crash", NULL); if (!lkdtm_debugfs_root) {
From: Kees Cook keescook@chromium.org
commit 018ef8dcf3de5f62e2cc1a9273cc27e1c6ba8de5 upstream.
The vDSO does not need to be writable after __init, so mark it as __ro_after_init. The result kills the exploit method of writing to the vDSO from kernel space resulting in userspace executing the modified code, as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21
The memory map (with added vDSO address reporting) shows the vDSO moving into read-only memory:
Before:
 [ 0.143067] vDSO @ ffffffff82004000
 [ 0.143551] vDSO @ ffffffff82006000

 ---[ High Kernel Mapping ]---
 0xffffffff80000000-0xffffffff81000000 16M pmd
 0xffffffff81000000-0xffffffff81800000 8M ro PSE GLB x pmd
 0xffffffff81800000-0xffffffff819f3000 1996K ro GLB x pte
 0xffffffff819f3000-0xffffffff81a00000 52K ro NX pte
 0xffffffff81a00000-0xffffffff81e00000 4M ro PSE GLB NX pmd
 0xffffffff81e00000-0xffffffff81e05000 20K ro GLB NX pte
 0xffffffff81e05000-0xffffffff82000000 2028K ro NX pte
 0xffffffff82000000-0xffffffff8214f000 1340K RW GLB NX pte
 0xffffffff8214f000-0xffffffff82281000 1224K RW NX pte
 0xffffffff82281000-0xffffffff82400000 1532K RW GLB NX pte
 0xffffffff82400000-0xffffffff83200000 14M RW PSE GLB NX pmd
 0xffffffff83200000-0xffffffffc0000000 974M pmd
After:
 [ 0.145062] vDSO @ ffffffff81da1000
 [ 0.146057] vDSO @ ffffffff81da4000

 ---[ High Kernel Mapping ]---
 0xffffffff80000000-0xffffffff81000000 16M pmd
 0xffffffff81000000-0xffffffff81800000 8M ro PSE GLB x pmd
 0xffffffff81800000-0xffffffff819f3000 1996K ro GLB x pte
 0xffffffff819f3000-0xffffffff81a00000 52K ro NX pte
 0xffffffff81a00000-0xffffffff81e00000 4M ro PSE GLB NX pmd
 0xffffffff81e00000-0xffffffff81e0b000 44K ro GLB NX pte
 0xffffffff81e0b000-0xffffffff82000000 2004K ro NX pte
 0xffffffff82000000-0xffffffff8214c000 1328K RW GLB NX pte
 0xffffffff8214c000-0xffffffff8227e000 1224K RW NX pte
 0xffffffff8227e000-0xffffffff82400000 1544K RW GLB NX pte
 0xffffffff82400000-0xffffffff83200000 14M RW PSE GLB NX pmd
 0xffffffff83200000-0xffffffffc0000000 974M pmd
Based on work by PaX Team and Brad Spengler.
Signed-off-by: Kees Cook keescook@chromium.org Acked-by: Andy Lutomirski luto@kernel.org Acked-by: H. Peter Anvin hpa@linux.intel.com Cc: Andy Lutomirski luto@amacapital.net Cc: Arnd Bergmann arnd@arndb.de Cc: Borislav Petkov bp@alien8.de Cc: Brad Spengler spender@grsecurity.net Cc: Brian Gerst brgerst@gmail.com Cc: David Brown david.brown@linaro.org Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: PaX Team pageexec@freemail.hu Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-7-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: David Brown david.brown@linaro.org --- arch/x86/vdso/vdso2c.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h index 0224987..3f69326 100644 --- a/arch/x86/vdso/vdso2c.h +++ b/arch/x86/vdso/vdso2c.h @@ -140,7 +140,7 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len, fprintf(outfile, "#include <asm/vdso.h>\n"); fprintf(outfile, "\n"); fprintf(outfile, - "static unsigned char raw_data[%lu] __page_aligned_data = {", + "static unsigned char raw_data[%lu] __ro_after_init __aligned(PAGE_SIZE) = {", mapping_size); for (j = 0; j < stripped_len; j++) { if (j % 10 == 0)
commit 11bf9b865898961cee60a41c483c9f27ec76e12e upstream.
Although the ARM vDSO is cleanly separated by code/data with the code being read-only in userspace mappings, the code page is still writable from the kernel.
There have been exploits (such as http://itszn.com/blog/?p=21) that take advantage of this on x86 to go from a bad kernel write to full root.
Prevent this specific exploit class on ARM as well by putting the vDSO code page in post-init read-only memory.
Before:
 vdso: 1 text pages at base 80927000
 root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
 ---[ Modules ]---
 ---[ Kernel Mapping ]---
 0x80000000-0x80100000 1M RW NX SHD
 0x80100000-0x80600000 5M ro x SHD
 0x80600000-0x80800000 2M ro NX SHD
 0x80800000-0xbe000000 984M RW NX SHD
After:
 vdso: 1 text pages at base 8072b000
 root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
 ---[ Modules ]---
 ---[ Kernel Mapping ]---
 0x80000000-0x80100000 1M RW NX SHD
 0x80100000-0x80600000 5M ro x SHD
 0x80600000-0x80800000 2M ro NX SHD
 0x80800000-0xbe000000 984M RW NX SHD
Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the PaX Team, Brad Spengler, and Kees Cook.
Signed-off-by: David Brown david.brown@linaro.org Signed-off-by: Kees Cook keescook@chromium.org Cc: Andy Lutomirski luto@amacapital.net Cc: Arnd Bergmann arnd@arndb.de Cc: Borislav Petkov bp@alien8.de Cc: Brad Spengler spender@grsecurity.net Cc: Brian Gerst brgerst@gmail.com Cc: Denys Vlasenko dvlasenk@redhat.com Cc: Emese Revfy re.emese@gmail.com Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mathias Krause minipli@googlemail.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: Nathan Lynch nathan_lynch@mentor.com Cc: PaX Team pageexec@freemail.hu Cc: Peter Zijlstra peterz@infradead.org Cc: Russell King linux@arm.linux.org.uk Cc: Thomas Gleixner tglx@linutronix.de Cc: kernel-hardening@lists.openwall.com Cc: linux-arch linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-8-git-send-email-keescook@chromium... Signed-off-by: Ingo Molnar mingo@kernel.org --- arch/arm/vdso/vdso.S | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm/vdso/vdso.S b/arch/arm/vdso/vdso.S index b2b97e3..a62a7b6 100644 --- a/arch/arm/vdso/vdso.S +++ b/arch/arm/vdso/vdso.S @@ -23,9 +23,8 @@ #include <linux/const.h> #include <asm/page.h>
- __PAGE_ALIGNED_DATA - .globl vdso_start, vdso_end + .section .data..ro_after_init .balign PAGE_SIZE vdso_start: .incbin "arch/arm/vdso/vdso.so"
The following changes since commit 7f30737678023b5becaf0e2e012665f71b886a7d:
Linux 4.1.20 (2016-03-17 14:11:03 -0400)
are available in the git repository at:
https://git.linaro.org/people/david.brown/linux-lsk v4.1/vdso
for you to fetch changes up to f0b92cd1190816b1293c9724488ac87e0fcc38c0:
ARM/vdso: Mark the vDSO code read-only after init (2016-03-18 12:33:00 -0600)
----------------------------------------------------------------
David Brown (1):
      ARM/vdso: Mark the vDSO code read-only after init

Kees Cook (6):
      asm-generic: Consolidate mark_rodata_ro()
      mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings
      x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option
      arch: Introduce post-init read-only memory
      lkdtm: Verify that '__ro_after_init' works correctly
      x86/vdso: Mark the vDSO code read-only after init

 Documentation/kernel-parameters.txt  |  4 ++++
 arch/arm/include/asm/cacheflush.h    |  1 -
 arch/arm/vdso/vdso.S                 |  3 +--
 arch/arm64/include/asm/cacheflush.h  |  4 ----
 arch/parisc/include/asm/cache.h      |  3 +++
 arch/parisc/include/asm/cacheflush.h |  4 ----
 arch/x86/Kconfig                     |  3 +++
 arch/x86/Kconfig.debug               | 17 +++--------------
 arch/x86/include/asm/cacheflush.h    |  8 ++------
 arch/x86/include/asm/kvm_para.h      |  7 -------
 arch/x86/include/asm/sections.h      |  2 +-
 arch/x86/kernel/ftrace.c             |  6 +++---
 arch/x86/kernel/kgdb.c               |  8 ++------
 arch/x86/kernel/test_nx.c            |  2 --
 arch/x86/kernel/test_rodata.c        |  2 +-
 arch/x86/kernel/vmlinux.lds.S        | 25 +++++++++++--------------
 arch/x86/mm/init_32.c                |  3 ---
 arch/x86/mm/init_64.c                |  3 ---
 arch/x86/mm/pageattr.c               |  2 +-
 arch/x86/vdso/vdso2c.h               |  2 +-
 drivers/misc/lkdtm.c                 | 29 ++++++++++++++++++++++++++---
 include/asm-generic/vmlinux.lds.h    |  1 +
 include/linux/cache.h                | 14 ++++++++++++++
 include/linux/init.h                 |  4 ++++
 init/main.c                          | 27 +++++++++++++++++++++++----
 kernel/debug/kdb/kdb_bp.c            |  4 +---
 26 files changed, 105 insertions(+), 83 deletions(-)
Subject should read '[PULL LSK-v4.1] Backport ARM vDSO read-only
On Mon, Mar 21, 2016 at 09:46:30AM -0600, David Brown wrote:
The following changes since commit 7f30737678023b5becaf0e2e012665f71b886a7d:
Linux 4.1.20 (2016-03-17 14:11:03 -0400)
are available in the git repository at:
https://git.linaro.org/people/david.brown/linux-lsk v4.1/vdso
for you to fetch changes up to f0b92cd1190816b1293c9724488ac87e0fcc38c0:
ARM/vdso: Mark the vDSO code read-only after init (2016-03-18 12:33:00 -0600)
pulled&pushed.
Thanks!