Hi Mike,
On Tue, Nov 24, 2020 at 11:25:51AM +0200, Mike Rapoport wrote:
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	vm_fault_t ret = 0;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+	page = find_get_page(mapping, offset);
+	if (!page) {
+		page = secretmem_alloc_page(vmf->gfp_mask);
+		if (!page)
+			return vmf_error(-ENOMEM);
+
+		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+		if (unlikely(err))
+			goto err_put_page;
+
+		err = set_direct_map_invalid_noflush(page, 1);
+		if (err)
+			goto err_del_page_cache;
On arm64, set_direct_map_default_noflush() returns 0 if !rodata_full, but no pgtable changes happen since the linear map can be a mix of small and huge pages and the arm64 implementation doesn't break large mappings. I presume we don't want to tell the user that the designated memory is "secret" while the kernel silently ignored the request.
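For reference, this is roughly the shape of the arm64 code I mean (a paraphrased sketch of arch/arm64/mm/pageattr.c, single-page variant, ignoring the numpages parameter this series adds; the invalid and default helpers share the same guard):

/*
 * Paraphrased sketch, not verbatim upstream code: with rodata_full and
 * debug_pagealloc both off, the helper reports success without touching
 * the linear map at all.
 */
int set_direct_map_invalid_noflush(struct page *page)
{
	struct page_change_data data = {
		.set_mask = __pgprot(0),
		.clear_mask = __pgprot(PTE_VALID),
	};

	if (!debug_pagealloc_enabled() && !rodata_full)
		return 0;	/* "success", but the page stays mapped */

	return apply_to_page_range(&init_mm,
				   (unsigned long)page_address(page),
				   PAGE_SIZE, change_page_range, &data);
}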
We could change the arm64 set_direct_map* functions to return an error in this case; however, I think it would be pretty unexpected for the user to get a fault when trying to access the memory. It may be better to return -ENOSYS or something on the actual syscall if the fault-in wouldn't be allowed later.
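Something along these lines is what I have in mind; sketch only, and can_set_direct_map() is a made-up helper each arch would have to provide (on arm64 it would boil down to rodata_full || debug_pagealloc_enabled()):

/* Hypothetical per-arch helper, not an existing API. */
static inline bool can_set_direct_map(void)
{
	return rodata_full || debug_pagealloc_enabled();
}

SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
{
	/*
	 * Fail the syscall up front instead of pretending the memory is
	 * secret and then faulting (or silently succeeding) later.
	 */
	if (!can_set_direct_map())
		return -ENOSYS;
	...
}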
Alternatively, we could make the linear map always use pages on arm64, irrespective of other config or cmdline options (maybe not justified unless we have clear memfd_secret users). Yet another idea is to get set_direct_map* to break pmd/pud mappings into ptes, but that's not always possible without a stop_machine() and potentially disabling the MMU.
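For the "always use pages" option the change would be on the arm64 side only, e.g. something like this in map_mem() (untested sketch, reusing the existing NO_BLOCK_MAPPINGS/NO_CONT_MAPPINGS flags and assuming the series' CONFIG_SECRETMEM option):

/* Untested sketch for arch/arm64/mm/mmu.c:map_mem(). */
int flags = 0;

if (rodata_full || debug_pagealloc_enabled() ||
    IS_ENABLED(CONFIG_SECRETMEM))
	flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;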
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+		__SetPageUptodate(page);
+		ret = VM_FAULT_LOCKED;
+	}
+
+	vmf->page = page;
+	return ret;
+
+err_del_page_cache:
+	delete_from_page_cache(page);
+err_put_page:
+	put_page(page);
+	return vmf_error(err);
+}