-----Original Message-----
From: Gerd Hoffmann <kraxel@redhat.com>
Sent: 26 February 2020 16:48
To: dri-devel@lists.freedesktop.org
Cc: tzimmermann@suse.de; gurchetansingh@chromium.org; olvaffe@gmail.com; Guillaume Gardet <Guillaume.Gardet@arm.com>; Gerd Hoffmann <kraxel@redhat.com>; stable@vger.kernel.org; Maarten Lankhorst <maarten.lankhorst@linux.intel.com>; Maxime Ripard <mripard@kernel.org>; David Airlie <airlied@linux.ie>; Daniel Vetter <daniel@ffwll.ch>; open list <linux-kernel@vger.kernel.org>
Subject: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.
Add a map_cached bool to drm_gem_shmem_object so cached mappings can be requested on a per-object basis. Check the flag before adding writecombine to the pgprot bits.
Cc: stable@vger.kernel.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Guillaume Gardet <Guillaume.Gardet@arm.com>
---
 include/drm/drm_gem_shmem_helper.h     |  5 +++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 15 +++++++++++----
 2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index e34a7b7f848a..294b2931c4cc 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -96,6 +96,11 @@ struct drm_gem_shmem_object {
 	 * The address are un-mapped when the count reaches zero.
 	 */
 	unsigned int vmap_use_count;
+
+	/**
+	 * @map_cached: map object cached (instead of using writecombine).
+	 */
+	bool map_cached;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a421a2eed48a..aad9324dcf4f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -254,11 +254,16 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 	if (ret)
 		goto err_zero_use;
 
-	if (obj->import_attach)
+	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
-	else
+	} else {
+		pgprot_t prot = PAGE_KERNEL;
+
+		if (!shmem->map_cached)
+			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
-				    VM_MAP,
-				    pgprot_writecombine(PAGE_KERNEL));
+				    VM_MAP, prot);
+	}
 
 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
@@ -540,7 +545,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	}
 
 	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
-	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+	if (!shmem->map_cached)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 	vma->vm_ops = &drm_gem_shmem_vm_ops;
-- 
2.18.2