Ensure that we flush any cache dirt out to main memory before the user changes the cache-level as they may elect to bypass the cache (even after declaring their access cache-coherent) via use of unprivileged MOCS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2e3ce2a69653..5d41e769a428 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -277,6 +277,11 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	list_for_each_entry(vma, &obj->vma.list, obj_link)
 		vma->node.color = cache_level;
+
+	/* Flush any previous cache dirt in case of cache bypass */
+	if (obj->cache_dirty & ~obj->cache_coherent)
+		i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC);
+
 	i915_gem_object_set_cache_coherency(obj, cache_level);
 	obj->cache_dirty = true; /* Always invalidate stale cachelines */
Quoting Chris Wilson (2019-07-18 17:54:07)
Ensure that we flush any cache dirt out to main memory before the user changes the cache-level as they may elect to bypass the cache (even after declaring their access cache-coherent) via use of unprivileged MOCS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com
Regards, Joonas
On Thu, Jul 18, 2019 at 4:54 PM Chris Wilson chris@chris-wilson.co.uk wrote:
Ensure that we flush any cache dirt out to main memory before the user changes the cache-level as they may elect to bypass the cache (even after declaring their access cache-coherent) via use of unprivileged MOCS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2e3ce2a69653..5d41e769a428 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -277,6 +277,11 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	list_for_each_entry(vma, &obj->vma.list, obj_link)
 		vma->node.color = cache_level;
+
+	/* Flush any previous cache dirt in case of cache bypass */
+	if (obj->cache_dirty & ~obj->cache_coherent)
+		i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC);
I think writing out the bit logic instead of implicitly relying on the #defines would be much better, i.e. && !(cache_coherent & COHERENT_FOR_READ). Plus I think we only need to set cache_dirty = true if we don't flush here already, to avoid double flushing?
-Daniel
 	i915_gem_object_set_cache_coherency(obj, cache_level);
 	obj->cache_dirty = true; /* Always invalidate stale cachelines */
--
2.22.0
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
Quoting Daniel Vetter (2019-11-12 09:09:06)
On Thu, Jul 18, 2019 at 4:54 PM Chris Wilson chris@chris-wilson.co.uk wrote:
Ensure that we flush any cache dirt out to main memory before the user changes the cache-level as they may elect to bypass the cache (even after declaring their access cache-coherent) via use of unprivileged MOCS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2e3ce2a69653..5d41e769a428 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -277,6 +277,11 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	list_for_each_entry(vma, &obj->vma.list, obj_link)
 		vma->node.color = cache_level;
+
+	/* Flush any previous cache dirt in case of cache bypass */
+	if (obj->cache_dirty & ~obj->cache_coherent)
+		i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC);
I think writing out the bit logic instead of implicitly relying on the #defines would be much better, i.e. && !(cache_coherent & COHERENT_FOR_READ). Plus I think we only need to set cache_dirty = true if we don't flush here already, to avoid double flushing?
No. The mask is being updated, so you need to flush before you lose track. The cache is then cleared of the dirty bit so won't be flushed again until dirty and no longer coherent. We need to flag that the page is no longer coherent at the end of its lifetime (passing back to the system) to force the flush then.
-Chris
On Tue, Nov 12, 2019 at 10:43 AM Chris Wilson chris@chris-wilson.co.uk wrote:
Quoting Daniel Vetter (2019-11-12 09:09:06)
On Thu, Jul 18, 2019 at 4:54 PM Chris Wilson chris@chris-wilson.co.uk wrote:
Ensure that we flush any cache dirt out to main memory before the user changes the cache-level as they may elect to bypass the cache (even after declaring their access cache-coherent) via use of unprivileged MOCS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2e3ce2a69653..5d41e769a428 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -277,6 +277,11 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	list_for_each_entry(vma, &obj->vma.list, obj_link)
 		vma->node.color = cache_level;
+
+	/* Flush any previous cache dirt in case of cache bypass */
+	if (obj->cache_dirty & ~obj->cache_coherent)
+		i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC);
I think writing out the bit logic instead of implicitly relying on the #defines would be much better, i.e. && !(cache_coherent & COHERENT_FOR_READ). Plus I think we only need to set cache_dirty = true if we don't flush here already, to avoid double flushing?
No. The mask is being updated, so you need to flush before you lose track. The cache is then cleared of the dirty bit so won't be flushed again until dirty and no longer coherent. We need to flag that the page is no longer coherent at the end of its lifetime (passing back to the system) to force the flush then.
Hm I think I overlooked that we only clear cache_dirty in i915_gem_clflush_object when it's a coherent mode.
I also spotted more cases of (obj->cache_dirty & ~obj->cache_coherent), so that obscure/fragile pattern is pre-existing :-/ One of them also checks outside of the object lock, which I think is how these states are supposed to be protected. Smells a bit fishy still, would be good to make a bit clearer.
-Daniel
On Tue, Nov 12, 2019 at 11:57 AM Daniel Vetter daniel@ffwll.ch wrote:
On Tue, Nov 12, 2019 at 10:43 AM Chris Wilson chris@chris-wilson.co.uk wrote:
Quoting Daniel Vetter (2019-11-12 09:09:06)
On Thu, Jul 18, 2019 at 4:54 PM Chris Wilson chris@chris-wilson.co.uk wrote:
Ensure that we flush any cache dirt out to main memory before the user changes the cache-level as they may elect to bypass the cache (even after declaring their access cache-coherent) via use of unprivileged MOCS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2e3ce2a69653..5d41e769a428 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -277,6 +277,11 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	list_for_each_entry(vma, &obj->vma.list, obj_link)
 		vma->node.color = cache_level;
+
+	/* Flush any previous cache dirt in case of cache bypass */
+	if (obj->cache_dirty & ~obj->cache_coherent)
+		i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC);
I think writing out the bit logic instead of implicitly relying on the #defines would be much better, i.e. && !(cache_coherent & COHERENT_FOR_READ). Plus I think we only need to set cache_dirty = true if we don't flush here already, to avoid double flushing?
No. The mask is being updated, so you need to flush before you lose track. The cache is then cleared of the dirty bit so won't be flushed again until dirty and no longer coherent. We need to flag that the page is no longer coherent at the end of its lifetime (passing back to the system) to force the flush then.
Hm I think I overlooked that we only clear cache_dirty in i915_gem_clflush_object when it's a coherent mode.
Hm, the clear/blt code recently merged doesn't preserve the ->cache_dirty setting for this case, unlike i915_gem_clflush_object. Do we have a bug there?
I also spotted more cases of (obj->cache_dirty & ~obj->cache_coherent), so that obscure/fragile pattern is pre-existing :-/ One of them also checks outside of the object lock, which I think is how these states are supposed to be protected. Smells a bit fishy still, would be good to make a bit clearer.
-Daniel