6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthew Auld <matthew.auld@intel.com>
commit 2d1684a077d62fddfac074052c162ec6573a34e1 upstream.
Currently this is hidden behind perfmon_capable() since it is technically an info leak, given that this is a system-wide metric. However, the granularity reported here is always PAGE_SIZE aligned, which matches what the core kernel is already willing to expose to userspace when querying how many free RAM pages there are on the system, and that doesn't need any special privileges. In addition, other drm drivers seem happy to expose this.
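For comparison, a minimal sketch (not part of this patch) of the unprivileged, system-wide memory reporting the core kernel already provides via sysinfo(2), which is the precedent the paragraph above refers to:

/* Illustration only: any unprivileged process can already read
 * system-wide total/free RAM at page granularity via sysinfo(2).
 */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
	struct sysinfo si;

	if (sysinfo(&si))
		return 1;

	/* totalram/freeram are counts of mem_unit-sized blocks */
	printf("total: %llu bytes, free: %llu bytes\n",
	       (unsigned long long)si.totalram * si.mem_unit,
	       (unsigned long long)si.freeram * si.mem_unit);
	return 0;
}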
The motivation here is with oneAPI, where they want to use the system-wide 'used' reporting, and not the per-client fdinfo stats. This has also come up with some perf overlay applications wanting this information.
Fixes: 1105ac15d2a1 ("drm/xe/uapi: restrict system wide accounting")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Joshua Santosh <joshua.santosh.ranjan@intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: stable@vger.kernel.org # v6.8+
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20250919122052.420979-2-matthew.auld@intel.com
(cherry picked from commit 4d0b035fd6dae8ee48e9c928b10f14877e595356)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/gpu/drm/xe/xe_query.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -274,8 +274,7 @@ static int query_mem_regions(struct xe_d
 	mem_regions->mem_regions[0].instance = 0;
 	mem_regions->mem_regions[0].min_page_size = PAGE_SIZE;
 	mem_regions->mem_regions[0].total_size = man->size << PAGE_SHIFT;
-	if (perfmon_capable())
-		mem_regions->mem_regions[0].used = ttm_resource_manager_usage(man);
+	mem_regions->mem_regions[0].used = ttm_resource_manager_usage(man);
 	mem_regions->num_mem_regions = 1;
 
 	for (i = XE_PL_VRAM0; i <= XE_PL_VRAM1; ++i) {
@@ -291,13 +290,11 @@ static int query_mem_regions(struct xe_d
 		mem_regions->mem_regions[mem_regions->num_mem_regions].total_size =
 			man->size;
 
-		if (perfmon_capable()) {
-			xe_ttm_vram_get_used(man,
-					     &mem_regions->mem_regions
-					     [mem_regions->num_mem_regions].used,
-					     &mem_regions->mem_regions
-					     [mem_regions->num_mem_regions].cpu_visible_used);
-		}
+		xe_ttm_vram_get_used(man,
+				     &mem_regions->mem_regions
+				     [mem_regions->num_mem_regions].used,
+				     &mem_regions->mem_regions
+				     [mem_regions->num_mem_regions].cpu_visible_used);
 
 		mem_regions->mem_regions[mem_regions->num_mem_regions].cpu_visible_size =
 			xe_ttm_vram_get_cpu_visible_size(man);