This series adds Intel VT-d nested translation support based on the IOMMUFD nesting infrastructure. Per the iommufd nesting infrastructure series [1], the iommu core supports new ops to report iommu hardware information, allocate domains with user data, and invalidate the stage-1 IOTLB when mappings change in the stage-1 page table. The data required in these three paths is vendor-specific, so:
1) IOMMU_HWPT_TYPE_VTD_S1 is defined for the Intel VT-d stage-1 page table; it is used in the stage-1 domain allocation and IOTLB syncing paths. struct iommu_hwpt_vtd_s1 is defined to pass user_data for Intel VT-d stage-1 domain allocation. struct iommu_hwpt_vtd_s1_invalidate is defined to pass the data for Intel VT-d stage-1 IOTLB invalidation.

2) IOMMU_HW_INFO_TYPE_INTEL_VTD and struct iommu_hw_info_vtd are defined to report iommu hardware information for Intel VT-d.
With the above IOMMUFD extensions, the intel-iommu driver implements the three paths to support nested translation.
The first Intel platform supporting nested translation is Sapphire Rapids, which, unfortunately, has a hardware errata [2] requiring special treatment. This errata is hit when a stage-1 page-table page (at any level) is located in a stage-2 read-only region. In that case the IOMMU hardware may ignore the stage-2 RO permission and still set the A/D bits in stage-1 page-table entries during the page-table walk.
A flag, IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17, is introduced to report this errata to userspace. Given that restriction, the user should either disable nested translation to favor RO stage-2 mappings, or ensure there are no RO stage-2 mappings to enable nested translation.
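For illustration, a minimal userspace sketch of that check, assuming an iommufd fd and a device id obtained via the usual iommufd binding (the variable names here are hypothetical):

	/*
	 * Sketch: query VT-d hardware info and check the errata flag
	 * before deciding whether nested translation can coexist with
	 * RO stage-2 mappings. 'iommufd' and 'dev_id' come from earlier
	 * iommufd setup, not shown.
	 */
	struct iommu_hw_info_vtd vtd = {};
	struct iommu_hw_info info = {
		.size = sizeof(info),
		.dev_id = dev_id,
		.data_len = sizeof(vtd),
		.data_uptr = (__u64)(uintptr_t)&vtd,
	};

	if (ioctl(iommufd, IOMMU_GET_HW_INFO, &info))
		err(1, "IOMMU_GET_HW_INFO");

	if (info.out_data_type == IOMMU_HW_INFO_TYPE_INTEL_VTD &&
	    (vtd.flags & IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17)) {
		/* either skip nesting or avoid RO stage-2 mappings */
	}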
The intel-iommu driver is armed with the necessary checks to prevent such a mix in patch 12 of this series.
QEMU currently does add RO mappings, though. The vfio agent in QEMU simply maps all valid regions in the GPA address space, which certainly includes RO regions, e.g. vbios.
In reality we are not aware of any usage that relies on DMA reads from the BIOS region. Hence finding a way to skip RO regions (e.g. via a discard manager) in QEMU might be an acceptable tradeoff. The actual change needs more discussion in the QEMU community. For now we just hacked QEMU for testing.
Complete code can be found in [3]; the corresponding QEMU code can be found in [4].
[1] https://lore.kernel.org/linux-iommu/20230921075138.124099-1-yi.l.liu@intel.com/
[2] https://www.intel.com/content/www/us/en/content-details/772415/content-details.html
[3] https://github.com/yiliu1765/iommufd/tree/iommufd_nesting
[4] https://github.com/yiliu1765/qemu/tree/zhenzhong/wip/iommufd_nesting_rfcv1
Change log:
v5:
 - Add Kevin's r-b for patches 2, 3, 5, 8, 10
 - Drop enforce_cache_coherency callback from the nested type domain ops (Kevin)
 - Remove duplicate agaw check in patch 04 (Kevin)
 - Remove duplicate domain_update_iommu_cap() in patch 06 (Kevin)
 - Check parent's force_snooping to set pgsnp in the pasid entry (Kevin)
 - uapi data structure check (Kevin)
 - Simplify the errata handling as user can allocate nested parent domain
v4: https://lore.kernel.org/linux-iommu/20230724111335.107427-1-yi.l.liu@intel.com/
 - Remove ascii art tables (Jason)
 - Drop EMT (Tina, Jason)
 - Drop MTS and related definitions (Kevin)
 - Rename macro IOMMU_VTD_PGTBL_ to IOMMU_VTD_S1_ (Kevin)
 - Rename struct iommu_hwpt_intel_vtd_ to iommu_hwpt_vtd_ (Kevin)
 - Rename struct iommu_hwpt_intel_vtd to iommu_hwpt_vtd_s1 (Kevin)
 - Put the vendor-specific hwpt alloc data structure before enum iommu_hwpt_type (Kevin)
 - Do not trim the higher page levels of the S2 domain in nested domain attachment, as the S2 domain may have been used independently (Kevin)
 - Remove the first-stage pgd check against the maximum address of s2_domain, as hw can check it anyhow. It would make sense to check every pfn used in the stage-1 page table, but that is not feasible here, so just leave it to hw (Kevin)
 - Split the iotlb flush part into an order of uapi, helper and callback implementation (Kevin)
 - Change the policy of the VT-d nesting errata: disallow RO mapping once a domain is used as the parent domain of a nested domain. This removes the nested_users counting (Kevin)
 - Minor fix for "make htmldocs"
v3: https://lore.kernel.org/linux-iommu/20230511145110.27707-1-yi.l.liu@intel.com/
 - Further split the patches into an order of adding helpers for nested domain, iotlb flush, nested domain attachment and nested domain allocation callback, then report the hw_info to userspace
 - Add batch support in cache invalidation from userspace
 - Disallow nested translation usage if RO mappings exist in the stage-2 domain, due to the errata on read-only mappings on the Sapphire Rapids platform
v2: https://lore.kernel.org/linux-iommu/20230309082207.612346-1-yi.l.liu@intel.com/
 - The iommufd infrastructure is split out to be a separate series
v1: https://lore.kernel.org/linux-iommu/20230209043153.14964-1-yi.l.liu@intel.co...
Regards, Yi Liu
Lu Baolu (5):
  iommu/vt-d: Extend dmar_domain to support nested domain
  iommu/vt-d: Add helper for nested domain allocation
  iommu/vt-d: Add helper to setup pasid nested translation
  iommu/vt-d: Add nested domain allocation
  iommu/vt-d: Disallow read-only mappings to nest parent domain
Yi Liu (6):
  iommufd: Add data structure for Intel VT-d stage-1 domain allocation
  iommu/vt-d: Make domain attach helpers to be extern
  iommu/vt-d: Set the nested domain to a device
  iommufd: Add data structure for Intel VT-d stage-1 cache invalidation
  iommu/vt-d: Make iotlb flush helpers to be extern
  iommu/vt-d: Add iotlb flush for nested domain
 drivers/iommu/intel/Makefile |   2 +-
 drivers/iommu/intel/iommu.c  |  60 +++++++++----
 drivers/iommu/intel/iommu.h  |  51 +++++++++--
 drivers/iommu/intel/nested.c | 162 +++++++++++++++++++++++++++++++++++
 drivers/iommu/intel/pasid.c  | 125 +++++++++++++++++++++++++++
 drivers/iommu/intel/pasid.h  |   2 +
 include/uapi/linux/iommufd.h |  76 +++++++++++++++-
 7 files changed, 452 insertions(+), 26 deletions(-)
 create mode 100644 drivers/iommu/intel/nested.c
This adds IOMMU_HWPT_TYPE_VTD_S1 for the stage-1 hw_pagetable of Intel VT-d, and the corresponding data structure for the user-specified parameters for domain allocation.
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 include/uapi/linux/iommufd.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 2083a0309a9b..18a502e206c3 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -358,12 +358,42 @@ enum iommufd_hwpt_alloc_flags {
 	IOMMU_HWPT_ALLOC_NEST_PARENT = 1 << 0,
 };
 
+/**
+ * enum iommu_hwpt_vtd_s1_flags - Intel VT-d stage-1 page table
+ *                                entry attributes
+ * @IOMMU_VTD_S1_SRE: Supervisor request
+ * @IOMMU_VTD_S1_EAFE: Extended access enable
+ * @IOMMU_VTD_S1_WPE: Write protect enable
+ */
+enum iommu_hwpt_vtd_s1_flags {
+	IOMMU_VTD_S1_SRE = 1 << 0,
+	IOMMU_VTD_S1_EAFE = 1 << 1,
+	IOMMU_VTD_S1_WPE = 1 << 2,
+};
+
+/**
+ * struct iommu_hwpt_vtd_s1 - Intel VT-d stage-1 page table
+ *                            info (IOMMU_HWPT_TYPE_VTD_S1)
+ * @flags: Combination of enum iommu_hwpt_vtd_s1_flags
+ * @pgtbl_addr: The base address of the stage-1 page table.
+ * @addr_width: The address width of the stage-1 page table
+ * @__reserved: Must be 0
+ */
+struct iommu_hwpt_vtd_s1 {
+	__aligned_u64 flags;
+	__aligned_u64 pgtbl_addr;
+	__u32 addr_width;
+	__u32 __reserved;
+};
+
 /**
  * enum iommu_hwpt_type - IOMMU HWPT Type
  * @IOMMU_HWPT_TYPE_DEFAULT: default
+ * @IOMMU_HWPT_TYPE_VTD_S1: Intel VT-d stage-1 page table
  */
 enum iommu_hwpt_type {
 	IOMMU_HWPT_TYPE_DEFAULT,
+	IOMMU_HWPT_TYPE_VTD_S1,
 };
/**
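For illustration, a hedged sketch of how userspace could fill this structure; the IOMMU_HWPT_ALLOC plumbing that carries it follows the iommufd nesting series [1] and is only summarized in the comment, and 'gpa_of_guest_pgd' is a hypothetical variable:

	/*
	 * Sketch: describe a guest-owned 4-level stage-1 page table.
	 * The structure is then passed to IOMMU_HWPT_ALLOC as the
	 * vendor data, with the nest parent hwpt as the pt_id and
	 * IOMMU_HWPT_TYPE_VTD_S1 as the requested type.
	 */
	struct iommu_hwpt_vtd_s1 vtd = {
		.flags = 0,			/* no SRE/EAFE/WPE */
		.pgtbl_addr = gpa_of_guest_pgd,	/* GPA of guest PGD */
		.addr_width = 48,		/* 4-level table */
	};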
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
This adds IOMMU_HWPT_TYPE_VTD_S1 for the stage-1 hw_pagetable of Intel VT-d, and the corresponding data structure for the user-specified parameters for domain allocation.
Signed-off-by: Yi Liu yi.l.liu@intel.com
Reviewed-by: Kevin Tian kevin.tian@intel.com
From: Lu Baolu baolu.lu@linux.intel.com
The nested domain fields are mutually exclusive with those used for a DMA remapping domain. Use a union to avoid wasting memory.
Reviewed-by: Kevin Tian kevin.tian@intel.com
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/iommu.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index c18fb699c87a..4a7f1cc16afa 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -597,15 +597,38 @@ struct dmar_domain {
 	struct list_head devices;	/* all devices' list */
 	struct list_head dev_pasids;	/* all attached pasids */
 
-	struct dma_pte	*pgd;		/* virtual address */
-	int		gaw;		/* max guest address width */
-
-	/* adjusted guest address width, 0 is level 2 30-bit */
-	int		agaw;
 	int		iommu_superpage;/* Level of superpages supported:
					   0 == 4KiB (no superpages),
					   1 == 2MiB, 2 == 1GiB,
					   3 == 512GiB, 4 == 1TiB */
-	u64		max_addr;	/* maximum mapped address */
+	union {
+		/* DMA remapping domain */
+		struct {
+			/* virtual address */
+			struct dma_pte	*pgd;
+			/* max guest address width */
+			int		gaw;
+			/*
+			 * adjusted guest address width:
+			 *   0: level 2 30-bit
+			 *   1: level 3 39-bit
+			 *   2: level 4 48-bit
+			 *   3: level 5 57-bit
+			 */
+			int		agaw;
+			/* maximum mapped address */
+			u64		max_addr;
+		};
+
+		/* Nested user domain */
+		struct {
+			/* parent page table which the user domain is nested on */
+			struct dmar_domain	*s2_domain;
+			/* user page table pointer (in GPA) */
+			unsigned long		s1_pgtbl;
+			/* page table attributes */
+			struct iommu_hwpt_vtd_s1 s1_cfg;
+		};
+	};
struct iommu_domain domain; /* generic domain data structure for iommu core */
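As a side note, a purely illustrative sketch (not part of the patch) of the invariant the union relies on: the two arms are selected by the generic domain type, so code must check it before dereferencing either arm. The helper name below is hypothetical:

	static inline bool domain_is_nested(struct dmar_domain *domain)
	{
		/* the nested arm is only valid for IOMMU_DOMAIN_NESTED */
		return domain->domain.type == IOMMU_DOMAIN_NESTED;
	}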
From: Lu Baolu baolu.lu@linux.intel.com
This adds a helper that accepts user parameters and allocates a nested domain.
Reviewed-by: Kevin Tian kevin.tian@intel.com
Signed-off-by: Jacob Pan jacob.jun.pan@linux.intel.com
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/Makefile |  2 +-
 drivers/iommu/intel/iommu.h  |  2 ++
 drivers/iommu/intel/nested.c | 54 ++++++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/intel/nested.c
diff --git a/drivers/iommu/intel/Makefile b/drivers/iommu/intel/Makefile
index 7af3b8a4f2a0..5dabf081a779 100644
--- a/drivers/iommu/intel/Makefile
+++ b/drivers/iommu/intel/Makefile
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMAR_TABLE) += dmar.o
-obj-$(CONFIG_INTEL_IOMMU) += iommu.o pasid.o
+obj-$(CONFIG_INTEL_IOMMU) += iommu.o pasid.o nested.o
 obj-$(CONFIG_DMAR_TABLE) += trace.o cap_audit.o
 obj-$(CONFIG_DMAR_PERF) += perf.o
 obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += debugfs.o
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 4a7f1cc16afa..a82e232abae0 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -865,6 +865,8 @@ void *alloc_pgtable_page(int node, gfp_t gfp);
 void free_pgtable_page(void *vaddr);
 void iommu_flush_write_buffer(struct intel_iommu *iommu);
 struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn);
+struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *s2_domain,
+					       const struct iommu_user_data *user_data);
 
 #ifdef CONFIG_INTEL_IOMMU_SVM
 void intel_svm_check(struct intel_iommu *iommu);
diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
new file mode 100644
index 000000000000..012d5ff60265
--- /dev/null
+++ b/drivers/iommu/intel/nested.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * nested.c - nested mode translation support
+ *
+ * Copyright (C) 2023 Intel Corporation
+ *
+ * Author: Lu Baolu baolu.lu@linux.intel.com
+ *         Jacob Pan jacob.jun.pan@linux.intel.com
+ */
+
+#define pr_fmt(fmt)	"DMAR: " fmt
+
+#include <linux/iommu.h>
+
+#include "iommu.h"
+
+static void intel_nested_domain_free(struct iommu_domain *domain)
+{
+	kfree(to_dmar_domain(domain));
+}
+
+static const struct iommu_domain_ops intel_nested_domain_ops = {
+	.free = intel_nested_domain_free,
+};
+
+struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *s2_domain,
+					       const struct iommu_user_data *user_data)
+{
+	const size_t min_len = offsetofend(struct iommu_hwpt_vtd_s1, __reserved);
+	struct iommu_hwpt_vtd_s1 vtd;
+	struct dmar_domain *domain;
+	int ret;
+
+	ret = iommu_copy_user_data(&vtd, user_data, sizeof(vtd), min_len);
+	if (ret)
+		return ERR_PTR(ret);
+
+	domain = kzalloc(sizeof(*domain), GFP_KERNEL_ACCOUNT);
+	if (!domain)
+		return NULL;
+
+	domain->use_first_level = true;
+	domain->s2_domain = to_dmar_domain(s2_domain);
+	domain->s1_pgtbl = vtd.pgtbl_addr;
+	domain->s1_cfg = vtd;
+	domain->domain.ops = &intel_nested_domain_ops;
+	domain->domain.type = IOMMU_DOMAIN_NESTED;
+	INIT_LIST_HEAD(&domain->devices);
+	INIT_LIST_HEAD(&domain->dev_pasids);
+	spin_lock_init(&domain->lock);
+	xa_init(&domain->iommu_array);
+
+	return &domain->domain;
+}
From: Lu Baolu baolu.lu@linux.intel.com
The configurations are passed in from the user when the user domain is allocated. This helper interprets these configurations according to the data structure defined in uapi/linux/iommufd.h. -EINVAL is returned if any of the configurations is not compatible with the hardware capabilities; the caller can retry with another compatible user domain. The encoding of the fields of each pasid entry is defined in Section 9.6 of the VT-d spec.
Signed-off-by: Jacob Pan jacob.jun.pan@linux.intel.com
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/pasid.c | 125 ++++++++++++++++++++++++++++++++++++
 drivers/iommu/intel/pasid.h |   2 +
 2 files changed, 127 insertions(+)
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 8f92b92f3d2a..8a290ab07697 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -335,6 +335,15 @@ static inline void pasid_set_fault_enable(struct pasid_entry *pe)
 	pasid_set_bits(&pe->val[0], 1 << 1, 0);
 }
 
+/*
+ * Setup the SRE(Supervisor Request Enable) field (Bit 128) of a
+ * scalable mode PASID entry.
+ */
+static inline void pasid_set_sre(struct pasid_entry *pe)
+{
+	pasid_set_bits(&pe->val[2], 1 << 0, 1);
+}
+
 /*
  * Setup the WPE(Write Protect Enable) field (Bit 132) of a
  * scalable mode PASID entry.
@@ -402,6 +411,15 @@ pasid_set_flpm(struct pasid_entry *pe, u64 value)
 	pasid_set_bits(&pe->val[2], GENMASK_ULL(3, 2), value << 2);
 }
 
+/*
+ * Setup the Extended Access Flag Enable (EAFE) field (Bit 135)
+ * of a scalable mode PASID entry.
+ */
+static inline void pasid_set_eafe(struct pasid_entry *pe)
+{
+	pasid_set_bits(&pe->val[2], 1 << 7, 1 << 7);
+}
+
 static void
 pasid_cache_invalidation_with_pasid(struct intel_iommu *iommu,
 				    u16 did, u32 pasid)
@@ -713,3 +731,110 @@ void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
 	if (!cap_caching_mode(iommu->cap))
 		devtlb_invalidation_with_pasid(iommu, dev, pasid);
 }
+
+/**
+ * intel_pasid_setup_nested() - Set up PASID entry for nested translation.
+ * @iommu:      IOMMU which the device belong to
+ * @dev:        Device to be set up for translation
+ * @pasid:      PASID to be programmed in the device PASID table
+ * @domain:     User first-level domain nested on a s2 domain
+ *
+ * This is used for nested translation based vIOMMU. e.g. guest IOVA and
+ * guest shared virtual address. In this case, the first-level page tables
+ * are used for GVA/GIOVA-GPA translation in the guest, the second-level
+ * page tables are used for GPA-HPA translation.
+ */
+int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
+			     u32 pasid, struct dmar_domain *domain)
+{
+	struct iommu_hwpt_vtd_s1 *s1_cfg = &domain->s1_cfg;
+	pgd_t *s1_gpgd = (pgd_t *)(uintptr_t)domain->s1_pgtbl;
+	struct dmar_domain *s2_domain = domain->s2_domain;
+	u16 did = domain_id_iommu(domain, iommu);
+	struct dma_pte *pgd = s2_domain->pgd;
+	struct pasid_entry *pte;
+
+	if (!ecap_nest(iommu->ecap)) {
+		pr_err_ratelimited("%s: No nested translation support\n",
+				   iommu->name);
+		return -ENODEV;
+	}
+
+	/*
+	 * Address width should match in two dimensions: CPU vs. IOMMU,
+	 * guest vs. host.
+	 */
+	switch (s1_cfg->addr_width) {
+	case ADDR_WIDTH_4LEVEL:
+		break;
+#ifdef CONFIG_X86
+	case ADDR_WIDTH_5LEVEL:
+		if (!cpu_feature_enabled(X86_FEATURE_LA57) ||
+		    !cap_fl5lp_support(iommu->cap)) {
+			dev_err_ratelimited(dev,
+					    "5-level paging not supported\n");
+			return -EINVAL;
+		}
+		break;
+#endif
+	default:
+		dev_err_ratelimited(dev, "Invalid guest address width %d\n",
+				    s1_cfg->addr_width);
+		return -EINVAL;
+	}
+
+	if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) {
+		pr_err_ratelimited("No supervisor request support on %s\n",
+				   iommu->name);
+		return -EINVAL;
+	}
+
+	if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) {
+		pr_err_ratelimited("No extended access flag support on %s\n",
+				   iommu->name);
+		return -EINVAL;
+	}
+
+	spin_lock(&iommu->lock);
+	pte = intel_pasid_get_entry(dev, pasid);
+	if (!pte) {
+		spin_unlock(&iommu->lock);
+		return -ENODEV;
+	}
+	if (pasid_pte_is_present(pte)) {
+		spin_unlock(&iommu->lock);
+		return -EBUSY;
+	}
+
+	pasid_clear_entry(pte);
+
+	if (s1_cfg->addr_width == ADDR_WIDTH_5LEVEL)
+		pasid_set_flpm(pte, 1);
+
+	pasid_set_flptr(pte, (uintptr_t)s1_gpgd);
+
+	if (s1_cfg->flags & IOMMU_VTD_S1_SRE) {
+		pasid_set_sre(pte);
+		if (s1_cfg->flags & IOMMU_VTD_S1_WPE)
+			pasid_set_wpe(pte);
+	}
+
+	if (s1_cfg->flags & IOMMU_VTD_S1_EAFE)
+		pasid_set_eafe(pte);
+
+	if (s2_domain->force_snooping)
+		pasid_set_pgsnp(pte);
+
+	pasid_set_slptr(pte, virt_to_phys(pgd));
+	pasid_set_fault_enable(pte);
+	pasid_set_domain_id(pte, did);
+	pasid_set_address_width(pte, s2_domain->agaw);
+	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+	pasid_set_translation_type(pte, PASID_ENTRY_PGTT_NESTED);
+	pasid_set_present(pte);
+	spin_unlock(&iommu->lock);
+
+	pasid_flush_caches(iommu, pte, pasid, did);
+
+	return 0;
+}
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index 4e9e68c3c388..7906d73f4ded 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -109,6 +109,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
 int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
 				   struct dmar_domain *domain,
 				   struct device *dev, u32 pasid);
+int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
+			     u32 pasid, struct dmar_domain *domain);
 void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
 				 u32 pasid, bool fault_ignore);
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
+/**
- intel_pasid_setup_nested() - Set up PASID entry for nested translation.
- @iommu: IOMMU which the device belong to
- @dev: Device to be set up for translation
- @pasid: PASID to be programmed in the device PASID table
- @domain: User first-level domain nested on a s2 domain
let's use stage-1/stage-2 consistently
- This is used for nested translation based vIOMMU. e.g. guest IOVA and
- guest shared virtual address. In this case, the first-level page tables
- are used for GVA/GIOVA-GPA translation in the guest, the second-level
- page tables are used for GPA-HPA translation.
No need to mention guest or vIOMMU. Just stick to the fact of nested configuration.
- */
+int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
u32 pasid, struct dmar_domain *domain)
+{
- struct iommu_hwpt_vtd_s1 *s1_cfg = &domain->s1_cfg;
- pgd_t *s1_gpgd = (pgd_t *)(uintptr_t)domain->s1_pgtbl;
- struct dmar_domain *s2_domain = domain->s2_domain;
- u16 did = domain_id_iommu(domain, iommu);
- struct dma_pte *pgd = s2_domain->pgd;
- struct pasid_entry *pte;
- if (!ecap_nest(iommu->ecap)) {
pr_err_ratelimited("%s: No nested translation support\n",
iommu->name);
return -ENODEV;
- }
this check has been done when creating the nest_parent s2 and iommufd shouldn't request a nested setup when the specified s2 is not nest_parent.
If you still want to keep this check then a WARN_ON.
- /*
* Address width should match in two dimensions: CPU vs. IOMMU,
* guest vs. host.
*/
No 'guest'. Just that the user requested addr width must be supported by the hardware.
- switch (s1_cfg->addr_width) {
- case ADDR_WIDTH_4LEVEL:
break;
+#ifdef CONFIG_X86
- case ADDR_WIDTH_5LEVEL:
if (!cpu_feature_enabled(X86_FEATURE_LA57) ||
!cap_fl5lp_support(iommu->cap)) {
dev_err_ratelimited(dev,
"5-level paging not supported\n");
return -EINVAL;
}
break;
I wonder whether this check is too strict. a nested configuration doesn't mandate vSVM. e.g. for guest IOVA the s1 page table is not walked by the CPU. Adding a CPU capability check here doesn't make much sense.
Of course doing so may not cause a problem in reality, as very likely all Intel platforms have the same 5-level support in both CPU/IOMMU. But code-wise it's still better to do it right.
Ideally the guest will be notified with 5level support only when the CPU supports it then it's the guest's business to choose the right format to match the CPU capability. Misconfiguration will be caught by the CPU virtualization path?
+#endif
- default:
dev_err_ratelimited(dev, "Invalid guest address width %d\n",
s1_cfg->addr_width);
ditto. use 'stage-1 address width' instead of 'guest'.
return -EINVAL;
- }
- if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) {
pr_err_ratelimited("No supervisor request support on %s\n",
iommu->name);
return -EINVAL;
- }
- if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) {
pr_err_ratelimited("No extended access flag support on %s\n",
iommu->name);
return -EINVAL;
- }
- spin_lock(&iommu->lock);
- pte = intel_pasid_get_entry(dev, pasid);
- if (!pte) {
spin_unlock(&iommu->lock);
return -ENODEV;
- }
- if (pasid_pte_is_present(pte)) {
spin_unlock(&iommu->lock);
return -EBUSY;
- }
- pasid_clear_entry(pte);
- if (s1_cfg->addr_width == ADDR_WIDTH_5LEVEL)
pasid_set_flpm(pte, 1);
- pasid_set_flptr(pte, (uintptr_t)s1_gpgd);
- if (s1_cfg->flags & IOMMU_VTD_S1_SRE) {
pasid_set_sre(pte);
if (s1_cfg->flags & IOMMU_VTD_S1_WPE)
pasid_set_wpe(pte);
- }
- if (s1_cfg->flags & IOMMU_VTD_S1_EAFE)
pasid_set_eafe(pte);
- if (s2_domain->force_snooping)
pasid_set_pgsnp(pte);
- pasid_set_slptr(pte, virt_to_phys(pgd));
- pasid_set_fault_enable(pte);
- pasid_set_domain_id(pte, did);
- pasid_set_address_width(pte, s2_domain->agaw);
- pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
- pasid_set_translation_type(pte, PASID_ENTRY_PGTT_NESTED);
- pasid_set_present(pte);
- spin_unlock(&iommu->lock);
All changes within iommu->lock are specific to the device specific PASID entry. Probably this is one potential cleanup TODO to use a per-device lock instead.
- pasid_flush_caches(iommu, pte, pasid, did);
- return 0;
+}
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index 4e9e68c3c388..7906d73f4ded 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -109,6 +109,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
 int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
 				   struct dmar_domain *domain,
 				   struct device *dev, u32 pasid);
+int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
+			     u32 pasid, struct dmar_domain *domain);
 void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
 				 u32 pasid, bool fault_ignore);
--
2.34.1
On 2023/9/27 14:37, Tian, Kevin wrote:
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
+/**
- intel_pasid_setup_nested() - Set up PASID entry for nested translation.
- @iommu: IOMMU which the device belong to
- @dev: Device to be set up for translation
- @pasid: PASID to be programmed in the device PASID table
- @domain: User first-level domain nested on a s2 domain
let's use stage-1/stage-2 consistently
ok
- This is used for nested translation based vIOMMU. e.g. guest IOVA and
- guest shared virtual address. In this case, the first-level page tables
- are used for GVA/GIOVA-GPA translation in the guest, the second-level
- page tables are used for GPA-HPA translation.
No need to mention guest or vIOMMU. Just stick to the fact of nested configuration.
how about below?
* This is used for nested translation. The input domain should be
* nested type and nested on a parent with 'is_nested_parent' flag
* set.
- */
+int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
u32 pasid, struct dmar_domain *domain)
+{
- struct iommu_hwpt_vtd_s1 *s1_cfg = &domain->s1_cfg;
- pgd_t *s1_gpgd = (pgd_t *)(uintptr_t)domain->s1_pgtbl;
- struct dmar_domain *s2_domain = domain->s2_domain;
- u16 did = domain_id_iommu(domain, iommu);
- struct dma_pte *pgd = s2_domain->pgd;
- struct pasid_entry *pte;
- if (!ecap_nest(iommu->ecap)) {
pr_err_ratelimited("%s: No nested translation support\n",
iommu->name);
return -ENODEV;
- }
this check has been done when creating the nest_parent s2 and iommufd shouldn't request a nested setup when the specified s2 is not nest_parent.
yes, it has been checked when allocating the nest_parent s2. The current iommufd nesting series does not check the nested setup specifically; all the attach logic is common across the domain types. So it seems there is no need for iommufd to check it.
If you still want to keep this check then a WARN_ON.
may just remove it.
- /*
* Address width should match in two dimensions: CPU vs. IOMMU,
* guest vs. host.
*/
No 'guest'. Just that the user requested addr width must be supported by the hardware.
sure.
- switch (s1_cfg->addr_width) {
- case ADDR_WIDTH_4LEVEL:
break;
+#ifdef CONFIG_X86
- case ADDR_WIDTH_5LEVEL:
if (!cpu_feature_enabled(X86_FEATURE_LA57) ||
!cap_fl5lp_support(iommu->cap)) {
dev_err_ratelimited(dev,
"5-level paging not supported\n");
return -EINVAL;
}
break;
I wonder whether this check is too strict. a nested configuration doesn't mandate vSVM. e.g. for guest IOVA the s1 page table is not walked by the CPU. Adding a CPU capability check here doesn't make much sense.
Of course doing so may not cause a problem reality as very likely all Intel platforms have same 5level support in both CPU/IOMMU. But code-wise it's still better to do it right.
Ideally the guest will be notified with 5level support only when the CPU supports it then it's the guest's business to choose the right format to match the CPU capability. Misconfiguration will be caught by the CPU virtualization path?
yes, it may make more sense that the CPU and IOMMU sides each take their own check to avoid misuse of 5lvl. For vSVM, it's the guest's business to ensure its vCPU and vIOMMU both have 5lvl. Will relax the check here.
+#endif
- default:
dev_err_ratelimited(dev, "Invalid guest address width %d\n",
s1_cfg->addr_width);
ditto. use 'stage-1 address width' instead of 'guest'.
sure.
return -EINVAL;
- }
- if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) {
pr_err_ratelimited("No supervisor request support on %s\n",
iommu->name);
return -EINVAL;
- }
- if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) {
pr_err_ratelimited("No extended access flag support on %s\n",
iommu->name);
return -EINVAL;
- }
- spin_lock(&iommu->lock);
- pte = intel_pasid_get_entry(dev, pasid);
- if (!pte) {
spin_unlock(&iommu->lock);
return -ENODEV;
- }
- if (pasid_pte_is_present(pte)) {
spin_unlock(&iommu->lock);
return -EBUSY;
- }
- pasid_clear_entry(pte);
- if (s1_cfg->addr_width == ADDR_WIDTH_5LEVEL)
pasid_set_flpm(pte, 1);
- pasid_set_flptr(pte, (uintptr_t)s1_gpgd);
- if (s1_cfg->flags & IOMMU_VTD_S1_SRE) {
pasid_set_sre(pte);
if (s1_cfg->flags & IOMMU_VTD_S1_WPE)
pasid_set_wpe(pte);
- }
- if (s1_cfg->flags & IOMMU_VTD_S1_EAFE)
pasid_set_eafe(pte);
- if (s2_domain->force_snooping)
pasid_set_pgsnp(pte);
- pasid_set_slptr(pte, virt_to_phys(pgd));
- pasid_set_fault_enable(pte);
- pasid_set_domain_id(pte, did);
- pasid_set_address_width(pte, s2_domain->agaw);
- pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
- pasid_set_translation_type(pte, PASID_ENTRY_PGTT_NESTED);
- pasid_set_present(pte);
- spin_unlock(&iommu->lock);
All changes within iommu->lock are specific to the device specific PASID entry. Probably this is one potential cleanup TODO to use a per-device lock instead.
yeah, a separate cleanup, isn't it, @Baolu?
- pasid_flush_caches(iommu, pte, pasid, did);
- return 0;
+} diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h index 4e9e68c3c388..7906d73f4ded 100644 --- a/drivers/iommu/intel/pasid.h +++ b/drivers/iommu/intel/pasid.h @@ -109,6 +109,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu, int intel_pasid_setup_pass_through(struct intel_iommu *iommu, struct dmar_domain *domain, struct device *dev, u32 pasid); +int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, u32 pasid, bool fault_ignore);u32 pasid, struct dmar_domain *domain);
-- 2.34.1
On 2023/10/13 20:40, Yi Liu wrote:
+		return -EINVAL;
+	}
+
+	if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) {
+		pr_err_ratelimited("No supervisor request support on %s\n",
+				   iommu->name);
+		return -EINVAL;
+	}
+
+	if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) {
+		pr_err_ratelimited("No extended access flag support on %s\n",
+				   iommu->name);
+		return -EINVAL;
+	}
+
+	spin_lock(&iommu->lock);
+	pte = intel_pasid_get_entry(dev, pasid);
+	if (!pte) {
+		spin_unlock(&iommu->lock);
+		return -ENODEV;
+	}
+	if (pasid_pte_is_present(pte)) {
+		spin_unlock(&iommu->lock);
+		return -EBUSY;
+	}
+
+	pasid_clear_entry(pte);
+
+	if (s1_cfg->addr_width == ADDR_WIDTH_5LEVEL)
+		pasid_set_flpm(pte, 1);
+
+	pasid_set_flptr(pte, (uintptr_t)s1_gpgd);
+
+	if (s1_cfg->flags & IOMMU_VTD_S1_SRE) {
+		pasid_set_sre(pte);
+		if (s1_cfg->flags & IOMMU_VTD_S1_WPE)
+			pasid_set_wpe(pte);
+	}
+
+	if (s1_cfg->flags & IOMMU_VTD_S1_EAFE)
+		pasid_set_eafe(pte);
+
+	if (s2_domain->force_snooping)
+		pasid_set_pgsnp(pte);
+
+	pasid_set_slptr(pte, virt_to_phys(pgd));
+	pasid_set_fault_enable(pte);
+	pasid_set_domain_id(pte, did);
+	pasid_set_address_width(pte, s2_domain->agaw);
+	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+	pasid_set_translation_type(pte, PASID_ENTRY_PGTT_NESTED);
+	pasid_set_present(pte);
+	spin_unlock(&iommu->lock);
All changes within iommu->lock are specific to the device specific PASID entry. Probably this is one potential cleanup TODO to use a per-device lock instead.
yeah, a separate cleanup, isn't it, @Baolu?
Sure. I'd like to take some time to consider this further. Please keep it as-is for the time being.
Best regards, baolu
From: Liu, Yi L yi.l.liu@intel.com Sent: Friday, October 13, 2023 8:40 PM
- This is used for nested translation based vIOMMU. e.g. guest IOVA and
- guest shared virtual address. In this case, the first-level page tables
- are used for GVA/GIOVA-GPA translation in the guest, the second-level
- page tables are used for GPA-HPA translation.
No need to mention guest or vIOMMU. Just stick to the fact of nested configuration.
how about below?
- This is used for nested translation. The input domain should be
- nested type and nested on a parent with 'is_nested_parent' flag
- set.
* This is used for nested translation. The input domain should be
* the nested type with the parent having 'is_nested_parent' flag set.
This makes the helpers visible to nested.c.
Suggested-by: Lu Baolu baolu.lu@linux.intel.com
Reviewed-by: Kevin Tian kevin.tian@intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/iommu.c | 15 ++++++---------
 drivers/iommu/intel/iommu.h |  7 +++++++
 2 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 4ffc939a71f0..7b75f407c623 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -282,7 +282,6 @@ static LIST_HEAD(dmar_satc_units);
 #define for_each_rmrr_units(rmrr) \
 	list_for_each_entry(rmrr, &dmar_rmrr_units, list)
 
-static void device_block_translation(struct device *dev);
 static void intel_iommu_domain_free(struct iommu_domain *domain);
 
 int dmar_disabled = !IS_ENABLED(CONFIG_INTEL_IOMMU_DEFAULT_ON);
@@ -560,7 +559,7 @@ static unsigned long domain_super_pgsize_bitmap(struct dmar_domain *domain)
 }
 
 /* Some capabilities may be different across iommus */
-static void domain_update_iommu_cap(struct dmar_domain *domain)
+void domain_update_iommu_cap(struct dmar_domain *domain)
 {
 	domain_update_iommu_coherency(domain);
 	domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
@@ -1778,8 +1777,7 @@ static struct dmar_domain *alloc_domain(unsigned int type)
 	return domain;
 }
 
-static int domain_attach_iommu(struct dmar_domain *domain,
-			       struct intel_iommu *iommu)
+int domain_attach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu)
 {
 	struct iommu_domain_info *info, *curr;
 	unsigned long ndomains;
@@ -1828,8 +1826,7 @@ static int domain_attach_iommu(struct dmar_domain *domain,
 	return ret;
 }
 
-static void domain_detach_iommu(struct dmar_domain *domain,
-				struct intel_iommu *iommu)
+void domain_detach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu)
 {
 	struct iommu_domain_info *info;
 
@@ -3974,7 +3971,7 @@ static void dmar_remove_one_dev_info(struct device *dev)
  * all DMA requests without PASID from the device are blocked. If the page
  * table has been set, clean up the data structures.
  */
-static void device_block_translation(struct device *dev)
+void device_block_translation(struct device *dev)
 {
 	struct device_domain_info *info = dev_iommu_priv_get(dev);
 	struct intel_iommu *iommu = info->iommu;
@@ -4102,8 +4099,8 @@ static void intel_iommu_domain_free(struct iommu_domain *domain)
 	domain_exit(to_dmar_domain(domain));
 }
 
-static int prepare_domain_attach_device(struct iommu_domain *domain,
-					struct device *dev)
+int prepare_domain_attach_device(struct iommu_domain *domain,
+				 struct device *dev)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	struct intel_iommu *iommu;
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index a82e232abae0..411cde769787 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -859,6 +859,13 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
  */
 #define QI_OPT_WAIT_DRAIN		BIT(0)
 
+int domain_attach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu);
+void domain_detach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu);
+void device_block_translation(struct device *dev);
+int prepare_domain_attach_device(struct iommu_domain *domain,
+				 struct device *dev);
+void domain_update_iommu_cap(struct dmar_domain *domain);
+
 int dmar_ir_support(void);
void *alloc_pgtable_page(int node, gfp_t gfp);
This adds the helper for attaching the nested domain to a device, hence enabling nested domain usage on Intel VT-d.
Signed-off-by: Jacob Pan jacob.jun.pan@linux.intel.com
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/nested.c | 50 ++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
index 012d5ff60265..f9c6ade72416 100644
--- a/drivers/iommu/intel/nested.c
+++ b/drivers/iommu/intel/nested.c
@@ -11,8 +11,57 @@
 #define pr_fmt(fmt)	"DMAR: " fmt
 
 #include <linux/iommu.h>
+#include <linux/pci.h>
+#include <linux/pci-ats.h>
 
 #include "iommu.h"
+#include "pasid.h"
+
+static int intel_nested_attach_dev(struct iommu_domain *domain,
+				   struct device *dev)
+{
+	struct device_domain_info *info = dev_iommu_priv_get(dev);
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	struct intel_iommu *iommu = info->iommu;
+	unsigned long flags;
+	int ret = 0;
+
+	if (info->domain)
+		device_block_translation(dev);
+
+	if (iommu->agaw < dmar_domain->s2_domain->agaw) {
+		dev_err_ratelimited(dev, "Adjusted guest address width not compatible\n");
+		return -ENODEV;
+	}
+
+	/* Is s2_domain compatible with this IOMMU? */
+	ret = prepare_domain_attach_device(&dmar_domain->s2_domain->domain, dev);
+	if (ret) {
+		dev_err_ratelimited(dev, "s2 domain is not compatible\n");
+		return ret;
+	}
+
+	ret = domain_attach_iommu(dmar_domain, iommu);
+	if (ret) {
+		dev_err_ratelimited(dev, "Failed to attach domain to iommu\n");
+		return ret;
+	}
+
+	ret = intel_pasid_setup_nested(iommu, dev,
+				       IOMMU_NO_PASID, dmar_domain);
+	if (ret) {
+		domain_detach_iommu(dmar_domain, iommu);
+		dev_err_ratelimited(dev, "Failed to setup pasid entry\n");
+		return ret;
+	}
+
+	info->domain = dmar_domain;
+	spin_lock_irqsave(&dmar_domain->lock, flags);
+	list_add(&info->link, &dmar_domain->devices);
+	spin_unlock_irqrestore(&dmar_domain->lock, flags);
+
+	return 0;
+}
 
 static void intel_nested_domain_free(struct iommu_domain *domain)
 {
@@ -20,6 +69,7 @@ static void intel_nested_domain_free(struct iommu_domain *domain)
 }
 
 static const struct iommu_domain_ops intel_nested_domain_ops = {
+	.attach_dev		= intel_nested_attach_dev,
 	.free			= intel_nested_domain_free,
 };
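For context, a hedged userspace sketch of how this callback gets exercised, assuming the contemporary VFIO cdev interface (the ioctl and the 'device_fd'/'nested_hwpt_id' variables are assumptions outside this series):

	/*
	 * Sketch: attaching a device to the nested hwpt allocated via
	 * IOMMU_HWPT_ALLOC ends up in intel_nested_attach_dev() through
	 * the domain ops' attach_dev.
	 */
	struct vfio_device_attach_iommufd_pt attach = {
		.argsz = sizeof(attach),
		.pt_id = nested_hwpt_id,
	};

	if (ioctl(device_fd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach))
		err(1, "attach nested hwpt");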
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
- /* Is s2_domain compatible with this IOMMU? */
Add more words on why prepare_attach uses s2_domain when the actual attach is for s1
- ret = prepare_domain_attach_device(&dmar_domain->s2_domain->domain, dev);
- if (ret) {
dev_err_ratelimited(dev, "s2 domain is not compatible\n");
return ret;
- }
- ret = domain_attach_iommu(dmar_domain, iommu);
- if (ret) {
dev_err_ratelimited(dev, "Failed to attach domain to
iommu\n");
return ret;
- }
This adds the data structure for flushing iotlb for the nested domain allocated with IOMMU_HWPT_TYPE_VTD_S1 type.
This only supports invalidating the IOTLB, but not the device-TLB, as device-TLB invalidation will be covered automatically in the IOTLB invalidation if the underlying IOMMU driver has ATS enabled for the affected device.
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 include/uapi/linux/iommufd.h | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 18a502e206c3..3050efbceb57 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -510,6 +510,40 @@ struct iommu_hw_info {
 };
 #define IOMMU_GET_HW_INFO _IO(IOMMUFD_TYPE, IOMMUFD_CMD_GET_HW_INFO)
 
+/**
+ * enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
+ *                                           stage-1 cache invalidation
+ * @IOMMU_VTD_QI_FLAGS_LEAF: The LEAF flag indicates whether only the
+ *                           leaf PTE caching needs to be invalidated
+ *                           and other paging structure caches can be
+ *                           preserved.
+ */
+enum iommu_hwpt_vtd_s1_invalidate_flags {
+	IOMMU_VTD_QI_FLAGS_LEAF = 1 << 0,
+};
+
+/**
+ * struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
+ *                                       (IOMMU_HWPT_TYPE_VTD_S1)
+ * @addr: The start address of the addresses to be invalidated.
+ * @npages: Number of contiguous 4K pages to be invalidated.
+ * @flags: Combination of enum iommu_hwpt_vtd_s1_invalidate_flags
+ * @__reserved: Must be 0
+ *
+ * The Intel VT-d specific invalidation data for user-managed stage-1 cache
+ * invalidation under nested translation. Userspace uses this structure to
+ * tell host about the impacted caches after modifying the stage-1 page table.
+ *
+ * Invalidating all the caches related to the page table by setting @addr
+ * to be 0 and @npages to be __aligned_u64(-1).
+ */
+struct iommu_hwpt_vtd_s1_invalidate {
+	__aligned_u64 addr;
+	__aligned_u64 npages;
+	__u32 flags;
+	__u32 __reserved;
+};
+
 /**
  * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE)
  * @size: sizeof(struct iommu_hwpt_invalidate)
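For illustration, a hedged sketch of one invalidation entry as userspace would fill it after updating stage-1 leaf PTEs; the surrounding IOMMU_HWPT_INVALIDATE array plumbing follows the companion iommufd series, and 'iova' is a hypothetical variable:

	/*
	 * Sketch: invalidate 16 contiguous 4K pages, leaf PTEs only.
	 * The entry is passed as one element of the request array of
	 * the IOMMU_HWPT_INVALIDATE ioctl.
	 */
	struct iommu_hwpt_vtd_s1_invalidate inv = {
		.addr = iova & ~4095ULL,	/* 4K-aligned start */
		.npages = 16,
		.flags = IOMMU_VTD_QI_FLAGS_LEAF,
	};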
On 2023/9/21 15:54, Yi Liu wrote:
This adds the data structure for flushing iotlb for the nested domain allocated with IOMMU_HWPT_TYPE_VTD_S1 type.
This only supports invalidating the IOTLB, but not the device-TLB, as device-TLB invalidation will be covered automatically in the IOTLB invalidation if the underlying IOMMU driver has ATS enabled for the affected device.
Signed-off-by: Yi Liu yi.l.liu@intel.com
include/uapi/linux/iommufd.h | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 18a502e206c3..3050efbceb57 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -510,6 +510,40 @@ struct iommu_hw_info {
 };
 #define IOMMU_GET_HW_INFO _IO(IOMMUFD_TYPE, IOMMUFD_CMD_GET_HW_INFO)
+/**
- enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
stage-1 cache invalidation
- @IOMMU_VTD_QI_FLAGS_LEAF: The LEAF flag indicates whether only the
leaf PTE caching needs to be invalidated
and other paging structure caches can be
preserved.
- */
+enum iommu_hwpt_vtd_s1_invalidate_flags {
- IOMMU_VTD_QI_FLAGS_LEAF = 1 << 0,
+};
+/**
- struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
(IOMMU_HWPT_TYPE_VTD_S1)
- @addr: The start address of the addresses to be invalidated.
Is there an alignment requirement for @addr? If so, is 4K alignment sufficient? Perhaps we need to document it here so that user space can calculate the @addr correctly.
- @npages: Number of contiguous 4K pages to be invalidated.
- @flags: Combination of enum iommu_hwpt_vtd_s1_invalidate_flags
- @__reserved: Must be 0
- The Intel VT-d specific invalidation data for user-managed stage-1 cache
- invalidation under nested translation. Userspace uses this structure to
- tell host about the impacted caches after modifying the stage-1 page table.
- Invalidating all the caches related to the page table by setting @addr
- to be 0 and @npages to be __aligned_u64(-1).
- */
+struct iommu_hwpt_vtd_s1_invalidate {
- __aligned_u64 addr;
- __aligned_u64 npages;
- __u32 flags;
- __u32 __reserved;
+};
- /**
- struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE)
- @size: sizeof(struct iommu_hwpt_invalidate)
Best regards, baolu
On 2023/9/21 21:33, Baolu Lu wrote:
On 2023/9/21 15:54, Yi Liu wrote:
This adds the data structure for flushing iotlb for the nested domain allocated with IOMMU_HWPT_TYPE_VTD_S1 type.
This only supports invalidating the IOTLB, but not the device-TLB, as device-TLB invalidation will be covered automatically in the IOTLB invalidation if the underlying IOMMU driver has ATS enabled for the affected device.
Signed-off-by: Yi Liu yi.l.liu@intel.com
include/uapi/linux/iommufd.h | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 18a502e206c3..3050efbceb57 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -510,6 +510,40 @@ struct iommu_hw_info {
 };
 #define IOMMU_GET_HW_INFO _IO(IOMMUFD_TYPE, IOMMUFD_CMD_GET_HW_INFO)
+/**
- enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
- * stage-1 cache invalidation
- @IOMMU_VTD_QI_FLAGS_LEAF: The LEAF flag indicates whether only the
- * leaf PTE caching needs to be invalidated
- * and other paging structure caches can be
- * preserved.
- */
+enum iommu_hwpt_vtd_s1_invalidate_flags {
+	IOMMU_VTD_QI_FLAGS_LEAF = 1 << 0,
+};
+/**
- struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
- * (IOMMU_HWPT_TYPE_VTD_S1)
- @addr: The start address of the addresses to be invalidated.
Is there an alignment requirement for @addr? If so, is 4K alignment sufficient? Perhaps we need to document it here so that user space can calculate the @addr correctly.
yes, it should be aligned. let's document it in the kdoc.
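For example, a sketch of deriving an aligned (addr, npages) pair from an arbitrary modified byte range [start, end):

	/* Sketch: cover [start, end) with a 4K-aligned addr and npages. */
	__u64 addr = start & ~4095ULL;
	__u64 npages = (((end + 4095) & ~4095ULL) - addr) >> 12;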
- @npages: Number of contiguous 4K pages to be invalidated.
- @flags: Combination of enum iommu_hwpt_vtd_s1_invalidate_flags
- @__reserved: Must be 0
- The Intel VT-d specific invalidation data for user-managed stage-1 cache
- invalidation under nested translation. Userspace uses this structure to
- tell host about the impacted caches after modifying the stage-1 page
table.
- Invalidating all the caches related to the page table by setting @addr
- to be 0 and @npages to be __aligned_u64(-1).
- */
+struct iommu_hwpt_vtd_s1_invalidate {
+	__aligned_u64 addr;
+	__aligned_u64 npages;
+	__u32 flags;
+	__u32 __reserved;
+};
 
 /**
  * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE)
  * @size: sizeof(struct iommu_hwpt_invalidate)
Best regards, baolu
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
+/**
- enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
stage-1 cache invalidation
- @IOMMU_VTD_QI_FLAGS_LEAF: The LEAF flag indicates whether only the
leaf PTE caching needs to be invalidated
and other paging structure caches can be
preserved.
- */
+enum iommu_hwpt_vtd_s1_invalidate_flags {
- IOMMU_VTD_QI_FLAGS_LEAF = 1 << 0,
+};
QI is iommu driver internal term.
let's use IOMMU_VTD_INV_FLAGS_LEAF here.
+/**
- struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
(IOMMU_HWPT_TYPE_VTD_S1)
- @addr: The start address of the addresses to be invalidated.
- @npages: Number of contiguous 4K pages to be invalidated.
- @flags: Combination of enum iommu_hwpt_vtd_s1_invalidate_flags
- @__reserved: Must be 0
- The Intel VT-d specific invalidation data for user-managed stage-1 cache
- invalidation under nested translation. Userspace uses this structure to
s/under/in/
- tell host about the impacted caches after modifying the stage-1 page
table.
"to tell the impacted cache scope..."
- Invalidating all the caches related to the page table by setting @addr
- to be 0 and @npages to be __aligned_u64(-1).
This should also call out that device TLB is also invalidated by this request if ATS is enabled on the device.
This makes the helpers visible to nested.c.
Reviewed-by: Kevin Tian kevin.tian@intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/iommu.c | 10 +++++-----
 drivers/iommu/intel/iommu.h |  6 ++++++
 2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 7b75f407c623..c93c91ed4ee2 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1511,10 +1511,10 @@ static void domain_flush_pasid_iotlb(struct intel_iommu *iommu,
 	spin_unlock_irqrestore(&domain->lock, flags);
 }
 
-static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
-				  struct dmar_domain *domain,
-				  unsigned long pfn, unsigned int pages,
-				  int ih, int map)
+void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
+			   struct dmar_domain *domain,
+			   unsigned long pfn, unsigned int pages,
+			   int ih, int map)
 {
 	unsigned int aligned_pages = __roundup_pow_of_two(pages);
 	unsigned int mask = ilog2(aligned_pages);
@@ -1587,7 +1587,7 @@ static inline void __mapping_notify_one(struct intel_iommu *iommu,
 	iommu_flush_write_buffer(iommu);
 }
 
-static void intel_flush_iotlb_all(struct iommu_domain *domain)
+void intel_flush_iotlb_all(struct iommu_domain *domain)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	struct iommu_domain_info *info;
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 411cde769787..ac23b9d22d20 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -865,6 +865,12 @@ void device_block_translation(struct device *dev);
 int prepare_domain_attach_device(struct iommu_domain *domain,
 				 struct device *dev);
 void domain_update_iommu_cap(struct dmar_domain *domain);
+void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
+			   struct dmar_domain *domain,
+			   unsigned long pfn, unsigned int pages,
+			   int ih, int map);
+void intel_flush_iotlb_all(struct iommu_domain *domain);
+
int dmar_ir_support(void);
This implements the .cache_invalidate_user() callback to support IOTLB flushes for the nested domain.
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/nested.c | 58 ++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
index f9c6ade72416..4853fee216d9 100644
--- a/drivers/iommu/intel/nested.c
+++ b/drivers/iommu/intel/nested.c
@@ -68,9 +68,67 @@ static void intel_nested_domain_free(struct iommu_domain *domain)
 	kfree(to_dmar_domain(domain));
 }
 
+static void domain_flush_iotlb_psi(struct dmar_domain *domain,
+				   u64 addr, unsigned long npages)
+{
+	struct iommu_domain_info *info;
+	unsigned long i;
+
+	xa_for_each(&domain->iommu_array, i, info)
+		iommu_flush_iotlb_psi(info->iommu, domain,
+				      addr >> VTD_PAGE_SHIFT, npages, 1, 0);
+}
+
+static int intel_nested_cache_invalidate_user(struct iommu_domain *domain,
+					      struct iommu_user_data_array *array,
+					      u32 *cerror_idx)
+{
+	const size_t min_len =
+		offsetofend(struct iommu_hwpt_vtd_s1_invalidate, __reserved);
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	struct iommu_hwpt_vtd_s1_invalidate inv_info;
+	u32 index;
+	int ret;
+
+	/* REVISIT:
+	 * VT-d has defined ITE, ICE, IQE for invalidation failure per hardware,
+	 * but no error code yet, so just set the error code to be 0.
+	 */
+	*cerror_idx = 0;
+
+	if (array->entry_len < min_len)
+		return -EINVAL;
+
+	for (index = 0; index < array->entry_num; index++) {
+		ret = iommu_copy_user_data_from_array(&inv_info, array, index,
+						      sizeof(inv_info), min_len);
+		if (ret) {
+			pr_err_ratelimited("Failed to fetch invalidation request\n");
+			break;
+		}
+
+		if (inv_info.__reserved ||
+		    (inv_info.flags & ~IOMMU_VTD_QI_FLAGS_LEAF) ||
+		    !IS_ALIGNED(inv_info.addr, VTD_PAGE_SIZE)) {
+			ret = -EINVAL;
+			break;
+		}
+
+		if (inv_info.addr == 0 && inv_info.npages == -1)
+			intel_flush_iotlb_all(domain);
+		else
+			domain_flush_iotlb_psi(dmar_domain,
+					       inv_info.addr, inv_info.npages);
+	}
+
+	array->entry_num = index;
+
+	return ret;
+}
+
 static const struct iommu_domain_ops intel_nested_domain_ops = {
 	.attach_dev		= intel_nested_attach_dev,
 	.free			= intel_nested_domain_free,
+	.cache_invalidate_user	= intel_nested_cache_invalidate_user,
 };
struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *s2_domain,
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
- /* REVISIT:
* VT-d has defined ITE, ICE, IQE for invalidation failure per hardware,
* but no error code yet, so just set the error code to be 0.
*/
- *cerror_idx = 0;
Is it "hardware doesn't provide error code now though it's defined in spec" or "intel-iommu driver doesn't retrieve the error code though it's provided by the hardware"?
Is there guarantee that '0' isn't used for an existing error code or won't be used for any new error code later?
On 2023/9/27 14:53, Tian, Kevin wrote:
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:54 PM
- /* REVISIT:
* VT-d has defined ITE, ICE, IQE for invalidation failure per hardware,
* but no error code yet, so just set the error code to be 0.
*/
- *cerror_idx = 0;
Is it "hardware doesn't provide error code now though it's defined in spec" or "intel-iommu driver doesn't retrieve the error code though it's provided by the hardware"?
I didn't see the VT-d spec define an error code for cache invalidation. :(
Is there guarantee that '0' isn't used for an existing error code or won't be used for any new error code later?
may need to check it.
From: Lu Baolu baolu.lu@linux.intel.com
This adds the support for IOMMU_HWPT_TYPE_VTD_S1 type.
Reviewed-by: Kevin Tian kevin.tian@intel.com
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/iommu.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index c93c91ed4ee2..9b10e4b1d400 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4077,19 +4077,35 @@ intel_iommu_domain_alloc_user(struct device *dev, u32 flags,
 			      struct iommu_domain *parent,
 			      const struct iommu_user_data *user_data)
 {
+	bool request_nest_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
 	struct iommu_domain *domain;
 	struct intel_iommu *iommu;
 
+	if (hwpt_type != IOMMU_HWPT_TYPE_DEFAULT &&
+	    hwpt_type != IOMMU_HWPT_TYPE_VTD_S1)
+		return ERR_PTR(-EINVAL);
+
+	if ((hwpt_type == IOMMU_HWPT_TYPE_DEFAULT) == !!parent)
+		return ERR_PTR(-EINVAL);
+
+	if (parent && request_nest_parent)
+		return ERR_PTR(-EINVAL);
+
 	iommu = device_to_iommu(dev, NULL, NULL);
 	if (!iommu)
 		return ERR_PTR(-ENODEV);
 
-	if ((flags & IOMMU_HWPT_ALLOC_NEST_PARENT) && !ecap_nest(iommu->ecap))
+	if ((parent || request_nest_parent) && !ecap_nest(iommu->ecap))
 		return ERR_PTR(-EOPNOTSUPP);
 
-	domain = iommu_domain_alloc(dev->bus);
-	if (!domain)
-		domain = ERR_PTR(-ENOMEM);
+	if (parent) {
+		domain = intel_nested_domain_alloc(parent, user_data);
+	} else {
+		domain = iommu_domain_alloc(dev->bus);
+		if (!domain)
+			domain = ERR_PTR(-ENOMEM);
+	}
+
 	return domain;
 }
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:55 PM
- if (hwpt_type != IOMMU_HWPT_TYPE_DEFAULT &&
hwpt_type != IOMMU_HWPT_TYPE_VTD_S1)
return ERR_PTR(-EINVAL);
- if ((hwpt_type == IOMMU_HWPT_TYPE_DEFAULT) == !!parent)
return ERR_PTR(-EINVAL);
this is probably too strict. What if the intel-iommu driver supports an IOMMU_HWPT_TYPE_VTD_S2 later for some tweak w/o nesting?
let's make the parent match specific to VTD_S1 type.
- if (parent && request_nest_parent)
return ERR_PTR(-EINVAL);
this check should be moved to iommufd?
On 2023/9/27 14:56, Tian, Kevin wrote:
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:55 PM
- if (hwpt_type != IOMMU_HWPT_TYPE_DEFAULT &&
hwpt_type != IOMMU_HWPT_TYPE_VTD_S1)
return ERR_PTR(-EINVAL);
- if ((hwpt_type == IOMMU_HWPT_TYPE_DEFAULT) == !!parent)
return ERR_PTR(-EINVAL);
this is probably too strict. What if the intel-iommu driver supports an IOMMU_HWPT_TYPE_VTD_S2 later for some tweak w/o nesting?
in that case, this check needs to be updated
let's make the parent match specific to VTD_S1 type.
ok. how about below?
if ((data_type == IOMMU_HWPT_ALLOC_DATA_VTD_S1) && !parent)
- if (parent && request_nest_parent)
return ERR_PTR(-EINVAL);
this check should be moved to iommufd?
oops, maybe no need; iommufd_hwpt_alloc() has the below check.
	if (cmd->flags & IOMMU_HWPT_ALLOC_NEST_PARENT &&
	    cmd->data_type != IOMMU_HWPT_ALLOC_DATA_NONE)
		return -EINVAL;
From: Lu Baolu baolu.lu@linux.intel.com
When remapping hardware is configured by system software in scalable mode as Nested (PGTT=011b) and with PWSNP field Set in the PASID-table-entry, it may Set Accessed bit and Dirty bit (and Extended Access bit if enabled) in first-stage page-table entries even when second-stage mappings indicate that corresponding first-stage page-table is Read-Only.
As a result, the contents of pages designated by the VMM as Read-Only can be modified by the IOMMU via PML5E (PML4E for 4-level tables) access, as part of the address translation process, due to DMAs issued by the guest.
This disallows read-only mappings in a domain that is supposed to be used as a nested parent. Reference: Sapphire Rapids Specification Update [1], errata details, SPR17. Userspace should learn of this limitation by checking the IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17 flag reported in the IOMMU_GET_HW_INFO ioctl.
[1] https://www.intel.com/content/www/us/en/content-details/772415/content-details.html
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
---
 drivers/iommu/intel/iommu.c  | 11 +++++++++++
 drivers/iommu/intel/iommu.h  |  1 +
 include/uapi/linux/iommufd.h | 12 +++++++++++-
 3 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 9b10e4b1d400..dbcdf7b95b9f 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -2193,6 +2193,11 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 	if ((prot & (DMA_PTE_READ|DMA_PTE_WRITE)) == 0)
 		return -EINVAL;
 
+	if (!(prot & DMA_PTE_WRITE) && domain->is_nested_parent) {
+		pr_err_ratelimited("Read-only mapping is disallowed on the domain which serves as the parent in a nested configuration, due to HW errata (ERRATA_772415_SPR17)\n");
+		return -EINVAL;
+	}
+
 	attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
 	attr |= DMA_FL_PTE_PRESENT;
 	if (domain->use_first_level) {
@@ -4106,6 +4111,11 @@ intel_iommu_domain_alloc_user(struct device *dev, u32 flags,
 		domain = ERR_PTR(-ENOMEM);
 	}
 
+	if (!IS_ERR(domain)) {
+		struct dmar_domain *dmar_domain = container_of(domain,
+				struct dmar_domain, domain);
+		dmar_domain->is_nested_parent = request_nest_parent;
+	}
 	return domain;
 }
 
@@ -4831,6 +4841,7 @@ static void *intel_iommu_hw_info(struct device *dev, u32 *length, u32 *type)
 	if (!vtd)
 		return ERR_PTR(-ENOMEM);
 
+	vtd->flags = IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17;
 	vtd->cap_reg = iommu->cap;
 	vtd->ecap_reg = iommu->ecap;
 	*length = sizeof(*vtd);
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index ac23b9d22d20..8d0aac71c135 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -592,6 +592,7 @@ struct dmar_domain {
 					 * otherwise, goes through the second
 					 * level.
 					 */
+	u8 is_nested_parent:1;		/* has other domains nested on it */
 
 	spinlock_t lock;		/* Protect device tracking lists */
 	struct list_head devices;	/* all devices' list */
diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
index 3050efbceb57..99401d7d70b2 100644
--- a/include/uapi/linux/iommufd.h
+++ b/include/uapi/linux/iommufd.h
@@ -440,10 +440,20 @@ struct iommu_hwpt_alloc {
 };
 #define IOMMU_HWPT_ALLOC _IO(IOMMUFD_TYPE, IOMMUFD_CMD_HWPT_ALLOC)
 
+/**
+ * enum iommu_hw_info_vtd_flags - Flags for VT-d hw_info
+ * @IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17: If set, disallow nesting on domains
+ *                                         with read-only mapping.
+ *                                         https://www.intel.com/content/www/us/en/content-details/772415/content-details.html
+ */
+enum iommu_hw_info_vtd_flags {
+	IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17 = 1 << 0,
+};
+
 /**
  * struct iommu_hw_info_vtd - Intel VT-d hardware information
  *
- * @flags: Must be 0
+ * @flags: Combination of enum iommu_hw_info_vtd_flags
 * @__reserved: Must be 0
 *
 * @cap_reg: Value of Intel VT-d capability register defined in VT-d spec
From: Liu, Yi L yi.l.liu@intel.com Sent: Thursday, September 21, 2023 3:55 PM
From: Lu Baolu baolu.lu@linux.intel.com
When remapping hardware is configured by system software in scalable mode as Nested (PGTT=011b) and with PWSNP field Set in the PASID-table-entry, it may Set Accessed bit and Dirty bit (and Extended Access bit if enabled) in first-stage page-table entries even when second-stage mappings indicate that corresponding first-stage page-table is Read-Only.
As a result, the contents of pages designated by the VMM as Read-Only can be modified by the IOMMU via PML5E (PML4E for 4-level tables) access, as part of the address translation process, due to DMAs issued by the guest.
This disallows read-only mappings in the domain that is supposed to be used as nested parent. Reference from Sapphire Rapids Specification Update [1], errata details, SPR17. Userspace should know this limitation by checking the IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17 flag reported in the IOMMU_GET_HW_INFO ioctl.
[1] https://www.intel.com/content/www/us/en/content-details/772415/content-details.html
Signed-off-by: Lu Baolu baolu.lu@linux.intel.com
Signed-off-by: Yi Liu yi.l.liu@intel.com
Reviewed-by: Kevin Tian kevin.tian@intel.com