Unverified Commit 148b436b authored by openeuler-ci-bot's avatar openeuler-ci-bot Committed by Gitee

!13402 【OLK-6.6】Support SMMU vSVA feature

Merge Pull Request from: @did-you-collect-the-wool-today 
 
[Feature Description]
This PR is a kernel patch set adding support for the SMMU vSVA feature, 147 patches in total, from the following sources:
1) Patches backported from the upstream iommu subsystem: patch 1 - patch 96
Update SMMUv3 to the modern iommu API (part 1/3)
https://lore.kernel.org/linux-iommu/0-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com/

Make the SMMUv3 CD logic match the new STE design (part 2a/3)
https://lore.kernel.org/linux-iommu/0-v9-5040dc602008+177d7-smmuv3_newapi_p2_jgg@nvidia.com/

Update SMMUv3 to the modern iommu API (part 2b/3)
https://lore.kernel.org/linux-iommu/0-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com/

Add IOMMUFD dirty tracking support for SMMUv3
https://lore.kernel.org/linux-iommu/20240703101604.2576-1-shameerali.kolothum.thodi@huawei.com/

IOMMUFD: Deliver IO page faults to user space
https://lore.kernel.org/linux-iommu/20240702063444.105814-1-baolu.lu@linux.intel.com/#t

Tidy some minor things in the stream table/cd table area
https://lore.kernel.org/linux-iommu/0-v4-6416877274e1+1af-smmuv3_tidy_jgg@nvidia.com/

2) Patches that are relatively stable in the community but not yet merged into mainline: patch 97 - patch 139
iommufd: Add vIOMMU infrastructure (Part-1)
https://lore.kernel.org/linux-iommu/cover.1730836219.git.nicolinc@nvidia.com/

iommufd: Add vIOMMU infrastructure (Part-2: vDEVICE)
https://lore.kernel.org/linux-iommu/cover.1730836308.git.nicolinc@nvidia.com/

Initial support for SMMUv3 nested translation
https://lore.kernel.org/linux-iommu/0-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com/

iommu: Support IOMMU_RESV_SW_MSI with nesting
https://lore.kernel.org/linux-iommu/cover.1723056910.git.nicolinc@nvidia.com/

3) Fixes for kABI changes introduced by the backported patches: patch 140 - patch 147
 
Link:https://gitee.com/openeuler/kernel/pulls/13402 
parents bf99c651 9bf7bc9a
+177 −47
@@ -41,46 +41,131 @@ Following IOMMUFD objects are exposed to userspace:
- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
  external driver.

- IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
  primarily indicates this type of HWPT should be linked to an IOAS. It also
  indicates that it is backed by an iommu_domain with __IOMMU_DOMAIN_PAGING
  feature flag. This can be either an UNMANAGED stage-1 domain for a device
  running in the user space, or a nesting parent stage-2 domain for mappings
  from guest-level physical addresses to host-level physical addresses.

  The IOAS has a list of HWPT_PAGINGs that share the same IOVA mapping and
  it will synchronize its mapping with each member HWPT_PAGING.

- IOMMUFD_OBJ_HWPT_NESTED, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by user space (e.g. guest OS).
  "NESTED" indicates that this type of HWPT should be linked to an HWPT_PAGING.
  It also indicates that it is backed by an iommu_domain that has a type of
  IOMMU_DOMAIN_NESTED. This must be a stage-1 domain for a device running in
  the user space (e.g. in a guest VM enabling the IOMMU nested translation
  feature.) As such, it must be created with a given nesting parent stage-2
  domain to associate to. This nested stage-1 page table managed by the user
  space usually has mappings from guest-level I/O virtual addresses to guest-
  level physical addresses.

- IOMMUFD_OBJ_VIOMMU, representing a slice of the physical IOMMU instance,
  passed to or shared with a VM. It may contain some HW-accelerated
  virtualization features and some SW resources used by the VM. For example:
  * Security namespace for guest owned ID, e.g. guest-controlled cache tags
  * Non-device-affiliated event reporting, e.g. invalidation queue errors
  * Access to a sharable nesting parent pagetable across physical IOMMUs
  * Virtualization of various platform IDs, e.g. RIDs and others
  * Delivery of paravirtualized invalidation
  * Direct assigned invalidation queues
  * Direct assigned interrupts
  Such a vIOMMU object generally has access to a nesting parent pagetable
  to support some HW-accelerated virtualization features. So, a vIOMMU object
  must be created given a nesting parent HWPT_PAGING object, and then it would
  encapsulate that HWPT_PAGING object. Therefore, a vIOMMU object can be used
  to allocate an HWPT_NESTED object in place of the encapsulated HWPT_PAGING.

  .. note::

     The name "vIOMMU" isn't necessarily identical to a virtualized IOMMU in a
     VM. A VM can have one giant virtualized IOMMU running on a machine having
     multiple physical IOMMUs, in which case the VMM will dispatch the requests
     or configurations from this single virtualized IOMMU instance to multiple
     vIOMMU objects created for individual slices of different physical IOMMUs.
     In other words, a vIOMMU object is always a representation of one physical
     IOMMU, not necessarily of a virtualized IOMMU. For VMMs that want the full
     virtualization features from physical IOMMUs, it is suggested to build the
     same number of virtualized IOMMUs as the number of physical IOMMUs, so the
     passed-through devices would be connected to their own virtualized IOMMUs
     backed by corresponding vIOMMU objects, in which case a guest OS would do
     the "dispatch" naturally instead of VMM trappings.

- IOMMUFD_OBJ_VDEVICE, representing a virtual device for an IOMMUFD_OBJ_DEVICE
  against an IOMMUFD_OBJ_VIOMMU. This virtual device holds the device's virtual
  information or attributes (related to the vIOMMU) in a VM. An immediate vDATA
  example can be the virtual ID of the device on a vIOMMU, which is a unique ID
  that VMM assigns to the device for a translation channel/port of the vIOMMU,
  e.g. vSID of ARM SMMUv3, vDeviceID of AMD IOMMU, and vRID of Intel VT-d to a
  Context Table. Some advanced security information could also be forwarded via
  this object, such as the security level or realm information in a Confidential
  Compute Architecture. A VMM should create a vDEVICE object
  to forward all the device information in a VM, when it connects a device to a
  vIOMMU, which is a separate ioctl call from attaching the same device to an
  HWPT_PAGING that the vIOMMU holds.

All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.

The diagrams below show relationships between user-visible objects and kernel
datastructures (external to iommufd), with numbers referring to the operations
creating the objects and links::

  _______________________________________________________________________
 |                      iommufd (HWPT_PAGING only)                       |
 |                                                                       |
 |        [1]                  [3]                                [2]    |
 |  ________________      _____________                        ________  |
 | |                |    |             |                      |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---------------------| DEVICE | |
 | |________________|    |_____________|                      |________| |
 |         |                    |                                  |     |
 |_________|____________________|__________________________________|_____|
           |                    |                                  |
           |              ______v_____                          ___v__
           | PFN storage |  (paging)  |                        |struct|
           |------------>|iommu_domain|<-----------------------|device|
                         |____________|                        |______|

  _______________________________________________________________________
 |                      iommufd (with HWPT_NESTED)                       |
 |                                                                       |
 |        [1]                  [3]                [4]             [2]    |
 |  ________________      _____________      _____________     ________  |
 | |                |    |             |    |             |   |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---| HWPT_NESTED |<--| DEVICE | |
 | |________________|    |_____________|    |_____________|   |________| |
 |         |                    |                  |               |     |
 |_________|____________________|__________________|_______________|_____|
           |                    |                  |               |
           |              ______v_____       ______v_____       ___v__
           | PFN storage |  (paging)  |     |  (nested)  |     |struct|
           |------------>|iommu_domain|<----|iommu_domain|<----|device|
                         |____________|     |____________|     |______|

  _______________________________________________________________________
 |                      iommufd (with vIOMMU/vDEVICE)                    |
 |                                                                       |
 |                             [5]                [6]                    |
 |                        _____________      _____________               |
 |                       |             |    |             |              |
 |      |----------------|    vIOMMU   |<---|   vDEVICE   |<----|        |
 |      |                |             |    |_____________|     |        |
 |      |                |             |                        |        |
 |      |      [1]       |             |          [4]           | [2]    |
 |      |     ______     |             |     _____________     _|______  |
 |      |    |      |    |     [3]     |    |             |   |        | |
 |      |    | IOAS |<---|(HWPT_PAGING)|<---| HWPT_NESTED |<--| DEVICE | |
 |      |    |______|    |_____________|    |_____________|   |________| |
 |      |        |              |                  |               |     |
 |______|________|______________|__________________|_______________|_____|
        |        |              |                  |               |
  ______v_____   |        ______v_____       ______v_____       ___v__
 |   struct   |  |  PFN  |  (paging)  |     |  (nested)  |     |struct|
 |iommu_device|  |------>|iommu_domain|<----|iommu_domain|<----|device|
 |____________|   storage|____________|     |____________|     |______|

1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. An iommufd can
   hold multiple IOAS objects. IOAS is the most generic object and does not
@@ -94,21 +179,63 @@ creating the objects and links::
   device. The driver must also set the driver_managed_dma flag and must not
   touch the device until this operation succeeds.

3. IOMMUFD_OBJ_HWPT_PAGING can be created in two ways:

   * IOMMUFD_OBJ_HWPT_PAGING is automatically created when an external driver
     calls the IOMMUFD kAPI to attach a bound device to an IOAS. Similarly the
     external driver uAPI allows userspace to initiate the attaching operation.
     If a compatible member HWPT_PAGING object exists in the IOAS's HWPT_PAGING
     list, then it will be reused. Otherwise a new HWPT_PAGING that represents
     an iommu_domain to userspace will be created, and then added to the list.
     Successful completion of this operation sets up the linkages among IOAS,
     device and iommu_domain. Once this completes the device could do DMA.

   * IOMMUFD_OBJ_HWPT_PAGING can be manually created via the IOMMU_HWPT_ALLOC
     uAPI, provided an ioas_id via @pt_id to associate the new HWPT_PAGING to
     the corresponding IOAS object. The benefit of this manual allocation is to
     allow allocation flags (defined in enum iommufd_hwpt_alloc_flags), e.g. it
     allocates a nesting parent HWPT_PAGING if the IOMMU_HWPT_ALLOC_NEST_PARENT
     flag is set.

4. IOMMUFD_OBJ_HWPT_NESTED can be only manually created via the IOMMU_HWPT_ALLOC
   uAPI, provided an hwpt_id or a viommu_id of a vIOMMU object encapsulating a
   nesting parent HWPT_PAGING via @pt_id to associate the new HWPT_NESTED object
   to the corresponding HWPT_PAGING object. The associating HWPT_PAGING object
   must be a nesting parent manually allocated via the same uAPI previously with
   an IOMMU_HWPT_ALLOC_NEST_PARENT flag, otherwise the allocation will fail. The
   allocation will be further validated by the IOMMU driver to ensure that the
   nesting parent domain and the nested domain being allocated are compatible.
   Successful completion of this operation sets up linkages among IOAS, device,
   and iommu_domains. Once this completes the device could do DMA via a 2-stage
   translation, a.k.a nested translation. Note that multiple HWPT_NESTED objects
   can be allocated by (and then associated to) the same nesting parent.

   .. note::

      Either a manual IOMMUFD_OBJ_HWPT_PAGING or an IOMMUFD_OBJ_HWPT_NESTED is
      created via the same IOMMU_HWPT_ALLOC uAPI. The difference is at the type
      of the object passed in via the @pt_id field of struct iommufd_hwpt_alloc.

5. IOMMUFD_OBJ_VIOMMU can be only manually created via the IOMMU_VIOMMU_ALLOC
   uAPI, provided a dev_id (for the device's physical IOMMU to back the vIOMMU)
   and an hwpt_id (to associate the vIOMMU to a nesting parent HWPT_PAGING). The
   iommufd core will link the vIOMMU object to the struct iommu_device that the
   struct device is behind. And an IOMMU driver can implement a viommu_alloc op
   to allocate its own vIOMMU data structure embedding the core-level structure
   iommufd_viommu and some driver-specific data. If necessary, the driver can
   also configure its HW virtualization feature for that vIOMMU (and thus for
   the VM). Successful completion of this operation sets up the linkages between
   the vIOMMU object and the HWPT_PAGING, then this vIOMMU object can be used
   as a nesting parent object to allocate an HWPT_NESTED object described above.

6. IOMMUFD_OBJ_VDEVICE can be only manually created via the IOMMU_VDEVICE_ALLOC
   uAPI, provided a viommu_id for an iommufd_viommu object and a dev_id for an
   iommufd_device object. The vDEVICE object will be the binding between these
   two parent objects. A @virt_id will also be set via the uAPI, giving the
   iommufd core an index at which to store the vDEVICE object in a per-vIOMMU
   vDEVICE array. If necessary, the IOMMU driver may choose to implement a
   vdevice_alloc op to initialize its HW virtualization feature for a vDEVICE.
   Successful completion of this operation sets up the linkages between vIOMMU
   and device.

A device can only bind to an iommufd due to DMA ownership claim and attach to at
most one IOAS object (no support of PASID yet).
@@ -120,7 +247,10 @@ User visible objects are backed by following datastructures:

- iommufd_ioas for IOMMUFD_OBJ_IOAS.
- iommufd_device for IOMMUFD_OBJ_DEVICE.
- iommufd_hwpt_paging for IOMMUFD_OBJ_HWPT_PAGING.
- iommufd_hwpt_nested for IOMMUFD_OBJ_HWPT_NESTED.
- iommufd_viommu for IOMMUFD_OBJ_VIOMMU.
- iommufd_vdevice for IOMMUFD_OBJ_VDEVICE.

Several terminologies when looking at these datastructures:

+1 −0
@@ -6620,6 +6620,7 @@ CONFIG_ARM_SMMU_V3_SVA=y
# CONFIG_ARM_SMMU_V3_PM is not set
CONFIG_ARM_SMMU_V3_HTTU=y
CONFIG_ARM_SMMU_V3_ECMDQ=y
CONFIG_ARM_SMMU_V3_IOMMUFD=y
# CONFIG_QCOM_IOMMU is not set
# CONFIG_VIRTIO_IOMMU is not set
CONFIG_SMMU_BYPASS_DEV=y
+13 −0
@@ -1218,6 +1218,17 @@ static bool iort_pci_rc_supports_ats(struct acpi_iort_node *node)
	return pci_rc->ats_attribute & ACPI_IORT_ATS_SUPPORTED;
}

static bool iort_pci_rc_supports_canwbs(struct acpi_iort_node *node)
{
	struct acpi_iort_memory_access *memory_access;
	struct acpi_iort_root_complex *pci_rc;

	pci_rc = (struct acpi_iort_root_complex *)node->node_data;
	memory_access =
		(struct acpi_iort_memory_access *)&pci_rc->memory_properties;
	return memory_access->memory_flags & ACPI_IORT_MF_CANWBS;
}

static int iort_iommu_xlate(struct device *dev, struct acpi_iort_node *node,
			    u32 streamid)
{
@@ -1344,6 +1355,8 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in)
		fwspec = dev_iommu_fwspec_get(dev);
		if (fwspec && iort_pci_rc_supports_ats(node))
			fwspec->flags |= IOMMU_FWSPEC_PCI_RC_ATS;
		if (fwspec && iort_pci_rc_supports_canwbs(node))
			fwspec->flags |= IOMMU_FWSPEC_PCI_RC_CANWBS;
	} else {
		node = iort_scan_node(ACPI_IORT_NODE_NAMED_COMPONENT,
				      iort_match_node_callback, dev);
+8 −2
@@ -268,6 +268,7 @@ u32 virtcca_tmi_dev_attach(struct arm_smmu_domain *arm_smmu_domain, struct kvm *
	unsigned long flags;
	int i, j;
	struct arm_smmu_master *master;
	struct arm_smmu_master_domain *master_domain;
	int ret = 0;
	u64 cmd[CMDQ_ENT_DWORDS] = {0};
	struct virtcca_cvm *virtcca_cvm = kvm->arch.virtcca_cvm;
@@ -277,7 +278,8 @@ u32 virtcca_tmi_dev_attach(struct arm_smmu_domain *arm_smmu_domain, struct kvm *
	 * Traverse all devices under the secure smmu domain and
	 * set the corresponding address translation table for each device
	 */
	list_for_each_entry(master_domain, &arm_smmu_domain->devices, devices_elm) {
		master = master_domain->master;
		if (master && master->num_streams >= 0) {
			for (i = 0; i < master->num_streams; i++) {
				u32 sid = master->streams[i].id;
@@ -327,6 +329,8 @@ static int virtcca_secure_dev_ste_create(struct arm_smmu_device *smmu,
	struct arm_smmu_master *master, u32 sid)
{
	struct tmi_smmu_ste_params *params_ptr;
	struct iommu_domain *domain;
	struct arm_smmu_domain *smmu_domain;

	params_ptr = kzalloc(sizeof(*params_ptr), GFP_KERNEL);
	if (!params_ptr)
@@ -335,7 +339,9 @@ static int virtcca_secure_dev_ste_create(struct arm_smmu_device *smmu,
	/* Sync Level 2 STE to TMM */
	params_ptr->sid = sid;
	params_ptr->smmu_id = smmu->s_smmu_id;
	domain = iommu_get_domain_for_dev(master->dev);
	smmu_domain = to_smmu_domain(domain);
	params_ptr->smmu_vmid = smmu_domain->s2_cfg.vmid;

	if (tmi_smmu_ste_create(__pa(params_ptr)) != 0) {
		kfree(params_ptr);
+1 −1
@@ -584,7 +584,7 @@ static int idxd_enable_system_pasid(struct idxd_device *idxd)
	 * DMA domain is owned by the driver, it should support all valid
	 * types such as DMA-FQ, identity, etc.
	 */
	ret = iommu_attach_device_pasid(domain, dev, pasid, NULL);
	if (ret) {
		dev_err(dev, "failed to attach device pasid %d, domain type %d",
			pasid, domain->type);