Commit b6f788dc authored by Robin Murphy, committed by Jason Zeng

iommu: Indicate queued flushes via gather data

mainline inclusion
from mainline-v5.15-rc1
commit 7a7c5bad
category: bugfix
bugzilla: https://gitee.com/openeuler/intel-kernel/issues/I8C8B4
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7a7c5badf85806eab75e31ab8d45021f1545b0e3



-------------------------------------

Intel-SIG: commit 7a7c5bad iommu: Indicate queued flushes via gather data
Backport SPR and EMR IOMMU PCIe related upstream bugfixes to kernel 5.10.

Since iommu_iotlb_gather exists to help drivers optimise flushing for a
given unmap request, it is also the logical place to indicate whether
the unmap is strict or not, and thus help them further optimise for
whether to expect a sync or a flush_all subsequently. As part of that,
it also seems fair to make the flush queue code take responsibility for
enforcing the really subtle ordering requirement it brings, so that we
don't need to worry about forgetting that if new drivers want to add
flush queue support, and can consolidate the existing versions.
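As a rough illustration of the driver-side optimisation this enables, a driver's sync callback can skip the immediate per-range invalidation when the gather is flagged as queued, relying on the later flush_all instead. This is a standalone sketch with simplified stand-in types — `iotlb_gather`, `driver_iotlb_sync` and `range_flushes` are hypothetical names, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for struct iommu_iotlb_gather (illustrative only) */
struct iotlb_gather {
	unsigned long start;
	unsigned long end;
	bool queued;	/* set by core code when the flush will be deferred */
};

static int range_flushes;	/* counts immediate per-range invalidations */

/* Hypothetical driver ->iotlb_sync(): when @queued is set, the driver may
 * skip the expensive per-range invalidation, because core code promises a
 * later flush_all covering the whole TLB. */
static void driver_iotlb_sync(struct iotlb_gather *gather)
{
	if (gather->queued)
		return;		/* a deferred flush_all will do the work */
	range_flushes++;	/* strict mode: invalidate [start, end] now */
}
```

With `queued` set the sync becomes a no-op; in strict mode it performs the invalidation immediately.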

While we're adding to the kerneldoc, also fill in some info for
@freelist which was overlooked previously.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/bf5f8e2ad84e48c712ccbf80fa8c610594c7595f.1628682049.git.robin.murphy@arm.com


Signed-off-by: Joerg Roedel <jroedel@suse.de>
(cherry picked from commit 7a7c5bad)
Signed-off-by: Ethan Zhao <haifeng.zhao@linux.intel.com>
parent 4a2e7b3a
+1 −0
@@ -483,6 +483,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 	iommu_iotlb_gather_init(&iotlb_gather);
+	iotlb_gather.queued = cookie->fq_domain;
 
 	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
 	WARN_ON(unmapped != size);
+7 −0
@@ -565,6 +565,13 @@ void queue_iova(struct iova_domain *iovad,
 	unsigned long flags;
 	unsigned idx;
 
+	/*
+	 * Order against the IOMMU driver's pagetable update from unmapping
+	 * @pte, to guarantee that iova_domain_flush() observes that if called
+	 * from a different CPU before we release the lock below.
+	 */
+	smp_wmb();
+
 	spin_lock_irqsave(&fq->lock, flags);
 
 	/*
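The ordering requirement the new smp_wmb() enforces can be illustrated in userspace with C11 atomics (an analogy under stated assumptions, not kernel code — `pagetable_clear`, `queued`, `producer` and `consumer` are made-up names): the release fence before publishing the queue entry pairs with an acquire fence on the flusher side, so whoever observes the entry is guaranteed to also observe the pagetable update that preceded it.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static int pagetable_clear;	/* stands in for the PTE update */
static atomic_int queued;	/* stands in for the published fq entry */

static void *producer(void *arg)
{
	pagetable_clear = 1;			/* plain write: clear the PTE */
	/* analogue of smp_wmb(): order the PTE write before publication */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&queued, 1, memory_order_relaxed);
	return NULL;
}

static void *consumer(void *arg)
{
	/* spin until the queue entry becomes visible */
	while (!atomic_load_explicit(&queued, memory_order_relaxed))
		;
	/* pairs with the release fence: the PTE update is now visible */
	atomic_thread_fence(memory_order_acquire);
	assert(pagetable_clear == 1);
	return NULL;
}
```

Without the release fence, the consumer could in principle observe the queue entry before the pagetable update, which is exactly the subtle hazard the barrier in queue_iova() closes.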
+7 −1
@@ -190,16 +190,22 @@ enum iommu_dev_features {
  * @start: IOVA representing the start of the range to be flushed
  * @end: IOVA representing the end of the range to be flushed (inclusive)
  * @pgsize: The interval at which to perform the flush
+ * @freelist: Removed pages to free after sync
+ * @queued: Indicates that the flush will be queued
  *
  * This structure is intended to be updated by multiple calls to the
  * ->unmap() function in struct iommu_ops before eventually being passed
- * into ->iotlb_sync().
+ * into ->iotlb_sync(). Drivers can add pages to @freelist to be freed after
+ * ->iotlb_sync() or ->iotlb_flush_all() have cleared all cached references to
+ * them. @queued is set to indicate when ->iotlb_flush_all() will be called
+ * later instead of ->iotlb_sync(), so drivers may optimise accordingly.
  */
 struct iommu_iotlb_gather {
 	unsigned long		start;
 	unsigned long		end;
 	size_t			pgsize;
 	struct page		*freelist;
+	bool			queued;
 };
 
 /**
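The kerneldoc above notes that the gather is updated across multiple unmap calls before one sync. The range-accumulation part of that pattern can be sketched as follows — a simplified, self-contained model in the spirit of iommu_iotlb_gather_add_page(); `gather`, `gather_add` and `GATHER_INIT` are illustrative names, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified gather, mirroring the fields documented above (illustrative) */
struct gather {
	unsigned long start, end;	/* inclusive IOVA range to flush */
	size_t pgsize;
	bool queued;
};

/* Empty gather: start past end, so the first add establishes the range */
#define GATHER_INIT { .start = ~0UL, .end = 0 }

/* Hypothetical helper: grow the pending flush range to cover another
 * unmapped page, so one sync can cover many unmap calls. */
static void gather_add(struct gather *g, unsigned long iova, size_t size)
{
	if (iova < g->start)
		g->start = iova;
	if (iova + size - 1 > g->end)
		g->end = iova + size - 1;
	g->pgsize = size;
}
```

After several adds, a single sync (or, with @queued set, a later flush_all) covers the union of all unmapped pages.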