Commit 51b30ecb authored by Will Deacon, committed by Christoph Hellwig

swiotlb: Fix alignment checks when both allocation and DMA masks are present

Nicolin reports that swiotlb buffer allocations fail for an NVME device
behind an IOMMU using 64KiB pages. This is because we end up with a
minimum allocation alignment of 64KiB (for the IOMMU to map the buffer
safely) but a minimum DMA alignment mask corresponding to a 4KiB NVME
page (i.e. preserving the 4KiB page offset from the original allocation).
If the original address is not 4KiB-aligned, the allocation will fail
because swiotlb_search_pool_area() erroneously compares these unmasked
bits with the 64KiB-aligned candidate allocation.
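
As an illustrative worked example (the addresses and masks below are
made up: a 64KiB IOMMU granule and a 4KiB NVME page), this standalone
C snippet shows why no candidate slot can satisfy both constraints at
once:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values only: 64KiB IOMMU granule, 4KiB NVME page. */
        unsigned long alloc_align_mask = 0xffffUL;     /* 64KiB - 1 */
        unsigned long min_align_mask   = 0x0fffUL;     /*  4KiB - 1 */
        unsigned long orig_addr        = 0x12345600UL; /* not 4KiB-aligned */
        unsigned long candidate        = 0x40000UL;    /* any 64KiB-aligned slot */

        /*
         * A candidate that satisfies the 64KiB allocation alignment has
         * its low 16 bits clear, so it can never preserve the non-zero
         * low 12 bits of orig_addr (0x600 here): every slot is rejected.
         */
        bool aligned_for_alloc = (candidate & alloc_align_mask) == 0;
        bool preserves_offset  = (candidate & min_align_mask) ==
                                 (orig_addr & min_align_mask);

        printf("aligned for allocation: %s\n", aligned_for_alloc ? "yes" : "no");
        printf("preserves DMA offset:   %s\n", preserves_offset ? "yes" : "no");
        return 0;
    }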

Tweak swiotlb_search_pool_area() so that the DMA alignment mask is
reduced based on the required alignment of the allocation.
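
A minimal sketch of that idea, reusing the numbers from the example
above and mask names similar to those in kernel/dma/swiotlb.c (this is
a sketch of the approach, not the verbatim patch; IO_TLB_SIZE is
redefined here only so the snippet stands alone):

    #include <stdbool.h>
    #include <stdio.h>

    #define IO_TLB_SHIFT 11                        /* 2KiB swiotlb slot */
    #define IO_TLB_SIZE  (1UL << IO_TLB_SHIFT)

    int main(void)
    {
        unsigned long alloc_align_mask = 0xffffUL;     /* 64KiB - 1 */
        unsigned long iotlb_align_mask = 0x0fffUL;     /* 4KiB DMA min-align mask */
        unsigned long orig_addr        = 0x12345600UL; /* not 4KiB-aligned */
        unsigned long candidate        = 0x40000UL;    /* 64KiB-aligned slot */

        /*
         * Require at least slot alignment, then drop the DMA alignment
         * bits that are already covered by the allocation alignment:
         * the candidate has them clear by construction, so comparing
         * them against orig_addr can only cause spurious failures.
         */
        alloc_align_mask |= IO_TLB_SIZE - 1;
        iotlb_align_mask &= ~alloc_align_mask;

        bool ok = (candidate & iotlb_align_mask) ==
                  (orig_addr & iotlb_align_mask);
        printf("candidate accepted: %s\n", ok ? "yes" : "no"); /* now "yes" */
        return 0;
    }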

Fixes: 82612d66 ("iommu: Allow the dma-iommu api to use bounce buffers")
Link: https://lore.kernel.org/r/cover.1707851466.git.nicolinc@nvidia.com
Reported-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent cbf53074