swiotlb: Fix alignment checks when both allocation and DMA masks are present
[ Upstream commit 51b30ecb ]

Nicolin reports that swiotlb buffer allocations fail for an NVME device
behind an IOMMU using 64KiB pages. This is because we end up with a
minimum allocation alignment of 64KiB (for the IOMMU to map the buffer
safely) but a minimum DMA alignment mask corresponding to a 4KiB NVME
page (i.e. preserving the 4KiB page offset from the original allocation).
If the original address is not 4KiB-aligned, the allocation will fail
because swiotlb_search_pool_area() erroneously compares these unmasked
bits with the 64KiB-aligned candidate allocation.

Tweak swiotlb_search_pool_area() so that the DMA alignment mask is
reduced based on the required alignment of the allocation.

Fixes: 82612d66 ("iommu: Allow the dma-iommu api to use bounce buffers")
Link: https://lore.kernel.org/r/cover.1707851466.git.nicolinc@nvidia.com
Reported-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
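
The snippet below is a minimal userspace sketch of the masking idea the
commit describes, not the upstream diff itself; the variable names,
constants, and addresses are illustrative, chosen to mirror the reported
scenario (64KiB IOMMU allocation alignment, 4KiB NVME DMA alignment mask,
an original buffer that is not 4KiB-aligned).

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t orig_addr        = 0x12345800ULL;  /* 2KiB into its 4KiB page */
	uint64_t alloc_align_mask = 0x10000 - 1;    /* 64KiB IOMMU granule */
	uint64_t iotlb_align_mask = 0x1000 - 1;     /* 4KiB NVME page */
	uint64_t candidate        = 0x90000ULL;     /* 64KiB-aligned slot */

	/*
	 * Unreduced check: a 64KiB-aligned candidate can never match the low
	 * 4KiB-offset bits of orig_addr, so every candidate is rejected.
	 */
	int rejected = ((candidate ^ orig_addr) & iotlb_align_mask) != 0;
	printf("before fix, candidate rejected: %d\n", rejected);

	/*
	 * The fix's idea: bits already guaranteed by the allocation alignment
	 * are preserved later by offsetting into the allocation, so drop them
	 * from the DMA alignment mask before comparing candidates.
	 */
	iotlb_align_mask &= ~alloc_align_mask;
	rejected = ((candidate ^ orig_addr) & iotlb_align_mask) != 0;
	printf("after fix, candidate rejected: %d\n", rejected);

	return 0;
}

With the sample values above, the unreduced mask rejects the candidate
while the reduced mask accepts it, matching the failure mode and the
remedy described in the commit message.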