Commit 20347fca authored by Tianyu Lan, committed by Christoph Hellwig

swiotlb: split up the global swiotlb lock

Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
significant lock contention on the swiotlb lock.

This patch splits the swiotlb bounce buffer pool into individual areas,
each with its own lock. Each CPU tries to allocate from its own area
first; only if that fails does it search the other areas. On freeing,
the allocation is returned to the area the memory was originally
allocated from.
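
The standalone sketch below only illustrates that scheme; the names
(struct area, alloc_slot, free_slot) are hypothetical and a pthread
mutex stands in for the kernel's per-area spinlock. It shows the area
selection by CPU index, the fallback scan, and freeing back into the
originating area, not the actual swiotlb code.

    #include <pthread.h>
    #include <stdio.h>

    #define NAREAS 4          /* must be a power of 2 */
    #define SLOTS_PER_AREA 8

    /* One bounce-buffer area with its own lock. */
    struct area {
            pthread_mutex_t lock;   /* per-area lock */
            int free_slots;
    };

    static struct area areas[NAREAS];

    /* Try the CPU's own area first, then fall back to the others. */
    static int alloc_slot(int cpu)
    {
            int start = cpu & (NAREAS - 1);
            int i = start;

            do {
                    pthread_mutex_lock(&areas[i].lock);
                    if (areas[i].free_slots > 0) {
                            areas[i].free_slots--;
                            pthread_mutex_unlock(&areas[i].lock);
                            return i;       /* remember the area for freeing */
                    }
                    pthread_mutex_unlock(&areas[i].lock);
                    i = (i + 1) & (NAREAS - 1);
            } while (i != start);

            return -1;              /* every area is exhausted */
    }

    /* Freeing always goes back to the area the slot came from. */
    static void free_slot(int area_index)
    {
            pthread_mutex_lock(&areas[area_index].lock);
            areas[area_index].free_slots++;
            pthread_mutex_unlock(&areas[area_index].lock);
    }

    int main(void)
    {
            for (int i = 0; i < NAREAS; i++) {
                    pthread_mutex_init(&areas[i].lock, NULL);
                    areas[i].free_slots = SLOTS_PER_AREA;
            }

            int area = alloc_slot(3);       /* pretend we run on CPU 3 */
            printf("allocated from area %d\n", area);
            free_slot(area);
            return 0;
    }

Because each area has its own lock, concurrent mappings issued from
different CPUs normally contend only within their own area instead of
on one global lock.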

The number of areas can be set via the swiotlb kernel parameter and
defaults to the number of possible CPUs. If the possible CPU count is
not a power of 2, the number of areas is rounded up to the next power
of 2.
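
A rough illustration of the rounding rule follows; roundup_pow2() here
is a hypothetical stand-in for the kernel's roundup_pow_of_two()
helper, and the 6-CPU guest is just an example value. Keeping the area
count a power of 2 lets a CPU index be mapped to an area with a simple
mask, as in the sketch above.

    #include <stdio.h>

    /* Round n up to the next power of 2 (illustrative helper). */
    static unsigned int roundup_pow2(unsigned int n)
    {
            unsigned int p = 1;

            while (p < n)
                    p <<= 1;
            return p;
    }

    int main(void)
    {
            unsigned int possible_cpus = 6; /* example: a 6-CPU guest */

            /* prints "swiotlb areas: 8" */
            printf("swiotlb areas: %u\n", roundup_pow2(possible_cpus));
            return 0;
    }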

This idea comes from a patch by Andi Kleen
(https://github.com/intel/tdx/commit/4529b5784c141782c72ec9bd9a92df2b68cb7d45).

Based-on-idea-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent c51ba246