  1. Sep 11, 2020
      RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks() · a665aca8
      Jason Gunthorpe authored
      ib_umem_num_pages() should only be used by things working with the SGL in
      CPU pages directly.
      
      Drivers building DMA lists should use the new ib_umem_num_dma_blocks(),
      which returns the number of blocks rdma_umem_for_each_block() will
      return.
      
      Making this general for DMA drivers requires a different implementation.
      Computing the DMA block count based on umem->address only works if the
      requested page size is < PAGE_SIZE and/or the IOVA == umem->address.
      
      Instead, the number of DMA pages should be computed in the IOVA address
      space, not from umem->address. Thus the IOVA has to be stored inside the
      umem so it can be used for these calculations.
      
      For now set it to umem->address by default and fix it up if
      ib_umem_find_best_pgsz() was called. This allows drivers to be converted
      to ib_umem_num_dma_blocks() safely.
      
      Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
      
      
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>