FROMLIST: iommu/io-pgtable: Introduce unmap_pages() as a page table op

The io-pgtable code expects to operate on a single block or
granule of memory that is supported by the IOMMU hardware when
unmapping memory.

This means that when a large buffer that consists of multiple
such blocks is unmapped, the io-pgtable code will walk the page
tables to the correct level to unmap each block, even for blocks
that are virtually contiguous and at the same level, which
incurs a performance overhead.

Introduce the unmap_pages() page table op to express to the
io-pgtable code that it should unmap a number of blocks of
the same size, instead of a single block. Doing so allows
multiple blocks to be unmapped in one call to the io-pgtable
code, reducing the number of page table walks and indirect
calls.

Bug: 178537788
Link: https://lore.kernel.org/linux-iommu/20210408171402.12607-1-isaacm@codeaurora.org/T/#t
Change-Id: Ic3b3bf2f0e41eda4ca75482cd95a671a1f8e1cee
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
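
For illustration only, a minimal caller-side sketch of how an IOMMU driver's
unmap path might prefer the new op and fall back to per-block unmap() calls
when a page-table format does not implement unmap_pages(). The helper name
example_unmap_range() is hypothetical and is not part of this patch.

#include <linux/io-pgtable.h>
#include <linux/iommu.h>

static size_t example_unmap_range(struct io_pgtable_ops *ops,
				  unsigned long iova, size_t pgsize,
				  size_t pgcount,
				  struct iommu_iotlb_gather *gather)
{
	size_t unmapped = 0;

	/* A single indirect call covers all pgcount blocks of size pgsize. */
	if (ops->unmap_pages)
		return ops->unmap_pages(ops, iova, pgsize, pgcount, gather);

	/* Legacy path: one page-table walk and one indirect call per block. */
	while (pgcount--) {
		size_t ret = ops->unmap(ops, iova + unmapped, pgsize, gather);

		if (ret != pgsize)
			break;
		unmapped += ret;
	}

	return unmapped;
}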
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -142,6 +142,7 @@ struct io_pgtable_cfg {
  *                chunks. The mapped pointer argument is used to store how
  *                many bytes are mapped.
  * @unmap:        Unmap a physically contiguous memory region.
+ * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
  *
  * These functions map directly onto the iommu_ops member functions with
@@ -155,6 +156,9 @@ struct io_pgtable_ops {
 			gfp_t gfp, size_t *mapped);
 	size_t (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
 			size_t size, struct iommu_iotlb_gather *gather);
+	size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+			      size_t pgsize, size_t pgcount,
+			      struct iommu_iotlb_gather *gather);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
 };
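
As a usage illustration (hypothetical driver code, not from this series): a
2 MiB region that was mapped as 512 contiguous 4 KiB pages can now be torn
down with one indirect call instead of 512 unmap() calls. The function name
example_teardown_2m() is a placeholder.

#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/sizes.h>

/* Unmap 512 blocks of SZ_4K in one call; returns the bytes actually unmapped. */
static size_t example_teardown_2m(struct io_pgtable_ops *ops,
				  unsigned long iova,
				  struct iommu_iotlb_gather *gather)
{
	return ops->unmap_pages(ops, iova, SZ_4K, SZ_2M / SZ_4K, gather);
}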