author    David Stevens <stevensd@chromium.org>
          Wed, 29 Sep 2021 02:33:00 +0000 (11:33 +0900)
committer Stefan Bader <stefan.bader@canonical.com>
          Fri, 20 May 2022 12:40:27 +0000 (14:40 +0200)
commit    7015644b1b33b323a09ef1f874183c5d916c8aee
tree      e88a7c47d12d3eb42cf1aeadff38967043006ffa
parent    a9e3bd56f3e8ce2d2ad86ebe472126ba3096cb0e
iommu/dma: Account for min_align_mask w/swiotlb

BugLink: https://bugs.launchpad.net/bugs/1969110
commit 2cbc61a1b1665c84282dbf2b1747ffa0b6248639 upstream.

Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce
buffers in iommu_dma_map_page, to account for min_align_mask.

To deal with granule alignment, __iommu_dma_map maps iova_align(size +
iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page
passes aligned size when using swiotlb, then this becomes
iova_align(iova_align(orig_size) + iova_off). Normally iova_off will be
zero when using swiotlb. However, this is not the case for devices that
set min_align_mask. When iova_off is non-zero, __iommu_dma_map ends up
mapping an extra page at the end of the buffer. Beyond just being a
security issue, the extra page is not cleaned up by __iommu_dma_unmap.
This causes problems when the IOVA is reused, due to collisions in the
iommu driver.  Just passing the original size is sufficient, since
__iommu_dma_map will take care of granule alignment.

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens <stevensd@chromium.org>
Link: https://lore.kernel.org/r/20210929023300.335969-8-stevensd@google.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 57c04fac80232e09440444d4a607fdaffd91b225)
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
drivers/iommu/dma-iommu.c