The scope of this driver's lock usage is extremely wide, leading to
excessively long lock hold times. Additionally, a critical path
performs excessive linked-list traversal and unnecessary dynamic
memory allocation, causing poor performance across the board.
Fix all of this by greatly reducing the scope of the locks used and by
significantly reducing the number of operations performed when
msm_dma_map_sg_attrs() is called. The entire driver is overhauled for
better cleanliness and performance.
Note that ION must be modified to pass a known structure via the private
dma_buf pointer, so that the IOMMU driver can prevent races when
operating on the same buffer concurrently. This is the only way to
eliminate said buffer races without hurting the IOMMU driver's
performance.
Some additional members are added to the device struct as well to make
these various performance improvements possible.
This also removes the manual cache maintenance since ION already handles
it.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: azrim <mirzaspc@gmail.com>
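As a rough illustration of the per-buffer locking approach described
above (a minimal sketch, not the actual patch; the structure and field
names are assumptions), ION can publish a small known object through
dma_buf->priv so the IOMMU driver only locks the buffer it is
operating on rather than taking a driver-wide lock:

#include <linux/dma-buf.h>
#include <linux/list.h>
#include <linux/mutex.h>

/*
 * Illustrative per-buffer state that ION hands out via dma_buf->priv
 * so the IOMMU driver can serialize concurrent work on one buffer.
 */
struct msm_iommu_data {
        struct list_head map_list;      /* lazy mappings for this buffer */
        struct mutex lock;              /* per-buffer, not driver-wide */
};

/* IOMMU side: lock only the buffer being mapped or unmapped. */
static void msm_iommu_buf_lock(struct dma_buf *dmabuf)
{
        struct msm_iommu_data *data = dmabuf->priv;

        mutex_lock(&data->lock);
}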
Add static and inline to msm_dma_unmap_sg_attrs to fix multiple
definition compilation errors when CONFIG_QCOM_LAZY_MAPPING is not
selected.
Change-Id: I2aa56d85459e7144d279521905d0cb3225b39648
Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
The definition of msm_dma_unmap_sg_attrs is not present when
CONFIG_QCOM_LAZY_MAPPING is not selected, which results in
compilation errors. Fix this by adding the definition.
Change-Id: Ie4e92463a1641a002e99c72b325e224b79066936
Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
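For reference, a header stub along the following lines addresses both
issues above: a definition exists when CONFIG_QCOM_LAZY_MAPPING is
disabled, and marking it static inline avoids multiple-definition
errors. This is only a sketch; the exact prototype of
msm_dma_unmap_sg_attrs is assumed here and may differ:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

#ifdef CONFIG_QCOM_LAZY_MAPPING
void msm_dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                            int nents, enum dma_data_direction dir,
                            struct dma_buf *dma_buf, unsigned long attrs);
#else
/*
 * static inline keeps this header-provided stub from producing
 * multiple-definition errors when included from several files.
 */
static inline void msm_dma_unmap_sg_attrs(struct device *dev,
                                          struct scatterlist *sgl, int nents,
                                          enum dma_data_direction dir,
                                          struct dma_buf *dma_buf,
                                          unsigned long attrs)
{
}
#endif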
Validate that attempts to lazy map a buffer that is already lazy
mapped request the same number of elements, direction, and DMA
mapping attributes as the original mapping.
Change-Id: I6094cd6281a22525aa9971f1c5684f423be08580
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
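One way the validation could look (an illustrative sketch; the
msm_iommu_map fields shown are assumed from the description above and
are only a stand-in for the driver's real per-mapping record):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Minimal stand-in for the driver's per-mapping record. */
struct msm_iommu_map {
        int nents;
        enum dma_data_direction dir;
        unsigned long attrs;
};

/*
 * Reject reuse of an existing lazy mapping whose parameters differ
 * from the original map call.
 */
static int msm_iommu_map_matches(const struct msm_iommu_map *map, int nents,
                                 enum dma_data_direction dir,
                                 unsigned long attrs)
{
        if (map->nents != nents || map->dir != dir || map->attrs != attrs)
                return -EINVAL;

        return 0;
}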
During lazy dma_map_sg, only some entries of the caller's sg list
are stored in msm_iommu_map->sgl. Lazy dma_unmap_sg then walks this
incomplete sgl to determine the total iova size to unmap (all
entries/segments are mapped into a single contiguous iova). Since
sg->page_link is missing, the sg list walk ends in a NULL pointer
dereference kernel crash:
BUG: Unable to handle kernel NULL pointer dereference at virtual
address 00000018
PC is at iommu_dma_unmap_sg+0x4c/0xdc
[...]
iommu_dma_unmap_sg+0x4c/0xdc
__iommu_unmap_sg_attrs+0x64/0x6c
msm_iommu_map_release+0x154/0x164
msm_dma_buf_freed+0x168/0x3c8
_ion_buffer_destroy+0x30/0x88
ion_buffer_put+0x40/0x50
ion_handle_destroy+0xec/0x10c
ion_handle_put_nolock+0x40/0x50
ion_ioctl+0x2ec/0x4d4
do_vfs_ioctl+0xd0/0x85c
SyS_ioctl+0x90/0xa4
el0_svc_naked+0x24/0x28
Hence, clone/duplicate the caller's sg list into msm_iommu_map->sgl.
Also, update lazy map/unmap_sg to check DMA_ATTR_SKIP_CPU_SYNC so that
cache maintenance is skipped only when requested.
Change-Id: Idb7bd52d84d27ad0c7873208a3e25129f20d07da
Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
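A sketch of the sgl duplication described above (the helper name and
exact copy semantics are assumptions, not the actual patch); the lazy
map/unmap paths would additionally test attrs & DMA_ATTR_SKIP_CPU_SYNC
before performing cache maintenance:

#include <linux/scatterlist.h>
#include <linux/slab.h>

/*
 * Duplicate the caller's sg list so a later lazy unmap walks a
 * complete list (valid page links included) instead of the partial
 * copy that crashed above.
 */
static struct scatterlist *msm_iommu_dup_sgl(struct scatterlist *sgl,
                                             int nents)
{
        struct scatterlist *dup, *s, *d;
        int i;

        dup = kmalloc_array(nents, sizeof(*dup), GFP_KERNEL);
        if (!dup)
                return NULL;

        sg_init_table(dup, nents);
        d = dup;
        for_each_sg(sgl, s, nents, i) {
                sg_set_page(d, sg_page(s), s->length, s->offset);
                sg_dma_address(d) = sg_dma_address(s);
                sg_dma_len(d) = sg_dma_len(s);
                d = sg_next(d);
        }

        return dup;
}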
Take a snapshot of all ion files for the 4.12 kernel upgrade as of:
commit c30d45ac0d79 ("ion: Fix use after free during ION_IOC_ALLOC")
Change-Id: Iee79c01235cd0b1fa5ca3c2cc9a022d1057a09eb
Signed-off-by: Liam Mark <lmark@codeaurora.org>