The debug_dma_assert_idle() infrastructure was put in place to catch a data corruption scenario first identified by the now-defunct NET_DMA receive offload feature. It caught cases where DMA was in flight to a stale page because the DMA raced the CPU writing to the page, and that CPU write triggered cow_user_page().
However, the dma-debug tracking is overeager: it also triggers in cases where the DMA device is merely reading from a page that is undergoing cow_user_page().
The fix proposed here was originally posted in 2016. At the time, Russell reported "Yes, that seems to avoid the warning for me from an initial test", and Don now reports that the same change addresses a similar false-positive report he is seeing.
Link: https://lore.kernel.org/r/CAPcyv4j8fWqwAaX5oCdg5atc+vmp57HoAGT6AfBFwaCiv0RbA...
Reported-by: Russell King <linux@armlinux.org.uk>
Reported-by: Don Dutile <ddutile@redhat.com>
Fixes: 0abdd7a81b7e ("dma-debug: introduce debug_dma_assert_idle()")
Cc: stable@vger.kernel.org
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 kernel/dma/debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 099002d84f46..11a6db53d193 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -587,7 +587,7 @@ void debug_dma_assert_idle(struct page *page)
 	}
 	spin_unlock_irqrestore(&radix_lock, flags);
 
-	if (!entry)
+	if (!entry || entry->direction != DMA_FROM_DEVICE)
 		return;
 
 	cln = to_cacheline_number(entry);
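For readers coming to the thread without dma-debug background, here is a minimal, purely illustrative fragment showing the two streaming-DMA directions the check now distinguishes. The example_map() helper, dev, and page names are hypothetical and not part of the patch:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Hypothetical driver fragment; illustrates direction semantics only. */
static void example_map(struct device *dev, struct page *page)
{
	dma_addr_t addr;

	/*
	 * DMA_TO_DEVICE: the device only reads the page. A concurrent
	 * cow_user_page() copy on the CPU cannot be corrupted by this
	 * transfer, so a warning here is a false positive.
	 */
	addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))
		return;
	dma_unmap_page(dev, addr, PAGE_SIZE, DMA_TO_DEVICE);

	/*
	 * DMA_FROM_DEVICE: the device writes the page. If the CPU copies
	 * the page for COW while this is in flight, device writes landing
	 * after the copy are lost from the new page -- the corruption
	 * scenario debug_dma_assert_idle() was built to catch.
	 */
	addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, addr))
		return;
	dma_unmap_page(dev, addr, PAGE_SIZE, DMA_FROM_DEVICE);
}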
On Tue, Nov 19, 2019 at 9:49 AM Dan Williams <dan.j.williams@intel.com> wrote:
> [..]
> -	if (!entry)
> +	if (!entry || entry->direction != DMA_FROM_DEVICE)
>  		return;
>
>  	cln = to_cacheline_number(entry);
If I am understanding right, DMA_TO_DEVICE is fine, but won't you also need to cover the DMA_BIDIRECTIONAL case, since it is possible for a device to also write the memory in that case?
On Tue, Nov 19, 2019 at 4:02 PM Alexander Duyck <alexander.duyck@gmail.com> wrote:
> On Tue, Nov 19, 2019 at 9:49 AM Dan Williams <dan.j.williams@intel.com> wrote:
> > [..]
> If I am understanding right, DMA_TO_DEVICE is fine, but won't you also need to cover the DMA_BIDIRECTIONAL case, since it is possible for a device to also write the memory in that case?
True, DMA_BIDIRECTIONAL and DMA_TO_DEVICE are being treated equally in this case. Given this is the second time this facility has needed to be taught to be less eager [1], I'd be inclined to let the tie-break / BIDIR case be treated like TO_DEVICE. This facility was always meant as a "there might be a problem here" indicator, not a definitive checker, and it certainly loses value if the reports are ambiguous.
[1]: commit 3b7a6418c749 ("dma debug: account for cachelines and read-only mappings in overlap tracking")
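To make the tie-break concrete, here is a hypothetical pair of helpers contrasting the posted policy with the stricter one Alexander suggests; neither function exists in the patch, they only restate the two checks:

#include <linux/dma-direction.h>
#include <linux/types.h>

/* Policy as posted: warn only on device-write-only mappings. */
static bool should_warn_as_posted(enum dma_data_direction dir)
{
	/* DMA_TO_DEVICE and DMA_BIDIRECTIONAL are both skipped. */
	return dir == DMA_FROM_DEVICE;
}

/* Stricter alternative: warn whenever the device may write the page. */
static bool should_warn_strict(enum dma_data_direction dir)
{
	return dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL;
}

The argument above is that, for a best-effort debug aid, the first policy's unambiguous reports are worth more than the second's extra coverage.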