On Mon, Jul 12, 2021 at 11:25:54AM +0100, Suzuki Kuruppassery Poulose wrote:
Hi Leo,
On 10/07/2021 08:02, Leo Yan wrote:
The current code syncs the buffer range [offset, offset+len), but it doesn't consider the case where the trace data has wrapped around; in that case 'offset+len' is bigger than 'etr_buf->size'. Thus it syncs memory beyond the end of the buffer, and it also misses syncing from the start of the buffer.
I doubt this claim is valid. We do the sync properly: the page corresponding to the "offset" is wrapped around via the page index.
Here is the code:

void tmc_sg_table_sync_data_range(struct tmc_sg_table *table,
                                  u64 offset, u64 size)
{
        int i, index, start;
        int npages = DIV_ROUND_UP(size, PAGE_SIZE);
        struct device *real_dev = table->dev->parent;
        struct tmc_pages *data = &table->data_pages;

        start = offset >> PAGE_SHIFT;
        for (i = start; i < (start + npages); i++) {
                index = i % data->nr_pages;
                dma_sync_single_for_cpu(real_dev, data->daddrs[index],
                                        PAGE_SIZE, DMA_FROM_DEVICE);
        }
}
See that npages accounts for the requested "size", and the "index" is wrapped by the total number of pages in the buffer, so we always pick the right page (a small sketch of this below).
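To make the wrap-around concrete, here is a minimal standalone sketch of the same index arithmetic, using hypothetical values (4-page buffer, sync starting at page 2) that are not taken from the driver:

#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)

int main(void)
{
        /* Hypothetical 4-page buffer; sync 3 pages starting at page 2. */
        int nr_pages = 4;
        unsigned long offset = 2 * PAGE_SIZE;
        unsigned long size = 3 * PAGE_SIZE;

        int npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;   /* DIV_ROUND_UP */
        int start = offset >> PAGE_SHIFT;

        for (int i = start; i < start + npages; i++) {
                int index = i % nr_pages;   /* wraps around: 2, 3, 0 */
                printf("sync page index %d\n", index);
        }
        return 0;
}

This prints page indices 2, 3, 0: the request that crosses the end of the buffer wraps back to page 0, which is exactly what the driver loop above does.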
So, I think this fix is not needed.
Ouch, you are right :) Let's drop these two patches.
Thanks, Leo