Hi Suzuki,
On Mon, Oct 26, 2020 at 11:47 PM Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
Hi Linu,
Thanks for the feedback. My responses inline.
On 10/26/20 4:33 AM, Linu Cherian wrote:
Hi Suzuki,
On Mon, Oct 5, 2020 at 4:52 PM Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
Hi Linu,
On 09/04/2020 03:41 AM, Linu Cherian wrote:
This patch series tries to fix the sysfs breakage on topologies with per core sink.
Changes since v3:
- References to coresight_get_enabled_sink in the perf interface have been removed, and the function is marked deprecated in a new patch.
- To avoid changes to coresight_find_sink, for ease of maintenance, a search function specific to sysfs usage has been added.
- Sysfs being the only user of coresight_get_enabled_sink, the reset option is removed as well.
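For reference, the sysfs-specific search is essentially a recursive walk over the output connections that stops at the first activated sink. A sketch of the shape, modeled on the existing coresight_find_sink walk (the exact name and signature in the patch may differ):

static struct coresight_device *
coresight_find_enabled_sink(struct coresight_device *csdev)
{
	int i;

	/* Stop at the first sink the user activated via sysfs. */
	if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
	     csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
	    csdev->activated)
		return csdev;

	/* Otherwise recurse into each output connection. */
	for (i = 0; i < csdev->pdata->nr_outport; i++) {
		struct coresight_device *child, *sink;

		child = csdev->pdata->conns[i].child_dev;
		if (!child)
			continue;
		sink = coresight_find_enabled_sink(child);
		if (sink)
			return sink;
	}

	return NULL;
}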
Have you tried running perf with the --per-thread option? I believe this will be impacted as well: we choose a single sink at the moment, and that sink may not be reachable from the other CPUs where the event may be scheduled, so we eventually lose trace for the duration the task is scheduled on a different CPU.
Please could you try this patch and see if it helps? I have lightly tested this on a fast model.
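The idea, roughly, is to resolve the sink per CPU instead of validating one global sink up front. Treat the following as pseudocode only; it follows the shape of the loop in etm_setup_aux(), and find_sink_for_source() is just a placeholder for whatever lookup the patch ends up using:

/* In etm_setup_aux(): build one path per CPU, each ending at a
 * sink reachable from that CPU's ETM.
 */
for_each_cpu(cpu, mask) {
	struct list_head *path;
	struct coresight_device *csdev, *sink;

	csdev = per_cpu(csdev_src, cpu);
	if (!csdev)
		continue;

	/* Placeholder: pick a sink reachable from this CPU's ETM. */
	sink = find_sink_for_source(csdev);
	if (!sink)
		continue;	/* no trace from this CPU */

	path = coresight_build_path(csdev, sink);
	if (IS_ERR(path))
		continue;

	*etm_event_cpu_path_ptr(event_data, cpu) = path;
}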
We are seeing some issues while testing with this patch. The issue is that buffer allocation for the sink always happens on the first core in the cpu mask, and this doesn't match the core on which the event is started. Please see below for additional comments.
Please could you clarify the "issues"? How is the buffer allocation a problem?
1. Just realized that the issue we are seeing with this patch is specific to our test setup, since we had some custom patches that were required to support the secure trace buffer configuration for our silicon.
To be specific, our changeset was relying on drvdata->etr_buf at the time of tmc_etr_sync_perf_buffer.
In the per core case, buffer allocation always picks the sink of the first core, core 0. Let's say the event is started on core 4. Then, w.r.t. the drvdata of tmc_etr4, drvdata->etr_buf would get initialized while starting the event, whereas w.r.t. the drvdata of tmc_etr0, drvdata->etr_buf would be NULL here, while our custom changeset was expecting it to be initialized with the etr_buf.
So we will try to rebase our patches accordingly and test again; a rough sketch of the direction is below.
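The plan is to switch from drvdata->etr_buf to the etr_buf carried in the perf buffer, which should be valid irrespective of which sink started the event. A minimal sketch (our_secure_sync_hook is just a placeholder name for our downstream code; etr_perf_buffer/etr_buf are the structures from coresight-tmc-etr.c):

static void our_secure_sync_hook(struct etr_perf_buffer *etr_perf)
{
	/*
	 * Use the perf-owned buffer, which is valid no matter which
	 * per core sink started the event; drvdata->etr_buf can be
	 * NULL on the sink that merely allocated the buffer.
	 */
	struct etr_buf *etr_buf = etr_perf->etr_buf;

	if (WARN_ON_ONCE(!etr_buf))
		return;

	/* ... sync the secure trace buffer contents into etr_buf ... */
}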
2. Related to iommu-enabled configurations.
This is on the assumption that there can be a dedicated stream id (device id) for each per core ETR device. Please ignore otherwise.
When the buffer allocation happens on tmc_etr0, and we have an iommu-enabled case, the iommu mapping would be w.r.t. tmc_etr0. But then the actual DMA might get triggered from a non-matching device, i.e. tmc_etr4, which would then fail. Should we take this into consideration? One possible check is sketched below.
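For instance, something like this could reject (or force a remap of) a buffer whose IOMMU mapping was created against a different device. etr_buf_usable_on is a made-up name, and this is only to illustrate the concern:

#include <linux/iommu.h>

/*
 * A buffer dma-mapped through alloc_dev is only safe to hand to
 * sink_dev if both devices sit behind the same IOMMU domain, i.e.
 * share the same translations/stream id mappings.
 */
static bool etr_buf_usable_on(struct device *alloc_dev,
			      struct device *sink_dev)
{
	struct iommu_domain *d1 = iommu_get_domain_for_dev(alloc_dev);
	struct iommu_domain *d2 = iommu_get_domain_for_dev(sink_dev);

	/* No IOMMU on either side: plain physical addressing. */
	if (!d1 && !d2)
		return true;

	return d1 == d2;
}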
Thanks.