Hi Mathieu,
On Tue, 28 May 2019 at 17:07, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
On Tue, 28 May 2019 at 03:36, Mike Leach <mike.leach@linaro.org> wrote:
Hi Leo,
On Sun, 26 May 2019 at 14:11, Leo Yan <leo.yan@linaro.org> wrote:
Hi Mike,
On Wed, May 01, 2019 at 09:49:32AM +0100, Mike Leach wrote:
CTIs are defined in the device tree and associated with other CoreSight devices. The core CoreSight code has been modified to enable the registration of the CTI devices on the same bus as the other CoreSight components, but as these are not actually trace generation / capture devices, they are not part of the CoreSight path when generating trace.
However, the definition of the standard CoreSight device has been extended to include a reference to an associated CTI device, and the enable / disable trace path operations will auto enable/disable any associated CTI devices at the same time.
Programming is at present via sysfs - a full API is provided to utilise the hardware capabilities. As CTI devices are unprogrammed by default, the auto enable described above will have no effect until explicit programming takes place.
A set of device tree bindings specific to the CTI topology has been defined.
Documentation has been updated to describe both the CTI hardware, its use and programming in sysfs, and the new dts bindings required.
I'd like to reply to you about the sysfs nodes for CTI topology, and to avoid disturbing Suzuki's ACPI binding patch set thread I have copied and pasted your comments here:
I fully agree that there is a requirement to expose device connections, as Suzuki's patches provide. As commented in the original patch, it removes the need for users to have knowledge of hardware specifics or access to the device tree source.
For the trace datapath a simple link is sufficient to express this information. The nature of the data and connection is known - it is the trace data running from source to sink. The linked components are guaranteed to be registered CoreSight devices.
However, the requirement for the CTI is different.
CTI is not limited to connecting to other CoreSight devices. Any device can be wired into a CTI trigger signal. These devices may or may not have drivers / entries in the device tree. For each connection a client needs to know the signals connected to the CTI, the signal directions, the signal purpose if possible, and the device connected. For this reason we dynamically fill out a connections information sub-directory in sysfs containing _name, _trigin_sig, _trigout_sig, _trigin_type and _trigout_type entries - described in the patch [1].
This information is sufficient and necessary to enable a user to program a CTI in most cases.
As an example, look at the Juno dtsi in [2]. CTI 0 is connected to the ETR, ETF, STM and TPIU - all CoreSight devices. CTI 1 is connected to REF_CLK, the system profiler and the watchdog - no CoreSight devices at all. CTI 2 is connected to the ETF and two ELA devices - one CoreSight device and two non-CoreSight devices.
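To make that concrete, a CTI node carrying this connection information looks something along the following lines (a simplified sketch only - the node name, addresses and property names are illustrative placeholders and may not match the posted bindings exactly):

    cti@20020000 {
            compatible = "arm,coresight-cti", "arm,primecell";
            reg = <0 0x20020000 0 0x1000>;
            clocks = <&soc_smc50mhz>;
            clock-names = "apb_pclk";

            /* connection to a CoreSight device - the phandle gives the association */
            trig-conns@0 {
                    reg = <0>;
                    arm,trig-in-sigs = <0 1>;    /* CTI trigger inputs wired to this device */
                    arm,trig-out-sigs = <0 1>;   /* CTI trigger outputs wired to this device */
                    arm,cs-dev-assoc = <&etf_sys>;
            };

            /* connection to a non-CoreSight device - a name only, no driver or phandle */
            trig-conns@1 {
                    reg = <1>;
                    arm,trig-out-sigs = <2>;
                    arm,trig-conn-name = "system_profiler";
            };
    };

The signal purpose (type) properties are omitted above for brevity; it is exactly this per-connection detail that the sysfs entries listed earlier surface to the user.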
So my view is that, for the case where a CTI is connected to another CoreSight device, the sysfs link could be used in addition to the information described above.
To meet this requirement, below are some of my suggestions:
- After reading this patch set, my impression is that it mixes two things: hardware topology and trigger/channel configuration.
You are correct - there is the hardware topology and a software programming API to connect channels and triggers.
The hardware topology describes the association between the CTI and connected devices, along with the details of the hardware trigger signals between CTI and device. These are static details that must appear in a device tree. I consider the connection information just as much part of the hardware topology as the association between the CTI and connected device.
So the suggestions below try to distinguish these two things.
For the hardware topology, we can reuse Suzuki's approach for 'in' and 'out' nodes for CTI devices:
Firstly, to reflect the whole matrix, we can create a 'static' device for the CTM (the same as the static replicator and static funnel); thus under the CTM sysfs node we can see all its bound CTI devices;
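For example (and this is purely hypothetical - the compatible string and layout below are invented to illustrate the idea, nothing like this exists in the current patch set), a static CTM node could mirror the static funnel/replicator graph style:

    ctm0 {
            compatible = "arm,coresight-static-ctm";   /* hypothetical, for illustration */

            /*
             * One port per attached CTI. The in/out direction has little
             * meaning for a CTM; ports are shown only to mirror the graph
             * style used by the static funnel/replicator bindings.
             */
            in-ports {
                    port@0 {
                            reg = <0>;
                            ctm0_in0: endpoint {
                                    remote-endpoint = <&cti0_to_ctm0>;
                            };
                    };
                    port@1 {
                            reg = <1>;
                            ctm0_in1: endpoint {
                                    remote-endpoint = <&cti1_to_ctm0>;
                            };
                    };
            };
    };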
I am not sure that this gives us any advantage over the current system - each CTI is tagged with a CTM id, thus establishing the association between CTIs.
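That is, each CTI node simply carries a matrix id, along the lines of the following (the property name is shown for illustration only and other required properties are omitted):

    cti0: cti@20020000 {
            compatible = "arm,coresight-cti", "arm,primecell";
            arm,cti-ctm-id = <1>;    /* cti0 and cti1 are both attached to CTM 1 */
    };

    cti1: cti@20110000 {
            compatible = "arm,coresight-cti", "arm,primecell";
            arm,cti-ctm-id = <1>;
    };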
I think Leo's idea of creating static CTM devices isn't bad at all. What we currently do for static funnels and replicators definitely helps in understanding the topology enacted on a platform. CTIs do have a CTM id, but that is introducing another way of representing an association. I think there would be value in at least trying that approach in the next patchset.
I did consider this as part of my original design (list of CTMs, each CTM having a list of CTIs) - then decided I was writing a whole load of code that was unnecessarily complex for the problem being solved.
The static funnel and replicators are necessary to make the directed graph of the trace path make sense - combining or splitting paths at various points between source and sink.
There is no directed graph for CTIs - just a star topology with a single hop to all other CTIs. We could add a CTM driver, add in the device tree entries and connections, and some driver code to handle them (and repeat for ACPI later) - but what problem is being solved by this? What advantage does it give over the (optional) CTM ID?
If there were an advantage later in creating an additional API that automatically connected trig_x on device_y via cti_n -> trig_a on device_b via cti_m, then a search of the CTIs on a CTM might be useful. In that case we could go back to CTMs with lists of CTIs, but these could be built using the CTM ID when each CTI is registered - there is still no overriding need for a CTM driver.
Yes, it is a different way of representing an association - but it is a different type of association from the trace data path covered by the funnels/replicators.
Thanks
Mike