Hi Mike,
On Wed, May 01, 2019 at 09:49:32AM +0100, Mike Leach wrote:
> CTIs are defined in the device tree and associated with other CoreSight devices. The core CoreSight code has been modified to enable the registration of the CTI devices on the same bus as the other CoreSight components, but as these are not actually trace generation / capture devices, they are not part of the CoreSight path when generating trace.
> However, the definition of the standard CoreSight device has been extended to include a reference to an associated CTI device, and the enable / disable trace path operations will auto enable/disable any associated CTI devices at the same time.
> Programming is at present via sysfs - a full API is provided to utilise the hardware capabilities. As CTI devices are unprogrammed by default, the auto enable described above will have no effect until explicit programming takes place.
> A set of device tree bindings specific to the CTI topology has been defined.
> Documentation has been updated to describe both the CTI hardware, its use and programming in sysfs, and the new dts bindings required.
I'd like to reply to you about the sysfs nodes for CTI topology, and to avoid disturbing the thread for Suzuki's ACPI binding patch set, I have copied and pasted your comments here:
> I fully agree that there is a requirement to expose device connections as Suzuki's patches provide. As commented in the original patch, it removes the need for users to have knowledge of hardware specifics or access to the device tree source.
> For the trace datapath a simple link is sufficient to express this information. The nature of the data and connection is known - it is the trace data running from source to sink. The linked components are guaranteed to be registered CoreSight devices.
> However, the requirement for the CTI is different.
> CTI is not limited to connecting to other CoreSight devices. Any device can be wired into a CTI trigger signal. These devices may or may not have drivers / entries in the device tree. For each connection a client needs to know the signals connected to the CTI, the signal directions, the signal purpose if possible, and the device connected. For this reason we dynamically fill out a connections information sub-dir in sysfs containing _name, _trigin_sig, _trigout_sig, _trigin_type, _trigout_type - described in the patch [1].
> This information is sufficient and necessary to enable a user to program a CTI in most cases.
> As an example, look at the Juno dtsi in [2]. CTI 0 is connected to the ETR, ETF, STM and TPIU - all CoreSight devices. CTI 1 is connected to REF_CLK, the system profiler and the watchdog - no CoreSight devices at all. CTI 2 is connected to the ETF and two ELA devices - so one CoreSight device and two non-CoreSight devices.
> So my view is that for the case where the CTI is connected to another CoreSight device, the sysfs link could be used in addition to the information described above.
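
For reference, below is a rough sketch of how user space might consume that per-connection information; the attribute suffixes (_name, _trigin_sig, _trigout_sig, _trigin_type, _trigout_type) come from [1], but the device name "cti_sys0" and the exact directory layout under it are my own assumptions for illustration:

  # Rough sketch only: attribute suffixes from [1]; device name and layout assumed.
  import os

  SUFFIXES = ("_name", "_trigin_sig", "_trigout_sig", "_trigin_type", "_trigout_type")

  def dump_connections(cti="cti_sys0"):
      base = f"/sys/bus/coresight/devices/{cti}"
      # Walk the CTI node and print every per-connection attribute we recognise.
      for root, _dirs, files in os.walk(base):
          for fname in sorted(files):
              if fname.endswith(SUFFIXES):
                  path = os.path.join(root, fname)
                  with open(path) as f:
                      print(f"{os.path.relpath(path, base)}: {f.read().strip()}")

  if __name__ == "__main__":
      dump_connections()
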
To meet the requirement above, below are some of my suggestions:
- After reading this patch set, my impression is that it mixes two things: the hardware topology and the trigger/channel configuration.
So the suggestions below try to distinguish these two things.
- For the hardware topology, we can reuse Suzuki's approach of 'in' and 'out' nodes for CTI devices:
Firstly, to reflect the whole matrix, we can create a 'static' device for the CTM (the same as the static replicator and static funnel); thus under the CTM sysfs node we can see all of its bound CTI devices;
For a CTI device, its sysfs node can use 'in' / 'out' nodes to connect to the ETM/TPIU/STM/ETR/ETF, the CPU, or any other devices;
For an ETM device or other CoreSight components, its sysfs node can use 'in' / 'out' nodes to connect to the CTI device.
For this part, we only focus on the hardware connections rather than the detailed configuration; a rough sketch of how such a layout might be walked from user space follows below.
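
To make the idea concrete, here is a purely illustrative sketch of walking such a topology; the device names ("ctm0", "cti_sys0") and the 'in:N' / 'out:N' link naming are assumptions about the proposed layout, not anything the current patches define:

  # Purely illustrative: device names and link naming are assumed, not defined.
  import os

  CS_BUS = "/sys/bus/coresight/devices"

  def links(dev, prefix):
      """Return (link name, target device) pairs for symlinks under a device node."""
      path = os.path.join(CS_BUS, dev)
      out = []
      for entry in sorted(os.listdir(path)):
          full = os.path.join(path, entry)
          if entry.startswith(prefix) and os.path.islink(full):
              out.append((entry, os.path.basename(os.readlink(full))))
      return out

  def dump_topology(ctm="ctm0"):
      # The CTM node would carry one link per bound CTI; each CTI then carries
      # 'in' / 'out' links to whatever is wired onto its trigger signals.
      for _entry, cti in links(ctm, "cti"):
          print(f"{ctm} -> {cti}")
          for name, target in links(cti, "in") + links(cti, "out"):
              print(f"  {cti} {name} -> {target}")

  if __name__ == "__main__":
      dump_topology()
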
- For the second level, it's more related to the trigger and channel configuration; this level only concerns accessing each CTI device's registers under the CTI's sysfs node.
A DT binding could be used for these configurations if they are defined by the SoC design and cannot be changed for a specific SoC.
On the other hand, if these configurations are very dynamic for a given SoC, I think we could even write a small tool (or Python script) to parse the user's configuration and access the sysfs nodes to write the corresponding registers (a minimal sketch follows below). Finally, we could use OpenCSD to store the related files.
For this part, the changes are specific to the CTI/CTM drivers only.
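
Here is a minimal sketch of that tool idea, assuming a made-up one-line-per-setting configuration format and hypothetical per-CTI sysfs attribute names; the real attribute names would come from whatever sysfs API the CTI driver exposes:

  # Minimal sketch: config format and attribute names are made up for illustration.
  import sys

  CS_BUS = "/sys/bus/coresight/devices"

  def apply_config(cfg_file):
      # Assumed format: one "cti_name attribute value" triple per line, e.g.
      #   cti_sys0 trigin_attach 2,1
      # to attach trigger input 2 to channel 1 on cti_sys0.
      with open(cfg_file) as f:
          for line in f:
              line = line.strip()
              if not line or line.startswith("#"):
                  continue
              cti, attr, value = line.split(maxsplit=2)
              with open(f"{CS_BUS}/{cti}/{attr}", "w") as attr_file:
                  attr_file.write(value)
              print(f"{cti}: {attr} <- {value}")

  if __name__ == "__main__":
      apply_config(sys.argv[1])
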
You and Mathieu/Suzuki have a much deeper understanding of the CoreSight framework and CTI usage; I just want to contribute some ideas (sorry if this introduces noise :)
Thanks,
Leo Yan