Hi Leo,
On Sun, 26 May 2019 at 14:11, Leo Yan leo.yan@linaro.org wrote:
Hi Mike,
On Wed, May 01, 2019 at 09:49:32AM +0100, Mike Leach wrote:
CTIs are defined in the device tree and associated with other CoreSight devices. The core CoreSight code has been modified to enable the registration of the CTI devices on the same bus as the other CoreSight components, but as these are not actually trace generation / capture devices, they are not part of the CoreSight path when generating trace.
However, the definition of the standard CoreSight device has been extended to include a reference to an associated CTI device, and the enable / disable trace path operations will auto enable/disable any associated CTI devices at the same time.
Programming is at present via sysfs - a full API is provided to utilise the hardware capabilities. As CTI devices are unprogrammed by default, the auto enable described above will have no effect until explicit programming takes place.
A set of device tree bindings specific to the CTI topology has been defined.
Documentation has been updated to describe both the CTI hardware, its use and programming in sysfs, and the new dts bindings required.
I'd like to reply to you about the sysfs nodes for CTI topology without disturbing Suzuki's ACPI binding patch set thread, so I copy 'n paste your comments here:
I fully agree that there is a requirement to expose device connections as Suzuki's patches provide. As commented on the original patch, it removes the need for users to have knowledge of hardware specifics or access to the device tree source.
For the trace datapath a simple link is sufficient to express this information. The nature of the data and connection is known - it is the trace data running from source to sink. The linked components are guaranteed to be registered CoreSight devices.
However, the requirement for the CTI is different.
CTI is not limited to connecting to other CoreSight devices. Any device can be wired into a CTI trigger signal. These devices may or may not have drivers / entries in the device tree. For each connection a client needs to know the signals connected to the CTI, the signal directions, the signal purpose if possible, and the device connected. For this reason we dynamically fill out a connections information sub-dir in sysfs containing _name, _trigin_sig, _trigout_sig, _trigin_type, _trigout_type - described in the patch [1].
This information is sufficient and necessary to enable a user to program a CTI in most cases.
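To illustrate, a user exploring that sub-dir might see something like the following (the device, attribute names and values here are invented for the example - the actual layout is as described in [1]):

    $ cd /sys/bus/coresight/devices/<cti-dev>/connections
    $ ls
    etr0_name  etr0_trigin_sig  etr0_trigin_type  etr0_trigout_sig  etr0_trigout_type  ...
    $ cat etr0_trigin_sig     # CTI trigger inputs driven by the ETR
    0 1
    $ cat etr0_trigin_type    # function of each signal, where known
    full acqcomp

From this a user can work out which CTI trigger signals to route onto channels without referring back to the device tree source.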
As an example look at the Juno dtsi in [2]. CTI 0 is connected to the ETR, ETF, STM and TPIU - all CoreSight devices. CTI 1 is connected to REF_CLK, the system profiler and the watchdog - no CoreSight devices at all. CTI 2 is connected to the ETF and two ELA devices - so one CoreSight device and two non-CoreSight devices.
So my view is that for the case where CTI is connected to another CoreSight device the sysfs link could be used in addition to the information described above.
To meet this requirement, below are some of my suggestions:
- After reading this patch set, my impression is that it mixes two things: hardware topology and triggers/channels configuration.
You are correct - there is the hardware topology and a software programming API to connect channels and triggers.
The hardware topology describes the association between the CTI and connected devices, along with the details of the hardware trigger signals between CTI and device. These are static details that must appear in a device tree. I consider the connection information just as much part of the hardware topology as the association between the CTI and connected device.
So the suggestions below try to distinguish these two things.
For the hardware topology, we can reuse Suzuki's approach for 'in' and 'out' nodes for CTI devices:
Firstly, to reflect the whole matrix, we can create a 'static' device for the CTM (the same as the static replicator and static funnel), thus under the CTM sysfs node we can see all its bound CTI devices;
I am not sure that this gives us any advantage over the current system - each CTI is tagged with a CTM id - thus establishing the association between CTIs.
For a CTI device, its sysfs node can use 'in' / 'out' nodes to connect to the ETM/TPIU/STM/ETR/ETF, CPU, or any other devices;
The "connection" between the ETM and CPU does not currently use the in/out node mechanism it uses a phandle reference - why should the connection between CPU and CTI be different?
I looked into re-using this mechanism while developing this set. It had a number of disadvantages:
1) It assumes everything is a CoreSight device - not true for CTI. We can have connections to non-CoreSight devices, or simple IO running off chip.
2) I wanted to avoid a flood of new in/out connections between devices in the device trees. There is no concept of a "path" that needs walking in the case of triggers, so there is no advantage to declaring in and out nodes to follow. In many cases (CPU, ETM, ETR, ETF) there would be both in and out nodes running between the device and the CTI.
My attempts to re-use the connection infrastructure resulted in adding a lot of parameters and information to the existing structures that were not relevant to non-CTI devices. I felt a clean break was better and makes the overall code easier to read and maintain.
The essential information is the connection details at the CTI end, and the direction and function of the trigger signals between the relevant devices. Encapsulating this information in the CTI driver is sufficient to allow the user to successfully program it. Association of CTI and devices using phandle references also solves the issue of non-CoreSight devices, and allowing the flexibility of having a connection named, but not otherwise defined in the device tree, allows trigger signals to be described even for devices that have no driver or device tree node of their own.
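As a purely illustrative sketch of the sort of description this leads to (the property and node names here are examples, not necessarily the exact binding defined in this set), a CTI node can associate one connection with a registered CoreSight device by phandle, and simply name another connection whose device has no driver or device tree node:

    cti@20020000 {
            compatible = "arm,coresight-cti", "arm,primecell";
            reg = <0x20020000 0x1000>;
            clocks = <&soc_smc50mhz>;
            clock-names = "apb_pclk";

            /* connection to a registered CoreSight device, by phandle */
            trig-conns@0 {
                    arm,trig-in-sigs = <0 1>;    /* e.g. FULL, ACQCOMP */
                    arm,trig-out-sigs = <0 1>;   /* e.g. FLUSHIN, TRIGIN */
                    arm,cs-dev-assoc = <&etr>;
            };

            /* connection named only - device not otherwise in the device tree */
            trig-conns@1 {
                    arm,trig-out-sigs = <2>;
                    arm,trig-conn-name = "sys_profiler";
            };
    };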
For an ETM device or other CoreSight components, its sysfs node can use 'in' / 'out' nodes to connect to the CTI device.
For this part, we only focus on the hardware connection rather than the detailed configurations.
Here the trigger connections between the CTI device and the associated device are hardware connections in the same way that the trace bus is a hardware connection between an ETM and a funnel. The only difference is that the trace bus is not explicitly mentioned in the description of the hardware as it is the _only_ possible connection and therefore is implied. The in/out nodes describe the association between two CoreSight devices and the hardware signals between them - an ATB bus running from out -> in.
With the CTI, the user needs to know the hardware connections, but these cannot be implied from the association, so they must be specifically defined in the device tree (or other hardware definition) - as they are not software discoverable or software configurable. These are part of the hardware topology of the device.
- For the second level, it's more related to the triggers and channels configuration; so this level only concerns accessing each CTI device's registers under the CTI's sysfs node.
If you are referring to the connection between triggers and channels, then these are 100% software configurable by programming the CTI. These can never be defined in advance in the device tree.
Thus by programming a set of CTIs we can connect a trigger across the system, e.g. the FULL signal from an ETR can be connected to a channel in the CTI associated with the ETR. The same channel can then be connected to the CTIIRQ trigger into a CPU by programming the CTI attached to that CPU.
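As a sketch of what that programming looks like via sysfs (the attribute names and trigger numbers below are indicative only - the real interface is documented in this set), assuming cti_sys0 is the CTI attached to the ETR and cti_cpu0 the CTI attached to CPU0:

    $ cd /sys/bus/coresight/devices
    # connect channel 2 to the trigger input carrying ETR FULL
    $ echo "2 1" > cti_sys0/channels/trigin_attach
    # on the CPU0 CTI, drive the CTIIRQ trigger output from channel 2
    $ echo "2 0" > cti_cpu0/channels/trigout_attach
    # enable both CTIs so the programming takes effect
    $ echo 1 > cti_sys0/enable
    $ echo 1 > cti_cpu0/enable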
A DT binding could be used to accomplish the related configurations if these configurations are fixed in the SoC and cannot be changed for a specific SoC.
On the other hand, if these configurations are very dynamic for one SoC, I think we could even write a small tool (or python script) to parse the user's configuration and access the sysfs nodes to write the corresponding registers. Finally we can use OpenCSD to store the related files.
Programming up CTIs is a complex operation and I agree that a tool could be useful to handle this. As part of some additional work I am doing for the overall complex configuration of coresight systems there may be an opportunity to develop something.
For the present - this set describes the hardware topology, which must include details of the trigger signal connections between CTI and device, and provides the software API to interconnect triggers and channels for propagation over the CTM to other CTIs / devices.
Thanks for the feedback.
Mike
For this part, it's only specific to the CTI/CTM drivers.
You and Mathieu/Suzuki have a much deeper understanding of the CoreSight framework and CTI usage; I just want to contribute some ideas (sorry if I introduce noise :)
Thanks, Leo Yan