Hi Leo,
On Tue, 28 May 2019 at 14:41, Leo Yan <leo.yan@linaro.org> wrote:
Hi Mike,
On Tue, May 28, 2019 at 10:36:08AM +0100, Mike Leach wrote:
[...]
To meet this requirement, below are some of my suggestions:
- After reading this patch set, my impression is that it mixes two things: the hardware topology and the trigger/channel configuration.
You are correct - there is the hardware topology and a software programming API to connect channels and triggers.
The hardware topology describes the association between the CTI and connected devices, along with the details of the hardware trigger signals between CTI and device.
Thanks for the clarification.
These are static details that must appear in a device tree. I consider the connection information just as much part of the hardware topology as the association between the CTI and connected device.
I want to ask a general question so that I can go back and read the driver with a clearer idea. The detailed signals are presented in the DT bindings, and these signals set corresponding variables in the CTI driver; the question is whether these signals ultimately affect the CTI register configuration, or whether we merely use these DT bindings to express the hardware attributes?
The trigger signal information will not directly affect any of the CTI programmable registers. It will be used by the trigger / channel API to ensure that only valid (defined) triggers are connected to channels. The information can be used to identify the signal connections to correctly program the device.
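To illustrate, a minimal sketch of that attach path is below. The CTIINEN<n> register offset follows the CoreSight CTI architecture, but the structure and function names here are illustrative, not the driver's actual API: the point is simply that the DT-declared signal mask gates the request before any register is touched.

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

#define CTIINEN(n)	(0x020 + 4 * (n))	/* trigger in <n> -> channel gates */

/* Illustrative connection state; the real driver structures may differ. */
struct cti_trig_con {
	void __iomem *base;	/* CTI register base */
	u32 trig_in_mask;	/* input triggers declared for this connection in DT */
};

/* Attach input trigger <trig> to channel <ch>, but only if the device
 * tree declared that trigger for this connection. */
static int cti_attach_trig_in(struct cti_trig_con *con, int ch, int trig)
{
	u32 val;

	if (!(con->trig_in_mask & BIT(trig)))
		return -EINVAL;	/* not a valid (defined) trigger */

	/* OR the channel into the gate register for this trigger */
	val = readl_relaxed(con->base + CTIINEN(trig));
	writel_relaxed(val | BIT(ch), con->base + CTIINEN(trig));
	return 0;
}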
So the suggestions below try to distinguish these two things.
For the hardware topology, we can reuse Suzuki's approach for 'in' and 'out' nodes for CTI devices:
Firstly, to reflect the whole matrix, we can create a 'static' device for the CTM (as with the static replicator and static funnel); thus under the CTM sysfs node we can see all its bound CTI devices;
I am not sure that this gives us any advantage over the current system - each CTI is tagged with a CTM id - thus establishing the association between CTIs.
From the driver implementation point of view, I agree a CTM driver is not strictly necessary.
But suppose we want to use sysfs nodes to retrieve all the CTI connections under the same CTM matrix: without a CTM node under sysfs, we would need to find all CTI devices and assume all of them use the same CTM (which is not always the case). If we create a sysfs node for the CTM, we can use the CTM node to easily hook up all the connected CTI nodes.
The ctmid - an optional CTI device tree parameter - can be used to differentiate between different CTMs in a system. If it is not present then we assume all CTIs are connected to the same CTM. This is true for all systems we currently support, but may change in future, especially on multi-socket systems.
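Picking up the optional parameter is then trivial; something like the sketch below, where the property name "arm,cti-ctm-id" is an assumption rather than a name fixed by this thread:

#include <linux/of.h>
#include <linux/types.h>

/* Read the optional CTM id for a CTI node, defaulting to 0 so that
 * all CTIs land on the same CTM when the property is absent. */
static u32 cti_get_ctm_id(const struct device_node *np)
{
	u32 ctm_id = 0;

	/* of_property_read_u32() leaves ctm_id untouched when missing */
	of_property_read_u32(np, "arm,cti-ctm-id", &ctm_id);
	return ctm_id;
}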
At present I do not see any advantage in finding all the CTIs on a given CTM. What you really need to know is if the CTI for device A, is on the same CTM as CTI for device B. For example, if I want to route the ETR full trigger back to a CPU outputting trace I need to know if the ETR CTI is on the same CTM as the CPU CTI. If you do need to know all the CTIs on a given CTM for some reason, then simply search the CoreSight bus and check the ctmid for each CTI found.
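Both checks are straightforward; a sketch follows, assuming a hypothetical per-CTI drvdata layout and a hypothetical dev_is_cti() test (the "cti" device-name prefix is an assumption too):

#include <linux/device.h>
#include <linux/string.h>
#include <linux/types.h>

extern struct bus_type coresight_bustype;	/* the CoreSight bus */

/* Illustrative per-CTI data; the real driver structure will differ. */
struct cti_drvdata {
	u32 ctm_id;
};

/* The common case: is the CTI for device A on the same CTM as the
 * CTI for device B? */
static bool cti_same_ctm(const struct cti_drvdata *a,
			 const struct cti_drvdata *b)
{
	return a->ctm_id == b->ctm_id;
}

/* Hypothetical: treat any CoreSight device whose name starts with
 * "cti" as a CTI. */
static bool dev_is_cti(struct device *dev)
{
	return strncmp(dev_name(dev), "cti", 3) == 0;
}

/* The rarer case: visit every CTI on a given CTM by walking the bus. */
static int match_ctm(struct device *dev, void *data)
{
	const u32 *want = data;
	struct cti_drvdata *drvdata = dev_get_drvdata(dev);

	if (dev_is_cti(dev) && drvdata && drvdata->ctm_id == *want)
		dev_info(dev, "on CTM %u\n", *want);
	return 0;	/* 0 == keep iterating */
}

/* usage: bus_for_each_dev(&coresight_bustype, NULL, &ctm_id, match_ctm); */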
BTW, if we review the CTI driver, it uses a 'static' array to maintain CTI devices and assumes all of them belong to the CTM with id = 0;
The next patch set has dropped the static array - there is only the dynamic list. The assumption of CTM ID = 0 is the default, but it is controllable in the device tree as explained above.
For flexibility, we could actually create CTM instances dynamically as well. But I don't have a strong opinion on this, especially if you prefer to keep the initial driver as simple as possible; my suggestion comes from thinking about how to easily reflect the connections under sysfs.
Though I went through your other replies, it will take more time to understand them; I will reply when I have solid comments in the next 2~3 days.
[...]
Thanks,
Leo Yan
Thanks
Mike
--
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK