On 18/06/2019 21:16, Mathieu Poirier wrote:
Hi,
On Wed, Jun 05, 2019 at 08:03:12PM +0100, Mike Leach wrote:
Currently each driver type sets up its own CPU power notifications, such as CPU hotplug. A common mechanism is needed for all driver types to use.
Adds a notification mechanism for CPU power events such as CPU hotplug to the CoreSight common code, to be used by all CPU-bound CS devices.
Provides a per-device registration mechanism with a callback function and an event-type notifier. The core code now manages the CPU hotplug registrations and calls back any registered device when events occur.
Signed-off-by: Mike Leach <mike.leach@linaro.org>
 /*
  * coresight_search_device_idx - Search the fwnode handle of a device
  * in the given dev_idx list. Must be called with the coresight_mutex held.
@@ -1404,3 +1403,169 @@ char *coresight_alloc_device_name(struct coresight_dev_list *dict,
 	return name;
 }
 EXPORT_SYMBOL_GPL(coresight_alloc_device_name);
+/* cs general CPU bound device pm notification */
+struct coresight_cpupm_item {
+	struct coresight_device *csdev;
+	p_cs_cpu_pm_event callback;
+	struct list_head node;
+};
+/* CS cpu pm notifier lists per cpu */
+static DEFINE_PER_CPU(struct list_head *, cs_cpupm_notifier);
+static enum cpuhp_state cscpu_hp_online;
+static int cscpu_dev_reg_cnt;
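From the driver side, registration with this mechanism would presumably look roughly like the sketch below. This is inferred from struct coresight_cpupm_item above; coresight_cpupm_register() and the callback signature are assumed names, not taken verbatim from the patch.

/*
 * Hypothetical usage sketch, not from the patch: a CPU-bound device
 * hands the core a callback matching p_cs_cpu_pm_event; the core
 * then fans hotplug events out to every device registered on the
 * affected CPU.
 */
static int cti_cs_cpupm_event(struct coresight_device *csdev,
			      unsigned int event, unsigned int cpu)
{
	/* dispatch on event: CPU coming online, going down, etc. */
	return 0;
}

static int cti_pm_register(struct coresight_device *csdev, unsigned int cpu)
{
	/* coresight_cpupm_register() is an assumed core API name */
	return coresight_cpupm_register(csdev, cpu, cti_cs_cpupm_event);
}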
I do not really see the advantage of adding all this code. To me it is like shadowing an infrastructure the CPU hotplug subsystem is already giving us. And looking at function cti_cs_cpupm_event() in the next patch, the code to handle 3 events is grouped into a single function. To me we are not gaining anything other than replicating code that already exists.
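For comparison, using the existing hotplug infrastructure directly is a single registration per driver. A minimal sketch follows; the callback names are illustrative, and the cpuhp state shown is the one etm4x already uses, so CTI would likely need its own entry in enum cpuhp_state.

#include <linux/cpuhotplug.h>

/* Illustrative callbacks; they run on the CPU that is going up or down. */
static int cti_starting_cpu(unsigned int cpu)
{
	/* re-enable the CTI bound to this CPU */
	return 0;
}

static int cti_dying_cpu(unsigned int cpu)
{
	/* quiesce the CTI bound to this CPU before it goes offline */
	return 0;
}

static int __init cti_pm_setup(void)
{
	/* one registration covers all CPUs; no per-device lists needed */
	return cpuhp_setup_state(CPUHP_AP_ARM_CORESIGHT_STARTING,
				 "arm/coresight_cti:starting",
				 cti_starting_cpu, cti_dying_cpu);
}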
Exactly what I think. We are adding another layer on top of already existing layers, with CoreSight-specific event names; it is not worth the overhead.
To me it is much easier to have a percpu instance of the cti_drvdata (I think Suzuki already hinted at that in his review) and access the right one using the CPU parameter provided by the individual callbacks.
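Concretely, that could look something like the sketch below (the struct cti_drvdata layout and the cti_cpu_resume() helper are assumed, and DEFINE_PER_CPU/per_cpu come from <linux/percpu.h>).

static DEFINE_PER_CPU(struct cti_drvdata *, cti_cpu_drvdata);

/* probe() stores each CPU-bound instance in its per-CPU slot */
static void cti_register_cpu_instance(struct cti_drvdata *drvdata,
				      unsigned int cpu)
{
	per_cpu(cti_cpu_drvdata, cpu) = drvdata;
}

static int cti_starting_cpu(unsigned int cpu)
{
	/* the cpu argument cpuhp already provides selects the instance */
	struct cti_drvdata *drvdata = per_cpu(cti_cpu_drvdata, cpu);

	if (drvdata)
		cti_cpu_resume(drvdata);	/* hypothetical helper */
	return 0;
}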
Cheers
Suzuki