On Mon, 24 Jun 2019 at 06:39, Mike Leach mike.leach@linaro.org wrote:
Hi Mathieu,
I'm working through the other responses from you and Suzuki, but this seems worth taking out of order...
On Tue, 18 Jun 2019 at 21:16, Mathieu Poirier mathieu.poirier@linaro.org wrote:
Hi,
On Wed, Jun 05, 2019 at 08:03:12PM +0100, Mike Leach wrote:
Currently each driver type sets up its own CPU power notifications, such as CPU hotplug. A common mechanism is needed for all driver types to use.
Adds a general notification mechanism for CPU power events, such as CPU hotplug, to the CoreSight common code, to be used by all CPU bound CS devices.
Provides a per-device registration mechanism with a callback function and event type notifier. The core code now manages the CPU hotplug registrations and calls back any registered device when events occur.
Signed-off-by: Mike Leach mike.leach@linaro.org
 drivers/hwtracing/coresight/coresight-priv.h |  26 +++
 drivers/hwtracing/coresight/coresight.c      | 167 ++++++++++++++++++-
 2 files changed, 192 insertions(+), 1 deletion(-)
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
index 9c4fd7eb56eb..5ea2f86f71e0 100644
--- a/drivers/hwtracing/coresight/coresight-priv.h
+++ b/drivers/hwtracing/coresight/coresight-priv.h
@@ -214,4 +214,30 @@ void coresight_release_platform_data(struct coresight_platform_data *pdata);
int coresight_device_fwnode_match(struct device *dev, void *fwnode);
+/*
+ * cpu power management - allow any CPU bound Coresight device
+ * to use a common registration infrastructure for notifications.
+ */
+
+/* cpu pm event types */
+enum cs_cpu_pm_events {
+        CS_CPUPM_SMP_ON_REG_CPU,        /* smp call on cpu at registration */
+        CS_CPUPM_CPUHP_CS_START,        /* hotplug CoreSight starting CPU */
+        CS_CPUPM_CPUHP_CS_STOP,         /* hotplug CoreSight stopping CPU */
+        CS_CPUPM_CPUHP_AP_DYN_ONLINE,   /* hotplug AP dynamic online CPU */
+        CS_CPUPM_CPUHP_AP_DYN_OFFLINE,  /* hotplug AP dynamic offline CPU */
+};
+
+/* callback function type */
+typedef int (*p_cs_cpu_pm_event)(struct coresight_device *csdev,
+                                 unsigned int cpuid,
+                                 enum cs_cpu_pm_events event);
+
+/* registration functions */
+int coresight_cpupm_register(struct coresight_device *csdev,
+                             unsigned int cpuid,
+                             p_cs_cpu_pm_event callback);
+void coresight_cpupm_unregister(struct coresight_device *csdev,
+                                unsigned int cpuid);
 #endif
diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
index 74661fabe098..509fc0b8329d 100644
--- a/drivers/hwtracing/coresight/coresight.c
+++ b/drivers/hwtracing/coresight/coresight.c
@@ -1347,7 +1346,6 @@ void coresight_unregister(struct coresight_device *csdev)
 }
 EXPORT_SYMBOL_GPL(coresight_unregister);
 
-
 /*
  * coresight_search_device_idx - Search the fwnode handle of a device
  * in the given dev_idx list. Must be called with the coresight_mutex held.
@@ -1404,3 +1403,169 @@ char *coresight_alloc_device_name(struct coresight_dev_list *dict,
 	return name;
 }
 EXPORT_SYMBOL_GPL(coresight_alloc_device_name);
+
+/* cs general CPU bound device pm notification */
+struct coresight_cpupm_item {
+        struct coresight_device *csdev;
+        p_cs_cpu_pm_event callback;
+        struct list_head node;
+};
+
+/* CS cpu pm notifier lists per cpu */
+static DEFINE_PER_CPU(struct list_head *, cs_cpupm_notifier);
+
+static enum cpuhp_state cscpu_hp_online;
+static int cscpu_dev_reg_cnt;
I do not really see the advantage of adding all this code. To me it is like shadowing an infrastructure the CPU hotplug subsystem is already giving us. And looking at function cti_cs_cpupm_event() in the next patch, the code to handle 3 events is grouped into a single function. To me we are not gaining anything other than replicating code that already exists.
To me it is much easier to have a percpu instance of the cti_drvdata (I think Suzuki already hinted at that in his review) and access the right one using the CPU parameter provided by the individual callbacks.
Let me know if the above is not handling a scenario that this patch is.
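Roughly what I have in mind (untested sketch - cti_cpu_drvdata, cti_cpuhp_enable_hw() and cti_cpuhp_disable_hw() are placeholders, not names from the actual CTI patches):

static DEFINE_PER_CPU(struct cti_drvdata *, cti_cpu_drvdata); /* filled in at probe */

static int cti_starting_cpu(unsigned int cpu)
{
        struct cti_drvdata *drvdata = per_cpu(cti_cpu_drvdata, cpu);

        if (drvdata)
                cti_cpuhp_enable_hw(drvdata);   /* placeholder for the CTI enable path */
        return 0;
}

static int cti_dying_cpu(unsigned int cpu)
{
        struct cti_drvdata *drvdata = per_cpu(cti_cpu_drvdata, cpu);

        if (drvdata)
                cti_cpuhp_disable_hw(drvdata);  /* placeholder for the CTI disable path */
        return 0;
}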
My initial thoughts were:
1) This code appears in both the ETMv3 and ETMv4 drivers. I could add it to the CTI driver as well, but that's just more maintenance - it seems better design for common code to go into the CoreSight core.
2) This patch adds CPU hotplug. I can also add CPUidle to the same infrastructure code - with a small change in the core code and the addition of 3 lines in the CTI driver, CPUidle power states are covered too. I have a test branch that does just that. As the CTI driver is designed from the ground up to respect the power states while programming, this is needed for testing.
Then I looked deeper into the issue...
3) As I looked further into this, the current ETM drivers use the cpuhp_setup_state_nocalls_cpuslocked() call - this sets up a single-instance callback state. So any attempt to register a second set of callbacks fails - either with -EBUSY if you try to register callbacks under a different name, or -EINVAL if you pass the name as NULL. The reason for the -EINVAL is that it is forbidden to register a second instance on a single-instance callback. Looking at the CPU hotplug API, there is a multi-instance variant, but this only allows for the registration of one set of callback functions, which take a 'node' parameter in addition to the CPU. The first instance to register provides the callback functions; the remaining instances register using a different call - add_instance() - which just provides a 'node' object (actually something embedding an hlist node); see the sketch below. So using the multi-instance variant, the callback functions have to be in the coresight core code anyway, and we need code in there to figure out which node and which driver to notify - there is no way that the ETM driver can find the correct callback into the CTI driver, or vice versa.
I will admit I discovered (3) only after writing this patch - in my view 1 & 2 were pretty strong reasons - but now it seems it is required to allow the CPU bound drivers to get notified. I was of the view that this common code could be added and additional patches could fix up the ETMv3/ETMv4 drivers.
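For reference, the multi-instance flavour looks roughly like this (sketch only - struct cs_hp_node, cs_hp_starting() and my_node are made up to illustrate the shape of the API):

/* the hlist node has to be embedded in some per-device structure */
struct cs_hp_node {
        struct hlist_node node;
        struct coresight_device *csdev;
};

/* single set of callbacks, shared by every registered instance */
static int cs_hp_starting(unsigned int cpu, struct hlist_node *node)
{
        struct cs_hp_node *cs_node = hlist_entry(node, struct cs_hp_node, node);

        /* common code still has to work out which driver owns cs_node->csdev */
        return 0;
}

static struct cs_hp_node my_node;       /* would live in a drvdata somewhere */

static int cs_hp_init(void)
{
        int ret;

        /* the first caller sets up the state and provides the callbacks... */
        ret = cpuhp_setup_state_multi(CPUHP_AP_ARM_CORESIGHT_STARTING,
                                      "arm/coresight:starting",
                                      cs_hp_starting, NULL);
        if (ret)
                return ret;

        /* ...everyone else can only add an instance (a node), not callbacks */
        return cpuhp_state_add_instance(CPUHP_AP_ARM_CORESIGHT_STARTING,
                                        &my_node.node);
}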
Now that Andrew is taking care of ETMv3/4 we can concentrate on CTIs. The ETM drivers are using the same CPUHP_AP_ARM_CORESIGHT_STARTING without problem because they are mutually exclusive. We also find the same CPUHP_AP_ONLINE_DYN in both drivers because it is one of only two defines (along with CPUHP_BP_PREPARE_DYN) that can be added multiple times.
That leaves us with 3 choices:
1) We use CPUHP_AP_ARM_CORESIGHT_STARTING as a multiplexer, like you did.
2) We make CPUHP_AP_ARM_CORESIGHT_STARTING part of the _DYN [1] API, something that has little chance of flying upstream.
3) We add a new define, something like CPUHP_AP_ARM_CORESIGHT_CTI_STARTING, just like others have done [2] (see the sketch below).
[1]. https://elixir.bootlin.com/linux/latest/source/kernel/cpu.c#L1561
[2]. git log --oneline -10 include/linux/cpuhotplug.h
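For (3) that would look something like the following (fragments only - the exact placement of the new enum entry and the CTI callback names are up to the CTI patches; cti_starting_cpu/cti_dying_cpu are the same placeholders as in the earlier sketch):

/* include/linux/cpuhotplug.h */
        CPUHP_AP_ARM_CORESIGHT_STARTING,
        CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,

/* in the CTI driver init */
        ret = cpuhp_setup_state_nocalls(CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
                                        "arm/coresight_cti:starting",
                                        cti_starting_cpu, cti_dying_cpu);
        if (ret)
                return ret;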
At this point, I think that the best way to solve this is to split off the CPU hotplug updates to the CTI driver and issue two patchsets: 1st - a baseline CTI driver without any of the hotplug handling; 2nd - a CPU hotplug fixup patchset that adds the common HP handling, fixes up ETMv3 and ETMv4, and adds it to the CTI driver in the same set. This is necessary as, without removing the HP registration from the two ETM drivers, the common code will fail when invoked in the CTI driver. (Interestingly, neither of the ETM drivers, nor my initial patch, looked at the return code from cpuhp_setup_state_nocalls_cpuslocked()
I just noticed that as well. If I remember correctly Sebastian Siewior sent that code as part of a substantial patchset when the hotplug subsystem got a serious face lift a few years back. Since all the patches were the same I applied his code but looking back at it now, I can't see a reason to ignore the return code associated with the CPUHP_AP_ARM_CORESIGHT_STARTING registration.
- when that is checked, these issues become very apparent.)
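i.e. the registration in the ETM drivers would need to become something like this (sketch; etm4_starting_cpu/etm4_dying_cpu as in the existing etm4x driver, if I recall the names correctly):

        cpus_read_lock();
        ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
                                                   "arm/coresight:starting",
                                                   etm4_starting_cpu, etm4_dying_cpu);
        if (ret) {
                /* e.g. -EBUSY/-EINVAL if another driver already owns the state */
                cpus_read_unlock();
                return ret;
        }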
Thanks and Regards
Mike
+static int coresight_iterate_cpu_event_list(unsigned int cpu,
+                                            enum cs_cpu_pm_events event)
+{
+        struct list_head *cpu_list;
+        struct coresight_cpupm_item *item;
+
+        mutex_lock(&coresight_mutex);
+        cpu_list = per_cpu(cs_cpupm_notifier, cpu);
+        if (cpu_list && !list_empty(cpu_list)) {
+                list_for_each_entry(item, cpu_list, node) {
+                        item->callback(item->csdev, cpu, event);
+                }
+        }
+        mutex_unlock(&coresight_mutex);
+        return 0;
+}
+
+static int coresight_hp_starting_cpu(unsigned int cpu)
+{
+        return coresight_iterate_cpu_event_list(cpu, CS_CPUPM_CPUHP_CS_START);
+}
+
+static int coresight_hp_stopping_cpu(unsigned int cpu)
+{
+        return coresight_iterate_cpu_event_list(cpu, CS_CPUPM_CPUHP_CS_STOP);
+}
+
+static int coresight_hp_dyn_online_cpu(unsigned int cpu)
+{
+        return coresight_iterate_cpu_event_list(cpu,
+                                                CS_CPUPM_CPUHP_AP_DYN_ONLINE);
+}
+
+static int coresight_hp_dyn_offline_cpu(unsigned int cpu)
+{
+        return coresight_iterate_cpu_event_list(cpu,
+                                                CS_CPUPM_CPUHP_AP_DYN_OFFLINE);
+}
+
+struct cs_smp_reg_cb_info {
+        unsigned int cpu;
+        struct coresight_cpupm_item *item;
+};
+
+static void coresight_smp_onreg_callback(void *info)
+{
+        struct cs_smp_reg_cb_info *cb_info = info;
+
+        cb_info->item->callback(cb_info->item->csdev, cb_info->cpu,
+                                CS_CPUPM_SMP_ON_REG_CPU);
+}
+
+int coresight_cpupm_register(struct coresight_device *csdev,
+                             unsigned int cpuid,
+                             p_cs_cpu_pm_event callback)
+{
+        struct list_head *cpu_list;
+        struct coresight_cpupm_item *item;
+        int err = 0;
+        struct cs_smp_reg_cb_info on_reg_cb_info;
+
+        mutex_lock(&coresight_mutex);
+        cpu_list = per_cpu(cs_cpupm_notifier, cpuid);
+        /* init list if no head yet */
+        if (!cpu_list) {
+                cpu_list = kzalloc(sizeof(struct list_head), GFP_KERNEL);
+                if (!cpu_list) {
+                        err = -ENOMEM;
+                        goto cpupm_reg_done;
+                }
+                INIT_LIST_HEAD(cpu_list);
+                per_cpu(cs_cpupm_notifier, cpuid) = cpu_list;
+        }
+
+        /* create a list item */
+        item = kzalloc(sizeof(struct coresight_cpupm_item), GFP_KERNEL);
+        if (!item) {
+                err = -ENOMEM;
+                goto cpupm_reg_done;
+        }
+        item->csdev = csdev;
+        item->callback = callback;
+
+        /* init the cpu hotplug notification */
+        cpus_read_lock();
+        if (!cscpu_dev_reg_cnt) {
+                cpuhp_setup_state_nocalls_cpuslocked(
+                        CPUHP_AP_ARM_CORESIGHT_STARTING,
+                        "arm/coresightcpu:starting",
+                        coresight_hp_starting_cpu,
+                        coresight_hp_stopping_cpu);
+                cscpu_hp_online = cpuhp_setup_state_nocalls_cpuslocked(
+                        CPUHP_AP_ONLINE_DYN, "arm/coresightcpu:online",
+                        coresight_hp_dyn_online_cpu,
+                        coresight_hp_dyn_offline_cpu);
+                if (cscpu_hp_online < 0) {
+                        err = cscpu_hp_online;
+                        cpuhp_remove_state_nocalls(
+                                CPUHP_AP_ARM_CORESIGHT_STARTING);
+                        kfree(item);
+                        cscpu_hp_online = 0;
+                        goto cpupm_hp_done;
+                }
+        }
+        cscpu_dev_reg_cnt++;
+        list_add(&item->node, cpu_list);
+
+        /* smp callback on dev cpu @ registration */
+        on_reg_cb_info.item = item;
+        on_reg_cb_info.cpu = cpuid;
+        smp_call_function_single(cpuid, coresight_smp_onreg_callback,
+                                 &on_reg_cb_info, 1);
+
+cpupm_hp_done:
+        cpus_read_unlock();
+cpupm_reg_done:
+        mutex_unlock(&coresight_mutex);
+        return err;
+}
+EXPORT_SYMBOL_GPL(coresight_cpupm_register);
+
+void coresight_cpupm_unregister(struct coresight_device *csdev,
+                                unsigned int cpuid)
+{
+        struct list_head *cpu_list;
+        struct coresight_cpupm_item *item, *tmp;
+
+        mutex_lock(&coresight_mutex);
+        cpu_list = per_cpu(cs_cpupm_notifier, cpuid);
+        if (!list_empty(cpu_list)) {
+                list_for_each_entry_safe(item, tmp, cpu_list, node) {
+                        if (item->csdev == csdev) {
+                                list_del(&item->node);
+                                cscpu_dev_reg_cnt--;
+                                kfree(item);
+                                goto cs_cpupm_item_removed;
+                        }
+                }
+        }
+
+cs_cpupm_item_removed:
+        if (!cscpu_dev_reg_cnt) {
+                cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
+                if (cscpu_hp_online)
+                        cpuhp_remove_state_nocalls(cscpu_hp_online);
+        }
+        mutex_unlock(&coresight_mutex);
+}
+EXPORT_SYMBOL_GPL(coresight_cpupm_unregister);
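For reference, a CPU bound driver would then hook into this along the following lines (illustrative only - my_cs_cpu_pm_event() and my_driver_setup_pm() are placeholders, not part of the patch):

static int my_cs_cpu_pm_event(struct coresight_device *csdev,
                              unsigned int cpuid,
                              enum cs_cpu_pm_events event)
{
        /* react to CS_CPUPM_CPUHP_CS_START, CS_CPUPM_CPUHP_CS_STOP, etc. */
        return 0;
}

static int my_driver_setup_pm(struct coresight_device *csdev, unsigned int cpu)
{
        /* register for CPU PM events on the CPU this device is bound to */
        return coresight_cpupm_register(csdev, cpu, my_cs_cpu_pm_event);
}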
--
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK