 
            This patch series introduces support for CPU cluster local CoreSight components, including funnel, replicator, and TMC, which reside inside CPU cluster power domains. These components require special handling due to power domain constraints.
Unlike system-level CoreSight devices, CPU cluster local components share the power domain of the CPU cluster. When the cluster enters low-power mode (LPM), the registers of these components become inaccessible. Importantly, `pm_runtime_get` calls alone are insufficient to bring the CPU cluster out of LPM, making standard register access unreliable in such cases.
To address this, the series introduces:

- Device tree bindings for the CPU cluster local funnel, replicator, and TMC.
- A cpumask in each driver to record the CPUs belonging to the cluster where the CPU cluster local component resides.
- Safe register access via smp_call_function_single() on CPUs within the associated cpumask, ensuring the cluster is power-resident during the access (see the sketch after this list).
- Delayed probe support for CPU cluster local components: when all CPUs of the cluster are offline, the probe is deferred and the component is re-probed when any CPU in the cluster comes online.
- An `enum cs_mode` argument for the link enable interfaces, so drivers can avoid using smp_call_function_single() in perf mode.
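For reference, the cross-CPU register access pattern used throughout the drivers looks roughly like the sketch below. The names (struct csdev_smp_arg, csdev_cluster_read(), etc.) are illustrative, not the exact driver code; the point is only that the access runs on a CPU inside the cluster so the cluster power domain stays on:

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/smp.h>
#include <linux/types.h>

/* Hypothetical argument block handed to the cross-CPU call. */
struct csdev_smp_arg {
	void __iomem *base;	/* component register base */
	u32 offset;		/* register offset to read */
	u32 val;		/* value read back */
};

/* Runs on a CPU of the cluster, so the cluster power domain is powered. */
static void csdev_read_on_cpu(void *info)
{
	struct csdev_smp_arg *arg = info;

	arg->val = readl_relaxed(arg->base + arg->offset);
}

/* Try each CPU of the cluster until one is online and runs the call. */
static int csdev_cluster_read(const struct cpumask *cluster_cpus,
			      void __iomem *base, u32 offset, u32 *val)
{
	struct csdev_smp_arg arg = { .base = base, .offset = offset };
	int cpu, ret = -ENODEV;	/* default if no cluster CPU is online */

	for_each_cpu(cpu, cluster_cpus) {
		ret = smp_call_function_single(cpu, csdev_read_on_cpu, &arg, 1);
		if (!ret) {
			*val = arg.val;
			return 0;
		}
	}
	return ret;
}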
Patch summary:
Patch 1: Adds device tree bindings for CPU cluster funnel/replicator/TMC devices.
Patches 2-3: Add support for CPU cluster funnel.
Patches 4-6: Add support for CPU cluster replicator.
Patches 7-10: Add support for CPU cluster TMC.
Patch 11: Add 'cs_mode' to link enable functions.
Patches 12-13: Add Coresight nodes for APSS debug block for x1e80100 and fix build issue.
Verification:
This series has been verified on sm8750.
Test steps for delay probe:
1. Limit the system to enable at most 6 CPU cores during boot.
2. echo 1 >/sys/bus/cpu/devices/cpu6/online
3. Check whether ETM6 and ETM7 have been probed.
Test steps for sysfs mode:
echo 1 >/sys/bus/coresight/devices/tmc_etf0/enable_sink
echo 1 >/sys/bus/coresight/devices/etm0/enable_source
echo 1 >/sys/bus/coresight/devices/etm6/enable_source
echo 0 >/sys/bus/coresight/devices/etm0/enable_source
echo 0 >/sys/bus/coresight/devices/etm6/enable_source
echo 0 >/sys/bus/coresight/devices/tmc_etf0/enable_sink
echo 1 >/sys/bus/coresight/devices/tmc_etf1/enable_sink
echo 1 >/sys/bus/coresight/devices/etm0/enable_source
cat /dev/tmc_etf1 >/tmp/etf1.bin
echo 0 >/sys/bus/coresight/devices/etm0/enable_source
echo 0 >/sys/bus/coresight/devices/tmc_etf1/enable_sink
echo 1 >/sys/bus/coresight/devices/tmc_etf2/enable_sink
echo 1 >/sys/bus/coresight/devices/etm6/enable_source
cat /dev/tmc_etf2 >/tmp/etf2.bin
echo 0 >/sys/bus/coresight/devices/etm6/enable_source
echo 0 >/sys/bus/coresight/devices/tmc_etf2/enable_sink
Test steps for sysfs node:
cat /sys/bus/coresight/devices/tmc_etf*/mgmt/*
cat /sys/bus/coresight/devices/funnel*/funnel_ctrl
cat /sys/bus/coresight/devices/replicator*/mgmt/*
Test steps for perf mode:
perf record -a -e cs_etm//k -- sleep 5
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
Yuanfang Zhang (12):
      dt-bindings: arm: coresight: Add cpu cluster tmc/funnel/replicator support
      coresight-funnel: Add support for CPU cluster funnel
      coresight-funnel: Handle delay probe for CPU cluster funnel
      coresight-replicator: Add support for CPU cluster replicator
      coresight-replicator: Handle delayed probe for CPU cluster replicator
      coresight-replicator: Update mgmt_attrs for CPU cluster replicator compatibility
      coresight-tmc: Add support for CPU cluster ETF and refactor probe flow
      coresight-tmc-etf: Refactor enable function for CPU cluster ETF support
      coresight-tmc: Update tmc_mgmt_attrs for CPU cluster TMC compatibility
      coresight-tmc: Handle delayed probe for CPU cluster TMC
      coresight: add 'cs_mode' to link enable functions
      arm64: dts: qcom: x1e80100: add Coresight nodes for APSS debug block
 .../bindings/arm/arm,coresight-dynamic-funnel.yaml |  23 +-
 .../arm/arm,coresight-dynamic-replicator.yaml      |  22 +-
 .../devicetree/bindings/arm/arm,coresight-tmc.yaml |  22 +-
 arch/arm64/boot/dts/qcom/x1e80100.dtsi             | 885 +++++++++++++++++++++
 arch/arm64/boot/dts/qcom/x1p42100.dtsi             |  12 +
 drivers/hwtracing/coresight/coresight-core.c       |   7 +-
 drivers/hwtracing/coresight/coresight-funnel.c     | 260 +++++-
 drivers/hwtracing/coresight/coresight-replicator.c | 343 +++++++-
 drivers/hwtracing/coresight/coresight-tmc-core.c   | 396 +++++++--
 drivers/hwtracing/coresight/coresight-tmc-etf.c    | 105 ++-
 drivers/hwtracing/coresight/coresight-tmc.h        |  10 +
 drivers/hwtracing/coresight/coresight-tnoc.c       |   3 +-
 drivers/hwtracing/coresight/coresight-tpda.c       |   3 +-
 include/linux/coresight.h                          |   3 +-
 14 files changed, 1912 insertions(+), 182 deletions(-)
---
base-commit: 01f96b812526a2c8dcd5c0e510dda37e09ec8bcd
change-id: 20251016-cpu_cluster_component_pm-ce518f510433
Best regards,
 
            Add the following compatible strings to the bindings:
- arm,coresight-cpu-funnel
- arm,coresight-cpu-replicator
- arm,coresight-cpu-tmc
Each requires 'power-domains' when used.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 .../bindings/arm/arm,coresight-dynamic-funnel.yaml | 23 +++++++++++++++++-----
 .../arm/arm,coresight-dynamic-replicator.yaml      | 22 +++++++++++++++++----
 .../devicetree/bindings/arm/arm,coresight-tmc.yaml | 22 +++++++++++++++++----
 3 files changed, 54 insertions(+), 13 deletions(-)
diff --git a/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-funnel.yaml b/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-funnel.yaml index b74db15e5f8af2226b817f6af5f533b1bfc74736..8f32d4e3bbb750f5a6262db0032318875739cf81 100644 --- a/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-funnel.yaml +++ b/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-funnel.yaml @@ -28,19 +28,32 @@ select: properties: compatible: contains: - const: arm,coresight-dynamic-funnel + enum: + - arm,coresight-dynamic-funnel + - arm,coresight-cpu-funnel required: - compatible
allOf: - $ref: /schemas/arm/primecell.yaml#
+ - if: + properties: + compatible: + contains: + const: arm,coresight-cpu-funnel + then: + required: + - power-domains + properties: compatible: - items: - - const: arm,coresight-dynamic-funnel - - const: arm,primecell - + oneOf: + - items: + - const: arm,coresight-dynamic-funnel + - const: arm,primecell + - items: + - const: arm,coresight-cpu-funnel reg: maxItems: 1
diff --git a/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-replicator.yaml b/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-replicator.yaml index 17ea936b796fd42bb885e539201276a11e91028c..5ce30c4e9c415f487ee61dceaf5b8ad12c78e671 100644 --- a/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-replicator.yaml +++ b/Documentation/devicetree/bindings/arm/arm,coresight-dynamic-replicator.yaml @@ -28,18 +28,32 @@ select: properties: compatible: contains: - const: arm,coresight-dynamic-replicator + enum: + - arm,coresight-dynamic-replicator + - arm,coresight-cpu-replicator required: - compatible
allOf: - $ref: /schemas/arm/primecell.yaml#
+ - if: + properties: + compatible: + contains: + const: arm,coresight-cpu-replicator + then: + required: + - power-domains + properties: compatible: - items: - - const: arm,coresight-dynamic-replicator - - const: arm,primecell + oneOf: + - items: + - const: arm,coresight-dynamic-replicator + - const: arm,primecell + - items: + - const: arm,coresight-cpu-replicator
reg: maxItems: 1 diff --git a/Documentation/devicetree/bindings/arm/arm,coresight-tmc.yaml b/Documentation/devicetree/bindings/arm/arm,coresight-tmc.yaml index 96dd5b5f771a39138df9adde0c9c9a6f5583d9da..d7c0b618fe98a3ca584041947fb5c0f80f1ade6e 100644 --- a/Documentation/devicetree/bindings/arm/arm,coresight-tmc.yaml +++ b/Documentation/devicetree/bindings/arm/arm,coresight-tmc.yaml @@ -29,18 +29,32 @@ select: properties: compatible: contains: - const: arm,coresight-tmc + enum: + - arm,coresight-tmc + - arm,coresight-cpu-tmc required: - compatible
allOf: - $ref: /schemas/arm/primecell.yaml#
+ - if: + properties: + compatible: + contains: + const: arm,coresight-cpu-tmc + then: + required: + - power-domains + properties: compatible: - items: - - const: arm,coresight-tmc - - const: arm,primecell + oneOf: + - items: + - const: arm,coresight-tmc + - const: arm,primecell + - items: + - const: arm,coresight-cpu-tmc
reg: maxItems: 1
 
            The CPU cluster funnel is a type of CoreSight funnel that resides inside the CPU cluster's power domain. Unlike dynamic funnels, CPU funnels are coupled with CPU clusters and share their power management characteristics. When the CPU cluster enters low-power mode (LPM), the funnel's registers become inaccessible. Moreover, runtime PM operations alone cannot bring the CPU cluster out of LPM, making standard register access unreliable.
This patch enhances the existing CoreSight funnel platform driver to support CPU cluster funnels by:

- Adding a cpumask to funnel_drvdata to store the CPU cluster's mask for CPU cluster funnels.
- Retrieving the associated CPU cluster's cpumask from the power domain.
- Using smp_call_function_single() to clear the self-claim tag.
- Refactoring the funnel_enable function; for cluster funnels, smp_call_function_single() is used to ensure register visibility.
- Encapsulating CoreSight registration in funnel_add_coresight_dev().
- Reusing the existing platform driver infrastructure to minimize duplication and maintain compatibility with static funnel devices.
This ensures funnel operations are safe and functional even when the CPU cluster is in low-power mode.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-funnel.c | 185 ++++++++++++++++++++-----
 1 file changed, 154 insertions(+), 31 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c index 3b248e54471a38f501777fe162fea850d1c851b3..12c29eb98b2122a3ade4cbed9a0d91c67ec777ed 100644 --- a/drivers/hwtracing/coresight/coresight-funnel.c +++ b/drivers/hwtracing/coresight/coresight-funnel.c @@ -15,6 +15,7 @@ #include <linux/slab.h> #include <linux/of.h> #include <linux/platform_device.h> +#include <linux/pm_domain.h> #include <linux/pm_runtime.h> #include <linux/coresight.h> #include <linux/amba/bus.h> @@ -40,6 +41,7 @@ DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel"); * @csdev: component vitals needed by the framework. * @priority: port selection order. * @spinlock: serialize enable/disable operations. + * @cpumask: CPU mask representing the CPUs related to this funnel. */ struct funnel_drvdata { void __iomem *base; @@ -48,6 +50,13 @@ struct funnel_drvdata { struct coresight_device *csdev; unsigned long priority; raw_spinlock_t spinlock; + struct cpumask *cpumask; +}; + +struct funnel_smp_arg { + struct funnel_drvdata *drvdata; + int port; + int rc; };
static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port) @@ -76,6 +85,33 @@ static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port) return rc; }
+static void funnel_enable_hw_smp_call(void *info) +{ + struct funnel_smp_arg *arg = info; + + arg->rc = dynamic_funnel_enable_hw(arg->drvdata, arg->port); +} + +static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port) +{ + int cpu, ret; + struct funnel_smp_arg arg = { 0 }; + + if (!drvdata->cpumask) + return dynamic_funnel_enable_hw(drvdata, port); + + arg.drvdata = drvdata; + arg.port = port; + + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + funnel_enable_hw_smp_call, &arg, 1); + if (!ret) + return arg.rc; + } + return ret; +} + static int funnel_enable(struct coresight_device *csdev, struct coresight_connection *in, struct coresight_connection *out) @@ -86,19 +122,24 @@ static int funnel_enable(struct coresight_device *csdev, bool first_enable = false;
raw_spin_lock_irqsave(&drvdata->spinlock, flags); - if (in->dest_refcnt == 0) { - if (drvdata->base) - rc = dynamic_funnel_enable_hw(drvdata, in->dest_port); - if (!rc) - first_enable = true; - } - if (!rc) + + if (in->dest_refcnt == 0) + first_enable = true; + else in->dest_refcnt++; + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
- if (first_enable) - dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", - in->dest_port); + if (first_enable) { + if (drvdata->base) + rc = funnel_enable_hw(drvdata, in->dest_port); + if (!rc) { + in->dest_refcnt++; + dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", + in->dest_port); + } + } + return rc; }
@@ -188,15 +229,39 @@ static u32 get_funnel_ctrl_hw(struct funnel_drvdata *drvdata) return functl; }
+static void get_funnel_ctrl_smp_call(void *info) +{ + struct funnel_smp_arg *arg = info; + + arg->rc = get_funnel_ctrl_hw(arg->drvdata); +} + static ssize_t funnel_ctrl_show(struct device *dev, struct device_attribute *attr, char *buf) { u32 val; + int cpu, ret; struct funnel_drvdata *drvdata = dev_get_drvdata(dev->parent); + struct funnel_smp_arg arg = { 0 };
pm_runtime_get_sync(dev->parent); - - val = get_funnel_ctrl_hw(drvdata); + if (!drvdata->cpumask) { + val = get_funnel_ctrl_hw(drvdata); + } else { + arg.drvdata = drvdata; + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + get_funnel_ctrl_smp_call, &arg, 1); + if (!ret) + break; + } + if (!ret) { + val = arg.rc; + } else { + pm_runtime_put(dev->parent); + return ret; + } + }
pm_runtime_put(dev->parent);
@@ -211,22 +276,68 @@ static struct attribute *coresight_funnel_attrs[] = { }; ATTRIBUTE_GROUPS(coresight_funnel);
+static void funnel_clear_self_claim_tag(struct funnel_drvdata *drvdata) +{ + struct csdev_access access = CSDEV_ACCESS_IOMEM(drvdata->base); + + coresight_clear_self_claim_tag(&access); +} + +static void funnel_init_on_cpu(void *info) +{ + struct funnel_drvdata *drvdata = info; + + funnel_clear_self_claim_tag(drvdata); +} + +static int funnel_add_coresight_dev(struct device *dev) +{ + struct coresight_desc desc = { 0 }; + struct funnel_drvdata *drvdata = dev_get_drvdata(dev); + + if (drvdata->base) { + desc.groups = coresight_funnel_groups; + desc.access = CSDEV_ACCESS_IOMEM(drvdata->base); + } + + desc.name = coresight_alloc_device_name(&funnel_devs, dev); + if (!desc.name) + return -ENOMEM; + + desc.type = CORESIGHT_DEV_TYPE_LINK; + desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG; + desc.ops = &funnel_cs_ops; + desc.pdata = dev->platform_data; + desc.dev = dev; + + drvdata->csdev = coresight_register(&desc); + if (IS_ERR(drvdata->csdev)) + return PTR_ERR(drvdata->csdev); + return 0; +} + +static struct cpumask *funnel_get_cpumask(struct device *dev) +{ + struct generic_pm_domain *pd; + + pd = pd_to_genpd(dev->pm_domain); + if (pd) + return pd->cpus; + + return NULL; +} + static int funnel_probe(struct device *dev, struct resource *res) { void __iomem *base; struct coresight_platform_data *pdata = NULL; struct funnel_drvdata *drvdata; - struct coresight_desc desc = { 0 }; - int ret; + int cpu, ret;
if (is_of_node(dev_fwnode(dev)) && of_device_is_compatible(dev->of_node, "arm,coresight-funnel")) dev_warn_once(dev, "Uses OBSOLETE CoreSight funnel binding\n");
- desc.name = coresight_alloc_device_name(&funnel_devs, dev); - if (!desc.name) - return -ENOMEM; - drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); if (!drvdata) return -ENOMEM; @@ -244,9 +355,6 @@ static int funnel_probe(struct device *dev, struct resource *res) if (IS_ERR(base)) return PTR_ERR(base); drvdata->base = base; - desc.groups = coresight_funnel_groups; - desc.access = CSDEV_ACCESS_IOMEM(base); - coresight_clear_self_claim_tag(&desc.access); }
dev_set_drvdata(dev, drvdata); @@ -258,23 +366,37 @@ static int funnel_probe(struct device *dev, struct resource *res) dev->platform_data = pdata;
raw_spin_lock_init(&drvdata->spinlock); - desc.type = CORESIGHT_DEV_TYPE_LINK; - desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG; - desc.ops = &funnel_cs_ops; - desc.pdata = pdata; - desc.dev = dev; - drvdata->csdev = coresight_register(&desc); - if (IS_ERR(drvdata->csdev)) - return PTR_ERR(drvdata->csdev);
- return 0; + if (is_of_node(dev_fwnode(dev)) && + of_device_is_compatible(dev->of_node, "arm,coresight-cpu-funnel")) { + drvdata->cpumask = funnel_get_cpumask(dev); + if (!drvdata->cpumask) + return -EINVAL; + + cpus_read_lock(); + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + funnel_init_on_cpu, drvdata, 1); + if (!ret) + break; + } + cpus_read_unlock(); + + if (ret) + return 0; + } else if (res) { + funnel_clear_self_claim_tag(drvdata); + } + + return funnel_add_coresight_dev(dev); }
static int funnel_remove(struct device *dev) { struct funnel_drvdata *drvdata = dev_get_drvdata(dev);
- coresight_unregister(drvdata->csdev); + if (drvdata->csdev) + coresight_unregister(drvdata->csdev);
return 0; } @@ -341,6 +463,7 @@ static void funnel_platform_remove(struct platform_device *pdev)
static const struct of_device_id funnel_match[] = { {.compatible = "arm,coresight-static-funnel"}, + {.compatible = "arm,coresight-cpu-funnel"}, {} };
 
            Delay probing of the CPU cluster funnel when all CPUs of the cluster are offline, and re-probe the funnel when any CPU in the cluster comes online.
Key changes:
- Introduce a global list to track delayed funnels waiting for CPU online.
- Add a CPU hotplug callback to retry registration when the CPU comes up.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-funnel.c | 62 +++++++++++++++++++++++---
 1 file changed, 57 insertions(+), 5 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c index 12c29eb98b2122a3ade4cbed9a0d91c67ec777ed..43b9287a865eb26ce021521e4a5f193c48188bba 100644 --- a/drivers/hwtracing/coresight/coresight-funnel.c +++ b/drivers/hwtracing/coresight/coresight-funnel.c @@ -32,6 +32,9 @@ #define FUNNEL_ENSx_MASK 0xff
DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel"); +static LIST_HEAD(funnel_delay_probe); +static enum cpuhp_state hp_online; +static DEFINE_SPINLOCK(delay_lock);
/** * struct funnel_drvdata - specifics associated to a funnel component @@ -42,6 +45,8 @@ DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel"); * @priority: port selection order. * @spinlock: serialize enable/disable operations. * @cpumask: CPU mask representing the CPUs related to this funnel. + * @dev: pointer to the device associated with this funnel. + * @link: list node for adding this funnel to the delayed probe list. */ struct funnel_drvdata { void __iomem *base; @@ -51,6 +56,8 @@ struct funnel_drvdata { unsigned long priority; raw_spinlock_t spinlock; struct cpumask *cpumask; + struct device *dev; + struct list_head link; };
struct funnel_smp_arg { @@ -372,7 +379,7 @@ static int funnel_probe(struct device *dev, struct resource *res) drvdata->cpumask = funnel_get_cpumask(dev); if (!drvdata->cpumask) return -EINVAL; - + drvdata->dev = dev; cpus_read_lock(); for_each_cpu(cpu, drvdata->cpumask) { ret = smp_call_function_single(cpu, @@ -380,10 +387,15 @@ static int funnel_probe(struct device *dev, struct resource *res) if (!ret) break; } - cpus_read_unlock();
- if (ret) + if (ret) { + scoped_guard(spinlock, &delay_lock) + list_add(&drvdata->link, &funnel_delay_probe); + cpus_read_unlock(); return 0; + } + + cpus_read_unlock(); } else if (res) { funnel_clear_self_claim_tag(drvdata); } @@ -395,9 +407,12 @@ static int funnel_remove(struct device *dev) { struct funnel_drvdata *drvdata = dev_get_drvdata(dev);
- if (drvdata->csdev) + if (drvdata->csdev) { coresight_unregister(drvdata->csdev); - + } else { + scoped_guard(spinlock, &delay_lock) + list_del(&drvdata->link); + } return 0; }
@@ -535,8 +550,41 @@ static struct amba_driver dynamic_funnel_driver = { .id_table = dynamic_funnel_ids, };
+static int funnel_online_cpu(unsigned int cpu) +{ + struct funnel_drvdata *drvdata, *tmp; + int ret; + + list_for_each_entry_safe(drvdata, tmp, &funnel_delay_probe, link) { + if (cpumask_test_cpu(cpu, drvdata->cpumask)) { + scoped_guard(spinlock, &delay_lock) + list_del(&drvdata->link); + + ret = pm_runtime_resume_and_get(drvdata->dev); + if (ret < 0) + return 0; + + funnel_clear_self_claim_tag(drvdata); + funnel_add_coresight_dev(drvdata->dev); + pm_runtime_put(drvdata->dev); + } + } + return 0; +} + static int __init funnel_init(void) { + int ret; + + ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, + "arm/coresight-funnel:online", + funnel_online_cpu, NULL); + + if (ret > 0) + hp_online = ret; + else + return ret; + return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver, THIS_MODULE); } @@ -544,6 +592,10 @@ static int __init funnel_init(void) static void __exit funnel_exit(void) { coresight_remove_driver(&dynamic_funnel_driver, &funnel_driver); + if (hp_online) { + cpuhp_remove_state_nocalls(hp_online); + hp_online = 0; + } }
module_init(funnel_init);
 
            The CPU cluster replicator is a type of CoreSight replicator that resides inside the CPU cluster's power domain. Unlike system-wide replicators, CPU replicators are tightly coupled with CPU clusters and inherit their power management characteristics. When the CPU cluster enters low-power mode (LPM), the replicator registers become inaccessible. Moreover, runtime PM alone cannot bring the CPU cluster out of LPM, making standard register access unreliable.
This patch enhances the existing CoreSight replicator platform driver to support CPU cluster replicators by:
- Adding replicator_claim/disclaim_device_unlocked() to handle device claim/disclaim before CoreSight device registration.
- Wrapping replicator_reset and replicator_clear_self_claim_tag in replicator_init_hw(). For cluster replicators, smp_call_function_single() is used to ensure register visibility.
- Encapsulating csdev registration in replicator_add_coresight_dev().
- Refactoring the replicator_enable function. For cluster replicators, smp_call_function_single() is used to ensure register visibility.
- Maintaining compatibility with existing static/dynamic replicators while minimizing duplication.
This ensures replicator operations remain safe and functional even when the CPU cluster is in low-power states.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-replicator.c | 202 +++++++++++++++++----
 1 file changed, 169 insertions(+), 33 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c index e6472658235dc479cec91ac18f3737f76f8c74f0..c5a9c7a2adfa90ae22890ed730fc008fe6901778 100644 --- a/drivers/hwtracing/coresight/coresight-replicator.c +++ b/drivers/hwtracing/coresight/coresight-replicator.c @@ -13,6 +13,7 @@ #include <linux/io.h> #include <linux/err.h> #include <linux/slab.h> +#include <linux/pm_domain.h> #include <linux/pm_runtime.h> #include <linux/property.h> #include <linux/clk.h> @@ -35,6 +36,7 @@ DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator"); * @csdev: component vitals needed by the framework * @spinlock: serialize enable/disable operations. * @check_idfilter_val: check if the context is lost upon clock removal. + * @cpumask: CPU mask representing the CPUs related to this replicator. */ struct replicator_drvdata { void __iomem *base; @@ -43,18 +45,61 @@ struct replicator_drvdata { struct coresight_device *csdev; raw_spinlock_t spinlock; bool check_idfilter_val; + struct cpumask *cpumask; };
-static void dynamic_replicator_reset(struct replicator_drvdata *drvdata) +struct replicator_smp_arg { + struct replicator_drvdata *drvdata; + int outport; + int rc; +}; + +static void replicator_clear_self_claim_tag(struct replicator_drvdata *drvdata) +{ + struct csdev_access access = CSDEV_ACCESS_IOMEM(drvdata->base); + + coresight_clear_self_claim_tag(&access); +} + +static int replicator_claim_device_unlocked(struct replicator_drvdata *drvdata) +{ + struct coresight_device *csdev = drvdata->csdev; + struct csdev_access access = CSDEV_ACCESS_IOMEM(drvdata->base); + u32 claim_tag; + + if (csdev) + return coresight_claim_device_unlocked(csdev); + + writel_relaxed(CORESIGHT_CLAIM_SELF_HOSTED, drvdata->base + CORESIGHT_CLAIMSET); + + claim_tag = readl_relaxed(drvdata->base + CORESIGHT_CLAIMCLR); + if (claim_tag != CORESIGHT_CLAIM_SELF_HOSTED) { + coresight_clear_self_claim_tag_unlocked(&access); + return -EBUSY; + } + + return 0; +} + +static void replicator_disclaim_device_unlocked(struct replicator_drvdata *drvdata) { struct coresight_device *csdev = drvdata->csdev; + struct csdev_access access = CSDEV_ACCESS_IOMEM(drvdata->base); + + if (csdev) + return coresight_disclaim_device_unlocked(csdev);
+ coresight_clear_self_claim_tag_unlocked(&access); +} + +static void dynamic_replicator_reset(struct replicator_drvdata *drvdata) +{ CS_UNLOCK(drvdata->base);
- if (!coresight_claim_device_unlocked(csdev)) { + if (!replicator_claim_device_unlocked(drvdata)) { writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0); writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1); - coresight_disclaim_device_unlocked(csdev); + replicator_disclaim_device_unlocked(drvdata); }
CS_LOCK(drvdata->base); @@ -116,6 +161,34 @@ static int dynamic_replicator_enable(struct replicator_drvdata *drvdata, return rc; }
+static void replicator_enable_hw_smp_call(void *info) +{ + struct replicator_smp_arg *arg = info; + + arg->rc = dynamic_replicator_enable(arg->drvdata, 0, arg->outport); +} + +static int replicator_enable_hw(struct replicator_drvdata *drvdata, + int inport, int outport) +{ + int cpu, ret; + struct replicator_smp_arg arg = { 0 }; + + if (!drvdata->cpumask) + return dynamic_replicator_enable(drvdata, 0, outport); + + arg.drvdata = drvdata; + arg.outport = outport; + + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, replicator_enable_hw_smp_call, &arg, 1); + if (!ret) + return arg.rc; + } + + return ret; +} + static int replicator_enable(struct coresight_device *csdev, struct coresight_connection *in, struct coresight_connection *out) @@ -126,19 +199,24 @@ static int replicator_enable(struct coresight_device *csdev, bool first_enable = false;
raw_spin_lock_irqsave(&drvdata->spinlock, flags); - if (out->src_refcnt == 0) { - if (drvdata->base) - rc = dynamic_replicator_enable(drvdata, in->dest_port, - out->src_port); - if (!rc) - first_enable = true; - } - if (!rc) + + if (out->src_refcnt == 0) + first_enable = true; + else out->src_refcnt++; raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
- if (first_enable) - dev_dbg(&csdev->dev, "REPLICATOR enabled\n"); + if (first_enable) { + if (drvdata->base) + rc = replicator_enable_hw(drvdata, in->dest_port, + out->src_port); + if (!rc) { + out->src_refcnt++; + dev_dbg(&csdev->dev, "REPLICATOR enabled\n"); + return rc; + } + } + return rc; }
@@ -217,23 +295,69 @@ static const struct attribute_group *replicator_groups[] = { NULL, };
+static int replicator_add_coresight_dev(struct device *dev) +{ + struct coresight_desc desc = { 0 }; + struct replicator_drvdata *drvdata = dev_get_drvdata(dev); + + if (drvdata->base) { + desc.groups = replicator_groups; + desc.access = CSDEV_ACCESS_IOMEM(drvdata->base); + } + + desc.name = coresight_alloc_device_name(&replicator_devs, dev); + if (!desc.name) + return -ENOMEM; + + desc.type = CORESIGHT_DEV_TYPE_LINK; + desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT; + desc.ops = &replicator_cs_ops; + desc.pdata = dev->platform_data; + desc.dev = dev; + + drvdata->csdev = coresight_register(&desc); + if (IS_ERR(drvdata->csdev)) + return PTR_ERR(drvdata->csdev); + + return 0; +} + +static void replicator_init_hw(struct replicator_drvdata *drvdata) +{ + replicator_clear_self_claim_tag(drvdata); + replicator_reset(drvdata); +} + +static void replicator_init_on_cpu(void *info) +{ + struct replicator_drvdata *drvdata = info; + + replicator_init_hw(drvdata); +} + +static struct cpumask *replicator_get_cpumask(struct device *dev) +{ + struct generic_pm_domain *pd; + + pd = pd_to_genpd(dev->pm_domain); + if (pd) + return pd->cpus; + + return NULL; +} + static int replicator_probe(struct device *dev, struct resource *res) { struct coresight_platform_data *pdata = NULL; struct replicator_drvdata *drvdata; - struct coresight_desc desc = { 0 }; void __iomem *base; - int ret; + int cpu, ret;
if (is_of_node(dev_fwnode(dev)) && of_device_is_compatible(dev->of_node, "arm,coresight-replicator")) dev_warn_once(dev, "Uses OBSOLETE CoreSight replicator binding\n");
- desc.name = coresight_alloc_device_name(&replicator_devs, dev); - if (!desc.name) - return -ENOMEM; - drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); if (!drvdata) return -ENOMEM; @@ -251,9 +375,6 @@ static int replicator_probe(struct device *dev, struct resource *res) if (IS_ERR(base)) return PTR_ERR(base); drvdata->base = base; - desc.groups = replicator_groups; - desc.access = CSDEV_ACCESS_IOMEM(base); - coresight_clear_self_claim_tag(&desc.access); }
if (fwnode_property_present(dev_fwnode(dev), @@ -268,25 +389,39 @@ static int replicator_probe(struct device *dev, struct resource *res) dev->platform_data = pdata;
raw_spin_lock_init(&drvdata->spinlock); - desc.type = CORESIGHT_DEV_TYPE_LINK; - desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT; - desc.ops = &replicator_cs_ops; - desc.pdata = dev->platform_data; - desc.dev = dev;
- drvdata->csdev = coresight_register(&desc); - if (IS_ERR(drvdata->csdev)) - return PTR_ERR(drvdata->csdev); + if (is_of_node(dev_fwnode(dev)) && + of_device_is_compatible(dev->of_node, "arm,coresight-cpu-replicator")) { + drvdata->cpumask = replicator_get_cpumask(dev); + if (!drvdata->cpumask) + return -EINVAL; + + cpus_read_lock(); + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + replicator_init_on_cpu, drvdata, 1); + if (!ret) + break; + } + cpus_read_unlock();
- replicator_reset(drvdata); - return 0; + if (ret) + return 0; + } else if (res) { + replicator_init_hw(drvdata); + } + + ret = replicator_add_coresight_dev(dev); + + return ret; }
static int replicator_remove(struct device *dev) { struct replicator_drvdata *drvdata = dev_get_drvdata(dev);
- coresight_unregister(drvdata->csdev); + if (drvdata->csdev) + coresight_unregister(drvdata->csdev); return 0; }
@@ -354,6 +489,7 @@ static const struct dev_pm_ops replicator_dev_pm_ops = { static const struct of_device_id replicator_match[] = { {.compatible = "arm,coresight-replicator"}, {.compatible = "arm,coresight-static-replicator"}, + {.compatible = "arm,coresight-cpu-replicator"}, {} };
 
            Delay probing of the CPU cluster replicator when all CPUs of the cluster are offline, and re-probe the replicator when any CPU in the cluster comes online.
Key changes:
- Maintain a global list to track delayed replicators waiting for CPU online.
- Add a CPU hotplug callback to retry registration on CPU online events.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-replicator.c | 65 ++++++++++++++++++++--
 1 file changed, 61 insertions(+), 4 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c index c5a9c7a2adfa90ae22890ed730fc008fe6901778..1dfe11940cd6001db3cf17249b0493027b65e19c 100644 --- a/drivers/hwtracing/coresight/coresight-replicator.c +++ b/drivers/hwtracing/coresight/coresight-replicator.c @@ -26,6 +26,9 @@ #define REPLICATOR_IDFILTER1 0x004
DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator"); +static LIST_HEAD(replicator_delay_probe); +static enum cpuhp_state hp_online; +static DEFINE_SPINLOCK(delay_lock);
/** * struct replicator_drvdata - specifics associated to a replicator component @@ -37,6 +40,8 @@ DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator"); * @spinlock: serialize enable/disable operations. * @check_idfilter_val: check if the context is lost upon clock removal. * @cpumask: CPU mask representing the CPUs related to this replicator. + * @dev: pointer to the device associated with this replicator. + * @link: link to the delay_probed list. */ struct replicator_drvdata { void __iomem *base; @@ -46,6 +51,8 @@ struct replicator_drvdata { raw_spinlock_t spinlock; bool check_idfilter_val; struct cpumask *cpumask; + struct device *dev; + struct list_head link; };
struct replicator_smp_arg { @@ -395,7 +402,7 @@ static int replicator_probe(struct device *dev, struct resource *res) drvdata->cpumask = replicator_get_cpumask(dev); if (!drvdata->cpumask) return -EINVAL; - + drvdata->dev = dev; cpus_read_lock(); for_each_cpu(cpu, drvdata->cpumask) { ret = smp_call_function_single(cpu, @@ -403,10 +410,15 @@ static int replicator_probe(struct device *dev, struct resource *res) if (!ret) break; } - cpus_read_unlock();
- if (ret) + if (ret) { + scoped_guard(spinlock, &delay_lock) + list_add(&drvdata->link, &replicator_delay_probe); + cpus_read_unlock(); return 0; + } + + cpus_read_unlock(); } else if (res) { replicator_init_hw(drvdata); } @@ -420,8 +432,13 @@ static int replicator_remove(struct device *dev) { struct replicator_drvdata *drvdata = dev_get_drvdata(dev);
- if (drvdata->csdev) + if (drvdata->csdev) { coresight_unregister(drvdata->csdev); + } else { + scoped_guard(spinlock, &delay_lock) + list_del(&drvdata->link); + } + return 0; }
@@ -554,8 +571,44 @@ static struct amba_driver dynamic_replicator_driver = { .id_table = dynamic_replicator_ids, };
+static int replicator_online_cpu(unsigned int cpu) +{ + struct replicator_drvdata *drvdata, *tmp; + int ret; + + spin_lock(&delay_lock); + list_for_each_entry_safe(drvdata, tmp, &replicator_delay_probe, link) { + if (cpumask_test_cpu(cpu, drvdata->cpumask)) { + list_del(&drvdata->link); + spin_unlock(&delay_lock); + ret = pm_runtime_resume_and_get(drvdata->dev); + if (ret < 0) + return 0; + + replicator_clear_self_claim_tag(drvdata); + replicator_reset(drvdata); + replicator_add_coresight_dev(drvdata->dev); + pm_runtime_put(drvdata->dev); + spin_lock(&delay_lock); + } + } + spin_unlock(&delay_lock); + return 0; +} + static int __init replicator_init(void) { + int ret; + + ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, + "arm/coresight-replicator:online", + replicator_online_cpu, NULL); + + if (ret > 0) + hp_online = ret; + else + return ret; + return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver, THIS_MODULE); } @@ -563,6 +616,10 @@ static int __init replicator_init(void) static void __exit replicator_exit(void) { coresight_remove_driver(&dynamic_replicator_driver, &replicator_driver); + if (hp_online) { + cpuhp_remove_state_nocalls(hp_online); + hp_online = 0; + } }
module_init(replicator_init);
 
            This patch refactors the sysfs interfaces to ensure compatibility with CPU cluster replicators. For CPU cluster replicators, register reads are performed via smp_call_function_single().
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-replicator.c | 61 +++++++++++++++++++++-
 1 file changed, 59 insertions(+), 2 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c index 1dfe11940cd6001db3cf17249b0493027b65e19c..22c9bc71817d238c2d4ddffbb42678bf792b29af 100644 --- a/drivers/hwtracing/coresight/coresight-replicator.c +++ b/drivers/hwtracing/coresight/coresight-replicator.c @@ -58,6 +58,7 @@ struct replicator_drvdata { struct replicator_smp_arg { struct replicator_drvdata *drvdata; int outport; + u32 offset; int rc; };
@@ -286,9 +287,65 @@ static const struct coresight_ops replicator_cs_ops = { .link_ops = &replicator_link_ops, };
+static void replicator_read_register_smp_call(void *info) +{ + struct replicator_smp_arg *arg = info; + + arg->rc = readl_relaxed(arg->drvdata->base + arg->offset); +} + +static ssize_t coresight_replicator_reg32_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct replicator_drvdata *drvdata = dev_get_drvdata(dev->parent); + struct cs_off_attribute *cs_attr = container_of(attr, struct cs_off_attribute, attr); + unsigned long flags; + struct replicator_smp_arg arg = { 0 }; + u32 val; + int ret, cpu; + + pm_runtime_get_sync(dev->parent); + + if (!drvdata->cpumask) { + raw_spin_lock_irqsave(&drvdata->spinlock, flags); + val = readl_relaxed(drvdata->base + cs_attr->off); + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + + } else { + arg.drvdata = drvdata; + arg.offset = cs_attr->off; + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + replicator_read_register_smp_call, + &arg, 1); + if (!ret) + break; + } + if (!ret) { + val = arg.rc; + } else { + pm_runtime_put_sync(dev->parent); + return ret; + } + } + + pm_runtime_put_sync(dev->parent); + + return sysfs_emit(buf, "0x%x\n", val); +} + +#define coresight_replicator_reg32(name, offset) \ + (&((struct cs_off_attribute[]) { \ + { \ + __ATTR(name, 0444, coresight_replicator_reg32_show, NULL), \ + offset \ + } \ + })[0].attr.attr) + static struct attribute *replicator_mgmt_attrs[] = { - coresight_simple_reg32(idfilter0, REPLICATOR_IDFILTER0), - coresight_simple_reg32(idfilter1, REPLICATOR_IDFILTER1), + coresight_replicator_reg32(idfilter0, REPLICATOR_IDFILTER0), + coresight_replicator_reg32(idfilter1, REPLICATOR_IDFILTER1), NULL, };
 
            The CPU cluster ETF is a type of CoreSight ETF that resides inside the CPU cluster's power domain. Unlike system-level ETFs, these devices share the CPU cluster's power management behavior: when the cluster enters low-power mode, ETF registers become inaccessible. Runtime PM alone cannot bring the cluster out of LPM, making standard register access unreliable.
This patch adds support for CPU cluster ETF and restructures the probe sequence to handle such cases safely:
- Wrap hardware access in tmc_init_hw_config(). For cluster TMCs, use smp_call_function_single() to ensure register visibility.
- Encapsulate CoreSight device registration and misc_register setup in tmc_add_coresight_dev().
This ensures TMC initialization and runtime operations remain safe even when the CPU cluster is in low-power states, while maintaining compatibility with existing system-level TMCs.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-tmc-core.c | 204 +++++++++++++++--------
 drivers/hwtracing/coresight/coresight-tmc.h      |   6 +
 2 files changed, 141 insertions(+), 69 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c index 36599c431be6203e871fdcb8de569cc6701c52bb..d00f23f9a479ee9d4bdb4e051ed895d266bcc116 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-core.c +++ b/drivers/hwtracing/coresight/coresight-tmc-core.c @@ -21,6 +21,7 @@ #include <linux/slab.h> #include <linux/dma-mapping.h> #include <linux/spinlock.h> +#include <linux/pm_domain.h> #include <linux/pm_runtime.h> #include <linux/of.h> #include <linux/of_address.h> @@ -769,56 +770,14 @@ static void register_crash_dev_interface(struct tmc_drvdata *drvdata, "Valid crash tracedata found\n"); }
-static int __tmc_probe(struct device *dev, struct resource *res) +static int tmc_add_coresight_dev(struct device *dev) { - int ret = 0; - u32 devid; - void __iomem *base; - struct coresight_platform_data *pdata = NULL; - struct tmc_drvdata *drvdata; + struct tmc_drvdata *drvdata = dev_get_drvdata(dev); struct coresight_desc desc = { 0 }; struct coresight_dev_list *dev_list = NULL; + int ret = 0;
- drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); - if (!drvdata) - return -ENOMEM; - - dev_set_drvdata(dev, drvdata); - - ret = coresight_get_enable_clocks(dev, &drvdata->pclk, &drvdata->atclk); - if (ret) - return ret; - - ret = -ENOMEM; - - /* Validity for the resource is already checked by the AMBA core */ - base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) { - ret = PTR_ERR(base); - goto out; - } - - drvdata->base = base; - desc.access = CSDEV_ACCESS_IOMEM(base); - - raw_spin_lock_init(&drvdata->spinlock); - - devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID); - drvdata->config_type = BMVAL(devid, 6, 7); - drvdata->memwidth = tmc_get_memwidth(devid); - /* This device is not associated with a session */ - drvdata->pid = -1; - drvdata->etr_mode = ETR_MODE_AUTO; - - if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { - drvdata->size = tmc_etr_get_default_buffer_size(dev); - drvdata->max_burst_size = tmc_etr_get_max_burst_size(dev); - } else { - drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4; - } - - tmc_get_reserved_region(dev); - + desc.access = CSDEV_ACCESS_IOMEM(drvdata->base); desc.dev = dev;
switch (drvdata->config_type) { @@ -834,9 +793,9 @@ static int __tmc_probe(struct device *dev, struct resource *res) desc.type = CORESIGHT_DEV_TYPE_SINK; desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_SYSMEM; desc.ops = &tmc_etr_cs_ops; - ret = tmc_etr_setup_caps(dev, devid, &desc.access); + ret = tmc_etr_setup_caps(dev, drvdata->devid, &desc.access); if (ret) - goto out; + return ret; idr_init(&drvdata->idr); mutex_init(&drvdata->idr_mutex); dev_list = &etr_devs; @@ -851,44 +810,142 @@ static int __tmc_probe(struct device *dev, struct resource *res) break; default: pr_err("%s: Unsupported TMC config\n", desc.name); - ret = -EINVAL; - goto out; + return -EINVAL; }
desc.name = coresight_alloc_device_name(dev_list, dev); - if (!desc.name) { - ret = -ENOMEM; + if (!desc.name) + return -ENOMEM; + + drvdata->desc_name = desc.name; + + desc.pdata = dev->platform_data; + + drvdata->csdev = coresight_register(&desc); + if (IS_ERR(drvdata->csdev)) + return PTR_ERR(drvdata->csdev); + + drvdata->miscdev.name = desc.name; + drvdata->miscdev.minor = MISC_DYNAMIC_MINOR; + drvdata->miscdev.fops = &tmc_fops; + ret = misc_register(&drvdata->miscdev); + if (ret) + coresight_unregister(drvdata->csdev); + + return ret; +} + +static void tmc_clear_self_claim_tag(struct tmc_drvdata *drvdata) +{ + struct csdev_access access = CSDEV_ACCESS_IOMEM(drvdata->base); + + coresight_clear_self_claim_tag(&access); +} + +static void tmc_init_hw_config(struct tmc_drvdata *drvdata) +{ + u32 devid; + + devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID); + drvdata->config_type = BMVAL(devid, 6, 7); + drvdata->memwidth = tmc_get_memwidth(devid); + drvdata->devid = devid; + drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4; + tmc_clear_self_claim_tag(drvdata); +} + +static void tmc_init_on_cpu(void *info) +{ + struct tmc_drvdata *drvdata = info; + + tmc_init_hw_config(drvdata); +} + +static struct cpumask *tmc_get_cpumask(struct device *dev) +{ + struct generic_pm_domain *pd; + + pd = pd_to_genpd(dev->pm_domain); + if (pd) + return pd->cpus; + + return NULL; +} + +static int __tmc_probe(struct device *dev, struct resource *res) +{ + int cpu, ret = 0; + void __iomem *base; + struct coresight_platform_data *pdata = NULL; + struct tmc_drvdata *drvdata; + + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); + if (!drvdata) + return -ENOMEM; + + dev_set_drvdata(dev, drvdata); + + ret = coresight_get_enable_clocks(dev, &drvdata->pclk, &drvdata->atclk); + if (ret) + return ret; + + ret = -ENOMEM; + + /* Validity for the resource is already checked by the AMBA core */ + base = devm_ioremap_resource(dev, res); + if (IS_ERR(base)) { + ret = PTR_ERR(base); goto out; }
+ drvdata->base = base; + + raw_spin_lock_init(&drvdata->spinlock); + /* This device is not associated with a session */ + drvdata->pid = -1; + drvdata->etr_mode = ETR_MODE_AUTO; + tmc_get_reserved_region(dev); + pdata = coresight_get_platform_data(dev); if (IS_ERR(pdata)) { ret = PTR_ERR(pdata); goto out; } dev->platform_data = pdata; - desc.pdata = pdata;
- coresight_clear_self_claim_tag(&desc.access); - drvdata->csdev = coresight_register(&desc); - if (IS_ERR(drvdata->csdev)) { - ret = PTR_ERR(drvdata->csdev); - goto out; + if (is_of_node(dev_fwnode(dev)) && + of_device_is_compatible(dev->of_node, "arm,coresight-cpu-tmc")) { + drvdata->cpumask = tmc_get_cpumask(dev); + if (!drvdata->cpumask) + return -EINVAL; + + cpus_read_lock(); + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + tmc_init_on_cpu, drvdata, 1); + if (!ret) + break; + } + cpus_read_unlock(); + if (ret) { + ret = 0; + goto out; + } + } else { + tmc_init_hw_config(drvdata); }
- drvdata->miscdev.name = desc.name; - drvdata->miscdev.minor = MISC_DYNAMIC_MINOR; - drvdata->miscdev.fops = &tmc_fops; - ret = misc_register(&drvdata->miscdev); - if (ret) { - coresight_unregister(drvdata->csdev); - goto out; + if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { + drvdata->size = tmc_etr_get_default_buffer_size(dev); + drvdata->max_burst_size = tmc_etr_get_max_burst_size(dev); }
+ ret = tmc_add_coresight_dev(dev); + out: if (is_tmc_crashdata_valid(drvdata) && !tmc_prepare_crashdata(drvdata)) - register_crash_dev_interface(drvdata, desc.name); + register_crash_dev_interface(drvdata, drvdata->desc_name); return ret; }
@@ -934,10 +991,12 @@ static void __tmc_remove(struct device *dev) * etb fops in this case, device is there until last file * handler to this device is closed. */ - misc_deregister(&drvdata->miscdev); + if (!drvdata->cpumask) + misc_deregister(&drvdata->miscdev); if (drvdata->crashdev.fops) misc_deregister(&drvdata->crashdev); - coresight_unregister(drvdata->csdev); + if (drvdata->csdev) + coresight_unregister(drvdata->csdev); }
static void tmc_remove(struct amba_device *adev) @@ -992,7 +1051,6 @@ static void tmc_platform_remove(struct platform_device *pdev)
if (WARN_ON(!drvdata)) return; - __tmc_remove(&pdev->dev); pm_runtime_disable(&pdev->dev); } @@ -1029,6 +1087,13 @@ static const struct dev_pm_ops tmc_dev_pm_ops = { SET_RUNTIME_PM_OPS(tmc_runtime_suspend, tmc_runtime_resume, NULL) };
+static const struct of_device_id tmc_match[] = { + {.compatible = "arm,coresight-cpu-tmc"}, + {} +}; + +MODULE_DEVICE_TABLE(of, tmc_match); + #ifdef CONFIG_ACPI static const struct acpi_device_id tmc_acpi_ids[] = { {"ARMHC501", 0, 0, 0}, /* ARM CoreSight ETR */ @@ -1043,6 +1108,7 @@ static struct platform_driver tmc_platform_driver = { .remove = tmc_platform_remove, .driver = { .name = "coresight-tmc-platform", + .of_match_table = tmc_match, .acpi_match_table = ACPI_PTR(tmc_acpi_ids), .suppress_bind_attrs = true, .pm = &tmc_dev_pm_ops, diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h index cbb4ba43915855a8acbb9205167e87185c9a8c6c..f5c76ca2dc9733daa020b79b1dcfc495045a2618 100644 --- a/drivers/hwtracing/coresight/coresight-tmc.h +++ b/drivers/hwtracing/coresight/coresight-tmc.h @@ -243,6 +243,9 @@ struct tmc_resrv_buf { * (after crash) by default. * @crash_mdata: Reserved memory for storing tmc crash metadata. * Used by ETR/ETF. + * @cpumask: CPU mask representing the CPUs related to this TMC. + * @devid: TMC variant ID inferred from the device configuration register. + * @desc_name: Name to be used while creating crash interface. */ struct tmc_drvdata { struct clk *atclk; @@ -273,6 +276,9 @@ struct tmc_drvdata { struct etr_buf *perf_buf; struct tmc_resrv_buf resrv_buf; struct tmc_resrv_buf crash_mdata; + struct cpumask *cpumask; + u32 devid; + const char *desc_name; };
struct etr_buf_operations {
 
            CPU cluster TMC instances share the cluster's power domain, so register access must occur on a CPU that keeps the domain powered. To ensure safe ETF enable sequences on such devices, split the tmc_etf/etb_enable_hw functions into two variants:
- tmc_etf/etb_enable_hw_local: replaces the old tmc_etf/etb_enable_hw for normal ETF cases.
- tmc_etf/etb_enable_hw_smp_call: executes the CPU cluster ETF enable on a CPU within the cluster via smp_call_function_single().
Also add a check to ensure the current CPU belongs to the cluster before calling tmc_etb_enable_hw_local for CPU cluster ETF.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-tmc-etf.c | 86 ++++++++++++++++++++++---
 1 file changed, 76 insertions(+), 10 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c index 0f45ab5e5249933ce7059dfee7fe7376ab33ed2d..b8a1c10d4b4c49144449b33f26710cf11713b338 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c @@ -47,7 +47,7 @@ static int __tmc_etb_enable_hw(struct tmc_drvdata *drvdata) return rc; }
-static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata) +static int tmc_etb_enable_hw_local(struct tmc_drvdata *drvdata) { int rc = coresight_claim_device(drvdata->csdev);
@@ -60,6 +60,36 @@ static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata) return rc; }
+struct tmc_smp_arg { + struct tmc_drvdata *drvdata; + int rc; +}; + +static void tmc_etb_enable_hw_smp_call(void *info) +{ + struct tmc_smp_arg *arg = info; + + arg->rc = tmc_etb_enable_hw_local(arg->drvdata); +} + +static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata) +{ + int cpu, ret; + struct tmc_smp_arg arg = { 0 }; + + if (!drvdata->cpumask) + return tmc_etb_enable_hw_local(drvdata); + + arg.drvdata = drvdata; + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + tmc_etb_enable_hw_smp_call, &arg, 1); + if (!ret) + return arg.rc; + } + return ret; +} + static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata) { char *bufp; @@ -130,7 +160,7 @@ static int __tmc_etf_enable_hw(struct tmc_drvdata *drvdata) return rc; }
-static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata) +static int tmc_etf_enable_hw_local(struct tmc_drvdata *drvdata) { int rc = coresight_claim_device(drvdata->csdev);
@@ -143,6 +173,32 @@ static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata) return rc; }
+static void tmc_etf_enable_hw_smp_call(void *info) +{ + struct tmc_smp_arg *arg = info; + + arg->rc = tmc_etf_enable_hw_local(arg->drvdata); +} + +static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata) +{ + int cpu, ret; + struct tmc_smp_arg arg = { 0 }; + + if (!drvdata->cpumask) + return tmc_etf_enable_hw_local(drvdata); + + arg.drvdata = drvdata; + + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + tmc_etf_enable_hw_smp_call, &arg, 1); + if (!ret) + return arg.rc; + } + return ret; +} + static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata) { struct coresight_device *csdev = drvdata->csdev; @@ -228,7 +284,11 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev) used = true; drvdata->buf = buf; } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + ret = tmc_etb_enable_hw(drvdata); + + raw_spin_lock_irqsave(&drvdata->spinlock, flags); if (!ret) { coresight_set_mode(csdev, CS_MODE_SYSFS); csdev->refcnt++; @@ -290,7 +350,10 @@ static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data) break; }
- ret = tmc_etb_enable_hw(drvdata); + if (drvdata->cpumask && !cpumask_test_cpu(smp_processor_id(), drvdata->cpumask)) + break; + + ret = tmc_etb_enable_hw_local(drvdata); if (!ret) { /* Associate with monitored process. */ drvdata->pid = pid; @@ -374,19 +437,22 @@ static int tmc_enable_etf_link(struct coresight_device *csdev, return -EBUSY; }
- if (csdev->refcnt == 0) { + if (csdev->refcnt == 0) + first_enable = true; + + if (!first_enable) + csdev->refcnt++; + + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + if (first_enable) { ret = tmc_etf_enable_hw(drvdata); if (!ret) { coresight_set_mode(csdev, CS_MODE_SYSFS); - first_enable = true; + csdev->refcnt++; + dev_dbg(&csdev->dev, "TMC-ETF enabled\n"); } } - if (!ret) - csdev->refcnt++; - raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
- if (first_enable) - dev_dbg(&csdev->dev, "TMC-ETF enabled\n"); return ret; }
 
            This patch refactors the sysfs interfaces to ensure compatibility with CPU cluster TMC. When operating on a CPU cluster TMC, register reads are performed via `smp_call_function_single()`.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-tmc-core.c | 137 ++++++++++++++++++++---
 1 file changed, 123 insertions(+), 14 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c index d00f23f9a479ee9d4bdb4e051ed895d266bcc116..685a64d8ba1b5df4cff91694eee45c6d6a147bc1 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-core.c +++ b/drivers/hwtracing/coresight/coresight-tmc-core.c @@ -458,21 +458,130 @@ static enum tmc_mem_intf_width tmc_get_memwidth(u32 devid) return memwidth; }
+struct tmc_smp_arg { + struct tmc_drvdata *drvdata; + u32 offset; + int rc; +}; + +static void tmc_read_reg_smp_call(void *info) +{ + struct tmc_smp_arg *arg = info; + + arg->rc = readl_relaxed(arg->drvdata->base + arg->offset); +} + +static u32 cpu_tmc_read_reg(struct tmc_drvdata *drvdata, u32 offset) +{ + struct tmc_smp_arg arg = { + .drvdata = drvdata, + .offset = offset, + }; + int cpu, ret = 0; + + for_each_cpu(cpu, drvdata->cpumask) { + ret = smp_call_function_single(cpu, + tmc_read_reg_smp_call, &arg, 1); + if (!ret) + return arg.rc; + } + + return ret; +} + +static ssize_t coresight_tmc_reg32_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent); + struct cs_off_attribute *cs_attr = container_of(attr, struct cs_off_attribute, attr); + int ret; + u32 val; + + ret = pm_runtime_resume_and_get(dev->parent); + if (ret < 0) + return ret; + + if (!drvdata->cpumask) + val = readl_relaxed(drvdata->base + cs_attr->off); + else + val = cpu_tmc_read_reg(drvdata, cs_attr->off); + + pm_runtime_put(dev->parent); + + if (ret < 0) + return ret; + else + return sysfs_emit(buf, "0x%x\n", val); +} + +static ssize_t coresight_tmc_reg64_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent); + struct cs_pair_attribute *cs_attr = container_of(attr, struct cs_pair_attribute, attr); + int ret; + u64 val; + + ret = pm_runtime_resume_and_get(dev->parent); + if (ret < 0) + return ret; + if (!drvdata->cpumask) { + val = readl_relaxed(drvdata->base + cs_attr->lo_off) | + ((u64)readl_relaxed(drvdata->base + cs_attr->hi_off) << 32); + } else { + ret = cpu_tmc_read_reg(drvdata, cs_attr->lo_off); + + if (ret < 0) + goto out; + + val = ret; + + ret = cpu_tmc_read_reg(drvdata, cs_attr->hi_off); + if (ret < 0) + goto out; + + val |= ((u64)ret << 32); + } + +out: + pm_runtime_put_sync(dev->parent); + if (ret < 0) + return ret; + else + return sysfs_emit(buf, "0x%llx\n", val); +} + +#define coresight_tmc_reg32(name, offset) \ + (&((struct cs_off_attribute[]) { \ + { \ + __ATTR(name, 0444, coresight_tmc_reg32_show, NULL), \ + offset \ + } \ + })[0].attr.attr) +#define coresight_tmc_reg64(name, lo_off, hi_off) \ + (&((struct cs_pair_attribute[]) { \ + { \ + __ATTR(name, 0444, coresight_tmc_reg64_show, NULL), \ + lo_off, hi_off \ + } \ + })[0].attr.attr) static struct attribute *coresight_tmc_mgmt_attrs[] = { - coresight_simple_reg32(rsz, TMC_RSZ), - coresight_simple_reg32(sts, TMC_STS), - coresight_simple_reg64(rrp, TMC_RRP, TMC_RRPHI), - coresight_simple_reg64(rwp, TMC_RWP, TMC_RWPHI), - coresight_simple_reg32(trg, TMC_TRG), - coresight_simple_reg32(ctl, TMC_CTL), - coresight_simple_reg32(ffsr, TMC_FFSR), - coresight_simple_reg32(ffcr, TMC_FFCR), - coresight_simple_reg32(mode, TMC_MODE), - coresight_simple_reg32(pscr, TMC_PSCR), - coresight_simple_reg32(devid, CORESIGHT_DEVID), - coresight_simple_reg64(dba, TMC_DBALO, TMC_DBAHI), - coresight_simple_reg32(axictl, TMC_AXICTL), - coresight_simple_reg32(authstatus, TMC_AUTHSTATUS), + coresight_tmc_reg32(rsz, TMC_RSZ), + coresight_tmc_reg32(sts, TMC_STS), + coresight_tmc_reg64(rrp, TMC_RRP, TMC_RRPHI), + coresight_tmc_reg64(rwp, TMC_RWP, TMC_RWPHI), + coresight_tmc_reg32(trg, TMC_TRG), + coresight_tmc_reg32(ctl, TMC_CTL), + coresight_tmc_reg32(ffsr, TMC_FFSR), + coresight_tmc_reg32(ffcr, TMC_FFCR), + coresight_tmc_reg32(mode, TMC_MODE), + coresight_tmc_reg32(pscr, TMC_PSCR), + coresight_tmc_reg32(devid, 
CORESIGHT_DEVID), + coresight_tmc_reg64(dba, TMC_DBALO, TMC_DBAHI), + coresight_tmc_reg32(axictl, TMC_AXICTL), + coresight_tmc_reg32(authstatus, TMC_AUTHSTATUS), NULL, };
 
            Delay probing of the CPU cluster TMC when all CPUs of the cluster are offline, and re-probe the TMC when any CPU in the cluster comes online.
Key changes:
- Introduce a global list to track delayed TMCs waiting for CPU online.
- Add a CPU hotplug callback to retry registration when the CPU comes up.
Signed-off-by: Yuanfang Zhang <yuanfang.zhang@oss.qualcomm.com>
---
 drivers/hwtracing/coresight/coresight-tmc-core.c | 59 +++++++++++++++++++++++-
 drivers/hwtracing/coresight/coresight-tmc.h      |  4 ++
 2 files changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c index 685a64d8ba1b5df4cff91694eee45c6d6a147bc1..7274ad07c2b20d2aa6e568b4bab0fbb57e331ab8 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-core.c +++ b/drivers/hwtracing/coresight/coresight-tmc-core.c @@ -36,6 +36,9 @@ DEFINE_CORESIGHT_DEVLIST(etb_devs, "tmc_etb"); DEFINE_CORESIGHT_DEVLIST(etf_devs, "tmc_etf"); DEFINE_CORESIGHT_DEVLIST(etr_devs, "tmc_etr"); +static LIST_HEAD(tmc_delay_probe); +static enum cpuhp_state hp_online; +static DEFINE_SPINLOCK(delay_lock);
int tmc_wait_for_tmcready(struct tmc_drvdata *drvdata) { @@ -1028,6 +1031,8 @@ static int __tmc_probe(struct device *dev, struct resource *res) if (!drvdata->cpumask) return -EINVAL;
+ drvdata->dev = dev; + cpus_read_lock(); for_each_cpu(cpu, drvdata->cpumask) { ret = smp_call_function_single(cpu, @@ -1035,11 +1040,16 @@ static int __tmc_probe(struct device *dev, struct resource *res) if (!ret) break; } - cpus_read_unlock(); + if (ret) { + scoped_guard(spinlock, &delay_lock) + list_add(&drvdata->link, &tmc_delay_probe); + cpus_read_unlock(); ret = 0; goto out; } + + cpus_read_unlock(); } else { tmc_init_hw_config(drvdata); } @@ -1104,8 +1114,12 @@ static void __tmc_remove(struct device *dev) misc_deregister(&drvdata->miscdev); if (drvdata->crashdev.fops) misc_deregister(&drvdata->crashdev); - if (drvdata->csdev) + if (drvdata->csdev) { coresight_unregister(drvdata->csdev); + } else { + scoped_guard(spinlock, &delay_lock) + list_del(&drvdata->link); + } }
static void tmc_remove(struct amba_device *adev) @@ -1224,14 +1238,55 @@ static struct platform_driver tmc_platform_driver = { }, };
+static int tmc_online_cpu(unsigned int cpu) +{ + struct tmc_drvdata *drvdata, *tmp; + int ret; + + spin_lock(&delay_lock); + list_for_each_entry_safe(drvdata, tmp, &tmc_delay_probe, link) { + if (cpumask_test_cpu(cpu, drvdata->cpumask)) { + list_del(&drvdata->link); + + spin_unlock(&delay_lock); + ret = pm_runtime_resume_and_get(drvdata->dev); + if (ret < 0) + return 0; + + tmc_init_hw_config(drvdata); + tmc_clear_self_claim_tag(drvdata); + tmc_add_coresight_dev(drvdata->dev); + pm_runtime_put(drvdata->dev); + spin_lock(&delay_lock); + } + } + spin_unlock(&delay_lock); + return 0; +} + static int __init tmc_init(void) { + int ret; + + ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, + "arm/coresight-tmc:online", + tmc_online_cpu, NULL); + + if (ret > 0) + hp_online = ret; + else + return ret; + return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver, THIS_MODULE); }
static void __exit tmc_exit(void) { coresight_remove_driver(&tmc_driver, &tmc_platform_driver); + if (hp_online) { + cpuhp_remove_state_nocalls(hp_online); + hp_online = 0; + } } module_init(tmc_init); module_exit(tmc_exit); diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h index f5c76ca2dc9733daa020b79b1dcfc495045a2618..29ccf0b7f4fe90a93d926a2e273950bce9834336 100644 --- a/drivers/hwtracing/coresight/coresight-tmc.h +++ b/drivers/hwtracing/coresight/coresight-tmc.h @@ -246,6 +246,8 @@ struct tmc_resrv_buf { * @cpumask: CPU mask representing the CPUs related to this TMC. * @devid: TMC variant ID inferred from the device configuration register. * @desc_name: Name to be used while creating crash interface. + * @dev: pointer to the device associated with this TMC. + * @link: link to the delay_probed list. */ struct tmc_drvdata { struct clk *atclk; @@ -279,6 +281,8 @@ struct tmc_drvdata { struct cpumask *cpumask; u32 devid; const char *desc_name; + struct device *dev; + struct list_head link; };
struct etr_buf_operations {
 
            Extend the coresight link enable interfaces to accept an `enum cs_mode` argument. This allows link drivers to distinguish between sysfs and perf coresight modes when enabling links.
This change is necessary because enabling CPU cluster links may involve calling `smp_call_function_single()`, which is unsafe in perf mode: the perf path enables links from atomic context with interrupts disabled, where a blocking cross-CPU call is not allowed. By passing the tracing mode explicitly, link drivers can apply mode-specific logic to avoid such unsafe operations.
Signed-off-by: Yuanfang Zhang yuanfang.zhang@oss.qualcomm.com --- drivers/hwtracing/coresight/coresight-core.c | 7 ++++--- drivers/hwtracing/coresight/coresight-funnel.c | 21 +++++++++++++++++++- drivers/hwtracing/coresight/coresight-replicator.c | 23 +++++++++++++++++++++- drivers/hwtracing/coresight/coresight-tmc-etf.c | 19 +++++++++++++++++- drivers/hwtracing/coresight/coresight-tnoc.c | 3 ++- drivers/hwtracing/coresight/coresight-tpda.c | 3 ++- include/linux/coresight.h | 3 ++- 7 files changed, 70 insertions(+), 9 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c index 3267192f0c1c667b0570b9100c3c449064e7fb5e..2e62005655dbcb9b504a1a5be392a5b00ed567d4 100644 --- a/drivers/hwtracing/coresight/coresight-core.c +++ b/drivers/hwtracing/coresight/coresight-core.c @@ -313,7 +313,8 @@ static void coresight_disable_sink(struct coresight_device *csdev) static int coresight_enable_link(struct coresight_device *csdev, struct coresight_device *parent, struct coresight_device *child, - struct coresight_device *source) + struct coresight_device *source, + enum cs_mode mode) { int link_subtype; struct coresight_connection *inconn, *outconn; @@ -330,7 +331,7 @@ static int coresight_enable_link(struct coresight_device *csdev, if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT && IS_ERR(outconn)) return PTR_ERR(outconn);
- return link_ops(csdev)->enable(csdev, inconn, outconn); + return link_ops(csdev)->enable(csdev, inconn, outconn, mode); }
static void coresight_disable_link(struct coresight_device *csdev, @@ -546,7 +547,7 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode, case CORESIGHT_DEV_TYPE_LINK: parent = list_prev_entry(nd, link)->csdev; child = list_next_entry(nd, link)->csdev; - ret = coresight_enable_link(csdev, parent, child, source); + ret = coresight_enable_link(csdev, parent, child, source, mode); if (ret) goto err_disable_helpers; break; diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c index 43b9287a865eb26ce021521e4a5f193c48188bba..fd8dcd541454bd804210fa0f80f2dcc49a908717 100644 --- a/drivers/hwtracing/coresight/coresight-funnel.c +++ b/drivers/hwtracing/coresight/coresight-funnel.c @@ -121,7 +121,8 @@ static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
static int funnel_enable(struct coresight_device *csdev, struct coresight_connection *in, - struct coresight_connection *out) + struct coresight_connection *out, + enum cs_mode mode) { int rc = 0; struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); @@ -135,6 +136,23 @@ static int funnel_enable(struct coresight_device *csdev, else in->dest_refcnt++;
+ if (mode == CS_MODE_PERF) { + if (first_enable) { + if (drvdata->cpumask && + !cpumask_test_cpu(smp_processor_id(), drvdata->cpumask)) { + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + return -EINVAL; + } + + if (drvdata->base) + rc = dynamic_funnel_enable_hw(drvdata, in->dest_port); + if (!rc) + in->dest_refcnt++; + } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + return rc; + } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
if (first_enable) { @@ -183,6 +201,7 @@ static void funnel_disable(struct coresight_device *csdev, dynamic_funnel_disable_hw(drvdata, in->dest_port); last_disable = true; } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
if (last_disable) diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c index 22c9bc71817d238c2d4ddffbb42678bf792b29af..6cb57763f9b10b68f9e129adfd6a448edce2637d 100644 --- a/drivers/hwtracing/coresight/coresight-replicator.c +++ b/drivers/hwtracing/coresight/coresight-replicator.c @@ -199,7 +199,8 @@ static int replicator_enable_hw(struct replicator_drvdata *drvdata,
static int replicator_enable(struct coresight_device *csdev, struct coresight_connection *in, - struct coresight_connection *out) + struct coresight_connection *out, + enum cs_mode mode) { int rc = 0; struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); @@ -212,6 +213,25 @@ static int replicator_enable(struct coresight_device *csdev, first_enable = true; else out->src_refcnt++; + + if (mode == CS_MODE_PERF) { + if (first_enable) { + if (drvdata->cpumask && + !cpumask_test_cpu(smp_processor_id(), drvdata->cpumask)) { + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + return -EINVAL; + } + + if (drvdata->base) + rc = dynamic_replicator_enable(drvdata, in->dest_port, + out->src_port); + if (!rc) + out->src_refcnt++; + } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + return rc; + } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
if (first_enable) { @@ -272,6 +292,7 @@ static void replicator_disable(struct coresight_device *csdev, out->src_port); last_disable = true; } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
if (last_disable) diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c index b8a1c10d4b4c49144449b33f26710cf11713b338..281c3b316dc3c0d4fab4f06b7093825b741cf595 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c @@ -424,7 +424,8 @@ static int tmc_disable_etf_sink(struct coresight_device *csdev)
static int tmc_enable_etf_link(struct coresight_device *csdev, struct coresight_connection *in, - struct coresight_connection *out) + struct coresight_connection *out, + enum cs_mode mode) { int ret = 0; unsigned long flags; @@ -443,6 +444,22 @@ static int tmc_enable_etf_link(struct coresight_device *csdev, if (!first_enable) csdev->refcnt++;
+ if (mode == CS_MODE_PERF) { + if (first_enable) { + if (drvdata->cpumask && + !cpumask_test_cpu(smp_processor_id(), drvdata->cpumask)) { + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + return -EINVAL; + } + + ret = tmc_etf_enable_hw_local(drvdata); + if (!ret) + csdev->refcnt++; + } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); + return ret; + } + raw_spin_unlock_irqrestore(&drvdata->spinlock, flags); if (first_enable) { ret = tmc_etf_enable_hw(drvdata); diff --git a/drivers/hwtracing/coresight/coresight-tnoc.c b/drivers/hwtracing/coresight/coresight-tnoc.c index ff9a0a9cfe96e5f5e3077c750ea2f890cdd50d94..48e9e685b9439d92bdaae9e40d3b3bc2d1ac1cd2 100644 --- a/drivers/hwtracing/coresight/coresight-tnoc.c +++ b/drivers/hwtracing/coresight/coresight-tnoc.c @@ -73,7 +73,8 @@ static void trace_noc_enable_hw(struct trace_noc_drvdata *drvdata) }
static int trace_noc_enable(struct coresight_device *csdev, struct coresight_connection *inport, - struct coresight_connection *outport) + struct coresight_connection *outport, + enum cs_mode mode) { struct trace_noc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
diff --git a/drivers/hwtracing/coresight/coresight-tpda.c b/drivers/hwtracing/coresight/coresight-tpda.c index 333b3cb236859f0feb1498f4ab81037c772143fd..4af433145728c9e5b600a4e58dfe8931447200f8 100644 --- a/drivers/hwtracing/coresight/coresight-tpda.c +++ b/drivers/hwtracing/coresight/coresight-tpda.c @@ -197,7 +197,8 @@ static int __tpda_enable(struct tpda_drvdata *drvdata, int port)
static int tpda_enable(struct coresight_device *csdev, struct coresight_connection *in, - struct coresight_connection *out) + struct coresight_connection *out, + enum cs_mode mode) { struct tpda_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); int ret = 0; diff --git a/include/linux/coresight.h b/include/linux/coresight.h index 6de59ce8ef8ca45c29e2f09c1b979eb7686b685f..f4d522dc096cb9d0f8d2d8b7ce8a90574bf7c5e1 100644 --- a/include/linux/coresight.h +++ b/include/linux/coresight.h @@ -385,7 +385,8 @@ struct coresight_ops_sink { struct coresight_ops_link { int (*enable)(struct coresight_device *csdev, struct coresight_connection *in, - struct coresight_connection *out); + struct coresight_connection *out, + enum cs_mode mode); void (*disable)(struct coresight_device *csdev, struct coresight_connection *in, struct coresight_connection *out);
 
Add the following Coresight devices for the APSS debug block:
- ETM
- TMC ETF
- Funnel
- Replicator
Signed-off-by: Jie Gan jie.gan@oss.qualcomm.com Signed-off-by: Yuanfang Zhang yuanfang.zhang@oss.qualcomm.com --- arch/arm64/boot/dts/qcom/x1e80100.dtsi | 885 +++++++++++++++++++++++++++++++++ arch/arm64/boot/dts/qcom/x1p42100.dtsi | 12 + 2 files changed, 897 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi index a9a7bb676c6f8ac48a2e443d28efdc8c9b5e52c0..9058ea8ce62c706667b931a8f4c2e7c666c6bcc4 100644 --- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi +++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi @@ -6597,6 +6597,14 @@ funnel1_in2: endpoint { }; };
+ port@4 { + reg = <4>; + + funnel1_in4: endpoint { + remote-endpoint = <&apss_funnel_out>; + }; + }; + port@5 { reg = <5>;
@@ -7887,6 +7895,883 @@ ddr_funnel1_out: endpoint { }; };
+ apss_funnel: funnel@12080000 { + compatible = "arm,coresight-dynamic-funnel", "arm,primecell"; + reg = <0x0 0x12080000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + + apss_funnel_in0: endpoint { + remote-endpoint = <&ncc0_etf_out>; + }; + }; + + port@1 { + reg = <1>; + + apss_funnel_in1: endpoint { + remote-endpoint = <&ncc1_etf_out>; + }; + }; + + port@2 { + reg = <2>; + + apss_funnel_in2: endpoint { + remote-endpoint = <&ncc2_etf_out>; + }; + }; + }; + + out-ports { + port { + apss_funnel_out: endpoint { + remote-endpoint = + <&funnel1_in4>; + }; + }; + }; + }; + + etm@13021000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu0>; + qcom,skip-power-up; + + out-ports { + port { + etm0_out: endpoint { + remote-endpoint = <&ncc0_0_rep_in>; + }; + }; + }; + }; + + etm@13121000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu1>; + qcom,skip-power-up; + + out-ports { + port { + etm1_out: endpoint { + remote-endpoint = <&ncc0_1_rep_in>; + }; + }; + }; + }; + + etm@13221000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu2>; + qcom,skip-power-up; + + out-ports { + port { + etm2_out: endpoint { + remote-endpoint = <&ncc0_2_rep_in>; + }; + }; + }; + }; + + etm@13321000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu3>; + qcom,skip-power-up; + + out-ports { + port { + etm3_out: endpoint { + remote-endpoint = <&ncc0_3_rep_in>; + }; + }; + }; + }; + + funnel@13401000 { + compatible = "arm,coresight-cpu-funnel"; + reg = <0x0 0x13401000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@2 { + reg = <2>; + + ncc0_2_funnel_in2: endpoint { + remote-endpoint = <&ncc0_1_funnel_out>; + }; + }; + }; + + out-ports { + port { + ncc0_2_funnel_out: endpoint { + remote-endpoint = <&ncc0_etf_in>; + }; + }; + }; + }; + + tmc@13409000 { + compatible = "arm,coresight-cpu-tmc"; + reg = <0x0 0x13409000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + port { + ncc0_etf_in: endpoint { + remote-endpoint = <&ncc0_2_funnel_out>; + }; + }; + }; + + out-ports { + port { + ncc0_etf_out: endpoint { + remote-endpoint = <&apss_funnel_in0>; + }; + }; + }; + }; + + replicator@13490000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x13490000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + port { + ncc0_0_rep_in: endpoint { + remote-endpoint = <&etm0_out>; + }; + }; + }; + + out-ports { + port { + ncc0_0_rep_out: endpoint { + remote-endpoint = <&ncc0_1_funnel_in0>; + }; + }; + }; + }; + + replicator@134a0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x134a0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + port { + ncc0_1_rep_in: endpoint { + remote-endpoint = <&etm1_out>; + }; + }; + }; + + out-ports { + port { + ncc0_1_rep_out: endpoint { + remote-endpoint = <&ncc0_1_funnel_in1>; + }; + }; + }; + }; + + replicator@134b0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x134b0000 0x0 0x1000>; 
+ + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + port { + ncc0_2_rep_in: endpoint { + remote-endpoint = <&etm2_out>; + }; + }; + }; + + out-ports { + port { + ncc0_2_rep_out: endpoint { + remote-endpoint = <&ncc0_1_funnel_in2>; + }; + }; + }; + }; + + replicator@134c0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x134c0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + port { + ncc0_3_rep_in: endpoint { + remote-endpoint = <&etm3_out>; + }; + }; + }; + + out-ports { + port { + ncc0_3_rep_out: endpoint { + remote-endpoint = <&ncc0_1_funnel_in3>; + }; + }; + }; + }; + + funnel@134d0000 { + compatible = "arm,coresight-cpu-funnel"; + reg = <0x0 0x134d0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd0>; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + + ncc0_1_funnel_in0: endpoint { + remote-endpoint = <&ncc0_0_rep_out>; + }; + }; + + port@1 { + reg = <1>; + + ncc0_1_funnel_in1: endpoint { + remote-endpoint = <&ncc0_1_rep_out>; + }; + }; + + port@2 { + reg = <2>; + + ncc0_1_funnel_in2: endpoint { + remote-endpoint = <&ncc0_2_rep_out>; + }; + }; + + port@3 { + reg = <3>; + + ncc0_1_funnel_in3: endpoint { + remote-endpoint = <&ncc0_3_rep_out>; + }; + }; + }; + + out-ports { + port { + ncc0_1_funnel_out: endpoint { + remote-endpoint = <&ncc0_2_funnel_in2>; + }; + }; + }; + }; + + etm@13521000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu4>; + qcom,skip-power-up; + + out-ports { + port { + etm4_out: endpoint { + remote-endpoint = <&ncc1_0_rep_in>; + }; + }; + }; + }; + + etm@13621000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu5>; + qcom,skip-power-up; + + out-ports { + port { + etm5_out: endpoint { + remote-endpoint = <&ncc1_1_rep_in>; + }; + }; + }; + }; + + etm@13721000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu6>; + qcom,skip-power-up; + + out-ports { + port { + etm6_out: endpoint { + remote-endpoint = <&ncc1_2_rep_in>; + }; + }; + }; + }; + + etm@13821000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu7>; + qcom,skip-power-up; + + out-ports { + port { + etm7_out: endpoint { + remote-endpoint = <&ncc1_3_rep_in>; + }; + }; + }; + }; + + funnel@13901000 { + compatible = "arm,coresight-cpu-funnel"; + reg = <0x0 0x13901000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@2 { + reg = <2>; + + ncc1_2_funnel_in2: endpoint { + remote-endpoint = <&ncc1_1_funnel_out>; + }; + }; + }; + + out-ports { + port { + ncc1_2_funnel_out: endpoint { + remote-endpoint = <&ncc1_etf_in>; + }; + }; + }; + }; + + tmc@13909000 { + compatible = "arm,coresight-cpu-tmc"; + reg = <0x0 0x13909000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + port { + ncc1_etf_in: endpoint { + remote-endpoint = <&ncc1_2_funnel_out>; + }; + }; + }; + + out-ports { + port { + ncc1_etf_out: endpoint { + remote-endpoint = <&apss_funnel_in1>; + }; + }; + }; + }; + + replicator@13990000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 
0x13990000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + port { + ncc1_0_rep_in: endpoint { + remote-endpoint = <&etm4_out>; + }; + }; + }; + + out-ports { + port { + ncc1_0_rep_out: endpoint { + remote-endpoint = <&ncc1_1_funnel_in0>; + }; + }; + }; + }; + + replicator@139a0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x139a0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + port { + ncc1_1_rep_in: endpoint { + remote-endpoint = <&etm5_out>; + }; + }; + }; + + out-ports { + port { + ncc1_1_rep_out: endpoint { + remote-endpoint = <&ncc1_1_funnel_in1>; + }; + }; + }; + }; + + replicator@139b0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x139b0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + port { + ncc1_2_rep_in: endpoint { + remote-endpoint = <&etm6_out>; + }; + }; + }; + + out-ports { + port { + ncc1_2_rep_out: endpoint { + remote-endpoint = <&ncc1_1_funnel_in2>; + }; + }; + }; + }; + + replicator@139c0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x139c0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + port { + ncc1_3_rep_in: endpoint { + remote-endpoint = <&etm7_out>; + }; + }; + }; + + out-ports { + port { + ncc1_3_rep_out: endpoint { + remote-endpoint = <&ncc1_1_funnel_in3>; + }; + }; + }; + }; + + funnel@139d0000 { + compatible = "arm,coresight-cpu-funnel"; + reg = <0x0 0x139d0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd1>; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + + ncc1_1_funnel_in0: endpoint { + remote-endpoint = <&ncc1_0_rep_out>; + }; + }; + + port@1 { + reg = <1>; + + ncc1_1_funnel_in1: endpoint { + remote-endpoint = <&ncc1_1_rep_out>; + }; + }; + + port@2 { + reg = <2>; + + ncc1_1_funnel_in2: endpoint { + remote-endpoint = <&ncc1_2_rep_out>; + }; + }; + + port@3 { + reg = <3>; + + ncc1_1_funnel_in3: endpoint { + remote-endpoint = <&ncc1_3_rep_out>; + }; + }; + }; + + out-ports { + port { + ncc1_1_funnel_out: endpoint { + remote-endpoint = <&ncc1_2_funnel_in2>; + }; + }; + }; + }; + + etm8: etm@13a21000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu8>; + qcom,skip-power-up; + + out-ports { + port { + etm8_out: endpoint { + remote-endpoint = <&ncc2_0_rep_in>; + }; + }; + }; + }; + + etm9: etm@13b21000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu9>; + qcom,skip-power-up; + + out-ports { + port { + etm9_out: endpoint { + remote-endpoint = <&ncc2_1_rep_in>; + }; + }; + }; + }; + + etm10: etm@13c21000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu10>; + qcom,skip-power-up; + + out-ports { + port { + etm10_out: endpoint { + remote-endpoint = <&ncc2_2_rep_in>; + }; + }; + }; + }; + + etm11: etm@13d21000 { + compatible = "arm,coresight-etm4x-sysreg"; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + cpu = <&cpu11>; + qcom,skip-power-up; + + out-ports { + port { + etm11_out: endpoint { + remote-endpoint = <&ncc2_3_rep_in>; + }; + }; + }; + }; + + cluster2_funnel_l2: funnel@13e01000 { + compatible = "arm,coresight-cpu-funnel"; + reg = 
<0x0 0x13e01000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@2 { + reg = <2>; + + ncc2_2_funnel_in2: endpoint { + remote-endpoint = <&ncc2_1_funnel_out>; + }; + }; + }; + + out-ports { + port { + ncc2_2_funnel_out: endpoint { + remote-endpoint = <&ncc2_etf_in>; + }; + }; + }; + }; + + cluster2_etf: tmc@13e09000 { + compatible = "arm,coresight-cpu-tmc"; + reg = <0x0 0x13e09000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + port { + ncc2_etf_in: endpoint { + remote-endpoint = <&ncc2_2_funnel_out>; + }; + }; + }; + + out-ports { + port { + ncc2_etf_out: endpoint { + remote-endpoint = <&apss_funnel_in2>; + }; + }; + }; + }; + + cluster2_rep_2_0: replicator@13e90000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x13e90000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + port { + ncc2_0_rep_in: endpoint { + remote-endpoint = <&etm8_out>; + }; + }; + }; + + out-ports { + port { + ncc2_0_rep_out: endpoint { + remote-endpoint = <&ncc2_1_funnel_in0>; + }; + }; + }; + }; + + cluster2_rep_2_1: replicator@13ea0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x13ea0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + port { + ncc2_1_rep_in: endpoint { + remote-endpoint = <&etm9_out>; + }; + }; + }; + + out-ports { + port { + ncc2_1_rep_out: endpoint { + remote-endpoint = <&ncc2_1_funnel_in1>; + }; + }; + }; + }; + + cluster2_rep_2_2: replicator@13eb0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x13eb0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + port { + ncc2_2_rep_in: endpoint { + remote-endpoint = <&etm10_out>; + }; + }; + }; + + out-ports { + port { + ncc2_2_rep_out: endpoint { + remote-endpoint = <&ncc2_1_funnel_in2>; + }; + }; + }; + }; + + cluster2_rep_2_3: replicator@13ec0000 { + compatible = "arm,coresight-cpu-replicator"; + reg = <0x0 0x13ec0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + port { + ncc2_3_rep_in: endpoint { + remote-endpoint = <&etm11_out>; + }; + }; + }; + + out-ports { + port { + ncc2_3_rep_out: endpoint { + remote-endpoint = <&ncc2_1_funnel_in3>; + }; + }; + }; + }; + + cluster2_funnel_l1: funnel@13ed0000 { + compatible = "arm,coresight-cpu-funnel"; + reg = <0x0 0x13ed0000 0x0 0x1000>; + + clocks = <&aoss_qmp>; + clock-names = "apb_pclk"; + power-domains = <&cluster_pd2>; + + in-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + + ncc2_1_funnel_in0: endpoint { + remote-endpoint = <&ncc2_0_rep_out>; + }; + }; + + port@1 { + reg = <1>; + + ncc2_1_funnel_in1: endpoint { + remote-endpoint = <&ncc2_1_rep_out>; + }; + }; + + port@2 { + reg = <2>; + + ncc2_1_funnel_in2: endpoint { + remote-endpoint = <&ncc2_2_rep_out>; + }; + }; + + port@3 { + reg = <3>; + + ncc2_1_funnel_in3: endpoint { + remote-endpoint = <&ncc2_3_rep_out>; + }; + }; + }; + + out-ports { + port { + ncc2_1_funnel_out: endpoint { + remote-endpoint = <&ncc2_2_funnel_in2>; + }; + }; + }; + }; + apps_smmu: iommu@15000000 { compatible = "qcom,x1e80100-smmu-500", "qcom,smmu-500", "arm,mmu-500"; reg = <0 0x15000000 0 0x100000>; diff --git 
a/arch/arm64/boot/dts/qcom/x1p42100.dtsi b/arch/arm64/boot/dts/qcom/x1p42100.dtsi index 9af9e707f982fe45f62a9420b1e6baa1fef4d2fa..9b5fe04ed05cc33fe6d0a3535648d318f6cc3a80 100644 --- a/arch/arm64/boot/dts/qcom/x1p42100.dtsi +++ b/arch/arm64/boot/dts/qcom/x1p42100.dtsi @@ -19,6 +19,18 @@ /delete-node/ &cpu_pd11; /delete-node/ &pcie3_phy; /delete-node/ &thermal_zones; +/delete-node/ &etm8; +/delete-node/ &etm9; +/delete-node/ &etm10; +/delete-node/ &etm11; +/delete-node/ &cluster2_funnel_l1; +/delete-node/ &cluster2_funnel_l2; +/delete-node/ &cluster2_etf; +/delete-node/ &cluster2_rep_2_0; +/delete-node/ &cluster2_rep_2_1; +/delete-node/ &cluster2_rep_2_2; +/delete-node/ &cluster2_rep_2_3; +/delete-node/ &apss_funnel_in2;
&gcc { compatible = "qcom,x1p42100-gcc", "qcom,x1e80100-gcc";
 
            Hi,
This entire set seems to initially check the generic power domain for a list of associated CPUs, then check CPU state for all other operations.
Why not simply use the generic power domain state itself, along with the power up / down notifiers to determine if the registers are safe to access? If the genpd is powered up then the registers must be safe to access?
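For reference, the genpd power notifier being suggested here can be hooked roughly as in the sketch below. This is only an illustration: the drvdata layout and function names are hypothetical, error handling is omitted, and it assumes the device has already been attached to its power domain.

#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/pm_domain.h>

/* Hypothetical per-device state; names are illustrative only. */
struct cluster_cs_drvdata {
	struct device *dev;
	struct notifier_block genpd_nb;
	bool regs_accessible;		/* true while the cluster domain is on */
};

static int cluster_cs_genpd_notify(struct notifier_block *nb,
				   unsigned long action, void *data)
{
	struct cluster_cs_drvdata *drvdata =
		container_of(nb, struct cluster_cs_drvdata, genpd_nb);

	switch (action) {
	case GENPD_NOTIFY_ON:
		/* The domain, and hence the component's registers, is powered. */
		drvdata->regs_accessible = true;
		break;
	case GENPD_NOTIFY_PRE_OFF:
		/* Registers are about to become inaccessible. */
		drvdata->regs_accessible = false;
		break;
	default:
		break;
	}

	return NOTIFY_OK;
}

static int cluster_cs_register_genpd_notifier(struct cluster_cs_drvdata *drvdata)
{
	drvdata->genpd_nb.notifier_call = cluster_cs_genpd_notify;

	/* Requires the device to be attached to its genpd (e.g. via power-domains in DT). */
	return dev_pm_genpd_add_notifier(drvdata->dev, &drvdata->genpd_nb);
}

Whether such a notifier is sufficient on this platform is exactly what the replies below discuss.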
Regards
Mike
 
            On 10/29/2025 7:01 PM, Mike Leach wrote:
Hi,
This entire set seems to initially check the generic power domain for a list of associated CPUs, then check CPU state for all other operations.
Why not simply use the generic power domain state itself, along with the power up / down notifiers to determine if the registers are safe to access? If the genpd is powered up then the registers must be safe to access?
Regards
Mike
Hi Mike,
Yes, when the genpd is powered up the registers can be accessed, but there are the following problems:
1. pm_runtime_get_sync() can trigger the cluster power domain's power-up notifier without actually powering up the cluster, and the driver cannot distinguish a genuine power-up notification from one triggered by pm_runtime_get_sync().
2. The power up/down notifiers cannot actively wake the cluster power domain, so the components in this cluster fail to be enabled while the cluster is powered off.
3. Using the power up/down notifiers for register access does not guarantee the correct path enablement sequence.
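For comparison, the series makes the access safe by actively running it on a CPU inside the cluster, so the cluster domain is guaranteed to be on for the duration of the call. A simplified sketch of that pattern, adapted from the TMC patch earlier in this thread (the stand-in struct only carries the fields used here, and error handling is omitted):

#include <linux/cpumask.h>
#include <linux/io.h>
#include <linux/smp.h>
#include <linux/types.h>

/* Minimal stand-in for the fields of the series' tmc_drvdata used below. */
struct tmc_drvdata {
	void __iomem *base;
	struct cpumask *cpumask;
};

struct tmc_smp_arg {
	struct tmc_drvdata *drvdata;
	u32 offset;
	u32 rc;
};

static void tmc_read_reg_smp_call(void *info)
{
	struct tmc_smp_arg *arg = info;

	/* Runs on a CPU of the cluster, so the cluster domain is powered. */
	arg->rc = readl_relaxed(arg->drvdata->base + arg->offset);
}

static u32 cpu_tmc_read_reg(struct tmc_drvdata *drvdata, u32 offset)
{
	struct tmc_smp_arg arg = { .drvdata = drvdata, .offset = offset };
	int cpu;

	/* Try each CPU of the cluster until one accepts the cross call. */
	for_each_cpu(cpu, drvdata->cpumask)
		if (!smp_call_function_single(cpu, tmc_read_reg_smp_call,
					      &arg, 1))
			return arg.rc;

	return 0;	/* no CPU of the cluster is online; error handling omitted */
}

This is what makes the active wake-up and ordered path enablement possible, at the cost of the perf-mode restriction handled later in the series.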
thanks, yuanfang
 
            Hi,
On Thu, 30 Oct 2025 at 07:51, yuanfang zhang yuanfang.zhang@oss.qualcomm.com wrote:
On 10/29/2025 7:01 PM, Mike Leach wrote:
Hi,
This entire set seems to initially check the generic power domain for a list of associated CPUs, then check CPU state for all other operations.
Why not simply use the generic power domain state itself, along with the power up / down notifiers to determine if the registers are safe to access? If the genpd is powered up then the registers must be safe to access?
Regards
Mike
Hi Mike,
Hi,
Yes, when the genpd is powered up the registers can be accessed, but there are the following problems:
The point I was making is to use genpd notifications to determine whether the device is powered, so you know when it is safe to access the registers. This is different from the faults in your power infrastructure that you mention below. You are reading dev->pm_domain to extract the CPU map, so the notifiers and state must also be available; if you could use CPUHP notifiers then genpd notifiers would also work. However, one issue I do see with this is that there is no code added to the driver to associate the dev with the pm_domain, which would normally be there, so I am unclear how this actually works.
Associating a non-CPU device with a set of CPUs does not seem correct. You are altering a generic CoreSight driver to solve a platform-specific problem, when other solutions should be used.
1. pm_runtime_get_sync() can trigger the cluster power domain's power-up notifier without actually powering up the cluster, and the driver cannot distinguish a genuine power-up notification from one triggered by pm_runtime_get_sync().
2. The power up/down notifiers cannot actively wake the cluster power domain, so the components in this cluster fail to be enabled while the cluster is powered off.
3. Using the power up/down notifiers for register access does not guarantee the correct path enablement sequence.
Does all this not simply mean that you need to fix your power management drivers / configuration so that it works correctly, rather than create a poor workaround in unrelated drivers such as the coresight devices?
Thanks and Regards
Mike
--
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK


