This series enables the upcoming IP trace features Embedded Trace Extension (ETE) and Trace Buffer Extension (TRBE). It depends on the ETM system register instruction support series [0], which is available here [1]. This series, which applies on top of [1], is available here [2] for quick access.
ETE is the PE (CPU) trace unit for CPUs implementing future architecture extensions. ETE overlaps with the ETMv4 architecture, with additions to support the newer architecture features and some restrictions on the supported features compared with ETMv4. ETE support is added by extending the ETMv4 driver to recognise the ETE and handle the features as exposed by the TRCIDRx registers. ETE only supports system instruction access from the host CPU. The ETE could be integrated with a TRBE (see below), or with the legacy CoreSight trace bus (e.g. ETRs). Thus the ETE follows the same firmware description as the ETMs and requires a node per instance.
Trace Buffer Extension (TRBE) implements a per-CPU trace buffer, which is accessible via the system registers and can be combined with the ETE to provide a 1x1 configuration of source and sink. TRBE is represented here as a CoreSight sink, primarily because the ETE source could also work with other traditional CoreSight sink devices. As TRBE only captures the trace data produced by the ETE, it cannot work alone.
The TRBE representation here has some distinct deviations from a traditional CoreSight sink device: the CoreSight path between ETE and TRBE is not built during boot by looking at the respective DT or ACPI entries.
Unlike traditional sinks, TRBE can generate interrupts to signal, among other things, that the buffer has filled up. The interrupt is a PPI and must be described by the platform; the DT or ACPI entry representing TRBE should carry the PPI number for the given platform. During a perf session, the TRBE IRQ handler should capture the trace into the perf auxiliary buffer before restarting the collection. The system registers used here to configure ETE and TRBE are described in the link below.
https://developer.arm.com/docs/ddi0601/g/aarch64-system-registers.
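As a rough illustration of the IRQ-driven capture flow described above, here is a minimal, hypothetical sketch (not the actual TRBE driver; the handler name, the way the perf_output_handle is obtained and the size accounting are assumptions), using only the TRBE system register definitions introduced later in this series:

    #include <linux/interrupt.h>
    #include <linux/perf_event.h>
    #include <asm/barrier.h>
    #include <asm/sysreg.h>

    static irqreturn_t trbe_irq_handler_sketch(int irq, void *dev)
    {
            struct perf_output_handle *handle = dev;  /* assumed per-CPU AUX handle */
            u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
            u64 limitr, size = 0;                     /* would be computed from TRBPTR_EL1 vs the AUX head */

            if (!(trbsr & TRBSR_IRQ))
                    return IRQ_NONE;

            /* Stop collection while the buffer is drained */
            limitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
            write_sysreg_s(limitr & ~TRBLIMITR_ENABLE, SYS_TRBLIMITR_EL1);
            isb();

            /* Hand the captured trace to the perf AUX buffer ... */
            perf_aux_output_end(handle, size);

            /* ... then a new AUX handle would be set up and the TRBE re-enabled */
            return IRQ_HANDLED;
    }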
Question:
- Should we implement sysfs-based trace sessions for TRBE?
[0] https://lore.kernel.org/linux-arm-kernel/20210110224850.1880240-1-suzuki.pou...
[1] https://gitlab.arm.com/linux-arm/linux-skp/-/tree/coresight/etm/sysreg-v7
[2] https://gitlab.arm.com/linux-arm/linux-anshuman/-/tree/coresight/ete_trbe_v2
Changes in V2:
- Converted both ETE and TRBE DT bindings into YAML
- TRBE changes have been captured in the respective patches
Changes in V1:
https://lore.kernel.org/linux-arm-kernel/1608717823-18387-1-git-send-email-a...
- There are not many ETE changes from Suzuki apart from splitting out the ETE DTS patch
- TRBE changes have been captured in the respective patches
Changes in RFC:
https://lore.kernel.org/linux-arm-kernel/1605012309-24812-1-git-send-email-a...
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Suzuki K Poulose suzuki.poulose@arm.com
Cc: Mike Leach mike.leach@linaro.org
Cc: Linu Cherian lcherian@marvell.com
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Anshuman Khandual (4):
  arm64: Add TRBE definitions
  coresight: core: Add support for dedicated percpu sinks
  coresight: etm-perf: Truncate the perf record if handle has no space
  coresight: sink: Add TRBE driver
Suzuki K Poulose (7):
  coresight: etm-perf: Allow an event to use different sinks
  coresight: Do not scan for graph if none is present
  coresight: etm4x: Add support for PE OS lock
  coresight: ete: Add support for ETE sysreg access
  coresight: ete: Add support for ETE tracing
  dts: bindings: Document device tree bindings for ETE
  dts: bindings: Document device tree bindings for Arm TRBE
 Documentation/devicetree/bindings/arm/ete.yaml       |  71 ++
 Documentation/devicetree/bindings/arm/trbe.yaml      |  46 +
 Documentation/trace/coresight/coresight-trbe.rst     |  39 +
 arch/arm64/include/asm/sysreg.h                      |  51 ++
 drivers/hwtracing/coresight/Kconfig                  |  21 +-
 drivers/hwtracing/coresight/Makefile                 |   1 +
 drivers/hwtracing/coresight/coresight-core.c         |  14 +
 drivers/hwtracing/coresight/coresight-etm-perf.c     |  51 +-
 drivers/hwtracing/coresight/coresight-etm4x-core.c   | 138 ++-
 .../hwtracing/coresight/coresight-etm4x-sysfs.c      |  19 +-
 drivers/hwtracing/coresight/coresight-etm4x.h        |  81 +-
 drivers/hwtracing/coresight/coresight-platform.c     |   6 +
 drivers/hwtracing/coresight/coresight-trbe.c         | 966 +++++++++++++++++++++
 drivers/hwtracing/coresight/coresight-trbe.h         | 216 +++++
 include/linux/coresight.h                            |  12 +
 15 files changed, 1683 insertions(+), 49 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/arm/ete.yaml
 create mode 100644 Documentation/devicetree/bindings/arm/trbe.yaml
 create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
 create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
 create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
From: Suzuki K Poulose suzuki.poulose@arm.com
When there are multiple sinks on the system, in the absence of a specified sink, it is quite possible that the default sink for one ETM could be different from that of another ETM. However, we do not yet support multiple sinks for an event. This patch allows the event to use the default sinks on the ETMs where it is scheduled, as long as the sinks are of the same type.
e.g., if we have a 1x1 topology with per-CPU ETRs, the event can use the per-CPU ETR for the session. However, if the sinks are of different types, e.g. a TMC-ETR on one CPU and a custom sink on another, the event will only trace on the first detected sink.
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Tested-by: Linu Cherian lcherian@marvell.com
Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 drivers/hwtracing/coresight/coresight-etm-perf.c | 48 +++++++++++++++++++-----
 1 file changed, 38 insertions(+), 10 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c index bdc34ca..eb9e7e9 100644 --- a/drivers/hwtracing/coresight/coresight-etm-perf.c +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c @@ -204,6 +204,13 @@ static void etm_free_aux(void *data) schedule_work(&event_data->work); }
+static bool sinks_match(struct coresight_device *a, struct coresight_device *b) +{ + if (!a || !b) + return false; + return (sink_ops(a) == sink_ops(b)); +} + static void *etm_setup_aux(struct perf_event *event, void **pages, int nr_pages, bool overwrite) { @@ -212,6 +219,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, cpumask_t *mask; struct coresight_device *sink = NULL; struct etm_event_data *event_data = NULL; + bool sink_forced = false;
event_data = alloc_event_data(cpu); if (!event_data) @@ -222,6 +230,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, if (event->attr.config2) { id = (u32)event->attr.config2; sink = coresight_get_sink_by_id(id); + sink_forced = true; }
mask = &event_data->mask; @@ -235,7 +244,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, */ for_each_cpu(cpu, mask) { struct list_head *path; - struct coresight_device *csdev; + struct coresight_device *csdev, *new_sink;
csdev = per_cpu(csdev_src, cpu); /* @@ -249,21 +258,35 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, }
/* - * No sink provided - look for a default sink for one of the - * devices. At present we only support topology where all CPUs - * use the same sink [N:1], so only need to find one sink. The - * coresight_build_path later will remove any CPU that does not - * attach to the sink, or if we have not found a sink. + * No sink provided - look for a default sink for all the devices. + * We only support multiple sinks, only if all the default sinks + * are of the same type, so that the sink buffer can be shared + * as the event moves around. We don't trace on a CPU if it can't + * */ - if (!sink) - sink = coresight_find_default_sink(csdev); + if (!sink_forced) { + new_sink = coresight_find_default_sink(csdev); + if (!new_sink) { + cpumask_clear_cpu(cpu, mask); + continue; + } + /* Skip checks for the first sink */ + if (!sink) { + sink = new_sink; + } else if (!sinks_match(new_sink, sink)) { + cpumask_clear_cpu(cpu, mask); + continue; + } + } else { + new_sink = sink; + }
/* * Building a path doesn't enable it, it simply builds a * list of devices from source to sink that can be * referenced later when the path is actually needed. */ - path = coresight_build_path(csdev, sink); + path = coresight_build_path(csdev, new_sink); if (IS_ERR(path)) { cpumask_clear_cpu(cpu, mask); continue; @@ -284,7 +307,12 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, if (!sink_ops(sink)->alloc_buffer || !sink_ops(sink)->free_buffer) goto err;
- /* Allocate the sink buffer for this session */ + /* + * Allocate the sink buffer for this session. All the sinks + * where this event can be scheduled are ensured to be of the + * same type. Thus the same sink configuration is used by the + * sinks. + */ event_data->snk_config = sink_ops(sink)->alloc_buffer(sink, event, pages, nr_pages, overwrite);
From: Suzuki K Poulose suzuki.poulose@arm.com
If a graph node is not found for a given node, of_graph_get_next_endpoint() will emit the following error message:
OF: graph: no port node found in /<node_name>
If the given component doesn't have any explicit connections (e.g., ETE), we can simply skip the graph parsing.
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 drivers/hwtracing/coresight/coresight-platform.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c index 3629b78..c594f45 100644 --- a/drivers/hwtracing/coresight/coresight-platform.c +++ b/drivers/hwtracing/coresight/coresight-platform.c @@ -90,6 +90,12 @@ static void of_coresight_get_ports_legacy(const struct device_node *node, struct of_endpoint endpoint; int in = 0, out = 0;
+ /* + * Avoid warnings in of_graph_get_next_endpoint() + * if the device doesn't have any graph connections + */ + if (!of_graph_is_present(node)) + return; do { ep = of_graph_get_next_endpoint(node, ep); if (!ep)
From: Suzuki K Poulose suzuki.poulose@arm.com
ETE may not implement the OS lock and instead could rely on the PE OS Lock for trace unit access. This is indicated by TRCOSLSR.OSM == 0b100. Add support for handling the PE OS Lock.
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 drivers/hwtracing/coresight/coresight-etm4x-core.c | 50 ++++++++++++++++++----
 drivers/hwtracing/coresight/coresight-etm4x.h      | 15 +++++++
 2 files changed, 56 insertions(+), 9 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c index 18c1a80..2ce2d0a 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -114,30 +114,59 @@ void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit) } }
-static void etm4_os_unlock_csa(struct etmv4_drvdata *drvdata, struct csdev_access *csa) +static void etm_detect_os_lock(struct etmv4_drvdata *drvdata, + struct csdev_access *csa) { - /* Writing 0 to TRCOSLAR unlocks the trace registers */ - etm4x_relaxed_write32(csa, 0x0, TRCOSLAR); - drvdata->os_unlock = true; + u32 oslsr = etm4x_relaxed_read32(csa, TRCOSLSR); + + drvdata->os_lock_model = ETM_OSLSR_OSLM(oslsr); +} + +static void etm_write_os_lock(struct etmv4_drvdata *drvdata, + struct csdev_access *csa, u32 val) +{ + val = !!val; + + switch (drvdata->os_lock_model) { + case ETM_OSLOCK_PRESENT: + etm4x_relaxed_write32(csa, val, TRCOSLAR); + break; + case ETM_OSLOCK_PE: + write_sysreg_s(val, SYS_OSLAR_EL1); + break; + default: + pr_warn_once("CPU%d: Unsupported Trace OSLock model: %x\n", + smp_processor_id(), drvdata->os_lock_model); + fallthrough; + case ETM_OSLOCK_NI: + return; + } isb(); }
+static inline void etm4_os_unlock_csa(struct etmv4_drvdata *drvdata, + struct csdev_access *csa) +{ + WARN_ON(drvdata->cpu != smp_processor_id()); + + /* Writing 0 to OS Lock unlocks the trace unit registers */ + etm_write_os_lock(drvdata, csa, 0x0); + drvdata->os_unlock = true; +} + static void etm4_os_unlock(struct etmv4_drvdata *drvdata) { if (!WARN_ON(!drvdata->csdev)) etm4_os_unlock_csa(drvdata, &drvdata->csdev->access); - }
static void etm4_os_lock(struct etmv4_drvdata *drvdata) { if (WARN_ON(!drvdata->csdev)) return; - - /* Writing 0x1 to TRCOSLAR locks the trace registers */ - etm4x_relaxed_write32(&drvdata->csdev->access, 0x1, TRCOSLAR); + /* Writing 0x1 to OS Lock locks the trace registers */ + etm_write_os_lock(drvdata, &drvdata->csdev->access, 0x1); drvdata->os_unlock = false; - isb(); }
static void etm4_cs_lock(struct etmv4_drvdata *drvdata, @@ -906,6 +935,9 @@ static void etm4_init_arch_data(void *info) if (!etm4_init_csdev_access(drvdata, csa)) return;
+ /* Detect the support for OS Lock before we actuall use it */ + etm_detect_os_lock(drvdata, csa); + /* Make sure all registers are accessible */ etm4_os_unlock_csa(drvdata, csa); etm4_cs_unlock(drvdata, csa); diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h index 0af6057..0e86eba 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x.h +++ b/drivers/hwtracing/coresight/coresight-etm4x.h @@ -506,6 +506,20 @@ ETM_MODE_EXCL_USER)
/* + * TRCOSLSR.OSLM advertises the OS Lock model. + * OSLM[2:0] = TRCOSLSR[4:3,0] + * + * 0b000 - Trace OS Lock is not implemented. + * 0b010 - Trace OS Lock is implemented. + * 0b100 - Trace OS Lock is not implemented, unit is controlled by PE OS Lock. + */ +#define ETM_OSLOCK_NI 0b000 +#define ETM_OSLOCK_PRESENT 0b010 +#define ETM_OSLOCK_PE 0b100 + +#define ETM_OSLSR_OSLM(oslsr) ((((oslsr) & GENMASK(4, 3)) >> 2) | (oslsr & 0x1)) + +/* * TRCDEVARCH Bit field definitions * Bits[31:21] - ARCHITECT = Always Arm Ltd. * * Bits[31:28] = 0x4 @@ -897,6 +911,7 @@ struct etmv4_drvdata { u8 s_ex_level; u8 ns_ex_level; u8 q_support; + u8 os_lock_model; bool sticky_enable; bool boot_enable; bool os_unlock;
From: Suzuki K Poulose suzuki.poulose@arm.com
Add support for handling the system registers for Embedded Trace Extensions (ETE). ETE shares most of its registers with ETMv4, except for a few, and also adds some new registers. Re-arrange the ETMv4x list to share the common definitions and add the ETE sysreg support.
Cc: Mike Leach mike.leach@linaro.org
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 drivers/hwtracing/coresight/coresight-etm4x-core.c | 32 +++++++++++++
 drivers/hwtracing/coresight/coresight-etm4x.h      | 52 ++++++++++++++++++----
 2 files changed, 75 insertions(+), 9 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c index 2ce2d0a..4305dc2 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -114,6 +114,38 @@ void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit) } }
+u64 ete_sysreg_read(u32 offset, bool _relaxed, bool _64bit) +{ + u64 res = 0; + + switch (offset) { + ETE_READ_CASES(res) + default : + WARN_ONCE(1, "ete: trying to read unsupported register @%x\n", + offset); + } + + if (!_relaxed) + __iormb(res); /* Imitate the !relaxed I/O helpers */ + + return res; +} + +void ete_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit) +{ + if (!_relaxed) + __iowmb(); /* Imitate the !relaxed I/O helpers */ + if (!_64bit) + val &= GENMASK(31, 0); + + switch (offset) { + ETE_WRITE_CASES(val) + default : + WARN_ONCE(1, "ete: trying to write to unsupported register @%x\n", + offset); + } +} + static void etm_detect_os_lock(struct etmv4_drvdata *drvdata, struct csdev_access *csa) { diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h index 0e86eba..ca24ac5 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x.h +++ b/drivers/hwtracing/coresight/coresight-etm4x.h @@ -29,6 +29,7 @@ #define TRCAUXCTLR 0x018 #define TRCEVENTCTL0R 0x020 #define TRCEVENTCTL1R 0x024 +#define TRCRSR 0x028 #define TRCSTALLCTLR 0x02C #define TRCTSCTLR 0x030 #define TRCSYNCPR 0x034 @@ -49,6 +50,7 @@ #define TRCSEQRSTEVR 0x118 #define TRCSEQSTR 0x11C #define TRCEXTINSELR 0x120 +#define TRCEXTINSELRn(n) (0x120 + (n * 4)) /* n = 0-3 */ #define TRCCNTRLDVRn(n) (0x140 + (n * 4)) /* n = 0-3 */ #define TRCCNTCTLRn(n) (0x150 + (n * 4)) /* n = 0-3 */ #define TRCCNTVRn(n) (0x160 + (n * 4)) /* n = 0-3 */ @@ -160,10 +162,22 @@ #define CASE_NOP(__unused, x) \ case (x): /* fall through */
+#define ETE_ONLY_SYSREG_LIST(op, val) \ + CASE_##op((val), TRCRSR) \ + CASE_##op((val), TRCEXTINSELRn(1)) \ + CASE_##op((val), TRCEXTINSELRn(2)) \ + CASE_##op((val), TRCEXTINSELRn(3)) + /* List of registers accessible via System instructions */ -#define ETM_SYSREG_LIST(op, val) \ - CASE_##op((val), TRCPRGCTLR) \ +#define ETM4x_ONLY_SYSREG_LIST(op, val) \ CASE_##op((val), TRCPROCSELR) \ + CASE_##op((val), TRCVDCTLR) \ + CASE_##op((val), TRCVDSACCTLR) \ + CASE_##op((val), TRCVDARCCTLR) \ + CASE_##op((val), TRCOSLAR) + +#define ETM_COMMON_SYSREG_LIST(op, val) \ + CASE_##op((val), TRCPRGCTLR) \ CASE_##op((val), TRCSTATR) \ CASE_##op((val), TRCCONFIGR) \ CASE_##op((val), TRCAUXCTLR) \ @@ -180,9 +194,6 @@ CASE_##op((val), TRCVIIECTLR) \ CASE_##op((val), TRCVISSCTLR) \ CASE_##op((val), TRCVIPCSSCTLR) \ - CASE_##op((val), TRCVDCTLR) \ - CASE_##op((val), TRCVDSACCTLR) \ - CASE_##op((val), TRCVDARCCTLR) \ CASE_##op((val), TRCSEQEVRn(0)) \ CASE_##op((val), TRCSEQEVRn(1)) \ CASE_##op((val), TRCSEQEVRn(2)) \ @@ -277,7 +288,6 @@ CASE_##op((val), TRCSSPCICRn(5)) \ CASE_##op((val), TRCSSPCICRn(6)) \ CASE_##op((val), TRCSSPCICRn(7)) \ - CASE_##op((val), TRCOSLAR) \ CASE_##op((val), TRCOSLSR) \ CASE_##op((val), TRCACVRn(0)) \ CASE_##op((val), TRCACVRn(1)) \ @@ -369,12 +379,36 @@ CASE_##op((val), TRCPIDR2) \ CASE_##op((val), TRCPIDR3)
-#define ETM4x_READ_SYSREG_CASES(res) ETM_SYSREG_LIST(READ, (res)) -#define ETM4x_WRITE_SYSREG_CASES(val) ETM_SYSREG_LIST(WRITE, (val)) +#define ETM4x_READ_SYSREG_CASES(res) \ + ETM_COMMON_SYSREG_LIST(READ, (res)) \ + ETM4x_ONLY_SYSREG_LIST(READ, (res)) + +#define ETM4x_WRITE_SYSREG_CASES(val) \ + ETM_COMMON_SYSREG_LIST(WRITE, (val)) \ + ETM4x_ONLY_SYSREG_LIST(WRITE, (val)) + +#define ETM_COMMON_SYSREG_LIST_CASES \ + ETM_COMMON_SYSREG_LIST(NOP, __unused) + +#define ETM4x_SYSREG_LIST_CASES \ + ETM_COMMON_SYSREG_LIST_CASES \ + ETM4x_ONLY_SYSREG_LIST(NOP, __unused)
-#define ETM4x_SYSREG_LIST_CASES ETM_SYSREG_LIST(NOP, __unused) #define ETM4x_MMAP_LIST_CASES ETM_MMAP_LIST(NOP, __unused)
+/* ETE only supports system register access */ +#define ETE_READ_CASES(res) \ + ETM_COMMON_SYSREG_LIST(READ, (res)) \ + ETE_ONLY_SYSREG_LIST(READ, (res)) + +#define ETE_WRITE_CASES(val) \ + ETM_COMMON_SYSREG_LIST(WRITE, (val)) \ + ETE_ONLY_SYSREG_LIST(WRITE, (val)) + +#define ETE_ONLY_SYSREG_LIST_CASES \ + ETM_COMMON_SYSREG_LIST_CASES \ + ETE_ONLY_SYSREG_LIST(NOP, __unused) + #define read_etm4x_sysreg_offset(offset, _64bit) \ ({ \ u64 __val; \
From: Suzuki K Poulose suzuki.poulose@arm.com
Add ETE as one of the device types supported by the ETM4x driver. The devices are named following the existing convention, as ete<N>.
ETE mandates that the trace resource status register is programmed before the tracing is turned on. For the moment simply write to it indicating TraceActive.
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
Changes in V2:
- Update Kconfig title and description to include ETE
- Filter out registers not implemented in ETE from sysfs

 drivers/hwtracing/coresight/Kconfig                 | 10 ++--
 drivers/hwtracing/coresight/coresight-etm4x-core.c  | 56 +++++++++++++++++-----
 .../hwtracing/coresight/coresight-etm4x-sysfs.c     | 19 ++++++--
 drivers/hwtracing/coresight/coresight-etm4x.h       | 16 ++++++-
 4 files changed, 79 insertions(+), 22 deletions(-)
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index 7b44ba2..f154ae7 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -97,15 +97,15 @@ config CORESIGHT_SOURCE_ETM3X module will be called coresight-etm3x.
config CORESIGHT_SOURCE_ETM4X - tristate "CoreSight Embedded Trace Macrocell 4.x driver" + tristate "CoreSight ETMv4.x / ETE driver" depends on ARM64 select CORESIGHT_LINKS_AND_SINKS select PID_IN_CONTEXTIDR help - This driver provides support for the ETM4.x tracer module, tracing the - instructions that a processor is executing. This is primarily useful - for instruction level tracing. Depending on the implemented version - data tracing may also be available. + This driver provides support for the CoreSight Embedded Trace Macrocell + version 4.x and the Embedded Trace Extensions (ETE). Both are CPU tracer + modules, tracing the instructions that a processor is executing. This is + primarily useful for instruction level tracing.
To compile this driver as a module, choose M here: the module will be called coresight-etm4x. diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c index 4305dc2..1c1b13d 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -431,6 +431,13 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata) etm4x_relaxed_write32(csa, trcpdcr | TRCPDCR_PU, TRCPDCR); }
+ /* + * ETE mandates that the TRCRSR is written to before + * enabling it. + */ + if (etm4x_is_ete(drvdata)) + etm4x_relaxed_write32(csa, TRCRSR_TA, TRCRSR); + /* Enable the trace unit */ etm4x_relaxed_write32(csa, 1, TRCPRGCTLR);
@@ -864,13 +871,24 @@ static bool etm4_init_sysreg_access(struct etmv4_drvdata *drvdata, * ETMs implementing sysreg access must implement TRCDEVARCH. */ devarch = read_etm4x_sysreg_const_offset(TRCDEVARCH); - if ((devarch & ETM_DEVARCH_ID_MASK) != ETM_DEVARCH_ETMv4x_ARCH) + switch (devarch & ETM_DEVARCH_ID_MASK) { + case ETM_DEVARCH_ETMv4x_ARCH: + *csa = (struct csdev_access) { + .io_mem = false, + .read = etm4x_sysreg_read, + .write = etm4x_sysreg_write, + }; + break; + case ETM_DEVARCH_ETE_ARCH: + *csa = (struct csdev_access) { + .io_mem = false, + .read = ete_sysreg_read, + .write = ete_sysreg_write, + }; + break; + default: return false; - *csa = (struct csdev_access) { - .io_mem = false, - .read = etm4x_sysreg_read, - .write = etm4x_sysreg_write, - }; + }
drvdata->arch = etm_devarch_to_arch(devarch); return true; @@ -1808,6 +1826,8 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid) struct etmv4_drvdata *drvdata; struct coresight_desc desc = { 0 }; struct etm4_init_arg init_arg = { 0 }; + u8 major, minor; + char *type_name;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); if (!drvdata) @@ -1834,10 +1854,6 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid) if (drvdata->cpu < 0) return drvdata->cpu;
- desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu); - if (!desc.name) - return -ENOMEM; - init_arg.drvdata = drvdata; init_arg.csa = &desc.access; init_arg.pid = etm_pid; @@ -1853,6 +1869,20 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid) if (!desc.access.io_mem || fwnode_property_present(dev_fwnode(dev), "qcom,skip-power-up")) drvdata->skip_power_up = true; + major = ETM_ARCH_MAJOR_VERSION(drvdata->arch); + minor = ETM_ARCH_MINOR_VERSION(drvdata->arch); + if (etm4x_is_ete(drvdata)) { + type_name = "ete"; + /* ETE v1 has major version == 5. Adjust this for logging.*/ + major -= 4; + } else { + type_name = "etm"; + } + + desc.name = devm_kasprintf(dev, GFP_KERNEL, + "%s%d", type_name, drvdata->cpu); + if (!desc.name) + return -ENOMEM;
etm4_init_trace_id(drvdata); etm4_set_default(&drvdata->config); @@ -1881,9 +1911,8 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid)
etmdrvdata[drvdata->cpu] = drvdata;
- dev_info(&drvdata->csdev->dev, "CPU%d: ETM v%d.%d initialized\n", - drvdata->cpu, ETM_ARCH_MAJOR_VERSION(drvdata->arch), - ETM_ARCH_MINOR_VERSION(drvdata->arch)); + dev_info(&drvdata->csdev->dev, "CPU%d: %s v%d.%d initialized\n", + drvdata->cpu, type_name, major, minor);
if (boot_enable) { coresight_enable(drvdata->csdev); @@ -2025,6 +2054,7 @@ static struct amba_driver etm4x_amba_driver = {
static const struct of_device_id etm4_sysreg_match[] = { { .compatible = "arm,coresight-etm4x-sysreg" }, + { .compatible = "arm,embedded-trace-extension" }, {} };
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c index b646d53..1c490bc 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c @@ -2374,12 +2374,20 @@ static inline bool etm4x_register_implemented(struct etmv4_drvdata *drvdata, u32 offset) { switch (offset) { - ETM4x_SYSREG_LIST_CASES + ETM_COMMON_SYSREG_LIST_CASES /* - * Registers accessible via system instructions are always - * implemented. + * Common registers to ETE & ETM4x accessible via system + * instructions are always implemented. */ return true; + + ETM4x_ONLY_SYSREG_LIST_CASES + /* + * We only support etm4x and ete. So if the device is not + * ETE, it must be ETMv4x. + */ + return !etm4x_is_ete(drvdata); + ETM4x_MMAP_LIST_CASES /* * Registers accessible only via memory-mapped registers @@ -2389,8 +2397,13 @@ etm4x_register_implemented(struct etmv4_drvdata *drvdata, u32 offset) * coresight_register() and the csdev is not initialized * until that is done. So rely on the drvdata->base to * detect if we have a memory mapped access. + * Also ETE doesn't implement memory mapped access, thus + * it is sufficient to check that we are using mmio. */ return !!drvdata->base; + + ETE_ONLY_SYSREG_LIST_CASES + return etm4x_is_ete(drvdata); }
return false; diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h index ca24ac5..8b90de5 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x.h +++ b/drivers/hwtracing/coresight/coresight-etm4x.h @@ -128,6 +128,8 @@ #define TRCCIDR2 0xFF8 #define TRCCIDR3 0xFFC
+#define TRCRSR_TA BIT(12) + /* * System instructions to access ETM registers. * See ETMv4.4 spec ARM IHI0064F section 4.3.6 System instructions @@ -390,6 +392,9 @@ #define ETM_COMMON_SYSREG_LIST_CASES \ ETM_COMMON_SYSREG_LIST(NOP, __unused)
+#define ETM4x_ONLY_SYSREG_LIST_CASES \ + ETM4x_ONLY_SYSREG_LIST(NOP, __unused) + #define ETM4x_SYSREG_LIST_CASES \ ETM_COMMON_SYSREG_LIST_CASES \ ETM4x_ONLY_SYSREG_LIST(NOP, __unused) @@ -406,7 +411,6 @@ ETE_ONLY_SYSREG_LIST(WRITE, (val))
#define ETE_ONLY_SYSREG_LIST_CASES \ - ETM_COMMON_SYSREG_LIST_CASES \ ETE_ONLY_SYSREG_LIST(NOP, __unused)
#define read_etm4x_sysreg_offset(offset, _64bit) \ @@ -589,11 +593,14 @@ ((ETM_DEVARCH_MAKE_ARCHID_ARCH_VER(major)) | ETM_DEVARCH_ARCHID_ARCH_PART(0xA13))
#define ETM_DEVARCH_ARCHID_ETMv4x ETM_DEVARCH_MAKE_ARCHID(0x4) +#define ETM_DEVARCH_ARCHID_ETE ETM_DEVARCH_MAKE_ARCHID(0x5)
#define ETM_DEVARCH_ID_MASK \ (ETM_DEVARCH_ARCHITECT_MASK | ETM_DEVARCH_ARCHID_MASK | ETM_DEVARCH_PRESENT) #define ETM_DEVARCH_ETMv4x_ARCH \ (ETM_DEVARCH_ARCHITECT_ARM | ETM_DEVARCH_ARCHID_ETMv4x | ETM_DEVARCH_PRESENT) +#define ETM_DEVARCH_ETE_ARCH \ + (ETM_DEVARCH_ARCHITECT_ARM | ETM_DEVARCH_ARCHID_ETE | ETM_DEVARCH_PRESENT)
#define TRCSTATR_IDLE_BIT 0 #define TRCSTATR_PMSTABLE_BIT 1 @@ -683,6 +690,8 @@ #define ETM_ARCH_MINOR_VERSION(arch) ((arch) & 0xfU)
#define ETM_ARCH_V4 ETM_ARCH_VERSION(4, 0) +#define ETM_ARCH_ETE ETM_ARCH_VERSION(5, 0) + /* Interpretation of resource numbers change at ETM v4.3 architecture */ #define ETM_ARCH_V4_3 ETM_ARCH_VERSION(4, 3)
@@ -989,4 +998,9 @@ void etm4_config_trace_mode(struct etmv4_config *config);
u64 etm4x_sysreg_read(u32 offset, bool _relaxed, bool _64bit); void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit); + +static inline bool etm4x_is_ete(struct etmv4_drvdata *drvdata) +{ + return drvdata->arch >= ETM_ARCH_ETE; +} #endif
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Embedded Trace Extensions. ETE can be connected to legacy coresight components and thus could optionally contain a connection graph as described by the CoreSight bindings.
Cc: devicetree@vger.kernel.org
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Cc: Rob Herring robh@kernel.org
Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 Documentation/devicetree/bindings/arm/ete.yaml | 71 ++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/ete.yaml
diff --git a/Documentation/devicetree/bindings/arm/ete.yaml b/Documentation/devicetree/bindings/arm/ete.yaml new file mode 100644 index 0000000..00e6a77 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/ete.yaml @@ -0,0 +1,71 @@ +# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause +# Copyright 2021, Arm Ltd +%YAML 1.2 +--- +$id: "http://devicetree.org/schemas/arm/ete.yaml#" +$schema: "http://devicetree.org/meta-schemas/core.yaml#" + +title: ARM Embedded Trace Extensions + +maintainers: + - Suzuki K Poulose suzuki.poulose@arm.com + - Mathieu Poirier mathieu.poirier@linaro.org + +description: | + Arm Embedded Trace Extension(ETE) is a per CPU trace component that + allows tracing the CPU execution. It overlaps with the CoreSight ETMv4 + architecture and has extended support for future architecture changes. + The trace generated by the ETE could be stored via legacy CoreSight + components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer + Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to + legacy CoreSight components, a node must be listed per instance, along + with any optional connection graph as per the coresight bindings. + See bindings/arm/coresight.txt. + +properties: + $nodename: + pattern: "^ete([0-9a-f]+)$" + compatible: + items: + - const: arm,embedded-trace-extension + + cpu: + description: | + Handle to the cpu this ETE is bound to. + $ref: /schemas/types.yaml#/definitions/phandle + + out-ports: + description: | + Out put connections from the ETE to legacy CoreSight trace bus. + $ref: /schemas/graph.yaml#/properties/ports + +required: + - compatible + - cpu + +additionalProperties: false + +examples: + +# An ETE node without legacy CoreSight connections + - | + ete0 { + compatible = "arm,embedded-trace-extension"; + cpu = <&cpu_0>; + }; +# An ETE node with legacy CoreSight connections + - | + ete1 { + compatible = "arm,embedded-trace-extension"; + cpu = <&cpu_1>; + + out-ports { /* legacy coresight connection */ + port { + ete1_out_port: endpoint { + remote-endpoint = <&funnel_in_port0>; + }; + }; + }; + }; + +...
On Wed, Jan 13, 2021 at 09:48:13AM +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Embedded Trace Extensions. ETE can be connected to legacy coresight components and thus could optionally contain a connection graph as described by the CoreSight bindings.
Cc: devicetree@vger.kernel.org Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Rob Herring robh@kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/ete.yaml | 71 ++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/ete.yaml
diff --git a/Documentation/devicetree/bindings/arm/ete.yaml b/Documentation/devicetree/bindings/arm/ete.yaml new file mode 100644 index 0000000..00e6a77 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/ete.yaml @@ -0,0 +1,71 @@ +# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause +# Copyright 2021, Arm Ltd +%YAML 1.2 +--- +$id: "http://devicetree.org/schemas/arm/ete.yaml#" +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+title: ARM Embedded Trace Extensions
+maintainers:
- Suzuki K Poulose suzuki.poulose@arm.com
- Mathieu Poirier mathieu.poirier@linaro.org
+description: |
- Arm Embedded Trace Extension(ETE) is a per CPU trace component that
- allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
- architecture and has extended support for future architecture changes.
- The trace generated by the ETE could be stored via legacy CoreSight
- components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
- Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
- legacy CoreSight components, a node must be listed per instance, along
- with any optional connection graph as per the coresight bindings.
- See bindings/arm/coresight.txt.
+properties:
- $nodename:
- pattern: "^ete([0-9a-f]+)$"
- compatible:
- items:
- const: arm,embedded-trace-extension
- cpu:
We use 'cpus' in a couple of other places, let's do that here for consistency.
- description: |
Handle to the cpu this ETE is bound to.
- $ref: /schemas/types.yaml#/definitions/phandle
- out-ports:
- description: |
Out put connections from the ETE to legacy CoreSight trace bus.
Output
- $ref: /schemas/graph.yaml#/properties/ports
You have to define what each 'port' is if there can be more than 1. If there's only ever 1 then you just need 'port' though maybe all the coresight bindings require 'out-ports'. And the port nodes need a $ref to '/schemas/graph.yaml#/properties/port'.
+required:
- compatible
- cpu
+additionalProperties: false
+examples:
+# An ETE node without legacy CoreSight connections
- |
- ete0 {
compatible = "arm,embedded-trace-extension";
cpu = <&cpu_0>;
- };
+# An ETE node with legacy CoreSight connections
- |
- ete1 {
compatible = "arm,embedded-trace-extension";
cpu = <&cpu_1>;
out-ports { /* legacy coresight connection */
port {
ete1_out_port: endpoint {
remote-endpoint = <&funnel_in_port0>;
};
};
};
- };
+...
2.7.4
Hi Rob
On 1/25/21 7:22 PM, Rob Herring wrote:
On Wed, Jan 13, 2021 at 09:48:13AM +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Embedded Trace Extensions. ETE can be connected to legacy coresight components and thus could optionally contain a connection graph as described by the CoreSight bindings.
Cc: devicetree@vger.kernel.org Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Rob Herring robh@kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/ete.yaml | 71 ++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/ete.yaml
diff --git a/Documentation/devicetree/bindings/arm/ete.yaml b/Documentation/devicetree/bindings/arm/ete.yaml new file mode 100644 index 0000000..00e6a77 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/ete.yaml @@ -0,0 +1,71 @@ +# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause +# Copyright 2021, Arm Ltd +%YAML 1.2 +--- +$id: "http://devicetree.org/schemas/arm/ete.yaml#" +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+title: ARM Embedded Trace Extensions
+maintainers:
- Suzuki K Poulose suzuki.poulose@arm.com
- Mathieu Poirier mathieu.poirier@linaro.org
+description: |
- Arm Embedded Trace Extension(ETE) is a per CPU trace component that
- allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
- architecture and has extended support for future architecture changes.
- The trace generated by the ETE could be stored via legacy CoreSight
- components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
- Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to
- legacy CoreSight components, a node must be listed per instance, along
- with any optional connection graph as per the coresight bindings.
- See bindings/arm/coresight.txt.
+properties:
- $nodename:
- pattern: "^ete([0-9a-f]+)$"
- compatible:
- items:
- const: arm,embedded-trace-extension
- cpu:
We use 'cpus' in a couple of other places, let's do that here for consistency.
This is following the existing CoreSight bindings for ETM. The same driver probes both. Also there can only ever be a single CPU for ete/etm. So, we would prefer to keep it aligned with the existing bindings to avoid causing confusion.
- description: |
Handle to the cpu this ETE is bound to.
- $ref: /schemas/types.yaml#/definitions/phandle
- out-ports:
- description: |
Out put connections from the ETE to legacy CoreSight trace bus.
Output
Will fix.
- $ref: /schemas/graph.yaml#/properties/ports
You have to define what each 'port' is if there can be more than 1. If there's only ever 1 then you just need 'port' though maybe all the coresight bindings require 'out-ports'. And the port nodes need a $ref to '/schemas/graph.yaml#/properties/port'.
All CoreSight components require an out-ports and/or in-ports. The ETM/ETE always has one port, but must be under out-ports in line with the CoreSight bindings.
Does this look more apt:
  out-ports:
    description: |
      Output connection from the ETE to legacy CoreSight trace bus.
    poperties:
      port:
        $ref: /schemas/graph.yaml#/properties/port
Suzuki
On 1/25/21 10:20 PM, Suzuki K Poulose wrote:
Hi Rob
On 1/25/21 7:22 PM, Rob Herring wrote:
On Wed, Jan 13, 2021 at 09:48:13AM +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Embedded Trace Extensions. ETE can be connected to legacy coresight components and thus could optionally contain a connection graph as described by the CoreSight bindings.
Cc: devicetree@vger.kernel.org Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Rob Herring robh@kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/ete.yaml | 71 ++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/ete.yaml
diff --git a/Documentation/devicetree/bindings/arm/ete.yaml b/Documentation/devicetree/bindings/arm/ete.yaml new file mode 100644 index 0000000..00e6a77 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/ete.yaml @@ -0,0 +1,71 @@ +# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause +# Copyright 2021, Arm Ltd +%YAML 1.2 +--- +$id: "http://devicetree.org/schemas/arm/ete.yaml#" +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+title: ARM Embedded Trace Extensions
+maintainers: + - Suzuki K Poulose suzuki.poulose@arm.com + - Mathieu Poirier mathieu.poirier@linaro.org
+description: | + Arm Embedded Trace Extension(ETE) is a per CPU trace component that + allows tracing the CPU execution. It overlaps with the CoreSight ETMv4 + architecture and has extended support for future architecture changes. + The trace generated by the ETE could be stored via legacy CoreSight + components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer + Arm Trace Buffer Extension (TRBE)). Since the ETE can be connected to + legacy CoreSight components, a node must be listed per instance, along + with any optional connection graph as per the coresight bindings. + See bindings/arm/coresight.txt.
+properties: + $nodename: + pattern: "^ete([0-9a-f]+)$" + compatible: + items: + - const: arm,embedded-trace-extension
+ cpu:
We use 'cpus' in a couple of other places, let's do that here for consistency.
This is following the existing CoreSight bindings for ETM. The same driver probes both. Also there can only ever be a single CPU for ete/etm. So, we would prefer to keep it aligned with the existing bindings to avoid causing confusion.
+ description: | + Handle to the cpu this ETE is bound to. + $ref: /schemas/types.yaml#/definitions/phandle
+ out-ports: + description: | + Out put connections from the ETE to legacy CoreSight trace bus.
Output
Will fix.
+ $ref: /schemas/graph.yaml#/properties/ports
You have to define what each 'port' is if there can be more than 1. If there's only ever 1 then you just need 'port' though maybe all the coresight bindings require 'out-ports'. And the port nodes need a $ref to '/schemas/graph.yaml#/properties/port'.
All CoreSight components require an out-ports and/or in-ports. The ETM/ETE always has one port, but must be under out-ports in line with the CoreSight bindings.
Does this look more apt:
  out-ports:
    description: |
      Output connection from the ETE to legacy CoreSight trace bus.
    poperties:
      port:
        $ref: /schemas/graph.yaml#/properties/port
Correction, the above should be :
+  out-ports:
+    type: object
+    description: |
+      Output connections from the ETE to legacy CoreSight trace bus.
+    properties:
+      port:
+        $ref: /schemas/graph.yaml#/properties/port
That works fine for me. Does that look fine ? Some day, we should convert the coresight dt bindings to yaml and import the out-ports/in-ports from the scheme :-)
Cheers Suzuki
This adds TRBE related registers and corresponding feature macros.
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Cc: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 arch/arm64/include/asm/sysreg.h | 49 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 4acff97..d60750e7 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -329,6 +329,55 @@
/*** End of Statistical Profiling Extension ***/
+/* + * TRBE Registers + */ +#define SYS_TRBLIMITR_EL1 sys_reg(3, 0, 9, 11, 0) +#define SYS_TRBPTR_EL1 sys_reg(3, 0, 9, 11, 1) +#define SYS_TRBBASER_EL1 sys_reg(3, 0, 9, 11, 2) +#define SYS_TRBSR_EL1 sys_reg(3, 0, 9, 11, 3) +#define SYS_TRBMAR_EL1 sys_reg(3, 0, 9, 11, 4) +#define SYS_TRBTRG_EL1 sys_reg(3, 0, 9, 11, 6) +#define SYS_TRBIDR_EL1 sys_reg(3, 0, 9, 11, 7) + +#define TRBLIMITR_LIMIT_MASK GENMASK_ULL(51, 0) +#define TRBLIMITR_LIMIT_SHIFT 12 +#define TRBLIMITR_NVM BIT(5) +#define TRBLIMITR_TRIG_MODE_MASK GENMASK(1, 0) +#define TRBLIMITR_TRIG_MODE_SHIFT 2 +#define TRBLIMITR_FILL_MODE_MASK GENMASK(1, 0) +#define TRBLIMITR_FILL_MODE_SHIFT 1 +#define TRBLIMITR_ENABLE BIT(0) +#define TRBPTR_PTR_MASK GENMASK_ULL(63, 0) +#define TRBPTR_PTR_SHIFT 0 +#define TRBBASER_BASE_MASK GENMASK_ULL(51, 0) +#define TRBBASER_BASE_SHIFT 12 +#define TRBSR_EC_MASK GENMASK(5, 0) +#define TRBSR_EC_SHIFT 26 +#define TRBSR_IRQ BIT(22) +#define TRBSR_TRG BIT(21) +#define TRBSR_WRAP BIT(20) +#define TRBSR_ABORT BIT(18) +#define TRBSR_STOP BIT(17) +#define TRBSR_MSS_MASK GENMASK(15, 0) +#define TRBSR_MSS_SHIFT 0 +#define TRBSR_BSC_MASK GENMASK(5, 0) +#define TRBSR_BSC_SHIFT 0 +#define TRBSR_FSC_MASK GENMASK(5, 0) +#define TRBSR_FSC_SHIFT 0 +#define TRBMAR_SHARE_MASK GENMASK(1, 0) +#define TRBMAR_SHARE_SHIFT 8 +#define TRBMAR_OUTER_MASK GENMASK(3, 0) +#define TRBMAR_OUTER_SHIFT 4 +#define TRBMAR_INNER_MASK GENMASK(3, 0) +#define TRBMAR_INNER_SHIFT 0 +#define TRBTRG_TRG_MASK GENMASK(31, 0) +#define TRBTRG_TRG_SHIFT 0 +#define TRBIDR_FLAG BIT(5) +#define TRBIDR_PROG BIT(4) +#define TRBIDR_ALIGN_MASK GENMASK(3, 0) +#define TRBIDR_ALIGN_SHIFT 0 + #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1) #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
This adds TRBE related registers and corresponding feature macros.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
arch/arm64/include/asm/sysreg.h | 49 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 49 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 4acff97..d60750e7 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -329,6 +329,55 @@ /*** End of Statistical Profiling Extension ***/ +/*
- TRBE Registers
- */
+#define SYS_TRBLIMITR_EL1 sys_reg(3, 0, 9, 11, 0) +#define SYS_TRBPTR_EL1 sys_reg(3, 0, 9, 11, 1) +#define SYS_TRBBASER_EL1 sys_reg(3, 0, 9, 11, 2) +#define SYS_TRBSR_EL1 sys_reg(3, 0, 9, 11, 3) +#define SYS_TRBMAR_EL1 sys_reg(3, 0, 9, 11, 4) +#define SYS_TRBTRG_EL1 sys_reg(3, 0, 9, 11, 6) +#define SYS_TRBIDR_EL1 sys_reg(3, 0, 9, 11, 7)
+#define TRBLIMITR_LIMIT_MASK GENMASK_ULL(51, 0) +#define TRBLIMITR_LIMIT_SHIFT 12 +#define TRBLIMITR_NVM BIT(5) +#define TRBLIMITR_TRIG_MODE_MASK GENMASK(1, 0) +#define TRBLIMITR_TRIG_MODE_SHIFT 2
This must be 3.
Rest looks fine to me
Suzuki
On 1/13/21 2:51 PM, Suzuki K Poulose wrote:
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
This adds TRBE related registers and corresponding feature macros.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
arch/arm64/include/asm/sysreg.h | 49 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 49 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 4acff97..d60750e7 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -329,6 +329,55 @@ /*** End of Statistical Profiling Extension ***/ +/*
- TRBE Registers
- */
+#define SYS_TRBLIMITR_EL1 sys_reg(3, 0, 9, 11, 0) +#define SYS_TRBPTR_EL1 sys_reg(3, 0, 9, 11, 1) +#define SYS_TRBBASER_EL1 sys_reg(3, 0, 9, 11, 2) +#define SYS_TRBSR_EL1 sys_reg(3, 0, 9, 11, 3) +#define SYS_TRBMAR_EL1 sys_reg(3, 0, 9, 11, 4) +#define SYS_TRBTRG_EL1 sys_reg(3, 0, 9, 11, 6) +#define SYS_TRBIDR_EL1 sys_reg(3, 0, 9, 11, 7)
+#define TRBLIMITR_LIMIT_MASK GENMASK_ULL(51, 0) +#define TRBLIMITR_LIMIT_SHIFT 12 +#define TRBLIMITR_NVM BIT(5) +#define TRBLIMITR_TRIG_MODE_MASK GENMASK(1, 0) +#define TRBLIMITR_TRIG_MODE_SHIFT 2
This must be 3.
Changed.
Rest looks fine to me
Suzuki
On Wed, Jan 13, 2021 at 09:48:14AM +0530, Anshuman Khandual wrote:
This adds TRBE related registers and corresponding feature macros.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Acked-by: Catalin Marinas catalin.marinas@arm.com
On Mon, Feb 22, 2021 at 01:55:52PM +0000, Catalin Marinas wrote:
On Wed, Jan 13, 2021 at 09:48:14AM +0530, Anshuman Khandual wrote:
This adds TRBE related registers and corresponding feature macros.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Acked-by: Catalin Marinas catalin.marinas@arm.com
Ah, ignore this. I seem to have already acked v3:
https://lore.kernel.org/r/20210128171822.GB29183@gaia
Add support for dedicated sinks that are bound to individual CPUs. (e.g, TRBE). To allow quicker access to the sink for a given CPU bound source, keep a percpu array of the sink devices. Also, add support for building a path to the CPU local sink from the ETM.
This adds a new percpu sink type CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM. This new sink type is exclusively available and can only work with percpu source type device CORESIGHT_DEV_SUBTYPE_SOURCE_PERCPU_PROC.
This defines a percpu structure that accommodates a single coresight_device which can be used to store an initialized instance from a sink driver. As these sinks are exclusively linked and dependent on corresponding percpu sources devices, they should also be the default sink device during a perf session.
Outwards device connections are scanned while establishing paths between a source and a sink device. But such connections are not present for certain percpu source and sink devices which are exclusively linked and dependent. Build the path directly and skip connection scanning for such devices.
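For reference, a minimal sketch (not part of this patch) of how a per-CPU sink driver such as TRBE might publish and withdraw its device in the csdev_sink array introduced below; the helper names here are hypothetical, only the per_cpu() accesses mirror what this patch expects:

    #include <linux/coresight.h>
    #include <linux/percpu.h>

    DECLARE_PER_CPU(struct coresight_device *, csdev_sink);

    /* Hypothetical helpers for a CPU-bound sink driver */
    static void percpu_sink_publish(int cpu, struct coresight_device *csdev)
    {
            per_cpu(csdev_sink, cpu) = csdev;
    }

    static void percpu_sink_withdraw(int cpu)
    {
            per_cpu(csdev_sink, cpu) = NULL;
    }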
Cc: Mathieu Poirier mathieu.poirier@linaro.org
Cc: Mike Leach mike.leach@linaro.org
Cc: Suzuki K Poulose suzuki.poulose@arm.com
Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
---
 drivers/hwtracing/coresight/coresight-core.c | 14 ++++++++++++++
 include/linux/coresight.h                    | 12 ++++++++++++
 2 files changed, 26 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c index 0062c89..b300606 100644 --- a/drivers/hwtracing/coresight/coresight-core.c +++ b/drivers/hwtracing/coresight/coresight-core.c @@ -23,6 +23,7 @@ #include "coresight-priv.h"
static DEFINE_MUTEX(coresight_mutex); +DEFINE_PER_CPU(struct coresight_device *, csdev_sink);
/** * struct coresight_node - elements of a path, from source to sink @@ -784,6 +785,13 @@ static int _coresight_build_path(struct coresight_device *csdev, if (csdev == sink) goto out;
+ if (coresight_is_percpu_source(csdev) && coresight_is_percpu_sink(sink) && + sink == per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev))) { + _coresight_build_path(sink, sink, path); + found = true; + goto out; + } + /* Not a sink - recursively explore each port found on this element */ for (i = 0; i < csdev->pdata->nr_outport; i++) { struct coresight_device *child_dev; @@ -998,6 +1006,12 @@ coresight_find_default_sink(struct coresight_device *csdev) { int depth = 0;
+ if (coresight_is_percpu_source(csdev)) { + csdev->def_sink = per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev)); + if (csdev->def_sink) + return csdev->def_sink; + } + /* look for a default sink if we have not found for this device */ if (!csdev->def_sink) csdev->def_sink = coresight_find_sink(csdev, &depth); diff --git a/include/linux/coresight.h b/include/linux/coresight.h index 267c3ac..e019182 100644 --- a/include/linux/coresight.h +++ b/include/linux/coresight.h @@ -50,6 +50,7 @@ enum coresight_dev_subtype_sink { CORESIGHT_DEV_SUBTYPE_SINK_PORT, CORESIGHT_DEV_SUBTYPE_SINK_BUFFER, CORESIGHT_DEV_SUBTYPE_SINK_SYSMEM, + CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM, };
enum coresight_dev_subtype_link { @@ -428,6 +429,17 @@ static inline void csdev_access_write64(struct csdev_access *csa, u64 val, u32 o csa->write(val, offset, false, true); }
+static inline bool coresight_is_percpu_source(struct coresight_device *csdev) +{ + return csdev && (csdev->type == CORESIGHT_DEV_TYPE_SOURCE) && + csdev->subtype.source_subtype == CORESIGHT_DEV_SUBTYPE_SOURCE_PROC; +} + +static inline bool coresight_is_percpu_sink(struct coresight_device *csdev) +{ + return csdev && (csdev->type == CORESIGHT_DEV_TYPE_SINK) && + csdev->subtype.sink_subtype == CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM; +} #else /* !CONFIG_64BIT */
static inline u64 csdev_access_relaxed_read64(struct csdev_access *csa,
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
Add support for dedicated sinks that are bound to individual CPUs. (e.g, TRBE). To allow quicker access to the sink for a given CPU bound source, keep a percpu array of the sink devices. Also, add support for building a path to the CPU local sink from the ETM.
This adds a new percpu sink type CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM. This new sink type is exclusively available and can only work with percpu source type device CORESIGHT_DEV_SUBTYPE_SOURCE_PERCPU_PROC.
This defines a percpu structure that accommodates a single coresight_device which can be used to store an initialized instance from a sink driver. As these sinks are exclusively linked and dependent on corresponding percpu sources devices, they should also be the default sink device during a perf session.
Outwards device connections are scanned while establishing paths between a source and a sink device. But such connections are not present for certain percpu source and sink devices which are exclusively linked and dependent. Build the path directly and skip connection scanning for such devices.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
drivers/hwtracing/coresight/coresight-core.c | 14 ++++++++++++++ include/linux/coresight.h | 12 ++++++++++++ 2 files changed, 26 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c index 0062c89..b300606 100644 --- a/drivers/hwtracing/coresight/coresight-core.c +++ b/drivers/hwtracing/coresight/coresight-core.c @@ -23,6 +23,7 @@ #include "coresight-priv.h" static DEFINE_MUTEX(coresight_mutex); +DEFINE_PER_CPU(struct coresight_device *, csdev_sink); /**
- struct coresight_node - elements of a path, from source to sink
@@ -784,6 +785,13 @@ static int _coresight_build_path(struct coresight_device *csdev, if (csdev == sink) goto out;
- if (coresight_is_percpu_source(csdev) && coresight_is_percpu_sink(sink) &&
sink == per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev))) {
_coresight_build_path(sink, sink, path);
found = true;
goto out;
- }
- /* Not a sink - recursively explore each port found on this element */ for (i = 0; i < csdev->pdata->nr_outport; i++) { struct coresight_device *child_dev;
@@ -998,6 +1006,12 @@ coresight_find_default_sink(struct coresight_device *csdev) { int depth = 0;
- if (coresight_is_percpu_source(csdev)) {
On a system without per_cpu sink, this would reset the default sink for the source device every single time and fallback to searching every single time. So I think it would be better if we did check if the def_sink was not set. We could fold this into the case below maybe, i.e.,
	if (!csdev->def_sink) {
		if (coresight_is_percpu_source(csdev))
			csdev->def_sink = per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev));
		if (!csdev->def_sink)
			csdev->def_sink = coresight_find_sink(csdev, &depth);
	}
Otherwise looks good to me.
Suzuki
On 1/13/21 3:13 PM, Suzuki K Poulose wrote:
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
Add support for dedicated sinks that are bound to individual CPUs. (e.g, TRBE). To allow quicker access to the sink for a given CPU bound source, keep a percpu array of the sink devices. Also, add support for building a path to the CPU local sink from the ETM.
This adds a new percpu sink type CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM. This new sink type is exclusively available and can only work with percpu source type device CORESIGHT_DEV_SUBTYPE_SOURCE_PERCPU_PROC.
This defines a percpu structure that accommodates a single coresight_device which can be used to store an initialized instance from a sink driver. As these sinks are exclusively linked and dependent on corresponding percpu sources devices, they should also be the default sink device during a perf session.
Outwards device connections are scanned while establishing paths between a source and a sink device. But such connections are not present for certain percpu source and sink devices which are exclusively linked and dependent. Build the path directly and skip connection scanning for such devices.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
drivers/hwtracing/coresight/coresight-core.c | 14 ++++++++++++++ include/linux/coresight.h | 12 ++++++++++++ 2 files changed, 26 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c index 0062c89..b300606 100644 --- a/drivers/hwtracing/coresight/coresight-core.c +++ b/drivers/hwtracing/coresight/coresight-core.c @@ -23,6 +23,7 @@ #include "coresight-priv.h" static DEFINE_MUTEX(coresight_mutex); +DEFINE_PER_CPU(struct coresight_device *, csdev_sink); /** * struct coresight_node - elements of a path, from source to sink @@ -784,6 +785,13 @@ static int _coresight_build_path(struct coresight_device *csdev, if (csdev == sink) goto out; + if (coresight_is_percpu_source(csdev) && coresight_is_percpu_sink(sink) && + sink == per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev))) { + _coresight_build_path(sink, sink, path); + found = true; + goto out; + }
/* Not a sink - recursively explore each port found on this element */ for (i = 0; i < csdev->pdata->nr_outport; i++) { struct coresight_device *child_dev; @@ -998,6 +1006,12 @@ coresight_find_default_sink(struct coresight_device *csdev) { int depth = 0; + if (coresight_is_percpu_source(csdev)) {
On a system without a per_cpu sink, this would reset the default sink for the source device every single time and fall back to searching every single time.
Right.
So I think it would be better if we checked whether def_sink was not already set. We could perhaps fold this into the case below, i.e.,
	if (!csdev->def_sink) {
		if (coresight_is_percpu_source(csdev))
			csdev->def_sink = per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev));
		if (!csdev->def_sink)
			csdev->def_sink = coresight_find_sink(csdev, &depth);
	}
Otherwise looks good to me.
struct coresight_device *
coresight_find_default_sink(struct coresight_device *csdev)
{
	int depth = 0;

	/* look for a default sink if we have not found for this device */
	if (!csdev->def_sink) {
		if (coresight_is_percpu_source(csdev))
			csdev->def_sink = per_cpu(csdev_sink, source_ops(csdev)->cpu_id(csdev));
		if (!csdev->def_sink)
			csdev->def_sink = coresight_find_sink(csdev, &depth);
	}
	return csdev->def_sink;
}
Would this be better instead? coresight_find_sink() is invoked both when the source is not percpu (traditional coresight sources) and also as a fallback in case a percpu sink is not found for the percpu source device.
On 1/15/21 2:36 AM, Anshuman Khandual wrote:
Would this be better instead? coresight_find_sink() is invoked both when the source is not percpu (traditional coresight sources) and also as a fallback in case a percpu sink is not found for the percpu source device.
Yes, this is exactly what I proposed above.
Cheers Suzuki
While starting off the etm event, just abort and truncate the perf record if the perf handle has no space left. This avoids configuring both the source and sink devices in case the data cannot be consumed in perf.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com --- drivers/hwtracing/coresight/coresight-etm-perf.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
index eb9e7e9..e776a07 100644
--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -347,6 +347,9 @@ static void etm_event_start(struct perf_event *event, int flags)
 	if (!event_data)
 		goto fail;

+	if (!handle->size)
+		goto fail_end_stop;
+
 	/*
 	 * Check if this ETM is allowed to trace, as decided
 	 * at etm_setup_aux(). This could be due to an unreachable
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
While starting off the etm event, just abort and truncate the perf record if the perf handle has no space left. This avoids configuring both the source and sink devices in case the data cannot be consumed in perf.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Reviewed-by: Suzuki K Poulose suzuki.poulose@arm.com
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes, including CPU virtual address, and buffer modes, including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). However, access to the trace buffer could be prohibited by a higher exception level (EL3 or EL2), as indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver.
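As a quick orientation for the driver below, here is a rough sketch (illustrative only, built from the accessors defined in coresight-trbe.h further down, not an excerpt from the patch) of how the three pointers bound a capture window:

/*
 * Illustrative sketch: program a capture window [base, limit) and start
 * writing trace at 'write'. Mode bits and error handling are omitted.
 */
static void trbe_program_window_sketch(unsigned long base, unsigned long write,
				       unsigned long limit)
{
	set_trbe_base_pointer(base);	/* TRBBASER_EL1, PAGE_SIZE aligned */
	set_trbe_write_pointer(write);	/* TRBPTR_EL1, trbe_align aligned */
	isb();				/* sync pointer writes before enabling */
	set_trbe_limit_pointer(limit);	/* TRBLIMITR_EL1, PAGE_SIZE aligned */
	set_trbe_enabled();		/* TRBLIMITR_EL1.E */
	isb();
}

The driver's trbe_enable_hw() follows essentially this ordering, with the fill and trigger mode bits folded into the limit pointer update.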
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com --- Changes in V2:
- Dropped irq from coresight sysfs documentation
- Renamed get_trbe_limit() as compute_trbe_buffer_limit()
- Dropped SYSTEM_RUNNING check for system_state
- Dropped .data value from arm_trbe_of_match[]
- Dropped [set|get]_trbe_[trig|fill]_mode() helpers
- Dropped clearing TRBSR_FSC_MASK from TRBE status register
- Added a comment in arm_trbe_update_buffer()
- Updated comment for ETE_IGNORE_PACKET
- Updated comment for basic TRBE operation
- Updated TRBE buffer and trigger mode macros
- Restructured trbe_enable_hw()
- Updated trbe_snapshot_offset() to use the entire buffer
- Changed dsb(ish) as dsb(nsh) during the buffer flush
- Renamed set_trbe_flush() as trbe_drain_buffer()
- Renamed trbe_disable_and_drain_local() as trbe_drain_and_disable_local()
- Reworked sync in trbe_enable_hw(), trbe_update_buffer() and arm_trbe_irq_handler()
 Documentation/trace/coresight/coresight-trbe.rst |  39 +
 arch/arm64/include/asm/sysreg.h                  |   2 +
 drivers/hwtracing/coresight/Kconfig              |  11 +
 drivers/hwtracing/coresight/Makefile             |   1 +
 drivers/hwtracing/coresight/coresight-trbe.c     | 966 +++++++++++++++++++++++
 drivers/hwtracing/coresight/coresight-trbe.h     | 216 +++++
 6 files changed, 1235 insertions(+)
 create mode 100644 Documentation/trace/coresight/coresight-trbe.rst
 create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c
 create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
diff --git a/Documentation/trace/coresight/coresight-trbe.rst b/Documentation/trace/coresight/coresight-trbe.rst new file mode 100644 index 0000000..1cbb819 --- /dev/null +++ b/Documentation/trace/coresight/coresight-trbe.rst @@ -0,0 +1,39 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============================== +Trace Buffer Extension (TRBE). +============================== + + :Author: Anshuman Khandual anshuman.khandual@arm.com + :Date: November 2020 + +Hardware Description +-------------------- + +Trace Buffer Extension (TRBE) is a percpu hardware which captures in system +memory, CPU traces generated from a corresponding percpu tracing unit. This +gets plugged in as a coresight sink device because the corresponding trace +genarators (ETE), are plugged in as source device. + +The TRBE is not compliant to CoreSight architecture specifications, but is +driven via the CoreSight driver framework to support the ETE (which is +CoreSight compliant) integration. + +Sysfs files and directories +--------------------------- + +The TRBE devices appear on the existing coresight bus alongside the other +coresight devices:: + + >$ ls /sys/bus/coresight/devices + trbe0 trbe1 trbe2 trbe3 + +The ``trbe<N>`` named TRBEs are associated with a CPU.:: + + >$ ls /sys/bus/coresight/devices/trbe0/ + align dbm + +*Key file items are:-* + * ``align``: TRBE write pointer alignment + * ``dbm``: TRBE updates memory with access and dirty flags + diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index d60750e7..d7e65f0 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -97,6 +97,7 @@ #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift)) +#define TSB_CSYNC __emit_inst(0xd503225f)
#define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x)) #define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x)) @@ -880,6 +881,7 @@ #define ID_AA64MMFR2_CNP_SHIFT 0
/* id_aa64dfr0 */ +#define ID_AA64DFR0_TRBE_SHIFT 44 #define ID_AA64DFR0_TRACE_FILT_SHIFT 40 #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36 #define ID_AA64DFR0_PMSVER_SHIFT 32 diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index f154ae7..aa657ab 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -164,6 +164,17 @@ config CORESIGHT_CTI To compile this driver as a module, choose M here: the module will be called coresight-cti.
+config CORESIGHT_TRBE + bool "Trace Buffer Extension (TRBE) driver" + depends on ARM64 + help + This driver provides support for percpu Trace Buffer Extension (TRBE). + TRBE always needs to be used along with it's corresponding percpu ETE + component. ETE generates trace data which is then captured with TRBE. + Unlike traditional sink devices, TRBE is a CPU feature accessible via + system registers. But it's explicit dependency with trace unit (ETE) + requires it to be plugged in as a coresight sink device. + config CORESIGHT_CTI_INTEGRATION_REGS bool "Access CTI CoreSight Integration Registers" depends on CORESIGHT_CTI diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile index f20e357..d608165 100644 --- a/drivers/hwtracing/coresight/Makefile +++ b/drivers/hwtracing/coresight/Makefile @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \ coresight-cti-sysfs.o diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c new file mode 100644 index 0000000..ddc1d34 --- /dev/null +++ b/drivers/hwtracing/coresight/coresight-trbe.c @@ -0,0 +1,966 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * This driver enables Trace Buffer Extension (TRBE) as a per-cpu coresight + * sink device could then pair with an appropriate per-cpu coresight source + * device (ETE) thus generating required trace data. Trace can be enabled + * via the perf framework. + * + * Copyright (C) 2020 ARM Ltd. + * + * Author: Anshuman Khandual anshuman.khandual@arm.com + */ +#define DRVNAME "arm_trbe" + +#define pr_fmt(fmt) DRVNAME ": " fmt + +#include "coresight-trbe.h" + +#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT)) + +/* + * A padding packet that will help the user space tools + * in skipping relevant sections in the captured trace + * data which could not be decoded. TRBE doesn't support + * formatting the trace data, unlike the legacy CoreSight + * sinks and thus we use ETE trace packets to pad the + * sections of the buffer. 
+ */ +#define ETE_IGNORE_PACKET 0x70 + +enum trbe_fault_action { + TRBE_FAULT_ACT_WRAP, + TRBE_FAULT_ACT_SPURIOUS, + TRBE_FAULT_ACT_FATAL, +}; + +struct trbe_buf { + unsigned long trbe_base; + unsigned long trbe_limit; + unsigned long trbe_write; + int nr_pages; + void **pages; + bool snapshot; + struct trbe_cpudata *cpudata; +}; + +struct trbe_cpudata { + bool trbe_dbm; + u64 trbe_align; + int cpu; + enum cs_mode mode; + struct trbe_buf *buf; + struct trbe_drvdata *drvdata; +}; + +struct trbe_drvdata { + struct trbe_cpudata __percpu *cpudata; + struct perf_output_handle __percpu **handle; + struct hlist_node hotplug_node; + int irq; + cpumask_t supported_cpus; + enum cpuhp_state trbe_online; + struct platform_device *pdev; +}; + +static int trbe_alloc_node(struct perf_event *event) +{ + if (event->cpu == -1) + return NUMA_NO_NODE; + return cpu_to_node(event->cpu); +} + +static void trbe_drain_buffer(void) +{ + asm(TSB_CSYNC); + dsb(nsh); +} + +static void trbe_drain_and_disable_local(void) +{ + trbe_drain_buffer(); + write_sysreg_s(0, SYS_TRBLIMITR_EL1); + isb(); +} + +static void trbe_reset_local(void) +{ + trbe_drain_and_disable_local(); + write_sysreg_s(0, SYS_TRBPTR_EL1); + write_sysreg_s(0, SYS_TRBBASER_EL1); + write_sysreg_s(0, SYS_TRBSR_EL1); + isb(); +} + +/* + * TRBE Buffer Management + * + * The TRBE buffer spans from the base pointer till the limit pointer. When enabled, + * it starts writing trace data from the write pointer onward till the limit pointer. + * When the write pointer reaches the address just before the limit pointer, it gets + * wrapped around again to the base pointer. This is called a TRBE wrap event, which + * generates a maintenance interrupt when operated in WRAP or STOP mode. The write + * pointer again starts writing trace data from the base pointer until just before + * the limit pointer before getting wrapped again with an IRQ and this process just + * goes on as long as the TRBE is enabled. + * + * Wrap around with an IRQ + * ------ < ------ < ------- < ----- < ----- + * | | + * ------ > ------ > ------- > ----- > ----- + * + * +---------------+-----------------------+ + * | | | + * +---------------+-----------------------+ + * Base Pointer Write Pointer Limit Pointer + * + * The base and limit pointers always needs to be PAGE_SIZE aligned. But the write + * pointer can be aligned to the implementation defined TRBE trace buffer alignment + * as captured in trbe_cpudata->trbe_align. + * + * + * head tail wakeup + * +---------------------------------------+----- ~ ~ ------ + * |$$$$$$$|################|$$$$$$$$$$$$$$| | + * +---------------------------------------+----- ~ ~ ------ + * Base Pointer Write Pointer Limit Pointer + * + * The perf_output_handle indices (head, tail, wakeup) are monotonically increasing + * values which tracks all the driver writes and user reads from the perf auxiliary + * buffer. Generally [head..tail] is the area where the driver can write into unless + * the wakeup is behind the tail. Enabled TRBE buffer span needs to be adjusted and + * configured depending on the perf_output_handle indices, so that the driver does + * not override into areas in the perf auxiliary buffer which is being or yet to be + * consumed from the user space. The enabled TRBE buffer area is a moving subset of + * the allocated perf auxiliary buffer. 
+ */ +static void trbe_pad_buf(struct perf_output_handle *handle, int len) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + u64 head = PERF_IDX2OFF(handle->head, buf); + + memset((void *) buf->trbe_base + head, ETE_IGNORE_PACKET, len); + if (!buf->snapshot) + perf_aux_output_skip(handle, len); +} + +static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + + /* + * The ETE trace has alignment synchronization packets allowing + * the decoder to reset in case of an overflow or corruption. + * So we can use the entire buffer for the snapshot mode. + */ + return buf->nr_pages * PAGE_SIZE; +} + +/* + * TRBE Limit Calculation + * + * The following markers are used to illustrate various TRBE buffer situations. + * + * $$$$ - Data area, unconsumed captured trace data, not to be overridden + * #### - Free area, enabled, trace will be written + * %%%% - Free area, disabled, trace will not be written + * ==== - Free area, padded with ETE_IGNORE_PACKET, trace will be skipped + */ +static unsigned long trbe_normal_offset(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + struct trbe_cpudata *cpudata = buf->cpudata; + const u64 bufsize = buf->nr_pages * PAGE_SIZE; + u64 limit = bufsize; + u64 head, tail, wakeup; + + head = PERF_IDX2OFF(handle->head, buf); + + /* + * head + * ------->| + * | + * head TRBE align tail + * +----|-------|---------------|-------+ + * |$$$$|=======|###############|$$$$$$$| + * +----|-------|---------------|-------+ + * trbe_base trbe_base + nr_pages + * + * Perf aux buffer output head position can be misaligned depending on + * various factors including user space reads. In case misaligned, head + * needs to be aligned before TRBE can be configured. Pad the alignment + * gap with ETE_IGNORE_PACKET bytes that will be ignored by user tools + * and skip this section thus advancing the head. + */ + if (!IS_ALIGNED(head, cpudata->trbe_align)) { + unsigned long delta = roundup(head, cpudata->trbe_align) - head; + + delta = min(delta, handle->size); + trbe_pad_buf(handle, delta); + head = PERF_IDX2OFF(handle->head, buf); + } + + /* + * head = tail (size = 0) + * +----|-------------------------------+ + * |$$$$|$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ | + * +----|-------------------------------+ + * trbe_base trbe_base + nr_pages + * + * Perf aux buffer does not have any space for the driver to write into. + * Just communicate trace truncation event to the user space by marking + * it with PERF_AUX_FLAG_TRUNCATED. + */ + if (!handle->size) { + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + return 0; + } + + /* Compute the tail and wakeup indices now that we've aligned head */ + tail = PERF_IDX2OFF(handle->head + handle->size, buf); + wakeup = PERF_IDX2OFF(handle->wakeup, buf); + + /* + * Lets calculate the buffer area which TRBE could write into. There + * are three possible scenarios here. Limit needs to be aligned with + * PAGE_SIZE per the TRBE requirement. Always avoid clobbering the + * unconsumed data. + * + * 1) head < tail + * + * head tail + * +----|-----------------------|-------+ + * |$$$$|#######################|$$$$$$$| + * +----|-----------------------|-------+ + * trbe_base limit trbe_base + nr_pages + * + * TRBE could write into [head..tail] area. Unless the tail is right at + * the end of the buffer, neither an wrap around nor an IRQ is expected + * while being enabled. 
+ * + * 2) head == tail + * + * head = tail (size > 0) + * +----|-------------------------------+ + * |%%%%|###############################| + * +----|-------------------------------+ + * trbe_base limit = trbe_base + nr_pages + * + * TRBE should just write into [head..base + nr_pages] area even though + * the entire buffer is empty. Reason being, when the trace reaches the + * end of the buffer, it will just wrap around with an IRQ giving an + * opportunity to reconfigure the buffer. + * + * 3) tail < head + * + * tail head + * +----|-----------------------|-------+ + * |%%%%|$$$$$$$$$$$$$$$$$$$$$$$|#######| + * +----|-----------------------|-------+ + * trbe_base limit = trbe_base + nr_pages + * + * TRBE should just write into [head..base + nr_pages] area even though + * the [trbe_base..tail] is also empty. Reason being, when the trace + * reaches the end of the buffer, it will just wrap around with an IRQ + * giving an opportunity to reconfigure the buffer. + */ + if (head < tail) + limit = round_down(tail, PAGE_SIZE); + + /* + * Wakeup may be arbitrarily far into the future. If it's not in the + * current generation, either we'll wrap before hitting it, or it's + * in the past and has been handled already. + * + * If there's a wakeup before we wrap, arrange to be woken up by the + * page boundary following it. Keep the tail boundary if that's lower. + * + * head wakeup tail + * +----|---------------|-------|-------+ + * |$$$$|###############|%%%%%%%|$$$$$$$| + * +----|---------------|-------|-------+ + * trbe_base limit trbe_base + nr_pages + */ + if (handle->wakeup < (handle->head + handle->size) && head <= wakeup) + limit = min(limit, round_up(wakeup, PAGE_SIZE)); + + /* + * There are two situation when this can happen i.e limit is before + * the head and hence TRBE cannot be configured. + * + * 1) head < tail (aligned down with PAGE_SIZE) and also they are both + * within the same PAGE size range. + * + * PAGE_SIZE + * |----------------------| + * + * limit head tail + * +------------|------|--------|-------+ + * |$$$$$$$$$$$$$$$$$$$|========|$$$$$$$| + * +------------|------|--------|-------+ + * trbe_base trbe_base + nr_pages + * + * 2) head < wakeup (aligned up with PAGE_SIZE) < tail and also both + * head and wakeup are within same PAGE size range. 
+ * + * PAGE_SIZE + * |----------------------| + * + * limit head wakeup tail + * +----|------|-------|--------|-------+ + * |$$$$$$$$$$$|=======|========|$$$$$$$| + * +----|------|-------|--------|-------+ + * trbe_base trbe_base + nr_pages + */ + if (limit > head) + return limit; + + trbe_pad_buf(handle, handle->size); + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + return 0; +} + +static unsigned long compute_trbe_buffer_limit(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + unsigned long offset; + + if (buf->snapshot) + offset = trbe_snapshot_offset(handle); + else + offset = trbe_normal_offset(handle); + return buf->trbe_base + offset; +} + +static void clr_trbe_status(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + WARN_ON(is_trbe_enabled()); + trbsr &= ~TRBSR_IRQ; + trbsr &= ~TRBSR_TRG; + trbsr &= ~TRBSR_WRAP; + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT); + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT); + trbsr &= ~TRBSR_STOP; + write_sysreg_s(trbsr, SYS_TRBSR_EL1); +} + +static void set_trbe_limit_pointer_enabled(unsigned long addr) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + + WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT))); + WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE)); + + trblimitr &= ~TRBLIMITR_NVM; + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT); + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT); + trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT); + + /* + * Fill trace buffer mode is used here while configuring the + * TRBE for trace capture. In this particular mode, the trace + * collection is stopped and a maintenance interrupt is raised + * when the current write pointer wraps. This pause in trace + * collection gives the software an opportunity to capture the + * trace data in the interrupt handler, before reconfiguring + * the TRBE. + */ + trblimitr |= (TRBE_FILL_MODE_FILL & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT; + + /* + * Trigger mode is not used here while configuring the TRBE for + * the trace capture. Hence just keep this in the ignore mode. + */ + trblimitr |= (TRBE_TRIG_MODE_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT; + trblimitr |= (addr & PAGE_MASK); + + trblimitr |= TRBLIMITR_ENABLE; + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); +} + +static void trbe_enable_hw(struct trbe_buf *buf) +{ + WARN_ON(buf->trbe_write < buf->trbe_base); + WARN_ON(buf->trbe_write >= buf->trbe_limit); + set_trbe_disabled(); + isb(); + clr_trbe_status(); + set_trbe_base_pointer(buf->trbe_base); + set_trbe_write_pointer(buf->trbe_write); + + /* + * Synchronize all the register updates + * till now before enabling the TRBE. 
+ */ + isb(); + set_trbe_limit_pointer_enabled(buf->trbe_limit); + + /* Synchronize the TRBE enable event */ + isb(); +} + +static void *arm_trbe_alloc_buffer(struct coresight_device *csdev, + struct perf_event *event, void **pages, + int nr_pages, bool snapshot) +{ + struct trbe_buf *buf; + struct page **pglist; + int i; + + if ((nr_pages < 2) || (snapshot && (nr_pages & 1))) + return NULL; + + buf = kzalloc_node(sizeof(*buf), GFP_KERNEL, trbe_alloc_node(event)); + if (IS_ERR(buf)) + return ERR_PTR(-ENOMEM); + + pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL); + if (IS_ERR(pglist)) { + kfree(buf); + return ERR_PTR(-ENOMEM); + } + + for (i = 0; i < nr_pages; i++) + pglist[i] = virt_to_page(pages[i]); + + buf->trbe_base = (unsigned long) vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL); + if (IS_ERR((void *) buf->trbe_base)) { + kfree(pglist); + kfree(buf); + return ERR_PTR(buf->trbe_base); + } + buf->trbe_limit = buf->trbe_base + nr_pages * PAGE_SIZE; + buf->trbe_write = buf->trbe_base; + buf->snapshot = snapshot; + buf->nr_pages = nr_pages; + buf->pages = pages; + kfree(pglist); + return buf; +} + +void arm_trbe_free_buffer(void *config) +{ + struct trbe_buf *buf = config; + + vunmap((void *) buf->trbe_base); + kfree(buf); +} + +static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, + struct perf_output_handle *handle, + void *config) +{ + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); + struct trbe_buf *buf = config; + unsigned long size, offset; + + WARN_ON(buf->cpudata != cpudata); + WARN_ON(cpudata->cpu != smp_processor_id()); + WARN_ON(cpudata->drvdata != drvdata); + if (cpudata->mode != CS_MODE_PERF) + return -EINVAL; + + /* + * perf handle structure needs to be shared with the TRBE IRQ handler for + * capturing trace data and restarting the handle. There is a probability + * of an undefined reference based crash when etm event is being stopped + * while a TRBE IRQ also getting processed. This happens due the release + * of perf handle via perf_aux_output_end() in etm_event_stop(). Stopping + * the TRBE here will ensure that no IRQ could be generated when the perf + * handle gets freed in etm_event_stop(). 
+ */ + trbe_reset_local(); + offset = get_trbe_write_pointer() - get_trbe_base_pointer(); + size = offset - PERF_IDX2OFF(handle->head, buf); + if (buf->snapshot) + handle->head += size; + return size; +} + +static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data) +{ + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); + struct perf_output_handle *handle = data; + struct trbe_buf *buf = etm_perf_sink_config(handle); + + WARN_ON(cpudata->cpu != smp_processor_id()); + WARN_ON(cpudata->drvdata != drvdata); + if (mode != CS_MODE_PERF) + return -EINVAL; + + *this_cpu_ptr(drvdata->handle) = handle; + cpudata->buf = buf; + cpudata->mode = mode; + buf->cpudata = cpudata; + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); + buf->trbe_limit = compute_trbe_buffer_limit(handle); + if (buf->trbe_limit == buf->trbe_base) { + trbe_drain_and_disable_local(); + return 0; + } + trbe_enable_hw(buf); + return 0; +} + +static int arm_trbe_disable(struct coresight_device *csdev) +{ + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); + struct trbe_buf *buf = cpudata->buf; + + WARN_ON(buf->cpudata != cpudata); + WARN_ON(cpudata->cpu != smp_processor_id()); + WARN_ON(cpudata->drvdata != drvdata); + if (cpudata->mode != CS_MODE_PERF) + return -EINVAL; + + trbe_drain_and_disable_local(); + buf->cpudata = NULL; + cpudata->buf = NULL; + cpudata->mode = CS_MODE_DISABLED; + return 0; +} + +static void trbe_handle_fatal(struct perf_output_handle *handle) +{ + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + perf_aux_output_end(handle, 0); + trbe_drain_and_disable_local(); +} + +static void trbe_handle_spurious(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); + buf->trbe_limit = compute_trbe_buffer_limit(handle); + if (buf->trbe_limit == buf->trbe_base) { + trbe_drain_and_disable_local(); + return; + } + trbe_enable_hw(buf); +} + +static void trbe_handle_overflow(struct perf_output_handle *handle) +{ + struct perf_event *event = handle->event; + struct trbe_buf *buf = etm_perf_sink_config(handle); + unsigned long offset, size; + struct etm_event_data *event_data; + + offset = get_trbe_limit_pointer() - get_trbe_base_pointer(); + size = offset - PERF_IDX2OFF(handle->head, buf); + if (buf->snapshot) + handle->head = offset; + perf_aux_output_end(handle, size); + + event_data = perf_aux_output_begin(handle, event); + if (!event_data) { + event->hw.state |= PERF_HES_STOPPED; + trbe_drain_and_disable_local(); + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + return; + } + buf->trbe_write = buf->trbe_base; + buf->trbe_limit = compute_trbe_buffer_limit(handle); + if (buf->trbe_limit == buf->trbe_base) { + trbe_drain_and_disable_local(); + return; + } + *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle; + trbe_enable_hw(buf); +} + +static bool is_perf_trbe(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + struct trbe_cpudata *cpudata = buf->cpudata; + struct trbe_drvdata *drvdata = cpudata->drvdata; + int cpu = smp_processor_id(); + + WARN_ON(buf->trbe_base != get_trbe_base_pointer()); + WARN_ON(buf->trbe_limit != get_trbe_limit_pointer()); + + if (cpudata->mode != CS_MODE_PERF) + return false; + + if (cpudata->cpu != cpu) + return false; + + if 
(!cpumask_test_cpu(cpu, &drvdata->supported_cpus)) + return false; + + return true; +} + +static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle) +{ + int ec = get_trbe_ec(); + int bsc = get_trbe_bsc(); + + WARN_ON(is_trbe_running()); + if (is_trbe_trg() || is_trbe_abort()) + return TRBE_FAULT_ACT_FATAL; + + if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT)) + return TRBE_FAULT_ACT_FATAL; + + if (is_trbe_wrap() && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) { + if (get_trbe_write_pointer() == get_trbe_base_pointer()) + return TRBE_FAULT_ACT_WRAP; + } + return TRBE_FAULT_ACT_SPURIOUS; +} + +static irqreturn_t arm_trbe_irq_handler(int irq, void *dev) +{ + struct perf_output_handle **handle_ptr = dev; + struct perf_output_handle *handle = *handle_ptr; + enum trbe_fault_action act; + + WARN_ON(!is_trbe_irq()); + clr_trbe_irq(); + + /* + * Ensure the trace is visible to the CPUs and + * any external aborts have been resolved. + */ + trbe_drain_buffer(); + isb(); + + if (!perf_get_aux(handle)) + return IRQ_NONE; + + if (!is_perf_trbe(handle)) + return IRQ_NONE; + + irq_work_run(); + + act = trbe_get_fault_act(handle); + switch (act) { + case TRBE_FAULT_ACT_WRAP: + trbe_handle_overflow(handle); + break; + case TRBE_FAULT_ACT_SPURIOUS: + trbe_handle_spurious(handle); + break; + case TRBE_FAULT_ACT_FATAL: + trbe_handle_fatal(handle); + break; + } + return IRQ_HANDLED; +} + +static const struct coresight_ops_sink arm_trbe_sink_ops = { + .enable = arm_trbe_enable, + .disable = arm_trbe_disable, + .alloc_buffer = arm_trbe_alloc_buffer, + .free_buffer = arm_trbe_free_buffer, + .update_buffer = arm_trbe_update_buffer, +}; + +static const struct coresight_ops arm_trbe_cs_ops = { + .sink_ops = &arm_trbe_sink_ops, +}; + +static ssize_t align_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct trbe_cpudata *cpudata = dev_get_drvdata(dev); + + return sprintf(buf, "%llx\n", cpudata->trbe_align); +} +static DEVICE_ATTR_RO(align); + +static ssize_t dbm_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct trbe_cpudata *cpudata = dev_get_drvdata(dev); + + return sprintf(buf, "%d\n", cpudata->trbe_dbm); +} +static DEVICE_ATTR_RO(dbm); + +static struct attribute *arm_trbe_attrs[] = { + &dev_attr_align.attr, + &dev_attr_dbm.attr, + NULL, +}; + +static const struct attribute_group arm_trbe_group = { + .attrs = arm_trbe_attrs, +}; + +static const struct attribute_group *arm_trbe_groups[] = { + &arm_trbe_group, + NULL, +}; + +static void arm_trbe_probe_coresight_cpu(void *info) +{ + struct trbe_drvdata *drvdata = info; + struct coresight_desc desc = { 0 }; + int cpu = smp_processor_id(); + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu); + struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu); + struct device *dev; + + if (WARN_ON(!cpudata)) + goto cpu_clear; + + if (trbe_csdev) + return; + + cpudata->cpu = smp_processor_id(); + cpudata->drvdata = drvdata; + dev = &cpudata->drvdata->pdev->dev; + + if (!is_trbe_available()) { + pr_err("TRBE is not implemented on cpu %d\n", cpudata->cpu); + goto cpu_clear; + } + + if (!is_trbe_programmable()) { + pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu); + goto cpu_clear; + } + desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", DRVNAME, smp_processor_id()); + if (IS_ERR(desc.name)) + goto cpu_clear; + + desc.type = CORESIGHT_DEV_TYPE_SINK; + desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM; + desc.ops = 
&arm_trbe_cs_ops; + desc.pdata = dev_get_platdata(dev); + desc.groups = arm_trbe_groups; + desc.dev = dev; + trbe_csdev = coresight_register(&desc); + if (IS_ERR(trbe_csdev)) + goto cpu_clear; + + dev_set_drvdata(&trbe_csdev->dev, cpudata); + cpudata->trbe_dbm = get_trbe_flag_update(); + cpudata->trbe_align = 1ULL << get_trbe_address_align(); + if (cpudata->trbe_align > SZ_2K) { + pr_err("Unsupported alignment on cpu %d\n", cpudata->cpu); + goto cpu_clear; + } + per_cpu(csdev_sink, cpu) = trbe_csdev; + trbe_reset_local(); + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE); + return; +cpu_clear: + cpumask_clear_cpu(cpudata->cpu, &cpudata->drvdata->supported_cpus); +} + +static void arm_trbe_remove_coresight_cpu(void *info) +{ + int cpu = smp_processor_id(); + struct trbe_drvdata *drvdata = info; + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu); + struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu); + + if (trbe_csdev) { + coresight_unregister(trbe_csdev); + cpudata->drvdata = NULL; + per_cpu(csdev_sink, cpu) = NULL; + } + disable_percpu_irq(drvdata->irq); + trbe_reset_local(); +} + +static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata) +{ + drvdata->cpudata = alloc_percpu(typeof(*drvdata->cpudata)); + if (IS_ERR(drvdata->cpudata)) + return PTR_ERR(drvdata->cpudata); + + arm_trbe_probe_coresight_cpu(drvdata); + smp_call_function_many(&drvdata->supported_cpus, arm_trbe_probe_coresight_cpu, drvdata, 1); + return 0; +} + +static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata) +{ + arm_trbe_remove_coresight_cpu(drvdata); + smp_call_function_many(&drvdata->supported_cpus, arm_trbe_remove_coresight_cpu, drvdata, 1); + free_percpu(drvdata->cpudata); + return 0; +} + +static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node) +{ + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node); + + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) { + if (!per_cpu(csdev_sink, cpu)) { + arm_trbe_probe_coresight_cpu(drvdata); + } else { + trbe_reset_local(); + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE); + } + } + return 0; +} + +static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node) +{ + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node); + + if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) { + disable_percpu_irq(drvdata->irq); + trbe_reset_local(); + } + return 0; +} + +static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata) +{ + enum cpuhp_state trbe_online; + + trbe_online = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRVNAME, + arm_trbe_cpu_startup, arm_trbe_cpu_teardown); + if (trbe_online < 0) + return -EINVAL; + + if (cpuhp_state_add_instance(trbe_online, &drvdata->hotplug_node)) + return -EINVAL; + + drvdata->trbe_online = trbe_online; + return 0; +} + +static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata) +{ + cpuhp_remove_multi_state(drvdata->trbe_online); +} + +static int arm_trbe_probe_irq(struct platform_device *pdev, + struct trbe_drvdata *drvdata) +{ + drvdata->irq = platform_get_irq(pdev, 0); + if (!drvdata->irq) { + pr_err("IRQ not found for the platform device\n"); + return -ENXIO; + } + + if (!irq_is_percpu(drvdata->irq)) { + pr_err("IRQ is not a PPI\n"); + return -EINVAL; + } + + if (irq_get_percpu_devid_partition(drvdata->irq, &drvdata->supported_cpus)) + return -EINVAL; + + drvdata->handle = alloc_percpu(typeof(*drvdata->handle)); + if (!drvdata->handle) + return -ENOMEM; + + if 
(request_percpu_irq(drvdata->irq, arm_trbe_irq_handler, DRVNAME, drvdata->handle)) { + free_percpu(drvdata->handle); + return -EINVAL; + } + return 0; +} + +static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata) +{ + free_percpu_irq(drvdata->irq, drvdata->handle); + free_percpu(drvdata->handle); +} + +static int arm_trbe_device_probe(struct platform_device *pdev) +{ + struct coresight_platform_data *pdata; + struct trbe_drvdata *drvdata; + struct device *dev = &pdev->dev; + int ret; + + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); + if (IS_ERR(drvdata)) + return -ENOMEM; + + pdata = coresight_get_platform_data(dev); + if (IS_ERR(pdata)) { + kfree(drvdata); + return -ENOMEM; + } + + dev_set_drvdata(dev, drvdata); + dev->platform_data = pdata; + drvdata->pdev = pdev; + ret = arm_trbe_probe_irq(pdev, drvdata); + if (ret) + goto irq_failed; + + ret = arm_trbe_probe_coresight(drvdata); + if (ret) + goto probe_failed; + + ret = arm_trbe_probe_cpuhp(drvdata); + if (ret) + goto cpuhp_failed; + + return 0; +cpuhp_failed: + arm_trbe_remove_coresight(drvdata); +probe_failed: + arm_trbe_remove_irq(drvdata); +irq_failed: + kfree(pdata); + kfree(drvdata); + return ret; +} + +static int arm_trbe_device_remove(struct platform_device *pdev) +{ + struct coresight_platform_data *pdata = dev_get_platdata(&pdev->dev); + struct trbe_drvdata *drvdata = platform_get_drvdata(pdev); + + arm_trbe_remove_coresight(drvdata); + arm_trbe_remove_cpuhp(drvdata); + arm_trbe_remove_irq(drvdata); + kfree(pdata); + kfree(drvdata); + return 0; +} + +static const struct of_device_id arm_trbe_of_match[] = { + { .compatible = "arm,trace-buffer-extension"}, + {}, +}; +MODULE_DEVICE_TABLE(of, arm_trbe_of_match); + +static struct platform_driver arm_trbe_driver = { + .driver = { + .name = DRVNAME, + .of_match_table = of_match_ptr(arm_trbe_of_match), + .suppress_bind_attrs = true, + }, + .probe = arm_trbe_device_probe, + .remove = arm_trbe_device_remove, +}; + +static int __init arm_trbe_init(void) +{ + int ret; + + ret = platform_driver_register(&arm_trbe_driver); + if (!ret) + return 0; + + pr_err("Error registering %s platform driver\n", DRVNAME); + return ret; +} + +static void __exit arm_trbe_exit(void) +{ + platform_driver_unregister(&arm_trbe_driver); +} +module_init(arm_trbe_init); +module_exit(arm_trbe_exit); + +MODULE_AUTHOR("Anshuman Khandual anshuman.khandual@arm.com"); +MODULE_DESCRIPTION("Arm Trace Buffer Extension (TRBE) driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/hwtracing/coresight/coresight-trbe.h b/drivers/hwtracing/coresight/coresight-trbe.h new file mode 100644 index 0000000..d9f5079 --- /dev/null +++ b/drivers/hwtracing/coresight/coresight-trbe.h @@ -0,0 +1,216 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * This contains all required hardware related helper functions for + * Trace Buffer Extension (TRBE) driver in the coresight framework. + * + * Copyright (C) 2020 ARM Ltd. 
+ * + * Author: Anshuman Khandual anshuman.khandual@arm.com + */ +#include <linux/coresight.h> +#include <linux/device.h> +#include <linux/irq.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/platform_device.h> +#include <linux/smp.h> + +#include "coresight-etm-perf.h" + +DECLARE_PER_CPU(struct coresight_device *, csdev_sink); + +static inline bool is_trbe_available(void) +{ + u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1); + int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT); + + return trbe >= 0b0001; +} + +static inline bool is_trbe_enabled(void) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + + return trblimitr & TRBLIMITR_ENABLE; +} + +#define TRBE_EC_OTHERS 0 +#define TRBE_EC_STAGE1_ABORT 36 +#define TRBE_EC_STAGE2_ABORT 37 + +static inline int get_trbe_ec(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK; +} + +#define TRBE_BSC_NOT_STOPPED 0 +#define TRBE_BSC_FILLED 1 +#define TRBE_BSC_TRIGGERED 2 + +static inline int get_trbe_bsc(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK; +} + +static inline void clr_trbe_irq(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + trbsr &= ~TRBSR_IRQ; + write_sysreg_s(trbsr, SYS_TRBSR_EL1); +} + +static inline bool is_trbe_irq(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return trbsr & TRBSR_IRQ; +} + +static inline bool is_trbe_trg(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return trbsr & TRBSR_TRG; +} + +static inline bool is_trbe_wrap(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return trbsr & TRBSR_WRAP; +} + +static inline bool is_trbe_abort(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return trbsr & TRBSR_ABORT; +} + +static inline bool is_trbe_running(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + return !(trbsr & TRBSR_STOP); +} + +static inline void set_trbe_running(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1); + + trbsr &= ~TRBSR_STOP; + write_sysreg_s(trbsr, SYS_TRBSR_EL1); +} + +static inline void set_trbe_virtual_mode(void) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + + trblimitr &= ~TRBLIMITR_NVM; + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); +} + +#define TRBE_TRIG_MODE_STOP 0 +#define TRBE_TRIG_MODE_IRQ 1 +#define TRBE_TRIG_MODE_IGNORE 3 + +#define TRBE_FILL_MODE_FILL 0 +#define TRBE_FILL_MODE_WRAP 1 +#define TRBE_FILL_MODE_CIRCULAR_BUFFER 3 + +static inline void set_trbe_disabled(void) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + + trblimitr &= ~TRBLIMITR_ENABLE; + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); +} + +static inline void set_trbe_enabled(void) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + + trblimitr |= TRBLIMITR_ENABLE; + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); +} + +static inline bool get_trbe_flag_update(void) +{ + u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1); + + return trbidr & TRBIDR_FLAG; +} + +static inline bool is_trbe_programmable(void) +{ + u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1); + + return !(trbidr & TRBIDR_PROG); +} + +static inline int get_trbe_address_align(void) +{ + u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1); + + return (trbidr >> TRBIDR_ALIGN_SHIFT) & TRBIDR_ALIGN_MASK; +} + +static inline unsigned long get_trbe_write_pointer(void) +{ + u64 trbptr = read_sysreg_s(SYS_TRBPTR_EL1); + unsigned long addr = (trbptr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK; + + return addr; +} + +static inline void 
set_trbe_write_pointer(unsigned long addr) +{ + WARN_ON(is_trbe_enabled()); + addr = (addr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK; + write_sysreg_s(addr, SYS_TRBPTR_EL1); +} + +static inline unsigned long get_trbe_limit_pointer(void) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + unsigned long limit = (trblimitr >> TRBLIMITR_LIMIT_SHIFT) & TRBLIMITR_LIMIT_MASK; + unsigned long addr = limit << TRBLIMITR_LIMIT_SHIFT; + + WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE)); + return addr; +} + +static inline void set_trbe_limit_pointer(unsigned long addr) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1); + + WARN_ON(is_trbe_enabled()); + WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT))); + WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE)); + trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT); + trblimitr |= (addr & PAGE_MASK); + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); +} + +static inline unsigned long get_trbe_base_pointer(void) +{ + u64 trbbaser = read_sysreg_s(SYS_TRBBASER_EL1); + unsigned long addr = (trbbaser >> TRBBASER_BASE_SHIFT) & TRBBASER_BASE_MASK; + + addr = addr << TRBBASER_BASE_SHIFT; + WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE)); + return addr; +} + +static inline void set_trbe_base_pointer(unsigned long addr) +{ + WARN_ON(is_trbe_enabled()); + WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT))); + WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE)); + write_sysreg_s(addr, SYS_TRBBASER_EL1); +}
Hi Anshuman,
The driver looks overall good to me. Please find some minor comments below
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes, including CPU virtual address, and buffer modes, including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). However, access to the trace buffer could be prohibited by a higher exception level (EL3 or EL2), as indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Changes in V2:
Dropped irq from coresight sysfs documentation
Renamed get_trbe_limit() as compute_trbe_buffer_limit()
Dropped SYSTEM_RUNNING check for system_state
Dropped .data value from arm_trbe_of_match[]
Dropped [set|get]_trbe_[trig|fill]_mode() helpers
Dropped clearing TRBSR_FSC_MASK from TRBE status register
Added a comment in arm_trbe_update_buffer()
Updated comment for ETE_IGNORE_PACKET
Updated comment for basic TRBE operation
Updated TRBE buffer and trigger mode macros
Restructured trbe_enable_hw()
Updated trbe_snapshot_offset() to use the entire buffer
Changed dsb(ish) as dsb(nsh) during the buffer flush
Renamed set_trbe_flush() as trbe_drain_buffer()
Renamed trbe_disable_and_drain_local() as trbe_drain_and_disable_local()
Reworked sync in trbe_enable_hw(), trbe_update_buffer() and arm_trbe_irq_handler()
Documentation/trace/coresight/coresight-trbe.rst | 39 + arch/arm64/include/asm/sysreg.h | 2 + drivers/hwtracing/coresight/Kconfig | 11 + drivers/hwtracing/coresight/Makefile | 1 + drivers/hwtracing/coresight/coresight-trbe.c | 966 +++++++++++++++++++++++ drivers/hwtracing/coresight/coresight-trbe.h | 216 +++++ 6 files changed, 1235 insertions(+) create mode 100644 Documentation/trace/coresight/coresight-trbe.rst create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
diff --git a/Documentation/trace/coresight/coresight-trbe.rst b/Documentation/trace/coresight/coresight-trbe.rst new file mode 100644 index 0000000..1cbb819 --- /dev/null +++ b/Documentation/trace/coresight/coresight-trbe.rst @@ -0,0 +1,39 @@ +.. SPDX-License-Identifier: GPL-2.0
+============================== +Trace Buffer Extension (TRBE). +==============================
- :Author: Anshuman Khandual anshuman.khandual@arm.com
- :Date: November 2020
+Hardware Description +--------------------
+Trace Buffer Extension (TRBE) is a percpu hardware which captures in system +memory, CPU traces generated from a corresponding percpu tracing unit. This +gets plugged in as a coresight sink device because the corresponding trace +genarators (ETE), are plugged in as source device.
+The TRBE is not compliant to CoreSight architecture specifications, but is +driven via the CoreSight driver framework to support the ETE (which is +CoreSight compliant) integration.
+Sysfs files and directories +---------------------------
+The TRBE devices appear on the existing coresight bus alongside the other +coresight devices::
+   >$ ls /sys/bus/coresight/devices
+   trbe0  trbe1  trbe2  trbe3
+The ``trbe<N>`` named TRBEs are associated with a CPU.::
+   >$ ls /sys/bus/coresight/devices/trbe0/
+   align  dbm
+*Key file items are:-*
- ``align``: TRBE write pointer alignment
- ``dbm``: TRBE updates memory with access and dirty flags
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index d60750e7..d7e65f0 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -97,6 +97,7 @@ #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift)) +#define TSB_CSYNC __emit_inst(0xd503225f) #define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x)) #define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x)) @@ -880,6 +881,7 @@ #define ID_AA64MMFR2_CNP_SHIFT 0 /* id_aa64dfr0 */ +#define ID_AA64DFR0_TRBE_SHIFT 44 #define ID_AA64DFR0_TRACE_FILT_SHIFT 40 #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36 #define ID_AA64DFR0_PMSVER_SHIFT 32 diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index f154ae7..aa657ab 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -164,6 +164,17 @@ config CORESIGHT_CTI To compile this driver as a module, choose M here: the module will be called coresight-cti. +config CORESIGHT_TRBE
- bool "Trace Buffer Extension (TRBE) driver"
- depends on ARM64
- help
This driver provides support for percpu Trace Buffer Extension (TRBE).
TRBE always needs to be used along with it's corresponding percpu ETE
component. ETE generates trace data which is then captured with TRBE.
Unlike traditional sink devices, TRBE is a CPU feature accessible via
system registers. But it's explicit dependency with trace unit (ETE)
requires it to be plugged in as a coresight sink device.
- config CORESIGHT_CTI_INTEGRATION_REGS bool "Access CTI CoreSight Integration Registers" depends on CORESIGHT_CTI
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile index f20e357..d608165 100644 --- a/drivers/hwtracing/coresight/Makefile +++ b/drivers/hwtracing/coresight/Makefile @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \ coresight-cti-sysfs.o diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c new file mode 100644 index 0000000..ddc1d34 --- /dev/null +++ b/drivers/hwtracing/coresight/coresight-trbe.c @@ -0,0 +1,966 @@ +// SPDX-License-Identifier: GPL-2.0 +/*
- This driver enables Trace Buffer Extension (TRBE) as a per-cpu coresight
- sink device could then pair with an appropriate per-cpu coresight source
- device (ETE) thus generating required trace data. Trace can be enabled
- via the perf framework.
- Copyright (C) 2020 ARM Ltd.
- Author: Anshuman Khandual anshuman.khandual@arm.com
- */
+#define DRVNAME "arm_trbe"
+#define pr_fmt(fmt) DRVNAME ": " fmt
+#include "coresight-trbe.h"
+#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT))
+/*
- A padding packet that will help the user space tools
- in skipping relevant sections in the captured trace
- data which could not be decoded. TRBE doesn't support
- formatting the trace data, unlike the legacy CoreSight
- sinks and thus we use ETE trace packets to pad the
- sections of the buffer.
- */
+#define ETE_IGNORE_PACKET 0x70
+enum trbe_fault_action {
- TRBE_FAULT_ACT_WRAP,
- TRBE_FAULT_ACT_SPURIOUS,
- TRBE_FAULT_ACT_FATAL,
+};
+struct trbe_buf {
- unsigned long trbe_base;
- unsigned long trbe_limit;
- unsigned long trbe_write;
- int nr_pages;
- void **pages;
- bool snapshot;
- struct trbe_cpudata *cpudata;
+};
+struct trbe_cpudata {
- bool trbe_dbm;
- u64 trbe_align;
- int cpu;
- enum cs_mode mode;
- struct trbe_buf *buf;
- struct trbe_drvdata *drvdata;
+};
+struct trbe_drvdata {
- struct trbe_cpudata __percpu *cpudata;
- struct perf_output_handle __percpu **handle;
- struct hlist_node hotplug_node;
- int irq;
- cpumask_t supported_cpus;
- enum cpuhp_state trbe_online;
- struct platform_device *pdev;
+};
+static int trbe_alloc_node(struct perf_event *event) +{
- if (event->cpu == -1)
return NUMA_NO_NODE;
- return cpu_to_node(event->cpu);
+}
+static void trbe_drain_buffer(void) +{
- asm(TSB_CSYNC);
- dsb(nsh);
+}
+static void trbe_drain_and_disable_local(void) +{
- trbe_drain_buffer();
- write_sysreg_s(0, SYS_TRBLIMITR_EL1);
- isb();
+}
+static void trbe_reset_local(void) +{
- trbe_drain_and_disable_local();
- write_sysreg_s(0, SYS_TRBPTR_EL1);
- write_sysreg_s(0, SYS_TRBBASER_EL1);
- write_sysreg_s(0, SYS_TRBSR_EL1);
- isb();
This isb() is not necessary.
+}
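For clarity, the function with that suggestion applied would read as below (a sketch of the suggested change, not a hunk from the series):

static void trbe_reset_local(void)
{
	trbe_drain_and_disable_local();
	write_sysreg_s(0, SYS_TRBPTR_EL1);
	write_sysreg_s(0, SYS_TRBBASER_EL1);
	write_sysreg_s(0, SYS_TRBSR_EL1);
}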
+/*
- TRBE Buffer Management
- The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
- it starts writing trace data from the write pointer onward till the limit pointer.
- When the write pointer reaches the address just before the limit pointer, it gets
- wrapped around again to the base pointer. This is called a TRBE wrap event, which
- generates a maintenance interrupt when operated in WRAP or STOP mode.
According to the TRM, it is FILL mode, instead of STOP. So please change the above to:
"operated in WRAP or FILL mode".
The write
- pointer again starts writing trace data from the base pointer until just before
- the limit pointer before getting wrapped again with an IRQ and this process just
- goes on as long as the TRBE is enabled.
This could be dropped as it applies to WRAP/CIRCULAR buffer mode, which we don't use.
- Wrap around with an IRQ
- ------ < ------ < ------- < ----- < -----
- | |
- ------ > ------ > ------- > ----- > -----
- +---------------+-----------------------+
- | | |
- +---------------+-----------------------+
- Base Pointer Write Pointer Limit Pointer
- The base and limit pointers always needs to be PAGE_SIZE aligned. But the write
- pointer can be aligned to the implementation defined TRBE trace buffer alignment
- as captured in trbe_cpudata->trbe_align.
head tail wakeup
- +---------------------------------------+----- ~ ~ ------
- |$$$$$$$|################|$$$$$$$$$$$$$$| |
- +---------------------------------------+----- ~ ~ ------
- Base Pointer Write Pointer Limit Pointer
- The perf_output_handle indices (head, tail, wakeup) are monotonically increasing
- values which tracks all the driver writes and user reads from the perf auxiliary
- buffer. Generally [head..tail] is the area where the driver can write into unless
- the wakeup is behind the tail. Enabled TRBE buffer span needs to be adjusted and
- configured depending on the perf_output_handle indices, so that the driver does
- not override into areas in the perf auxiliary buffer which is being or yet to be
- consumed from the user space. The enabled TRBE buffer area is a moving subset of
- the allocated perf auxiliary buffer.
- */
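As a quick illustration of the index folding described above (the numbers below are made up purely for this example), PERF_IDX2OFF() maps the free-running handle indices into the ring buffer:

	/*
	 * Example: nr_pages = 4 with 4K pages gives a 0x4000 byte AUX ring,
	 * so a free-running handle->head of 0x12345 folds to an offset of
	 * 0x12345 % 0x4000 = 0x2345 from trbe_base.
	 */
	u64 head = PERF_IDX2OFF(handle->head, buf);	/* 0x2345 in this example */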
+static void trbe_pad_buf(struct perf_output_handle *handle, int len) +{
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- u64 head = PERF_IDX2OFF(handle->head, buf);
- memset((void *) buf->trbe_base + head, ETE_IGNORE_PACKET, len);
minor nit: You don't need a space after "(type *)" for casting, here and below at some other places.
- if (!buf->snapshot)
perf_aux_output_skip(handle, len);
+}
+static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle) +{
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- /*
* The ETE trace has alignment synchronization packets allowing
* the decoder to reset in case of an overflow or corruption.
* So we can use the entire buffer for the snapshot mode.
*/
- return buf->nr_pages * PAGE_SIZE;
+}
+/*
- TRBE Limit Calculation
- The following markers are used to illustrate various TRBE buffer situations.
- $$$$ - Data area, unconsumed captured trace data, not to be overridden
- #### - Free area, enabled, trace will be written
- %%%% - Free area, disabled, trace will not be written
- ==== - Free area, padded with ETE_IGNORE_PACKET, trace will be skipped
- */
+static unsigned long trbe_normal_offset(struct perf_output_handle *handle) +{
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- struct trbe_cpudata *cpudata = buf->cpudata;
- const u64 bufsize = buf->nr_pages * PAGE_SIZE;
- u64 limit = bufsize;
- u64 head, tail, wakeup;
- head = PERF_IDX2OFF(handle->head, buf);
- /*
* head
* ------->|
* |
* head TRBE align tail
* +----|-------|---------------|-------+
* |$$$$|=======|###############|$$$$$$$|
* +----|-------|---------------|-------+
* trbe_base trbe_base + nr_pages
*
* Perf aux buffer output head position can be misaligned depending on
* various factors including user space reads. In case misaligned, head
* needs to be aligned before TRBE can be configured. Pad the alignment
* gap with ETE_IGNORE_PACKET bytes that will be ignored by user tools
* and skip this section thus advancing the head.
*/
- if (!IS_ALIGNED(head, cpudata->trbe_align)) {
unsigned long delta = roundup(head, cpudata->trbe_align) - head;
delta = min(delta, handle->size);
trbe_pad_buf(handle, delta);
head = PERF_IDX2OFF(handle->head, buf);
- }
- /*
* head = tail (size = 0)
* +----|-------------------------------+
* |$$$$|$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ |
* +----|-------------------------------+
* trbe_base trbe_base + nr_pages
*
* Perf aux buffer does not have any space for the driver to write into.
* Just communicate trace truncation event to the user space by marking
* it with PERF_AUX_FLAG_TRUNCATED.
*/
- if (!handle->size) {
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
return 0;
- }
- /* Compute the tail and wakeup indices now that we've aligned head */
- tail = PERF_IDX2OFF(handle->head + handle->size, buf);
- wakeup = PERF_IDX2OFF(handle->wakeup, buf);
- /*
* Lets calculate the buffer area which TRBE could write into. There
* are three possible scenarios here. Limit needs to be aligned with
* PAGE_SIZE per the TRBE requirement. Always avoid clobbering the
* unconsumed data.
*
* 1) head < tail
*
* head tail
* +----|-----------------------|-------+
* |$$$$|#######################|$$$$$$$|
* +----|-----------------------|-------+
* trbe_base limit trbe_base + nr_pages
*
* TRBE could write into [head..tail] area. Unless the tail is right at
* the end of the buffer, neither a wrap around nor an IRQ is expected
* while being enabled.
*
* 2) head == tail
*
* head = tail (size > 0)
* +----|-------------------------------+
* |%%%%|###############################|
* +----|-------------------------------+
* trbe_base limit = trbe_base + nr_pages
*
* TRBE should just write into [head..base + nr_pages] area even though
* the entire buffer is empty. Reason being, when the trace reaches the
* end of the buffer, it will just wrap around with an IRQ giving an
* opportunity to reconfigure the buffer.
*
* 3) tail < head
*
* tail head
* +----|-----------------------|-------+
* |%%%%|$$$$$$$$$$$$$$$$$$$$$$$|#######|
* +----|-----------------------|-------+
* trbe_base limit = trbe_base + nr_pages
*
* TRBE should just write into [head..base + nr_pages] area even though
* the [trbe_base..tail] is also empty. Reason being, when the trace
* reaches the end of the buffer, it will just wrap around with an IRQ
* giving an opportunity to reconfigure the buffer.
*/
- if (head < tail)
limit = round_down(tail, PAGE_SIZE);
- /*
* Wakeup may be arbitrarily far into the future. If it's not in the
* current generation, either we'll wrap before hitting it, or it's
* in the past and has been handled already.
*
* If there's a wakeup before we wrap, arrange to be woken up by the
* page boundary following it. Keep the tail boundary if that's lower.
*
* head wakeup tail
* +----|---------------|-------|-------+
* |$$$$|###############|%%%%%%%|$$$$$$$|
* +----|---------------|-------|-------+
* trbe_base limit trbe_base + nr_pages
*/
- if (handle->wakeup < (handle->head + handle->size) && head <= wakeup)
limit = min(limit, round_up(wakeup, PAGE_SIZE));
- /*
* There are two situations when this can happen, i.e. the limit is before
* the head and hence TRBE cannot be configured.
*
* 1) head < tail (aligned down with PAGE_SIZE) and also they are both
* within the same PAGE size range.
*
* PAGE_SIZE
* |----------------------|
*
* limit head tail
* +------------|------|--------|-------+
* |$$$$$$$$$$$$$$$$$$$|========|$$$$$$$|
* +------------|------|--------|-------+
* trbe_base trbe_base + nr_pages
*
* 2) head < wakeup (aligned up with PAGE_SIZE) < tail and also both
* head and wakeup are within same PAGE size range.
*
* PAGE_SIZE
* |----------------------|
*
* limit head wakeup tail
* +----|------|-------|--------|-------+
* |$$$$$$$$$$$|=======|========|$$$$$$$|
* +----|------|-------|--------|-------+
* trbe_base trbe_base + nr_pages
*/
- if (limit > head)
return limit;
- trbe_pad_buf(handle, handle->size);
- perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
- return 0;
+}
+static unsigned long compute_trbe_buffer_limit(struct perf_output_handle *handle) +{
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- unsigned long offset;
- if (buf->snapshot)
offset = trbe_snapshot_offset(handle);
- else
offset = trbe_normal_offset(handle);
- return buf->trbe_base + offset;
+}
+static void clr_trbe_status(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- WARN_ON(is_trbe_enabled());
- trbsr &= ~TRBSR_IRQ;
- trbsr &= ~TRBSR_TRG;
- trbsr &= ~TRBSR_WRAP;
- trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT);
- trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT);
- trbsr &= ~TRBSR_STOP;
- write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
+static void set_trbe_limit_pointer_enabled(unsigned long addr) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT)));
- WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
- trblimitr &= ~TRBLIMITR_NVM;
- trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT);
- trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT);
- trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT);
- /*
* Fill trace buffer mode is used here while configuring the
* TRBE for trace capture. In this particular mode, the trace
* collection is stopped and a maintenance interrupt is raised
* when the current write pointer wraps. This pause in trace
* collection gives the software an opportunity to capture the
* trace data in the interrupt handler, before reconfiguring
* the TRBE.
*/
- trblimitr |= (TRBE_FILL_MODE_FILL & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
- /*
* Trigger mode is not used here while configuring the TRBE for
* the trace capture. Hence just keep this in the ignore mode.
*/
- trblimitr |= (TRBE_TRIG_MODE_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT;
- trblimitr |= (addr & PAGE_MASK);
- trblimitr |= TRBLIMITR_ENABLE;
- write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+static void trbe_enable_hw(struct trbe_buf *buf) +{
- WARN_ON(buf->trbe_write < buf->trbe_base);
- WARN_ON(buf->trbe_write >= buf->trbe_limit);
- set_trbe_disabled();
- isb();
- clr_trbe_status();
- set_trbe_base_pointer(buf->trbe_base);
- set_trbe_write_pointer(buf->trbe_write);
- /*
* Synchronize all the register updates
* till now before enabling the TRBE.
*/
- isb();
- set_trbe_limit_pointer_enabled(buf->trbe_limit);
- /* Synchronize the TRBE enable event */
- isb();
+}
+static void *arm_trbe_alloc_buffer(struct coresight_device *csdev,
struct perf_event *event, void **pages,
int nr_pages, bool snapshot)
+{
- struct trbe_buf *buf;
- struct page **pglist;
- int i;
- if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
This restriction on snapshot could be removed now, since we use the full buffer.
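i.e. the check could then be reduced to something like (sketch):

	if (nr_pages < 2)
		return NULL;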
return NULL;
- buf = kzalloc_node(sizeof(*buf), GFP_KERNEL, trbe_alloc_node(event));
- if (IS_ERR(buf))
return ERR_PTR(-ENOMEM);
- pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL);
- if (IS_ERR(pglist)) {
kfree(buf);
return ERR_PTR(-ENOMEM);
- }
- for (i = 0; i < nr_pages; i++)
pglist[i] = virt_to_page(pages[i]);
- buf->trbe_base = (unsigned long) vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL);
- if (IS_ERR((void *) buf->trbe_base)) {
kfree(pglist);
kfree(buf);
return ERR_PTR(buf->trbe_base);
- }
- buf->trbe_limit = buf->trbe_base + nr_pages * PAGE_SIZE;
- buf->trbe_write = buf->trbe_base;
- buf->snapshot = snapshot;
- buf->nr_pages = nr_pages;
- buf->pages = pages;
- kfree(pglist);
- return buf;
+}
+void arm_trbe_free_buffer(void *config) +{
- struct trbe_buf *buf = config;
- vunmap((void *) buf->trbe_base);
- kfree(buf);
+}
+static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *config)
+{
- struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
- struct trbe_buf *buf = config;
- unsigned long size, offset;
- WARN_ON(buf->cpudata != cpudata);
- WARN_ON(cpudata->cpu != smp_processor_id());
- WARN_ON(cpudata->drvdata != drvdata);
- if (cpudata->mode != CS_MODE_PERF)
return -EINVAL;
- /*
* perf handle structure needs to be shared with the TRBE IRQ handler for
* capturing trace data and restarting the handle. There is a probability
* of an undefined reference based crash when the etm event is being stopped
* while a TRBE IRQ is also getting processed. This happens due to the release
* of the perf handle via perf_aux_output_end() in etm_event_stop(). Stopping
* the TRBE here will ensure that no IRQ could be generated when the perf
* handle gets freed in etm_event_stop().
*/
- trbe_reset_local();
- offset = get_trbe_write_pointer() - get_trbe_base_pointer();
- size = offset - PERF_IDX2OFF(handle->head, buf);
- if (buf->snapshot)
handle->head += size;
- return size;
+}
+static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data) +{
- struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
- struct perf_output_handle *handle = data;
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- WARN_ON(cpudata->cpu != smp_processor_id());
- WARN_ON(cpudata->drvdata != drvdata);
- if (mode != CS_MODE_PERF)
return -EINVAL;
- *this_cpu_ptr(drvdata->handle) = handle;
- cpudata->buf = buf;
- cpudata->mode = mode;
- buf->cpudata = cpudata;
- buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
- buf->trbe_limit = compute_trbe_buffer_limit(handle);
- if (buf->trbe_limit == buf->trbe_base) {
trbe_drain_and_disable_local();
return 0;
- }
- trbe_enable_hw(buf);
- return 0;
+}
+static int arm_trbe_disable(struct coresight_device *csdev) +{
- struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev);
- struct trbe_buf *buf = cpudata->buf;
- WARN_ON(buf->cpudata != cpudata);
- WARN_ON(cpudata->cpu != smp_processor_id());
- WARN_ON(cpudata->drvdata != drvdata);
- if (cpudata->mode != CS_MODE_PERF)
return -EINVAL;
- trbe_drain_and_disable_local();
- buf->cpudata = NULL;
- cpudata->buf = NULL;
- cpudata->mode = CS_MODE_DISABLED;
- return 0;
+}
+static void trbe_handle_fatal(struct perf_output_handle *handle) +{
- perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
- perf_aux_output_end(handle, 0);
- trbe_drain_and_disable_local();
+}
+static void trbe_handle_spurious(struct perf_output_handle *handle) +{
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf);
- buf->trbe_limit = compute_trbe_buffer_limit(handle);
- if (buf->trbe_limit == buf->trbe_base) {
trbe_drain_and_disable_local();
return;
- }
- trbe_enable_hw(buf);
+}
+static void trbe_handle_overflow(struct perf_output_handle *handle) +{
- struct perf_event *event = handle->event;
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- unsigned long offset, size;
- struct etm_event_data *event_data;
- offset = get_trbe_limit_pointer() - get_trbe_base_pointer();
- size = offset - PERF_IDX2OFF(handle->head, buf);
- if (buf->snapshot)
handle->head = offset;
- perf_aux_output_end(handle, size);
- event_data = perf_aux_output_begin(handle, event);
- if (!event_data) {
event->hw.state |= PERF_HES_STOPPED;
trbe_drain_and_disable_local();
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
return;
- }
- buf->trbe_write = buf->trbe_base;
- buf->trbe_limit = compute_trbe_buffer_limit(handle);
- if (buf->trbe_limit == buf->trbe_base) {
trbe_drain_and_disable_local();
return;
- }
- *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle;
- trbe_enable_hw(buf);
+}
+static bool is_perf_trbe(struct perf_output_handle *handle) +{
- struct trbe_buf *buf = etm_perf_sink_config(handle);
- struct trbe_cpudata *cpudata = buf->cpudata;
- struct trbe_drvdata *drvdata = cpudata->drvdata;
- int cpu = smp_processor_id();
- WARN_ON(buf->trbe_base != get_trbe_base_pointer());
- WARN_ON(buf->trbe_limit != get_trbe_limit_pointer());
- if (cpudata->mode != CS_MODE_PERF)
return false;
- if (cpudata->cpu != cpu)
return false;
- if (!cpumask_test_cpu(cpu, &drvdata->supported_cpus))
return false;
- return true;
+}
+static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle) +{
- int ec = get_trbe_ec();
- int bsc = get_trbe_bsc();
- WARN_ON(is_trbe_running());
- if (is_trbe_trg() || is_trbe_abort())
We seem to be reading the TRBSR every single time in these helpers. Could we optimise them by passing the register value in?
i.e. u64 trbsr = get_trbe_status();
WARN_ON(is_trbe_running(trbsr)); if (is_trbe_trg(trbsr) || is_trbe_abort(trbsr))
For is_trbe_wrap() too.
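Something along these lines perhaps (a rough, untested sketch; it only reuses the TRBSR field macros and the accessors already present in this series):

	static inline u64 get_trbe_status(void)
	{
		return read_sysreg_s(SYS_TRBSR_EL1);
	}

	static inline bool is_trbe_running(u64 trbsr)
	{
		return !(trbsr & TRBSR_STOP);
	}

	static inline bool is_trbe_trg(u64 trbsr)
	{
		return trbsr & TRBSR_TRG;
	}

	static inline bool is_trbe_wrap(u64 trbsr)
	{
		return trbsr & TRBSR_WRAP;
	}

	static inline bool is_trbe_abort(u64 trbsr)
	{
		return trbsr & TRBSR_ABORT;
	}

	static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle)
	{
		/* Read TRBSR_EL1 once and derive all the fields from the cached value */
		u64 trbsr = get_trbe_status();
		int ec = (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK;
		int bsc = (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK;

		WARN_ON(is_trbe_running(trbsr));
		if (is_trbe_trg(trbsr) || is_trbe_abort(trbsr))
			return TRBE_FAULT_ACT_FATAL;
		if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT))
			return TRBE_FAULT_ACT_FATAL;
		if (is_trbe_wrap(trbsr) && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) {
			if (get_trbe_write_pointer() == get_trbe_base_pointer())
				return TRBE_FAULT_ACT_WRAP;
		}
		return TRBE_FAULT_ACT_SPURIOUS;
	}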
return TRBE_FAULT_ACT_FATAL;
- if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT))
return TRBE_FAULT_ACT_FATAL;
- if (is_trbe_wrap() && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) {
if (get_trbe_write_pointer() == get_trbe_base_pointer())
return TRBE_FAULT_ACT_WRAP;
- }
- return TRBE_FAULT_ACT_SPURIOUS;
+}
+static irqreturn_t arm_trbe_irq_handler(int irq, void *dev) +{
- struct perf_output_handle **handle_ptr = dev;
- struct perf_output_handle *handle = *handle_ptr;
- enum trbe_fault_action act;
- WARN_ON(!is_trbe_irq());
- clr_trbe_irq();
- /*
* Ensure the trace is visible to the CPUs and
* any external aborts have been resolved.
*/
- trbe_drain_buffer();
- isb();
- if (!perf_get_aux(handle))
return IRQ_NONE;
- if (!is_perf_trbe(handle))
return IRQ_NONE;
- irq_work_run();
- act = trbe_get_fault_act(handle);
- switch (act) {
- case TRBE_FAULT_ACT_WRAP:
trbe_handle_overflow(handle);
break;
- case TRBE_FAULT_ACT_SPURIOUS:
trbe_handle_spurious(handle);
break;
- case TRBE_FAULT_ACT_FATAL:
trbe_handle_fatal(handle);
break;
- }
- return IRQ_HANDLED;
+}
+static const struct coresight_ops_sink arm_trbe_sink_ops = {
- .enable = arm_trbe_enable,
- .disable = arm_trbe_disable,
- .alloc_buffer = arm_trbe_alloc_buffer,
- .free_buffer = arm_trbe_free_buffer,
- .update_buffer = arm_trbe_update_buffer,
+};
+static const struct coresight_ops arm_trbe_cs_ops = {
- .sink_ops = &arm_trbe_sink_ops,
+};
+static ssize_t align_show(struct device *dev, struct device_attribute *attr, char *buf) +{
- struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
- return sprintf(buf, "%llx\n", cpudata->trbe_align);
+} +static DEVICE_ATTR_RO(align);
+static ssize_t dbm_show(struct device *dev, struct device_attribute *attr, char *buf) +{
- struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
- return sprintf(buf, "%d\n", cpudata->trbe_dbm);
+} +static DEVICE_ATTR_RO(dbm);
+static struct attribute *arm_trbe_attrs[] = {
- &dev_attr_align.attr,
- &dev_attr_dbm.attr,
- NULL,
+};
+static const struct attribute_group arm_trbe_group = {
- .attrs = arm_trbe_attrs,
+};
+static const struct attribute_group *arm_trbe_groups[] = {
- &arm_trbe_group,
- NULL,
+};
+static void arm_trbe_probe_coresight_cpu(void *info) +{
- struct trbe_drvdata *drvdata = info;
- struct coresight_desc desc = { 0 };
- int cpu = smp_processor_id();
- struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
- struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
- struct device *dev;
- if (WARN_ON(!cpudata))
goto cpu_clear;
- if (trbe_csdev)
return;
- cpudata->cpu = smp_processor_id();
- cpudata->drvdata = drvdata;
- dev = &cpudata->drvdata->pdev->dev;
- if (!is_trbe_available()) {
pr_err("TRBE is not implemented on cpu %d\n", cpudata->cpu);
goto cpu_clear;
- }
- if (!is_trbe_programmable()) {
pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu);
goto cpu_clear;
- }
- desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", DRVNAME, smp_processor_id());
- if (IS_ERR(desc.name))
goto cpu_clear;
- desc.type = CORESIGHT_DEV_TYPE_SINK;
- desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
- desc.ops = &arm_trbe_cs_ops;
- desc.pdata = dev_get_platdata(dev);
- desc.groups = arm_trbe_groups;
- desc.dev = dev;
- trbe_csdev = coresight_register(&desc);
- if (IS_ERR(trbe_csdev))
goto cpu_clear;
- dev_set_drvdata(&trbe_csdev->dev, cpudata);
- cpudata->trbe_dbm = get_trbe_flag_update();
- cpudata->trbe_align = 1ULL << get_trbe_address_align();
- if (cpudata->trbe_align > SZ_2K) {
pr_err("Unsupported alignment on cpu %d\n", cpudata->cpu);
goto cpu_clear;
- }
- per_cpu(csdev_sink, cpu) = trbe_csdev;
- trbe_reset_local();
- enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
- return;
+cpu_clear:
- cpumask_clear_cpu(cpudata->cpu, &cpudata->drvdata->supported_cpus);
+}
+static void arm_trbe_remove_coresight_cpu(void *info) +{
- int cpu = smp_processor_id();
- struct trbe_drvdata *drvdata = info;
- struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
- struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
- if (trbe_csdev) {
coresight_unregister(trbe_csdev);
cpudata->drvdata = NULL;
per_cpu(csdev_sink, cpu) = NULL;
- }
- disable_percpu_irq(drvdata->irq);
- trbe_reset_local();
+}
+static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata) +{
- drvdata->cpudata = alloc_percpu(typeof(*drvdata->cpudata));
- if (IS_ERR(drvdata->cpudata))
return PTR_ERR(drvdata->cpudata);
- arm_trbe_probe_coresight_cpu(drvdata);
- smp_call_function_many(&drvdata->supported_cpus, arm_trbe_probe_coresight_cpu, drvdata, 1);
- return 0;
+}
+static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata) +{
- arm_trbe_remove_coresight_cpu(drvdata);
- smp_call_function_many(&drvdata->supported_cpus, arm_trbe_remove_coresight_cpu, drvdata, 1);
- free_percpu(drvdata->cpudata);
- return 0;
+}
+static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node) +{
- struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
- if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
if (!per_cpu(csdev_sink, cpu)) {
arm_trbe_probe_coresight_cpu(drvdata);
} else {
trbe_reset_local();
enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
}
- }
- return 0;
+}
+static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node) +{
- struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
- if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
disable_percpu_irq(drvdata->irq);
trbe_reset_local();
- }
- return 0;
+}
+static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata) +{
- enum cpuhp_state trbe_online;
- trbe_online = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRVNAME,
arm_trbe_cpu_startup, arm_trbe_cpu_teardown);
- if (trbe_online < 0)
return -EINVAL;
- if (cpuhp_state_add_instance(trbe_online, &drvdata->hotplug_node))
return -EINVAL;
- drvdata->trbe_online = trbe_online;
- return 0;
+}
+static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata) +{
- cpuhp_remove_multi_state(drvdata->trbe_online);
+}
+static int arm_trbe_probe_irq(struct platform_device *pdev,
struct trbe_drvdata *drvdata)
+{
- drvdata->irq = platform_get_irq(pdev, 0);
- if (!drvdata->irq) {
pr_err("IRQ not found for the platform device\n");
return -ENXIO;
- }
- if (!irq_is_percpu(drvdata->irq)) {
pr_err("IRQ is not a PPI\n");
return -EINVAL;
- }
- if (irq_get_percpu_devid_partition(drvdata->irq, &drvdata->supported_cpus))
return -EINVAL;
- drvdata->handle = alloc_percpu(typeof(*drvdata->handle));
- if (!drvdata->handle)
return -ENOMEM;
- if (request_percpu_irq(drvdata->irq, arm_trbe_irq_handler, DRVNAME, drvdata->handle)) {
free_percpu(drvdata->handle);
return -EINVAL;
- }
- return 0;
+}
+static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata) +{
- free_percpu_irq(drvdata->irq, drvdata->handle);
- free_percpu(drvdata->handle);
+}
+static int arm_trbe_device_probe(struct platform_device *pdev) +{
- struct coresight_platform_data *pdata;
- struct trbe_drvdata *drvdata;
- struct device *dev = &pdev->dev;
- int ret;
- drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
- if (IS_ERR(drvdata))
return -ENOMEM;
- pdata = coresight_get_platform_data(dev);
- if (IS_ERR(pdata)) {
kfree(drvdata);
return -ENOMEM;
- }
- dev_set_drvdata(dev, drvdata);
- dev->platform_data = pdata;
- drvdata->pdev = pdev;
- ret = arm_trbe_probe_irq(pdev, drvdata);
- if (ret)
goto irq_failed;
- ret = arm_trbe_probe_coresight(drvdata);
- if (ret)
goto probe_failed;
- ret = arm_trbe_probe_cpuhp(drvdata);
- if (ret)
goto cpuhp_failed;
- return 0;
+cpuhp_failed:
- arm_trbe_remove_coresight(drvdata);
+probe_failed:
- arm_trbe_remove_irq(drvdata);
+irq_failed:
- kfree(pdata);
- kfree(drvdata);
- return ret;
+}
+static int arm_trbe_device_remove(struct platform_device *pdev) +{
- struct coresight_platform_data *pdata = dev_get_platdata(&pdev->dev);
- struct trbe_drvdata *drvdata = platform_get_drvdata(pdev);
- arm_trbe_remove_coresight(drvdata);
- arm_trbe_remove_cpuhp(drvdata);
- arm_trbe_remove_irq(drvdata);
- kfree(pdata);
- kfree(drvdata);
- return 0;
+}
+static const struct of_device_id arm_trbe_of_match[] = {
- { .compatible = "arm,trace-buffer-extension"},
- {},
+}; +MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
+static struct platform_driver arm_trbe_driver = {
- .driver = {
.name = DRVNAME,
.of_match_table = of_match_ptr(arm_trbe_of_match),
.suppress_bind_attrs = true,
- },
- .probe = arm_trbe_device_probe,
- .remove = arm_trbe_device_remove,
+};
+static int __init arm_trbe_init(void) +{
- int ret;
We should skip the driver init if the kernel is unmapped at EL0, as the TRBE can't safely write to a buffer mapped at kernel virtual addresses while the CPU is running at EL0. This is unlikely, but we should cover that case.
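For instance, something like this at the top of arm_trbe_init() (an untested sketch; it assumes arm64_kernel_unmapped_at_el0() from asm/mmu.h is the right predicate for this situation):

	/*
	 * With KPTI the kernel mappings are not present while the CPU runs
	 * at EL0, so the TRBE would not be able to write to a buffer mapped
	 * at kernel virtual addresses there. Skip the driver in that case.
	 */
	if (arm64_kernel_unmapped_at_el0()) {
		pr_err("Kernel is unmapped at EL0, %s is not supported\n", DRVNAME);
		return -EOPNOTSUPP;
	}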
- ret = platform_driver_register(&arm_trbe_driver);
- if (!ret)
return 0;
- pr_err("Error registering %s platform driver\n", DRVNAME);
- return ret;
+}
+static void __exit arm_trbe_exit(void) +{
- platform_driver_unregister(&arm_trbe_driver);
+} +module_init(arm_trbe_init); +module_exit(arm_trbe_exit);
+MODULE_AUTHOR("Anshuman Khandual anshuman.khandual@arm.com"); +MODULE_DESCRIPTION("Arm Trace Buffer Extension (TRBE) driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/hwtracing/coresight/coresight-trbe.h b/drivers/hwtracing/coresight/coresight-trbe.h new file mode 100644 index 0000000..d9f5079 --- /dev/null +++ b/drivers/hwtracing/coresight/coresight-trbe.h @@ -0,0 +1,216 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/*
- This contains all required hardware related helper functions for
- Trace Buffer Extension (TRBE) driver in the coresight framework.
- Copyright (C) 2020 ARM Ltd.
- Author: Anshuman Khandual anshuman.khandual@arm.com
- */
+#include <linux/coresight.h> +#include <linux/device.h> +#include <linux/irq.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/platform_device.h> +#include <linux/smp.h>
+#include "coresight-etm-perf.h"
+DECLARE_PER_CPU(struct coresight_device *, csdev_sink);
+static inline bool is_trbe_available(void) +{
- u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
- int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT);
This could be "unsigned int" to make it future-proof.
- return trbe >= 0b0001;
+}
+static inline bool is_trbe_enabled(void) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- return trblimitr & TRBLIMITR_ENABLE;
+}
+#define TRBE_EC_OTHERS 0 +#define TRBE_EC_STAGE1_ABORT 36 +#define TRBE_EC_STAGE2_ABORT 37
+static inline int get_trbe_ec(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK;
+}
+#define TRBE_BSC_NOT_STOPPED 0 +#define TRBE_BSC_FILLED 1 +#define TRBE_BSC_TRIGGERED 2
+static inline int get_trbe_bsc(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK;
+}
+static inline void clr_trbe_irq(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- trbsr &= ~TRBSR_IRQ;
- write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
+static inline bool is_trbe_irq(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return trbsr & TRBSR_IRQ;
+}
+static inline bool is_trbe_trg(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return trbsr & TRBSR_TRG;
+}
+static inline bool is_trbe_wrap(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return trbsr & TRBSR_WRAP;
+}
+static inline bool is_trbe_abort(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return trbsr & TRBSR_ABORT;
+}
+static inline bool is_trbe_running(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- return !(trbsr & TRBSR_STOP);
+}
+static inline void set_trbe_running(void) +{
- u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
- trbsr &= ~TRBSR_STOP;
- write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
This could be removed now.
+static inline void set_trbe_virtual_mode(void) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- trblimitr &= ~TRBLIMITR_NVM;
- write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
Same here.
+#define TRBE_TRIG_MODE_STOP 0 +#define TRBE_TRIG_MODE_IRQ 1 +#define TRBE_TRIG_MODE_IGNORE 3
+#define TRBE_FILL_MODE_FILL 0 +#define TRBE_FILL_MODE_WRAP 1 +#define TRBE_FILL_MODE_CIRCULAR_BUFFER 3
+static inline void set_trbe_disabled(void) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- trblimitr &= ~TRBLIMITR_ENABLE;
- write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+static inline void set_trbe_enabled(void) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- trblimitr |= TRBLIMITR_ENABLE;
- write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
Same as above.
+static inline bool get_trbe_flag_update(void) +{
- u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
- return trbidr & TRBIDR_FLAG;
+}
+static inline bool is_trbe_programmable(void) +{
- u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
- return !(trbidr & TRBIDR_PROG);
+}
+static inline int get_trbe_address_align(void) +{
- u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
- return (trbidr >> TRBIDR_ALIGN_SHIFT) & TRBIDR_ALIGN_MASK;
+}
Similar comment to the TRBSR read on each of these functions. They are all only called from a single function, so it may make sense to read the register once and pass the value in.
+static inline unsigned long get_trbe_write_pointer(void) +{
- u64 trbptr = read_sysreg_s(SYS_TRBPTR_EL1);
- unsigned long addr = (trbptr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK;
- return addr;
+}
+static inline void set_trbe_write_pointer(unsigned long addr) +{
- WARN_ON(is_trbe_enabled());
- addr = (addr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK;
- write_sysreg_s(addr, SYS_TRBPTR_EL1);
+}
+static inline unsigned long get_trbe_limit_pointer(void) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- unsigned long limit = (trblimitr >> TRBLIMITR_LIMIT_SHIFT) & TRBLIMITR_LIMIT_MASK;
- unsigned long addr = limit << TRBLIMITR_LIMIT_SHIFT;
- WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
- return addr;
+}
+static inline void set_trbe_limit_pointer(unsigned long addr) +{
- u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
- WARN_ON(is_trbe_enabled());
- WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT)));
- WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
- trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT);
- trblimitr |= (addr & PAGE_MASK);
- write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
Remove the unused function.
+static inline unsigned long get_trbe_base_pointer(void) +{
- u64 trbbaser = read_sysreg_s(SYS_TRBBASER_EL1);
- unsigned long addr = (trbbaser >> TRBBASER_BASE_SHIFT) & TRBBASER_BASE_MASK;
- addr = addr << TRBBASER_BASE_SHIFT;
- WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
- return addr;
+}
+static inline void set_trbe_base_pointer(unsigned long addr) +{
- WARN_ON(is_trbe_enabled());
- WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT)));
- WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
- write_sysreg_s(addr, SYS_TRBBASER_EL1);
+}
Suzuki
On 1/13/21 8:58 PM, Suzuki K Poulose wrote:
Hi Anshuman,
The driver looks overall good to me. Please find some minor comments below
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes including CPU virtual address and buffer modes including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But the access to the trace buffer could be prohibited by a higher exception level (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Changes in V2:
- Dropped irq from coresight sysfs documentation
- Renamed get_trbe_limit() as compute_trbe_buffer_limit()
- Dropped SYSTEM_RUNNING check for system_state
- Dropped .data value from arm_trbe_of_match[]
- Dropped [set|get]_trbe_[trig|fill]_mode() helpers
- Dropped clearing TRBSR_FSC_MASK from TRBE status register
- Added a comment in arm_trbe_update_buffer()
- Updated comment for ETE_IGNORE_PACKET
- Updated comment for basic TRBE operation
- Updated TRBE buffer and trigger mode macros
- Restructured trbe_enable_hw()
- Updated trbe_snapshot_offset() to use the entire buffer
- Changed dsb(ish) as dsb(nsh) during the buffer flush
- Renamed set_trbe_flush() as trbe_drain_buffer()
- Renamed trbe_disable_and_drain_local() as trbe_drain_and_disable_local()
- Reworked sync in trbe_enable_hw(), trbe_update_buffer() and arm_trbe_irq_handler()
Documentation/trace/coresight/coresight-trbe.rst | 39 + arch/arm64/include/asm/sysreg.h | 2 + drivers/hwtracing/coresight/Kconfig | 11 + drivers/hwtracing/coresight/Makefile | 1 + drivers/hwtracing/coresight/coresight-trbe.c | 966 +++++++++++++++++++++++ drivers/hwtracing/coresight/coresight-trbe.h | 216 +++++ 6 files changed, 1235 insertions(+) create mode 100644 Documentation/trace/coresight/coresight-trbe.rst create mode 100644 drivers/hwtracing/coresight/coresight-trbe.c create mode 100644 drivers/hwtracing/coresight/coresight-trbe.h
diff --git a/Documentation/trace/coresight/coresight-trbe.rst b/Documentation/trace/coresight/coresight-trbe.rst new file mode 100644 index 0000000..1cbb819 --- /dev/null +++ b/Documentation/trace/coresight/coresight-trbe.rst @@ -0,0 +1,39 @@ +.. SPDX-License-Identifier: GPL-2.0
+============================== +Trace Buffer Extension (TRBE). +==============================
+ :Author: Anshuman Khandual anshuman.khandual@arm.com + :Date: November 2020
+Hardware Description +--------------------
+Trace Buffer Extension (TRBE) is a percpu hardware which captures in system +memory, CPU traces generated from a corresponding percpu tracing unit. This +gets plugged in as a coresight sink device because the corresponding trace +generators (ETE) are plugged in as source devices.
+The TRBE is not compliant to CoreSight architecture specifications, but is +driven via the CoreSight driver framework to support the ETE (which is +CoreSight compliant) integration.
+Sysfs files and directories +---------------------------
+The TRBE devices appear on the existing coresight bus alongside the other +coresight devices::
+ >$ ls /sys/bus/coresight/devices + trbe0 trbe1 trbe2 trbe3
+The ``trbe<N>`` named TRBEs are associated with a CPU.::
+ >$ ls /sys/bus/coresight/devices/trbe0/ + align dbm
+*Key file items are:-* + * ``align``: TRBE write pointer alignment + * ``dbm``: TRBE updates memory with access and dirty flags
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index d60750e7..d7e65f0 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -97,6 +97,7 @@ #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift)) +#define TSB_CSYNC __emit_inst(0xd503225f) #define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x)) #define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x)) @@ -880,6 +881,7 @@ #define ID_AA64MMFR2_CNP_SHIFT 0 /* id_aa64dfr0 */ +#define ID_AA64DFR0_TRBE_SHIFT 44 #define ID_AA64DFR0_TRACE_FILT_SHIFT 40 #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36 #define ID_AA64DFR0_PMSVER_SHIFT 32 diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index f154ae7..aa657ab 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -164,6 +164,17 @@ config CORESIGHT_CTI To compile this driver as a module, choose M here: the module will be called coresight-cti. +config CORESIGHT_TRBE + bool "Trace Buffer Extension (TRBE) driver" + depends on ARM64 + help + This driver provides support for percpu Trace Buffer Extension (TRBE). + TRBE always needs to be used along with it's corresponding percpu ETE + component. ETE generates trace data which is then captured with TRBE. + Unlike traditional sink devices, TRBE is a CPU feature accessible via + system registers. But it's explicit dependency with trace unit (ETE) + requires it to be plugged in as a coresight sink device.
config CORESIGHT_CTI_INTEGRATION_REGS bool "Access CTI CoreSight Integration Registers" depends on CORESIGHT_CTI diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile index f20e357..d608165 100644 --- a/drivers/hwtracing/coresight/Makefile +++ b/drivers/hwtracing/coresight/Makefile @@ -21,5 +21,6 @@ obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o +obj-$(CONFIG_CORESIGHT_TRBE) += coresight-trbe.o coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \ coresight-cti-sysfs.o diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c new file mode 100644 index 0000000..ddc1d34 --- /dev/null +++ b/drivers/hwtracing/coresight/coresight-trbe.c @@ -0,0 +1,966 @@ +// SPDX-License-Identifier: GPL-2.0 +/*
- This driver enables Trace Buffer Extension (TRBE) as a per-cpu coresight
- sink device which could then pair with an appropriate per-cpu coresight source
- device (ETE) thus generating required trace data. Trace can be enabled
- via the perf framework.
- Copyright (C) 2020 ARM Ltd.
- Author: Anshuman Khandual anshuman.khandual@arm.com
- */
+#define DRVNAME "arm_trbe"
+#define pr_fmt(fmt) DRVNAME ": " fmt
+#include "coresight-trbe.h"
+#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT))
+/*
- A padding packet that will help the user space tools
- in skipping relevant sections in the captured trace
- data which could not be decoded. TRBE doesn't support
- formatting the trace data, unlike the legacy CoreSight
- sinks and thus we use ETE trace packets to pad the
- sections of the buffer.
- */
+#define ETE_IGNORE_PACKET 0x70
+enum trbe_fault_action { + TRBE_FAULT_ACT_WRAP, + TRBE_FAULT_ACT_SPURIOUS, + TRBE_FAULT_ACT_FATAL, +};
+struct trbe_buf { + unsigned long trbe_base; + unsigned long trbe_limit; + unsigned long trbe_write; + int nr_pages; + void **pages; + bool snapshot; + struct trbe_cpudata *cpudata; +};
+struct trbe_cpudata { + bool trbe_dbm; + u64 trbe_align; + int cpu; + enum cs_mode mode; + struct trbe_buf *buf; + struct trbe_drvdata *drvdata; +};
+struct trbe_drvdata { + struct trbe_cpudata __percpu *cpudata; + struct perf_output_handle __percpu **handle; + struct hlist_node hotplug_node; + int irq; + cpumask_t supported_cpus; + enum cpuhp_state trbe_online; + struct platform_device *pdev; +};
+static int trbe_alloc_node(struct perf_event *event) +{ + if (event->cpu == -1) + return NUMA_NO_NODE; + return cpu_to_node(event->cpu); +}
+static void trbe_drain_buffer(void) +{ + asm(TSB_CSYNC); + dsb(nsh); +}
+static void trbe_drain_and_disable_local(void) +{ + trbe_drain_buffer(); + write_sysreg_s(0, SYS_TRBLIMITR_EL1); + isb(); +}
+static void trbe_reset_local(void) +{ + trbe_drain_and_disable_local(); + write_sysreg_s(0, SYS_TRBPTR_EL1); + write_sysreg_s(0, SYS_TRBBASER_EL1); + write_sysreg_s(0, SYS_TRBSR_EL1); + isb();
This isb() is not necessary.
Dropped.
+}
+/*
- TRBE Buffer Management
- The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
- it starts writing trace data from the write pointer onward till the limit pointer.
- When the write pointer reaches the address just before the limit pointer, it gets
- wrapped around again to the base pointer. This is called a TRBE wrap event, which
- generates a maintenance interrupt when operated in WRAP or STOP mode.
According to the TRM, it is FILL mode, instead of STOP. So please change the above to:
"operated in WRAP or FILL mode".
Updated.
The write
- pointer again starts writing trace data from the base pointer until just before
- the limit pointer before getting wrapped again with an IRQ and this process just
- goes on as long as the TRBE is enabled.
This could be dropped as it applies to WRAP/CIRCULAR buffer mode, which we don't use.
Probably this could be changed a bit to match the FILL mode. Because it is essential to describe the continuous nature of the buffer operation, even in the FILL mode.
* After TRBE * IRQ gets handled and enabled again, write pointer again starts writing trace data * from the base pointer until just before the limit pointer before getting wrapped * again with an IRQ and this process just goes on as long as the TRBE is enabled.
- * Wrap around with an IRQ
- * ------ < ------ < ------- < ----- < -----
- * | |
- * ------ > ------ > ------- > ----- > -----
- * +---------------+-----------------------+
- * | | |
- * +---------------+-----------------------+
- * Base Pointer Write Pointer Limit Pointer
- The base and limit pointers always needs to be PAGE_SIZE aligned. But the write
- pointer can be aligned to the implementation defined TRBE trace buffer alignment
- as captured in trbe_cpudata->trbe_align.
- * head tail wakeup
- * +---------------------------------------+----- ~ ~ ------
- * |$$$$$$$|################|$$$$$$$$$$$$$$| |
- * +---------------------------------------+----- ~ ~ ------
- * Base Pointer Write Pointer Limit Pointer
- The perf_output_handle indices (head, tail, wakeup) are monotonically increasing
- values which tracks all the driver writes and user reads from the perf auxiliary
- buffer. Generally [head..tail] is the area where the driver can write into unless
- the wakeup is behind the tail. Enabled TRBE buffer span needs to be adjusted and
- configured depending on the perf_output_handle indices, so that the driver does
- not override into areas in the perf auxiliary buffer which is being or yet to be
- consumed from the user space. The enabled TRBE buffer area is a moving subset of
- the allocated perf auxiliary buffer.
- */
+static void trbe_pad_buf(struct perf_output_handle *handle, int len) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + u64 head = PERF_IDX2OFF(handle->head, buf);
+ memset((void *) buf->trbe_base + head, ETE_IGNORE_PACKET, len);
minor nit: You don't need a space after "(type *)" for casting, here and below at some other places.
Fixed.
+ if (!buf->snapshot) + perf_aux_output_skip(handle, len); +}
+static unsigned long trbe_snapshot_offset(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle);
+ /* + * The ETE trace has alignment synchronization packets allowing + * the decoder to reset in case of an overflow or corruption. + * So we can use the entire buffer for the snapshot mode. + */ + return buf->nr_pages * PAGE_SIZE; +}
+/*
- TRBE Limit Calculation
- The following markers are used to illustrate various TRBE buffer situations.
- $$$$ - Data area, unconsumed captured trace data, not to be overridden
- #### - Free area, enabled, trace will be written
- %%%% - Free area, disabled, trace will not be written
- ==== - Free area, padded with ETE_IGNORE_PACKET, trace will be skipped
- */
+static unsigned long trbe_normal_offset(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + struct trbe_cpudata *cpudata = buf->cpudata; + const u64 bufsize = buf->nr_pages * PAGE_SIZE; + u64 limit = bufsize; + u64 head, tail, wakeup;
+ head = PERF_IDX2OFF(handle->head, buf);
+ /* + * head + * ------->| + * | + * head TRBE align tail + * +----|-------|---------------|-------+ + * |$$$$|=======|###############|$$$$$$$| + * +----|-------|---------------|-------+ + * trbe_base trbe_base + nr_pages + * + * Perf aux buffer output head position can be misaligned depending on + * various factors including user space reads. In case misaligned, head + * needs to be aligned before TRBE can be configured. Pad the alignment + * gap with ETE_IGNORE_PACKET bytes that will be ignored by user tools + * and skip this section thus advancing the head. + */ + if (!IS_ALIGNED(head, cpudata->trbe_align)) { + unsigned long delta = roundup(head, cpudata->trbe_align) - head;
+ delta = min(delta, handle->size); + trbe_pad_buf(handle, delta); + head = PERF_IDX2OFF(handle->head, buf); + }
+ /* + * head = tail (size = 0) + * +----|-------------------------------+ + * |$$$$|$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ | + * +----|-------------------------------+ + * trbe_base trbe_base + nr_pages + * + * Perf aux buffer does not have any space for the driver to write into. + * Just communicate trace truncation event to the user space by marking + * it with PERF_AUX_FLAG_TRUNCATED. + */ + if (!handle->size) { + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + return 0; + }
+ /* Compute the tail and wakeup indices now that we've aligned head */ + tail = PERF_IDX2OFF(handle->head + handle->size, buf); + wakeup = PERF_IDX2OFF(handle->wakeup, buf);
+ /* + * Lets calculate the buffer area which TRBE could write into. There + * are three possible scenarios here. Limit needs to be aligned with + * PAGE_SIZE per the TRBE requirement. Always avoid clobbering the + * unconsumed data. + * + * 1) head < tail + * + * head tail + * +----|-----------------------|-------+ + * |$$$$|#######################|$$$$$$$| + * +----|-----------------------|-------+ + * trbe_base limit trbe_base + nr_pages + * + * TRBE could write into [head..tail] area. Unless the tail is right at + * the end of the buffer, neither an wrap around nor an IRQ is expected + * while being enabled. + * + * 2) head == tail + * + * head = tail (size > 0) + * +----|-------------------------------+ + * |%%%%|###############################| + * +----|-------------------------------+ + * trbe_base limit = trbe_base + nr_pages + * + * TRBE should just write into [head..base + nr_pages] area even though + * the entire buffer is empty. Reason being, when the trace reaches the + * end of the buffer, it will just wrap around with an IRQ giving an + * opportunity to reconfigure the buffer. + * + * 3) tail < head + * + * tail head + * +----|-----------------------|-------+ + * |%%%%|$$$$$$$$$$$$$$$$$$$$$$$|#######| + * +----|-----------------------|-------+ + * trbe_base limit = trbe_base + nr_pages + * + * TRBE should just write into [head..base + nr_pages] area even though + * the [trbe_base..tail] is also empty. Reason being, when the trace + * reaches the end of the buffer, it will just wrap around with an IRQ + * giving an opportunity to reconfigure the buffer. + */ + if (head < tail) + limit = round_down(tail, PAGE_SIZE);
+ /* + * Wakeup may be arbitrarily far into the future. If it's not in the + * current generation, either we'll wrap before hitting it, or it's + * in the past and has been handled already. + * + * If there's a wakeup before we wrap, arrange to be woken up by the + * page boundary following it. Keep the tail boundary if that's lower. + * + * head wakeup tail + * +----|---------------|-------|-------+ + * |$$$$|###############|%%%%%%%|$$$$$$$| + * +----|---------------|-------|-------+ + * trbe_base limit trbe_base + nr_pages + */ + if (handle->wakeup < (handle->head + handle->size) && head <= wakeup) + limit = min(limit, round_up(wakeup, PAGE_SIZE));
+ /* + * There are two situation when this can happen i.e limit is before + * the head and hence TRBE cannot be configured. + * + * 1) head < tail (aligned down with PAGE_SIZE) and also they are both + * within the same PAGE size range. + * + * PAGE_SIZE + * |----------------------| + * + * limit head tail + * +------------|------|--------|-------+ + * |$$$$$$$$$$$$$$$$$$$|========|$$$$$$$| + * +------------|------|--------|-------+ + * trbe_base trbe_base + nr_pages + * + * 2) head < wakeup (aligned up with PAGE_SIZE) < tail and also both + * head and wakeup are within same PAGE size range. + * + * PAGE_SIZE + * |----------------------| + * + * limit head wakeup tail + * +----|------|-------|--------|-------+ + * |$$$$$$$$$$$|=======|========|$$$$$$$| + * +----|------|-------|--------|-------+ + * trbe_base trbe_base + nr_pages + */ + if (limit > head) + return limit;
+ trbe_pad_buf(handle, handle->size); + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + return 0; +}
+static unsigned long compute_trbe_buffer_limit(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + unsigned long offset;
+ if (buf->snapshot) + offset = trbe_snapshot_offset(handle); + else + offset = trbe_normal_offset(handle); + return buf->trbe_base + offset; +}
+static void clr_trbe_status(void) +{ + u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+ WARN_ON(is_trbe_enabled()); + trbsr &= ~TRBSR_IRQ; + trbsr &= ~TRBSR_TRG; + trbsr &= ~TRBSR_WRAP; + trbsr &= ~(TRBSR_EC_MASK << TRBSR_EC_SHIFT); + trbsr &= ~(TRBSR_BSC_MASK << TRBSR_BSC_SHIFT); + trbsr &= ~TRBSR_STOP; + write_sysreg_s(trbsr, SYS_TRBSR_EL1); +}
+static void set_trbe_limit_pointer_enabled(unsigned long addr) +{ + u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+ WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT))); + WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+ trblimitr &= ~TRBLIMITR_NVM; + trblimitr &= ~(TRBLIMITR_FILL_MODE_MASK << TRBLIMITR_FILL_MODE_SHIFT); + trblimitr &= ~(TRBLIMITR_TRIG_MODE_MASK << TRBLIMITR_TRIG_MODE_SHIFT); + trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT);
+ /* + * Fill trace buffer mode is used here while configuring the + * TRBE for trace capture. In this particular mode, the trace + * collection is stopped and a maintenance interrupt is raised + * when the current write pointer wraps. This pause in trace + * collection gives the software an opportunity to capture the + * trace data in the interrupt handler, before reconfiguring + * the TRBE. + */ + trblimitr |= (TRBE_FILL_MODE_FILL & TRBLIMITR_FILL_MODE_MASK) << TRBLIMITR_FILL_MODE_SHIFT;
+ /* + * Trigger mode is not used here while configuring the TRBE for + * the trace capture. Hence just keep this in the ignore mode. + */ + trblimitr |= (TRBE_TRIG_MODE_IGNORE & TRBLIMITR_TRIG_MODE_MASK) << TRBLIMITR_TRIG_MODE_SHIFT; + trblimitr |= (addr & PAGE_MASK);
+ trblimitr |= TRBLIMITR_ENABLE; + write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); +}
+static void trbe_enable_hw(struct trbe_buf *buf) +{ + WARN_ON(buf->trbe_write < buf->trbe_base); + WARN_ON(buf->trbe_write >= buf->trbe_limit); + set_trbe_disabled(); + isb(); + clr_trbe_status(); + set_trbe_base_pointer(buf->trbe_base); + set_trbe_write_pointer(buf->trbe_write);
+ /* + * Synchronize all the register updates + * till now before enabling the TRBE. + */ + isb(); + set_trbe_limit_pointer_enabled(buf->trbe_limit);
+ /* Synchronize the TRBE enable event */ + isb(); +}
+static void *arm_trbe_alloc_buffer(struct coresight_device *csdev, + struct perf_event *event, void **pages, + int nr_pages, bool snapshot) +{ + struct trbe_buf *buf; + struct page **pglist; + int i;
+ if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
This restriction on snapshot could be removed now, since we use the full buffer.
Dropped only the second condition here, i.e. (snapshot && (nr_pages & 1)). Just wondering if the aux buffer could work with a single page so that the first condition can also be dropped.
+ return NULL;
+ buf = kzalloc_node(sizeof(*buf), GFP_KERNEL, trbe_alloc_node(event)); + if (IS_ERR(buf)) + return ERR_PTR(-ENOMEM);
+ pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL); + if (IS_ERR(pglist)) { + kfree(buf); + return ERR_PTR(-ENOMEM); + }
+ for (i = 0; i < nr_pages; i++) + pglist[i] = virt_to_page(pages[i]);
+ buf->trbe_base = (unsigned long) vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL); + if (IS_ERR((void *) buf->trbe_base)) { + kfree(pglist); + kfree(buf); + return ERR_PTR(buf->trbe_base); + } + buf->trbe_limit = buf->trbe_base + nr_pages * PAGE_SIZE; + buf->trbe_write = buf->trbe_base; + buf->snapshot = snapshot; + buf->nr_pages = nr_pages; + buf->pages = pages; + kfree(pglist); + return buf; +}
+void arm_trbe_free_buffer(void *config) +{ + struct trbe_buf *buf = config;
+ vunmap((void *) buf->trbe_base); + kfree(buf); +}
+static unsigned long arm_trbe_update_buffer(struct coresight_device *csdev, + struct perf_output_handle *handle, + void *config) +{ + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); + struct trbe_buf *buf = config; + unsigned long size, offset;
+ WARN_ON(buf->cpudata != cpudata); + WARN_ON(cpudata->cpu != smp_processor_id()); + WARN_ON(cpudata->drvdata != drvdata); + if (cpudata->mode != CS_MODE_PERF) + return -EINVAL;
+ /* + * perf handle structure needs to be shared with the TRBE IRQ handler for + * capturing trace data and restarting the handle. There is a probability + * of an undefined reference based crash when etm event is being stopped + * while a TRBE IRQ also getting processed. This happens due the release + * of perf handle via perf_aux_output_end() in etm_event_stop(). Stopping + * the TRBE here will ensure that no IRQ could be generated when the perf + * handle gets freed in etm_event_stop(). + */ + trbe_reset_local(); + offset = get_trbe_write_pointer() - get_trbe_base_pointer(); + size = offset - PERF_IDX2OFF(handle->head, buf); + if (buf->snapshot) + handle->head += size; + return size; +}
+static int arm_trbe_enable(struct coresight_device *csdev, u32 mode, void *data) +{ + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); + struct perf_output_handle *handle = data; + struct trbe_buf *buf = etm_perf_sink_config(handle);
+ WARN_ON(cpudata->cpu != smp_processor_id()); + WARN_ON(cpudata->drvdata != drvdata); + if (mode != CS_MODE_PERF) + return -EINVAL;
+ *this_cpu_ptr(drvdata->handle) = handle; + cpudata->buf = buf; + cpudata->mode = mode; + buf->cpudata = cpudata; + buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); + buf->trbe_limit = compute_trbe_buffer_limit(handle); + if (buf->trbe_limit == buf->trbe_base) { + trbe_drain_and_disable_local(); + return 0; + } + trbe_enable_hw(buf); + return 0; +}
+static int arm_trbe_disable(struct coresight_device *csdev) +{ + struct trbe_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + struct trbe_cpudata *cpudata = dev_get_drvdata(&csdev->dev); + struct trbe_buf *buf = cpudata->buf;
+ WARN_ON(buf->cpudata != cpudata); + WARN_ON(cpudata->cpu != smp_processor_id()); + WARN_ON(cpudata->drvdata != drvdata); + if (cpudata->mode != CS_MODE_PERF) + return -EINVAL;
+ trbe_drain_and_disable_local(); + buf->cpudata = NULL; + cpudata->buf = NULL; + cpudata->mode = CS_MODE_DISABLED; + return 0; +}
+static void trbe_handle_fatal(struct perf_output_handle *handle) +{ + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + perf_aux_output_end(handle, 0); + trbe_drain_and_disable_local(); +}
+static void trbe_handle_spurious(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle);
+ buf->trbe_write = buf->trbe_base + PERF_IDX2OFF(handle->head, buf); + buf->trbe_limit = compute_trbe_buffer_limit(handle); + if (buf->trbe_limit == buf->trbe_base) { + trbe_drain_and_disable_local(); + return; + } + trbe_enable_hw(buf); +}
+static void trbe_handle_overflow(struct perf_output_handle *handle) +{ + struct perf_event *event = handle->event; + struct trbe_buf *buf = etm_perf_sink_config(handle); + unsigned long offset, size; + struct etm_event_data *event_data;
+ offset = get_trbe_limit_pointer() - get_trbe_base_pointer(); + size = offset - PERF_IDX2OFF(handle->head, buf); + if (buf->snapshot) + handle->head = offset; + perf_aux_output_end(handle, size);
+ event_data = perf_aux_output_begin(handle, event); + if (!event_data) { + event->hw.state |= PERF_HES_STOPPED; + trbe_drain_and_disable_local(); + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + return; + } + buf->trbe_write = buf->trbe_base; + buf->trbe_limit = compute_trbe_buffer_limit(handle); + if (buf->trbe_limit == buf->trbe_base) { + trbe_drain_and_disable_local(); + return; + } + *this_cpu_ptr(buf->cpudata->drvdata->handle) = handle; + trbe_enable_hw(buf); +}
+static bool is_perf_trbe(struct perf_output_handle *handle) +{ + struct trbe_buf *buf = etm_perf_sink_config(handle); + struct trbe_cpudata *cpudata = buf->cpudata; + struct trbe_drvdata *drvdata = cpudata->drvdata; + int cpu = smp_processor_id();
+ WARN_ON(buf->trbe_base != get_trbe_base_pointer()); + WARN_ON(buf->trbe_limit != get_trbe_limit_pointer());
+ if (cpudata->mode != CS_MODE_PERF) + return false;
+ if (cpudata->cpu != cpu) + return false;
+ if (!cpumask_test_cpu(cpu, &drvdata->supported_cpus)) + return false;
+ return true; +}
+static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle) +{ + int ec = get_trbe_ec(); + int bsc = get_trbe_bsc();
+ WARN_ON(is_trbe_running()); + if (is_trbe_trg() || is_trbe_abort())
We seem to be reading the TRBSR every single time in these helpers. Could we optimise them by passing the register value in?
The same goes for get_trbe_ec() and get_trbe_bsc() as well. Probably all TRBSR field probing helpers should be modified to accept a TRBSR register value instead.
i.e. u64 trbsr = get_trbe_status();
WARN_ON(is_trbe_running(trbsr)); if (is_trbe_trg(trbsr) || is_trbe_abort(trbsr))
For is_trbe_wrap() too
Yes.
+ return TRBE_FAULT_ACT_FATAL;
+ if ((ec == TRBE_EC_STAGE1_ABORT) || (ec == TRBE_EC_STAGE2_ABORT)) + return TRBE_FAULT_ACT_FATAL;
+ if (is_trbe_wrap() && (ec == TRBE_EC_OTHERS) && (bsc == TRBE_BSC_FILLED)) { + if (get_trbe_write_pointer() == get_trbe_base_pointer()) + return TRBE_FAULT_ACT_WRAP; + } + return TRBE_FAULT_ACT_SPURIOUS; +}
+static irqreturn_t arm_trbe_irq_handler(int irq, void *dev)
+{
+        struct perf_output_handle **handle_ptr = dev;
+        struct perf_output_handle *handle = *handle_ptr;
+        enum trbe_fault_action act;
+
+        WARN_ON(!is_trbe_irq());
+        clr_trbe_irq();
+
+        /*
+         * Ensure the trace is visible to the CPUs and
+         * any external aborts have been resolved.
+         */
+        trbe_drain_buffer();
+        isb();
+
+        if (!perf_get_aux(handle))
+                return IRQ_NONE;
+
+        if (!is_perf_trbe(handle))
+                return IRQ_NONE;
+
+        irq_work_run();
+
+        act = trbe_get_fault_act(handle);
+        switch (act) {
+        case TRBE_FAULT_ACT_WRAP:
+                trbe_handle_overflow(handle);
+                break;
+        case TRBE_FAULT_ACT_SPURIOUS:
+                trbe_handle_spurious(handle);
+                break;
+        case TRBE_FAULT_ACT_FATAL:
+                trbe_handle_fatal(handle);
+                break;
+        }
+        return IRQ_HANDLED;
+}
+
+static const struct coresight_ops_sink arm_trbe_sink_ops = {
+        .enable         = arm_trbe_enable,
+        .disable        = arm_trbe_disable,
+        .alloc_buffer   = arm_trbe_alloc_buffer,
+        .free_buffer    = arm_trbe_free_buffer,
+        .update_buffer  = arm_trbe_update_buffer,
+};
+
+static const struct coresight_ops arm_trbe_cs_ops = {
+        .sink_ops = &arm_trbe_sink_ops,
+};
+
+static ssize_t align_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+        struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
+
+        return sprintf(buf, "%llx\n", cpudata->trbe_align);
+}
+static DEVICE_ATTR_RO(align);
+
+static ssize_t dbm_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+        struct trbe_cpudata *cpudata = dev_get_drvdata(dev);
+
+        return sprintf(buf, "%d\n", cpudata->trbe_dbm);
+}
+static DEVICE_ATTR_RO(dbm);
+
+static struct attribute *arm_trbe_attrs[] = {
+        &dev_attr_align.attr,
+        &dev_attr_dbm.attr,
+        NULL,
+};
+
+static const struct attribute_group arm_trbe_group = {
+        .attrs = arm_trbe_attrs,
+};
+
+static const struct attribute_group *arm_trbe_groups[] = {
+        &arm_trbe_group,
+        NULL,
+};
+
+static void arm_trbe_probe_coresight_cpu(void *info)
+{
+        struct trbe_drvdata *drvdata = info;
+        struct coresight_desc desc = { 0 };
+        int cpu = smp_processor_id();
+        struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+        struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
+        struct device *dev;
+
+        if (WARN_ON(!cpudata))
+                goto cpu_clear;
+
+        if (trbe_csdev)
+                return;
+
+        cpudata->cpu = smp_processor_id();
+        cpudata->drvdata = drvdata;
+        dev = &cpudata->drvdata->pdev->dev;
+
+        if (!is_trbe_available()) {
+                pr_err("TRBE is not implemented on cpu %d\n", cpudata->cpu);
+                goto cpu_clear;
+        }
+
+        if (!is_trbe_programmable()) {
+                pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu);
+                goto cpu_clear;
+        }
+        desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", DRVNAME, smp_processor_id());
+        if (IS_ERR(desc.name))
+                goto cpu_clear;
+
+        desc.type = CORESIGHT_DEV_TYPE_SINK;
+        desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
+        desc.ops = &arm_trbe_cs_ops;
+        desc.pdata = dev_get_platdata(dev);
+        desc.groups = arm_trbe_groups;
+        desc.dev = dev;
+        trbe_csdev = coresight_register(&desc);
+        if (IS_ERR(trbe_csdev))
+                goto cpu_clear;
+
+        dev_set_drvdata(&trbe_csdev->dev, cpudata);
+        cpudata->trbe_dbm = get_trbe_flag_update();
+        cpudata->trbe_align = 1ULL << get_trbe_address_align();
+        if (cpudata->trbe_align > SZ_2K) {
+                pr_err("Unsupported alignment on cpu %d\n", cpudata->cpu);
+                goto cpu_clear;
+        }
+        per_cpu(csdev_sink, cpu) = trbe_csdev;
+        trbe_reset_local();
+        enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
+        return;
+cpu_clear:
+        cpumask_clear_cpu(cpudata->cpu, &cpudata->drvdata->supported_cpus);
+}
+
+static void arm_trbe_remove_coresight_cpu(void *info)
+{
+        int cpu = smp_processor_id();
+        struct trbe_drvdata *drvdata = info;
+        struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+        struct coresight_device *trbe_csdev = per_cpu(csdev_sink, cpu);
+
+        if (trbe_csdev) {
+                coresight_unregister(trbe_csdev);
+                cpudata->drvdata = NULL;
+                per_cpu(csdev_sink, cpu) = NULL;
+        }
+        disable_percpu_irq(drvdata->irq);
+        trbe_reset_local();
+}
+
+static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata)
+{
+        drvdata->cpudata = alloc_percpu(typeof(*drvdata->cpudata));
+        if (IS_ERR(drvdata->cpudata))
+                return PTR_ERR(drvdata->cpudata);
+
+        arm_trbe_probe_coresight_cpu(drvdata);
+        smp_call_function_many(&drvdata->supported_cpus, arm_trbe_probe_coresight_cpu, drvdata, 1);
+        return 0;
+}
+
+static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata)
+{
+        arm_trbe_remove_coresight_cpu(drvdata);
+        smp_call_function_many(&drvdata->supported_cpus, arm_trbe_remove_coresight_cpu, drvdata, 1);
+        free_percpu(drvdata->cpudata);
+        return 0;
+}
+
+static int arm_trbe_cpu_startup(unsigned int cpu, struct hlist_node *node)
+{
+        struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
+
+        if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
+                if (!per_cpu(csdev_sink, cpu)) {
+                        arm_trbe_probe_coresight_cpu(drvdata);
+                } else {
+                        trbe_reset_local();
+                        enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
+                }
+        }
+        return 0;
+}
+
+static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node)
+{
+        struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
+
+        if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
+                disable_percpu_irq(drvdata->irq);
+                trbe_reset_local();
+        }
+        return 0;
+}
+
+static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata)
+{
+        enum cpuhp_state trbe_online;
+
+        trbe_online = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRVNAME,
+                                              arm_trbe_cpu_startup, arm_trbe_cpu_teardown);
+        if (trbe_online < 0)
+                return -EINVAL;
+
+        if (cpuhp_state_add_instance(trbe_online, &drvdata->hotplug_node))
+                return -EINVAL;
+
+        drvdata->trbe_online = trbe_online;
+        return 0;
+}
+
+static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
+{
+        cpuhp_remove_multi_state(drvdata->trbe_online);
+}
+
+static int arm_trbe_probe_irq(struct platform_device *pdev,
+                              struct trbe_drvdata *drvdata)
+{
+        drvdata->irq = platform_get_irq(pdev, 0);
+        if (!drvdata->irq) {
+                pr_err("IRQ not found for the platform device\n");
+                return -ENXIO;
+        }
+
+        if (!irq_is_percpu(drvdata->irq)) {
+                pr_err("IRQ is not a PPI\n");
+                return -EINVAL;
+        }
+
+        if (irq_get_percpu_devid_partition(drvdata->irq, &drvdata->supported_cpus))
+                return -EINVAL;
+
+        drvdata->handle = alloc_percpu(typeof(*drvdata->handle));
+        if (!drvdata->handle)
+                return -ENOMEM;
+
+        if (request_percpu_irq(drvdata->irq, arm_trbe_irq_handler, DRVNAME, drvdata->handle)) {
+                free_percpu(drvdata->handle);
+                return -EINVAL;
+        }
+        return 0;
+}
+
+static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata)
+{
+        free_percpu_irq(drvdata->irq, drvdata->handle);
+        free_percpu(drvdata->handle);
+}
+
+static int arm_trbe_device_probe(struct platform_device *pdev)
+{
+        struct coresight_platform_data *pdata;
+        struct trbe_drvdata *drvdata;
+        struct device *dev = &pdev->dev;
+        int ret;
+
+        drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
+        if (IS_ERR(drvdata))
+                return -ENOMEM;
+
+        pdata = coresight_get_platform_data(dev);
+        if (IS_ERR(pdata)) {
+                kfree(drvdata);
+                return -ENOMEM;
+        }
+
+        dev_set_drvdata(dev, drvdata);
+        dev->platform_data = pdata;
+        drvdata->pdev = pdev;
+        ret = arm_trbe_probe_irq(pdev, drvdata);
+        if (ret)
+                goto irq_failed;
+
+        ret = arm_trbe_probe_coresight(drvdata);
+        if (ret)
+                goto probe_failed;
+
+        ret = arm_trbe_probe_cpuhp(drvdata);
+        if (ret)
+                goto cpuhp_failed;
+
+        return 0;
+cpuhp_failed:
+        arm_trbe_remove_coresight(drvdata);
+probe_failed:
+        arm_trbe_remove_irq(drvdata);
+irq_failed:
+        kfree(pdata);
+        kfree(drvdata);
+        return ret;
+}
+
+static int arm_trbe_device_remove(struct platform_device *pdev)
+{
+        struct coresight_platform_data *pdata = dev_get_platdata(&pdev->dev);
+        struct trbe_drvdata *drvdata = platform_get_drvdata(pdev);
+
+        arm_trbe_remove_coresight(drvdata);
+        arm_trbe_remove_cpuhp(drvdata);
+        arm_trbe_remove_irq(drvdata);
+        kfree(pdata);
+        kfree(drvdata);
+        return 0;
+}
+
+static const struct of_device_id arm_trbe_of_match[] = {
+        { .compatible = "arm,trace-buffer-extension"},
+        {},
+};
+MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
+
+static struct platform_driver arm_trbe_driver = {
+        .driver = {
+                .name = DRVNAME,
+                .of_match_table = of_match_ptr(arm_trbe_of_match),
+                .suppress_bind_attrs = true,
+        },
+        .probe  = arm_trbe_device_probe,
+        .remove = arm_trbe_device_remove,
+};
+
+static int __init arm_trbe_init(void)
+{
+        int ret;
We should skip the driver init if the kernel is unmapped at EL0, as the TRBE can't safely write to the kernel virtual addressed buffer when the CPU is running at EL0. This is unlikely, but we should cover that case.
Is this sufficient, or does it need a pr_err() as well?
--- a/drivers/hwtracing/coresight/coresight-trbe.c
+++ b/drivers/hwtracing/coresight/coresight-trbe.c
@@ -946,6 +946,9 @@ static int __init arm_trbe_init(void)
 {
         int ret;
 
+        if (arm64_kernel_unmapped_at_el0())
+                return -EOPNOTSUPP;
+
         ret = platform_driver_register(&arm_trbe_driver);
         if (!ret)
                 return 0;

+        ret = platform_driver_register(&arm_trbe_driver);
+        if (!ret)
+                return 0;
+
+        pr_err("Error registering %s platform driver\n", DRVNAME);
+        return ret;
+}
+
+static void __exit arm_trbe_exit(void)
+{
+        platform_driver_unregister(&arm_trbe_driver);
+}
+module_init(arm_trbe_init);
+module_exit(arm_trbe_exit);
+
+MODULE_AUTHOR("Anshuman Khandual anshuman.khandual@arm.com");
+MODULE_DESCRIPTION("Arm Trace Buffer Extension (TRBE) driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/hwtracing/coresight/coresight-trbe.h b/drivers/hwtracing/coresight/coresight-trbe.h
new file mode 100644
index 0000000..d9f5079
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trbe.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * This contains all required hardware related helper functions for
+ * Trace Buffer Extension (TRBE) driver in the coresight framework.
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ * Author: Anshuman Khandual anshuman.khandual@arm.com
+ */
+#include <linux/coresight.h>
+#include <linux/device.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/smp.h>
+
+#include "coresight-etm-perf.h"
+
+DECLARE_PER_CPU(struct coresight_device *, csdev_sink);
+
+static inline bool is_trbe_available(void)
+{
+        u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
+        int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT);
This could be "unsigned int" to make it future proof.
Changed.
+        return trbe >= 0b0001;
+}
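A sketch of the changed helper with the field held in an unsigned type, assuming the existing cpuid_feature_extract_unsigned_field() usage stays as it is:

static inline bool is_trbe_available(void)
{
        u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
        unsigned int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0,
                                                                 ID_AA64DFR0_TRBE_SHIFT);

        /* An unsigned field keeps the comparison valid for any future, larger values */
        return trbe >= 0b0001;
}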
+static inline bool is_trbe_enabled(void)
+{
+        u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+        return trblimitr & TRBLIMITR_ENABLE;
+}
+
+#define TRBE_EC_OTHERS          0
+#define TRBE_EC_STAGE1_ABORT    36
+#define TRBE_EC_STAGE2_ABORT    37
+
+static inline int get_trbe_ec(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return (trbsr >> TRBSR_EC_SHIFT) & TRBSR_EC_MASK;
+}
+
+#define TRBE_BSC_NOT_STOPPED    0
+#define TRBE_BSC_FILLED         1
+#define TRBE_BSC_TRIGGERED      2
+
+static inline int get_trbe_bsc(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return (trbsr >> TRBSR_BSC_SHIFT) & TRBSR_BSC_MASK;
+}
+
+static inline void clr_trbe_irq(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        trbsr &= ~TRBSR_IRQ;
+        write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
+
+static inline bool is_trbe_irq(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return trbsr & TRBSR_IRQ;
+}
+
+static inline bool is_trbe_trg(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return trbsr & TRBSR_TRG;
+}
+
+static inline bool is_trbe_wrap(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return trbsr & TRBSR_WRAP;
+}
+
+static inline bool is_trbe_abort(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return trbsr & TRBSR_ABORT;
+}
+
+static inline bool is_trbe_running(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        return !(trbsr & TRBSR_STOP);
+}
+
+static inline void set_trbe_running(void)
+{
+        u64 trbsr = read_sysreg_s(SYS_TRBSR_EL1);
+
+        trbsr &= ~TRBSR_STOP;
+        write_sysreg_s(trbsr, SYS_TRBSR_EL1);
+}
This could be removed now.
Dropped.
+static inline void set_trbe_virtual_mode(void)
+{
+        u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+        trblimitr &= ~TRBLIMITR_NVM;
+        write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
Same here.
Dropped.
+#define TRBE_TRIG_MODE_STOP             0
+#define TRBE_TRIG_MODE_IRQ              1
+#define TRBE_TRIG_MODE_IGNORE           3
+
+#define TRBE_FILL_MODE_FILL             0
+#define TRBE_FILL_MODE_WRAP             1
+#define TRBE_FILL_MODE_CIRCULAR_BUFFER  3
+
+static inline void set_trbe_disabled(void)
+{
+        u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+        trblimitr &= ~TRBLIMITR_ENABLE;
+        write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
+
+static inline void set_trbe_enabled(void)
+{
+        u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+        trblimitr |= TRBLIMITR_ENABLE;
+        write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
Same as above.
Dropped.
+static inline bool get_trbe_flag_update(void)
+{
+        u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
+
+        return trbidr & TRBIDR_FLAG;
+}
+
+static inline bool is_trbe_programmable(void)
+{
+        u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
+
+        return !(trbidr & TRBIDR_PROG);
+}
+
+static inline int get_trbe_address_align(void)
+{
+        u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);
+
+        return (trbidr >> TRBIDR_ALIGN_SHIFT) & TRBIDR_ALIGN_MASK;
+}
Similar comment to the TRBSR read on each of these functions. They all are only called from a single function. It may make sense to read once and pass the value.
Changed is_trbe_programmable(), get_trbe_address_align() and get_trbe_flag_update() to accept a previously read TRBIDR register.
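Roughly, the reworked TRBIDR helpers and their single caller could look like the sketch below. The hook point shown in arm_trbe_probe_coresight_cpu() is only illustrative, not the final placement.

static inline bool get_trbe_flag_update(u64 trbidr)
{
        return trbidr & TRBIDR_FLAG;
}

static inline bool is_trbe_programmable(u64 trbidr)
{
        return !(trbidr & TRBIDR_PROG);
}

static inline int get_trbe_address_align(u64 trbidr)
{
        return (trbidr >> TRBIDR_ALIGN_SHIFT) & TRBIDR_ALIGN_MASK;
}

        /* In arm_trbe_probe_coresight_cpu(): read TRBIDR_EL1 just once */
        u64 trbidr = read_sysreg_s(SYS_TRBIDR_EL1);

        if (!is_trbe_programmable(trbidr)) {
                pr_err("TRBE is owned in higher exception level on cpu %d\n", cpudata->cpu);
                goto cpu_clear;
        }
        /* ... coresight_register() etc. unchanged ... */
        cpudata->trbe_dbm = get_trbe_flag_update(trbidr);
        cpudata->trbe_align = 1ULL << get_trbe_address_align(trbidr);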
+static inline unsigned long get_trbe_write_pointer(void)
+{
+        u64 trbptr = read_sysreg_s(SYS_TRBPTR_EL1);
+        unsigned long addr = (trbptr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK;
+
+        return addr;
+}
+
+static inline void set_trbe_write_pointer(unsigned long addr)
+{
+        WARN_ON(is_trbe_enabled());
+        addr = (addr >> TRBPTR_PTR_SHIFT) & TRBPTR_PTR_MASK;
+        write_sysreg_s(addr, SYS_TRBPTR_EL1);
+}
+
+static inline unsigned long get_trbe_limit_pointer(void)
+{
+        u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+        unsigned long limit = (trblimitr >> TRBLIMITR_LIMIT_SHIFT) & TRBLIMITR_LIMIT_MASK;
+        unsigned long addr = limit << TRBLIMITR_LIMIT_SHIFT;
+
+        WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+        return addr;
+}
+
+static inline void set_trbe_limit_pointer(unsigned long addr)
+{
+        u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);
+
+        WARN_ON(is_trbe_enabled());
+        WARN_ON(!IS_ALIGNED(addr, (1UL << TRBLIMITR_LIMIT_SHIFT)));
+        WARN_ON(!IS_ALIGNED(addr, PAGE_SIZE));
+        trblimitr &= ~(TRBLIMITR_LIMIT_MASK << TRBLIMITR_LIMIT_SHIFT);
+        trblimitr |= (addr & PAGE_MASK);
+        write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+}
Remove the unused function.
Removed.
On 1/15/21 5:29 AM, Anshuman Khandual wrote:
On 1/13/21 8:58 PM, Suzuki K Poulose wrote:
Hi Anshuman,
The driver looks overall good to me. Please find some minor comments below
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes, including CPU virtual address, and buffer modes, including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But access to the trace buffer could be prohibited by a higher exception level (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
...
+/*
+ * TRBE Buffer Management
+ *
+ * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
+ * it starts writing trace data from the write pointer onward till the limit pointer.
+ * When the write pointer reaches the address just before the limit pointer, it gets
+ * wrapped around again to the base pointer. This is called a TRBE wrap event, which
+ * generates a maintenance interrupt when operated in WRAP or STOP mode.
According to the TRM, it is FILL mode, instead of STOP. So please change the above to:
"operated in WRAP or FILL mode".
Updated.
+ * The write
+ * pointer again starts writing trace data from the base pointer until just before
+ * the limit pointer before getting wrapped again with an IRQ and this process just
+ * goes on as long as the TRBE is enabled.
This could be dropped as it applies to WRAP/CIRCULAR buffer mode, which we don't use.
Probably this could be changed a bit to match the FILL mode, because it is essential to describe the continuous nature of the buffer operation, even in the FILL mode.
+ * After TRBE
+ * IRQ gets handled and enabled again, write pointer again starts writing trace data
+ * from the base pointer until just before the limit pointer before getting wrapped
+ * again with an IRQ and this process just goes on as long as the TRBE is enabled.
The above doesn't parse well and kind of repeats the operation of TRBE which is already explained above. How about :
+ * When the write pointer reaches the address just before the limit pointer, it gets
+ * wrapped around again to the base pointer. This is called a TRBE wrap event, which
+ * generates a maintenance interrupt when operated in WRAP or STOP mode.
This driver uses FILL mode, where the TRBE stops the trace collection at wrap event. The IRQ handler updates the AUX buffer and re-enables the TRBE with updated WRITE and LIMIT pointers.
+static void *arm_trbe_alloc_buffer(struct coresight_device *csdev,
+                                   struct perf_event *event, void **pages,
+                                   int nr_pages, bool snapshot)
+{
+        struct trbe_buf *buf;
+        struct page **pglist;
+        int i;
+ if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
This restriction on snapshot could be removed now, since we use the full buffer.
Dropped only the second condition here, i.e. (snapshot && (nr_pages & 1)). Just wondering if the aux buffer could work with a single page so that the first condition can also be dropped.
I think it is good to keep the restriction of 2 pages, as the WRITE_PTR and the LIMIT_PTR must be page aligned. With a single page, you can't do much with writing into a partially filled buffer. This may be added as a comment to explain the restriction.
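Something like the sketch below could capture that at the top of arm_trbe_alloc_buffer(). The comment wording is only a suggestion.

        /*
         * TRBBASER, TRBPTR and TRBLIMITR must all be page aligned, so a
         * single page AUX buffer leaves no room for the write and limit
         * pointers to resume into a partially filled buffer. Hence keep
         * the minimum at two pages.
         */
        if (nr_pages < 2)
                return NULL;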
+static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle)
+{
+        int ec = get_trbe_ec();
+        int bsc = get_trbe_bsc();
+
+        WARN_ON(is_trbe_running());
+        if (is_trbe_trg() || is_trbe_abort())
We seem to be reading the TRBSR every single time in these helpers. Could we optimise them by passing the register value in?
The same goes for get_trbe_ec() and get_trbe_bsc() as well. Probably all TRBSR field probing helpers should be modified to accept a TRBSR register value instead.
i.e.

        u64 trbsr = get_trbe_status();

        WARN_ON(is_trbe_running(trbsr));
        if (is_trbe_trg(trbsr) || is_trbe_abort(trbsr))
For is_trbe_wrap() too
Yes.
We should skip the driver init, if the kernel is unmapped at EL0, as the TRBE can't safely write to the kernel virtual addressed buffer when the CPU is running at EL0. This is unlikely, but we should cover that case.
This should be sufficient or it needs a pr_err() as well ?
Please add a pr_err() message to indicate why this failed. Otherwise the user could be left with no clue.
Cheers Suzuki
On 1/15/21 6:13 PM, Suzuki K Poulose wrote:
On 1/15/21 5:29 AM, Anshuman Khandual wrote:
On 1/13/21 8:58 PM, Suzuki K Poulose wrote:
Hi Anshuman,
The driver looks overall good to me. Please find some minor comments below
Sure.
On 1/13/21 4:18 AM, Anshuman Khandual wrote:
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes, including CPU virtual address, and buffer modes, including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But access to the trace buffer could be prohibited by a higher exception level (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver.
Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
...
+/*
+ * TRBE Buffer Management
+ *
+ * The TRBE buffer spans from the base pointer till the limit pointer. When enabled,
+ * it starts writing trace data from the write pointer onward till the limit pointer.
+ * When the write pointer reaches the address just before the limit pointer, it gets
+ * wrapped around again to the base pointer. This is called a TRBE wrap event, which
+ * generates a maintenance interrupt when operated in WRAP or STOP mode.
According to the TRM, it is FILL mode, instead of STOP. So please change the above to:
"operated in WRAP or FILL mode".
Changed.
Updated.
+ * The write
+ * pointer again starts writing trace data from the base pointer until just before
+ * the limit pointer before getting wrapped again with an IRQ and this process just
+ * goes on as long as the TRBE is enabled.
This could be dropped as it applies to WRAP/CIRCULAR buffer mode, which we don't use.
Probably this could be changed a bit to match the FILL mode, because it is essential to describe the continuous nature of the buffer operation, even in the FILL mode.
 * After TRBE
 * IRQ gets handled and enabled again, write pointer again starts writing trace data
 * from the base pointer until just before the limit pointer before getting wrapped
 * again with an IRQ and this process just goes on as long as the TRBE is enabled.
The above doesn't parse well and kind of repeats the operation of TRBE which is already explained above. How about :
+ * When the write pointer reaches the address just before the limit pointer, it gets
+ * wrapped around again to the base pointer. This is called a TRBE wrap event, which
+ * generates a maintenance interrupt when operated in WRAP or STOP mode.
This driver uses FILL mode, where the TRBE stops the trace collection at wrap event. The IRQ handler updates the AUX buffer and re-enables the TRBE with updated WRITE and LIMIT pointers.
Updated.
+static void *arm_trbe_alloc_buffer(struct coresight_device *csdev,
+                                   struct perf_event *event, void **pages,
+                                   int nr_pages, bool snapshot)
+{
+        struct trbe_buf *buf;
+        struct page **pglist;
+        int i;
+ if ((nr_pages < 2) || (snapshot && (nr_pages & 1)))
This restriction on snapshot could be removed now, since we use the full buffer.
Dropped only the second condition here, i.e. (snapshot && (nr_pages & 1)). Just wondering if the aux buffer could work with a single page so that the first condition can also be dropped.
I think it is good to keep the restriction of 2 pages, as the WRITE_PTR and the LIMIT_PTR must be page aligned. With a single page, you can't do much with writing into a partially filled buffer. This may be added as a comment to explain the restriction.
Added the above comment.
+static enum trbe_fault_action trbe_get_fault_act(struct perf_output_handle *handle)
+{
+        int ec = get_trbe_ec();
+        int bsc = get_trbe_bsc();
+
+        WARN_ON(is_trbe_running());
+        if (is_trbe_trg() || is_trbe_abort())
We seem to be reading the TRBSR every single time in these helpers. Could we optimise them by passing the register value in?
The same goes for get_trbe_ec() and get_trbe_bsc() as well. Probably all TRBSR field probing helpers should be modified to accept a TRBSR register value instead.
i.e.

        u64 trbsr = get_trbe_status();

        WARN_ON(is_trbe_running(trbsr));
        if (is_trbe_trg(trbsr) || is_trbe_abort(trbsr))
For is_trbe_wrap() too
Yes.
We should skip the driver init, if the kernel is unmapped at EL0, as the TRBE can't safely write to the kernel virtual addressed buffer when the CPU is running at EL0. This is unlikely, but we should cover that case.
This should be sufficient or it needs a pr_err() as well ?
Please add a pr_err() message to indicate why this failed. Otherwise the user could be left with no clue.
Sure, will add the following before exiting the TRBE init.
pr_err("TRBE wouldn't work if kernel gets unmapped at EL0\n")
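Folding that message into the earlier check, arm_trbe_init() might end up looking like the sketch below (final wording and placement to be confirmed in the next respin):

static int __init arm_trbe_init(void)
{
        int ret;

        /* TRBE writes via kernel VAs, which are not mapped at EL0 with KPTI */
        if (arm64_kernel_unmapped_at_el0()) {
                pr_err("TRBE wouldn't work if kernel gets unmapped at EL0\n");
                return -EOPNOTSUPP;
        }

        ret = platform_driver_register(&arm_trbe_driver);
        if (!ret)
                return 0;

        pr_err("Error registering %s platform driver\n", DRVNAME);
        return ret;
}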
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Trace Buffer Extension (TRBE).
Cc: Anshuman Khandual anshuman.khandual@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Rob Herring robh@kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com --- Documentation/devicetree/bindings/arm/trbe.yaml | 46 +++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/trbe.yaml
diff --git a/Documentation/devicetree/bindings/arm/trbe.yaml b/Documentation/devicetree/bindings/arm/trbe.yaml
new file mode 100644
index 0000000..2258595
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/trbe.yaml
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
+# Copyright 2021, Arm Ltd
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/arm/trbe.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: ARM Trace Buffer Extensions
+
+maintainers:
+  - Anshuman Khandual anshuman.khandual@arm.com
+
+description: |
+  Description of TRBE hw
+
+properties:
+  $nodename:
+    pattern: "trbe"
+  compatible:
+    items:
+      - const: arm,trace-buffer-extension
+
+  interrupts:
+    description: |
+      Exactly 1 PPI must be listed. For heterogeneous systems where
+      TRBE is only supported on a subset of the CPUs, please consult
+      the arm,gic-v3 binding for details on describing a PPI partition.
+    maxItems: 1
+
+required:
+  - compatible
+  - interrupts
+
+additionalProperties: false
+
+
+examples:
+
+ - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    trbe {
+        compatible = "arm,trace-buffer-extension";
+        interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
+    };
+...
On Wed, 13 Jan 2021 09:48:18 +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Trace Buffer Extension (TRBE).
Cc: Anshuman Khandual anshuman.khandual@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Rob Herring robh@kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/trbe.yaml | 46 +++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/trbe.yaml
My bot found errors running 'make dt_binding_check' on your patch:
yamllint warnings/errors: ./Documentation/devicetree/bindings/arm/trbe.yaml:39:2: [warning] wrong indentation: expected 2 but found 1 (indentation)
dtschema/dtc warnings/errors:
See https://patchwork.ozlabs.org/patch/1425605
This check can fail if there are any dependencies. The base for a patch series is generally the most recent rc1.
If you already ran 'make dt_binding_check' and didn't see the above error(s), then make sure 'yamllint' is installed and dt-schema is up to date:
pip3 install dtschema --upgrade
Please check and re-submit.
Hi Rob
On 1/13/21 3:45 PM, Rob Herring wrote:
On Wed, 13 Jan 2021 09:48:18 +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Trace Buffer Extension (TRBE).
Cc: Anshuman Khandual anshuman.khandual@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Rob Herring robh@kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/trbe.yaml | 46 +++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/trbe.yaml
My bot found errors running 'make dt_binding_check' on your patch:
yamllint warnings/errors: ./Documentation/devicetree/bindings/arm/trbe.yaml:39:2: [warning] wrong indentation: expected 2 but found 1 (indentation)
dtschema/dtc warnings/errors:
Thanks for that. I guess Anshuman can fix this up, with the following patch:
diff --git a/Documentation/devicetree/bindings/arm/trbe.yaml b/Documentation/devicetree/bindings/arm/trbe.yaml
index 2258595c40dd..24951e02fa58 100644
--- a/Documentation/devicetree/bindings/arm/trbe.yaml
+++ b/Documentation/devicetree/bindings/arm/trbe.yaml
@@ -36,7 +36,7 @@ additionalProperties: false
 
 examples:
 
- - |
+  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>
 
    trbe {
See https://patchwork.ozlabs.org/patch/1425605
This check can fail if there are any dependencies. The base for a patch series is generally the most recent rc1.
If you already ran 'make dt_binding_check' and didn't see the above error(s), then make sure 'yamllint' is installed and dt-schema is up to date:
I did see the warning, but I thought I fixed it. Sorry about that.
Cheers Suzuki
On Wed, Jan 13, 2021 at 09:48:18AM +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Trace Buffer Extension (TRBE).
Cc: Anshuman Khandual anshuman.khandual@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Rob Herring robh@kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/trbe.yaml | 46 +++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/trbe.yaml
diff --git a/Documentation/devicetree/bindings/arm/trbe.yaml b/Documentation/devicetree/bindings/arm/trbe.yaml
new file mode 100644
index 0000000..2258595
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/trbe.yaml
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
+# Copyright 2021, Arm Ltd
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/arm/trbe.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+title: ARM Trace Buffer Extensions
+maintainers:
+  - Anshuman Khandual anshuman.khandual@arm.com
+description: |
+  Description of TRBE hw
Huh?
+properties:
+  $nodename:
+    pattern: "trbe"
const: trbe
+  compatible:
+    items:
+      - const: arm,trace-buffer-extension
Any versioning to this? Or is that discoverable?
+  interrupts:
+    description: |
+      Exactly 1 PPI must be listed. For heterogeneous systems where
+      TRBE is only supported on a subset of the CPUs, please consult
+      the arm,gic-v3 binding for details on describing a PPI partition.
+    maxItems: 1
+required:
+  - compatible
+  - interrupts
+additionalProperties: false
Extra blank line.
+examples:
+ - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    trbe {
+        compatible = "arm,trace-buffer-extension";
+        interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
+    };
+...
2.7.4
On 1/14/21 2:07 PM, Rob Herring wrote:
On Wed, Jan 13, 2021 at 09:48:18AM +0530, Anshuman Khandual wrote:
From: Suzuki K Poulose suzuki.poulose@arm.com
Document the device tree bindings for Trace Buffer Extension (TRBE).
Cc: Anshuman Khandual anshuman.khandual@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Rob Herring robh@kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Anshuman Khandual anshuman.khandual@arm.com
Documentation/devicetree/bindings/arm/trbe.yaml | 46 +++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 Documentation/devicetree/bindings/arm/trbe.yaml
diff --git a/Documentation/devicetree/bindings/arm/trbe.yaml b/Documentation/devicetree/bindings/arm/trbe.yaml new file mode 100644 index 0000000..2258595 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/trbe.yaml @@ -0,0 +1,46 @@ +# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause +# Copyright 2021, Arm Ltd +%YAML 1.2 +--- +$id: "http://devicetree.org/schemas/arm/trbe.yaml#" +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+title: ARM Trace Buffer Extensions
+maintainers:
- Anshuman Khandual anshuman.khandual@arm.com
+description: |
- Description of TRBE hw
Huh?
Doh! That was due to a miscommunication between us. This should be:
description: |
  Arm Trace Buffer Extension (TRBE) is a per CPU component for storing
  trace generated on the CPU to memory. It is accessed via CPU system
  registers. The software can verify if it is permitted to use the
  component by checking the TRBIDR register.
+properties:
- $nodename:
- pattern: "trbe"
const: trbe
- compatible:
- items:
- const: arm,trace-buffer-extension
Any versioning to this? Or is that discoverable?
It must be discoverable via ID_AA64DFR0_EL1.TraceBuffer. The IP is entirely accessed by the CPU system registers. So, any further changes can be interpreted from the system registers (including if the access is blocked by a higher exception level).
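For illustration only, that runtime discovery boils down to something like the sketch below. trbe_usable_on_this_cpu() is a hypothetical helper name that just combines the driver's existing is_trbe_available()/is_trbe_programmable() checks.

static bool trbe_usable_on_this_cpu(void)
{
        u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);

        /* ID_AA64DFR0_EL1.TraceBuffer != 0 means some version of TRBE is implemented */
        if (!cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT))
                return false;

        /* TRBIDR_EL1.P set means programming is prohibited by a higher exception level */
        return !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG);
}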
- interrupts:
- description: |
Exactly 1 PPI must be listed. For heterogeneous systems where
TRBE is only supported on a subset of the CPUs, please consult
the arm,gic-v3 binding for details on describing a PPI partition.
- maxItems: 1
+required:
- compatible
- interrupts
+additionalProperties: false
Extra blank line.
Removed.
Cheers
Suzuki