Hi Mike,
On Fri, May 24, 2019 at 12:45:47PM +0100, Mike Leach wrote:
[...]
BTW, please consider using a spinlock to protect this flow; e.g. if sysfs nodes are used to enable/disable the CTI device concurrently, it might introduce a mismatch between the counter and the hardware enable state.
A spinlock is taken in the cti_enable_hw() call, which does nothing if the hw is already enabled. While testing I was getting warnings from the kernel when I manipulated the atomic inside the spinlock, but I need to revisit this flow and re-test it.
Just a reminder: after acquiring a spinlock, preemption is disabled, so we cannot use a mutex/semaphore, etc. But we can still use the atomic APIs, since a mutex/semaphore might sleep but the atomic APIs will not.
 	return rc;
+}
+
+int cti_disable(struct coresight_device *csdev, void *__unused)
+{
+	int rc = 0;
+	struct cti_drvdata *drvdata = csdev_to_cti_drvdata(csdev);
+
+	if (!atomic_dec_return(&drvdata->config.enable_req_count)) {
+		rc = cti_disable_hw(drvdata);
+		if (rc)
+			atomic_inc(&drvdata->config.enable_req_count);
+	}
+	return rc;
Same here: should we return -EBUSY if the counter is not zero?
CTI is reference counted because it can be associated with more than one coresight device; e.g. on Juno r1/r2, CTI0 takes triggers from the ETR, TPIU, STM and ETF0. As each device in the path is enabled the count is incremented, and it is decremented as each device is disabled. The code is therefore designed to enable the CTI when the first device requests it and disable it after the last device is disabled. So a non-zero count here is not an error condition, and we don't want to propagate an error code.
Thanks for the info. This makes sense to me.
Thanks,
Leo Yan