From: Mark Brown <broonie(a)linaro.org>
The intn and connect GPIO properties are swapped in the code, which will
cause failures at runtime if these GPIOs are connected; fix the code.
There are currently no in-tree users of this device to check or update.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
drivers/usb/misc/usb3503.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/misc/usb3503.c b/drivers/usb/misc/usb3503.c
index cbb6b78..4e3a2d2 100644
--- a/drivers/usb/misc/usb3503.c
+++ b/drivers/usb/misc/usb3503.c
@@ -215,10 +215,10 @@ static int usb3503_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
}
}
- hub->gpio_intn = of_get_named_gpio(np, "connect-gpios", 0);
+ hub->gpio_intn = of_get_named_gpio(np, "intn-gpios", 0);
if (hub->gpio_intn == -EPROBE_DEFER)
return -EPROBE_DEFER;
- hub->gpio_connect = of_get_named_gpio(np, "intn-gpios", 0);
+ hub->gpio_connect = of_get_named_gpio(np, "connect-gpios", 0);
if (hub->gpio_connect == -EPROBE_DEFER)
return -EPROBE_DEFER;
hub->gpio_reset = of_get_named_gpio(np, "reset-gpios", 0);
--
1.8.4.rc1
Hello all,
The purpose of this email is to gather opinions and advice on a buffer synchronization mechanism, and on coupling a cache operation feature with that mechanism. First of all, I am not a native English speaker, so I may not convey my intention clearly. I am also less of a Linux specialist than many of you, so there may be points I have missed.
I had previously posted the buffer synchronization mechanism, called the dmabuf sync framework, here:
http://lists.infradead.org/pipermail/linux-arm-kernel/2013-June/177045.html
I am sending this email before posting the next version, which will have a more stable patch set and additional features. The purpose of this framework is to provide not only buffer access control between the CPU and DMA but also easy-to-use interfaces for device drivers and user applications. The framework can be used with any DMA device that uses system memory as its DMA buffer, which covers most ARM-based SoCs.
There are two cases in which we use this buffer synchronization framework. The first is to enhance GPU rendering performance on the Tizen platform when a 3D application runs in compositing mode, i.e. it draws into an off-screen buffer.
The second is to couple buffer access control with cache operations between the CPU and DMA; the cache operations are performed by the dmabuf sync framework on the kernel side.
Why do we need buffer access control between CPU and DMA?
---------------------------------------------------------
The diagram below shows the basic 3D software layers:
3D Application
-----------------------------------------
Standard OpenGL ES and EGL Interfaces ------- [A]
-----------------------------------------
MALI OpenGL ES and EGL Native modules --------- [B]
-----------------------------------------
Exynos DRM Driver | GPU Driver ---------- [C]
The 3D application requests 3D rendering through [A]. [B] then fills a 3D command buffer with the requests from [A]. The 3D application then calls glFlush so that the command buffer can be submitted to [C] and rendered by the GPU hardware. Internally, glFlush (and also glFinish) calls into the native module [B]. glFinish blocks the caller's task until all GL execution is complete; glFlush, on the other hand, forces execution of the GL commands but does not block the caller's task until they complete.
In compositing mode, using glFinish gives considerably lower 3D rendering performance than using glFlush, because glFinish blocks the CPU until execution of the 3D commands has completed. However, glFlush has the problem that the buffer shared with the GPU can be corrupted if the CPU accesses it immediately after glFlush, because the CPU has no way to know when the GPU has finished with the buffer. (In practice the Tizen platform only calls eglSwapBuffers rather than glFlush or glFinish directly, and whether a flush or a finish is performed depends on whether compositing mode is in use.) In such a case we need buffer access control between the CPU and DMA to get better performance.
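To make the problem concrete, here is a minimal user-land sketch. The dma-buf fd, the mmap of it, and in particular the use of a plain fcntl() file lock as the framework's user-land interface are assumptions made purely for illustration; the real interface is whatever the dmabuf sync patches define.

/*
 * User-land sketch only: the fcntl()-based locking below is an assumed
 * interface for the dmabuf sync framework, shown for illustration.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void cpu_write_after_glflush(int dmabuf_fd, size_t size)
{
        struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        void *ptr;

        /*
         * Without some form of lock here, the CPU write below can race
         * with the GPU, because glFlush() returned before rendering
         * finished.
         */
        fcntl(dmabuf_fd, F_SETLKW, &lk);        /* assumed user-land interface */

        ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   dmabuf_fd, 0);
        if (ptr != MAP_FAILED) {
                memset(ptr, 0, size);           /* CPU access to the shared buffer */
                munmap(ptr, size);
        }

        lk.l_type = F_UNLCK;
        fcntl(dmabuf_fd, F_SETLK, &lk);         /* release the buffer */
}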
About cache operation
---------------------
The dmabuf sync framework can also include a cache operation feature. The call flow below shows how a cache operation based on the dmabuf sync framework is performed:
device driver in kernel or fcntl in user land
    -> dmabuf_sync_lock or dmabuf_sync_single_lock
       (called before and after buffer access)
       -> dma_buf_begin_cpu_access or dma_buf_end_cpu_access
          -> begin_cpu_access or end_cpu_access of the exporter
             -> dma_sync_sg_for_device or dma_sync_sg_for_cpu
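As a rough sketch of the last step of this chain, an exporter's begin_cpu_access/end_cpu_access callbacks might map onto the streaming DMA API roughly as follows. The structure name, the helper names and the way the exporter stores its scatterlist are assumptions for illustration only, not the actual exporter code.

/* Illustrative only: the structure and helpers below are hypothetical. */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

struct my_exporter_buf {                /* hypothetical exporter private data */
        struct device *dev;
        struct sg_table *sgt;
        enum dma_data_direction dir;
};

static void my_exporter_begin_cpu_access(struct my_exporter_buf *buf)
{
        /* Make DMA writes visible to the CPU before CPU access starts. */
        dma_sync_sg_for_cpu(buf->dev, buf->sgt->sgl, buf->sgt->nents, buf->dir);
}

static void my_exporter_end_cpu_access(struct my_exporter_buf *buf)
{
        /* Hand the buffer back to the device once CPU access is done. */
        dma_sync_sg_for_device(buf->dev, buf->sgt->sgl, buf->sgt->nents, buf->dir);
}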
With the dmabuf sync framework, the kernel knows when CPU and DMA access to a shared buffer has completed, so the cache operations can be done in the kernel; in that way buffer access control and cache operations can be coupled. This also lets us avoid user land overusing cache operations.
I guess most Linux-based platforms use cacheable mappings for better performance: when the CPU frequently accesses a buffer shared with DMA, a cacheable mapping is faster than a non-cacheable one. However, this can lead to user land overusing cache operations, because only user land knows when CPU or DMA access to the shared buffer has completed, so user land may request cache operations whenever it likes, even when they are unnecessary. That is how user land could overuse cache operations.
To Android, Chrome OS, and other folks,
Are there other cases where buffer access control between the CPU and DMA is needed? I know the Android sync driver and KDS are already being used by Android, Chrome OS, and so on.
How does your platform handle cache operations? And what do you think about coupling buffer access control and cache operations between the CPU and DMA?
Lastly, I think we may need a generic Linux buffer synchronization mechanism that uses only standard Linux interfaces (dma-buf), including the user land interfaces (the fcntl and select system calls), and the dmabuf sync framework could meet that need.
Thanks,
Inki Dae
From: Mark Brown <broonie(a)linaro.org>
The GPIO manipulation done by this driver is never in atomic context so
we can use gpio_set_value_cansleep() and support GPIOs that can't be set
from atomic context.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
sound/soc/codecs/adau1701.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/sound/soc/codecs/adau1701.c b/sound/soc/codecs/adau1701.c
index 2c10252..ebff112 100644
--- a/sound/soc/codecs/adau1701.c
+++ b/sound/soc/codecs/adau1701.c
@@ -247,21 +247,21 @@ static int adau1701_reset(struct snd_soc_codec *codec, unsigned int clkdiv)
gpio_is_valid(adau1701->gpio_pll_mode[1])) {
switch (clkdiv) {
case 64:
- gpio_set_value(adau1701->gpio_pll_mode[0], 0);
- gpio_set_value(adau1701->gpio_pll_mode[1], 0);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[0], 0);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[1], 0);
break;
case 256:
- gpio_set_value(adau1701->gpio_pll_mode[0], 0);
- gpio_set_value(adau1701->gpio_pll_mode[1], 1);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[0], 0);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[1], 1);
break;
case 384:
- gpio_set_value(adau1701->gpio_pll_mode[0], 1);
- gpio_set_value(adau1701->gpio_pll_mode[1], 0);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[0], 1);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[1], 0);
break;
case 0: /* fallback */
case 512:
- gpio_set_value(adau1701->gpio_pll_mode[0], 1);
- gpio_set_value(adau1701->gpio_pll_mode[1], 1);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[0], 1);
+ gpio_set_value_cansleep(adau1701->gpio_pll_mode[1], 1);
break;
}
}
@@ -269,10 +269,10 @@ static int adau1701_reset(struct snd_soc_codec *codec, unsigned int clkdiv)
adau1701->pll_clkdiv = clkdiv;
if (gpio_is_valid(adau1701->gpio_nreset)) {
- gpio_set_value(adau1701->gpio_nreset, 0);
+ gpio_set_value_cansleep(adau1701->gpio_nreset, 0);
/* minimum reset time is 20ns */
udelay(1);
- gpio_set_value(adau1701->gpio_nreset, 1);
+ gpio_set_value_cansleep(adau1701->gpio_nreset, 1);
/* power-up time may be as long as 85ms */
mdelay(85);
}
--
1.8.4.rc1
From: Mark Brown <broonie(a)linaro.org>
There is no need for the power down work to be done on a per-CPU workqueue,
especially considering the fairly long delay before powerdown.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
sound/soc/soc-compress.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
index d220150..53c9ecd 100644
--- a/sound/soc/soc-compress.c
+++ b/sound/soc/soc-compress.c
@@ -149,8 +149,9 @@ static int soc_compr_free(struct snd_compr_stream *cstream)
SND_SOC_DAPM_STREAM_STOP);
} else {
rtd->pop_wait = 1;
- schedule_delayed_work(&rtd->delayed_work,
- msecs_to_jiffies(rtd->pmdown_time));
+ queue_delayed_work(system_power_efficient_wq,
+ &rtd->delayed_work,
+ msecs_to_jiffies(rtd->pmdown_time));
}
} else {
/* capture streams can be powered down now */
--
1.8.4.rc1