Hi,
This patch set allocates the restricted DMA-bufs via the TEE subsystem.
This is a big update compared to the previous patch set [1].
The TEE subsystem handles the DMA-buf allocations since it is the TEE
(OP-TEE, AMD-TEE, TS-TEE, or perhaps a future QTEE) which sets up the
restrictions for the memory used for the DMA-bufs.
I've added a new IOCTL, TEE_IOC_RSTMEM_ALLOC, to allocate the restricted
DMA-bufs. This IOCTL reaches the backend TEE driver, allowing it to choose
how to allocate the restricted physical memory.
In addition to size and flags parameters, TEE_IOC_RSTMEM_ALLOC also takes
a use-case parameter. The backend TEE driver uses it to decide on the
allocation policy and which devices should be able to access the memory.
Three use-cases (Secure Video Playback, Trusted UI, and Secure Video
Recording) have been identified so far to serve as examples of what can be
expected. More use-cases can be added to the userspace ABI, but it's up to
the backend TEE drivers to provide the implementation.
Each use-case has its own restricted memory pool since different use-cases
require isolation from different parts of the system. A restricted memory
pool can be based on a static carveout instantiated while probing the TEE
backend driver, or dynamically allocated from CMA and made restricted as
needed by the TEE.
This can be tested on QEMU with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
-b prototype/sdp-v3
repo sync -j8
cd build
make toolchains -j$(nproc)
make SPMC_AT_EL=1 all -j$(nproc)
make SPMC_AT_EL=1 run-only
# login and at the prompt:
xtest --sdp-basic
The SPMC_AT_EL=1 parameter configures the build with FF-A and an SPMC at
S-EL1 inside OP-TEE. The parameter can be changed to SPMC_AT_EL=n to test
without FF-A, using the original SMC ABI instead. Please remember to do
$ rm -rf ../trusted-firmware-a/build/qemu
for TF-A to be rebuilt properly using the new configuration.
https://optee.readthedocs.io/en/latest/building/prerequisites.html
lists the dependencies needed to build the above.
The tests are pretty basic, mostly checking that a Trusted Application in
the secure world can access and manipulate the memory. There are also some
negative tests for out-of-bounds buffers, etc.
Thanks,
Jens
[1] https://lore.kernel.org/lkml/20241015101716.740829-1-jens.wiklander@linaro.…
Changes since the V2 RFC:
* Based on v6.12
* Replaced the flags for SVP and Trusted UI memory with a u32 field with a
  unique ID for each use-case
* Added dynamic allocation of restricted memory pools
* Added OP-TEE ABI both with and without FF-A for dynamic restricted memory
* Added support for FF-A with FFA_LEND
Changes since the V1 RFC:
* Based on v6.11
* Complete rewrite, replacing the restricted heap with TEE_IOC_RSTMEM_ALLOC
Changes since Olivier's post [2]:
* Based on Yong Wu's post [1] where much of dma-buf handling is done in
the generic restricted heap
* Simplifications and cleanup
* New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap
support"
* Replaced the word "secure" with "restricted" where applicable
Jens Wiklander (4):
tee: add restricted memory allocation
optee: account for direction while converting parameters
optee: sync secure world ABI headers
optee: support restricted memory allocation
drivers/tee/Makefile | 1 +
drivers/tee/optee/Makefile | 1 +
drivers/tee/optee/call.c | 10 +-
drivers/tee/optee/core.c | 1 +
drivers/tee/optee/ffa_abi.c | 178 +++++++++++++-
drivers/tee/optee/optee_ffa.h | 27 ++-
drivers/tee/optee/optee_msg.h | 65 ++++-
drivers/tee/optee/optee_private.h | 75 ++++--
drivers/tee/optee/optee_smc.h | 71 +++++-
drivers/tee/optee/rpc.c | 31 ++-
drivers/tee/optee/rstmem.c | 380 ++++++++++++++++++++++++++++++
drivers/tee/optee/smc_abi.c | 214 +++++++++++++++--
drivers/tee/tee_core.c | 37 ++-
drivers/tee/tee_private.h | 2 +
drivers/tee/tee_rstmem.c | 201 ++++++++++++++++
drivers/tee/tee_shm.c | 2 +
drivers/tee/tee_shm_pool.c | 69 +++++-
include/linux/tee_core.h | 15 ++
include/linux/tee_drv.h | 4 +-
include/uapi/linux/tee.h | 37 ++-
20 files changed, 1344 insertions(+), 77 deletions(-)
create mode 100644 drivers/tee/optee/rstmem.c
create mode 100644 drivers/tee/tee_rstmem.c
--
2.43.0
On Thu, Nov 21, 2024 at 06:31:34PM +0530, Jyothi Kumar Seerapu wrote:
> I2C functionality has dependencies on the GPI driver.
> Ensure that the GPI driver is enabled when using the I2C
> driver functionality.
> Therefore, update the I2C GENI driver to depend on the GPI driver.
>
> Signed-off-by: Jyothi Kumar Seerapu <quic_jseerapu(a)quicinc.com>
> ---
> v2 -> v3:
> - Moved this change to patch3.
> - Updated commit description.
>
> v1 -> v2:
> - This patch is added in v2 to address the kernel test robot
> reported compilation error.
> ERROR: modpost: "gpi_multi_desc_process" [drivers/i2c/busses/i2c-qcom-geni.ko] undefined!
>
> drivers/i2c/busses/Kconfig | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
> index 0aa948014008..87634a682855 100644
> --- a/drivers/i2c/busses/Kconfig
> +++ b/drivers/i2c/busses/Kconfig
> @@ -1049,6 +1049,7 @@ config I2C_QCOM_GENI
> tristate "Qualcomm Technologies Inc.'s GENI based I2C controller"
> depends on ARCH_QCOM || COMPILE_TEST
> depends on QCOM_GENI_SE
> + depends on QCOM_GPI_DMA
So... without this change the previous patch is broken, which is a
no-go. And anyway, adding a dependency on a particular DMA driver is a
bad idea. Please use the DMA API instead.
> help
> This driver supports GENI serial engine based I2C controller in
> master mode on the Qualcomm Technologies Inc.'s SoCs. If you say
> --
> 2.17.1
>
--
With best wishes
Dmitry
On Mon, Nov 18, 2024 at 1:15 AM Tvrtko Ursulin <tursulin(a)ursulin.net> wrote:
>
>
> On 17/11/2024 17:03, T.J. Mercier wrote:
> > The arguments for __dma_buf_debugfs_list_del do not match for both the
> > CONFIG_DEBUG_FS case and the !CONFIG_DEBUG_FS case. The !CONFIG_DEBUG_FS
> > case should take a struct dma_buf *, but it's currently struct file *.
> > This can lead to the build error:
> >
> > error: passing argument 1 of ‘__dma_buf_debugfs_list_del’ from
> > incompatible pointer type [-Werror=incompatible-pointer-types]
> >
> > dma-buf.c:63:53: note: expected ‘struct file *’ but argument is of
> > type ‘struct dma_buf *’
> > 63 | static void __dma_buf_debugfs_list_del(struct file *file)
> >
> > Fixes: bfc7bc539392 ("dma-buf: Do not build debugfs related code when !CONFIG_DEBUG_FS")
> > Signed-off-by: T.J. Mercier <tjmercier(a)google.com>
> > ---
> > drivers/dma-buf/dma-buf.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > index 8892bc701a66..afb8c1c50107 100644
> > --- a/drivers/dma-buf/dma-buf.c
> > +++ b/drivers/dma-buf/dma-buf.c
> > @@ -60,7 +60,7 @@ static void __dma_buf_debugfs_list_add(struct dma_buf *dmabuf)
> > {
> > }
> >
> > -static void __dma_buf_debugfs_list_del(struct file *file)
> > +static void __dma_buf_debugfs_list_del(struct dma_buf *dmabuf)
> > {
> > }
> > #endif
>
> Huh I wonder how this sneaked by until now.. thanks for fixing!
>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>
> Regards,
>
> Tvrtko
Thanks Tvrtko. Upstream there is currently only the one use, where it's
called with a void pointer and doesn't generate the error, but KernelCI
caught the problem on an Android branch where it's also called with a
dma_buf pointer:
https://dashboard.kernelci.org/tree/5a4c93e2f794001a5efa13c0dec931235240d38…
On 14.11.24 at 12:14, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>
> One alternative to the fix Christian proposed in
> https://lore.kernel.org/dri-devel/20241024124159.4519-3-christian.koenig@am…
> is to replace the rather complex open coded sorting loops with the kernel
> standard sort followed by a context squashing pass.
>
> Proposed advantage of this would be readability but one concern Christian
> raised was that there could be many fences, that they are typically mostly
> sorted, and so the kernel's heap sort would be much worse by the proposed
> algorithm.
>
> I had a look running some games and vkcube to see what are the typical
> number of input fences. Tested scenarios:
>
> 1) Hogwarts Legacy under Gamescope
>
> 450 calls per second to __dma_fence_unwrap_merge.
>
> Percentages per number of fences buckets, before and after checking for
> signalled status, sorting and flattening:
>
> N Before After
> 0 0.91%
> 1 69.40%
> 2-3 28.72% 9.4% (90.6% resolved to one fence)
> 4-5 0.93%
> 6-9 0.03%
> 10+
>
> 2) Cyberpunk 2077 under Gamescope
>
> 1050 calls per second, amounting to 0.01% CPU time according to perf top.
>
> N Before After
> 0 1.13%
> 1 52.30%
> 2-3 40.34% 55.57%
> 4-5 1.46% 0.50%
> 6-9 2.44%
> 10+ 2.34%
>
> 3) vkcube under Plasma
>
> 90 calls per second.
>
> N Before After
> 0
> 1
> 2-3 100% 0% (Ie. all resolved to a single fence)
> 4-5
> 6-9
> 10+
>
> In the case of vkcube all invocations in the 2-3 bucket were actually
> just two input fences.
>
> From these numbers it looks like the heap sort should not be a
> disadvantage, given how the dominant case is <= 2 input fences which heap
> sort solves with just one compare and swap. (And for the case of one input
> fence we have a fast path in the previous patch.)
>
> A complementary possibility is to implement a different sorting algorithm
> under the same API as the kernel's sort() and so keep the simplicity,
> potentially moving the new sort under lib/ if it would be found more
> widely useful.
>
> v2:
> * Hold on to fence references and reduce commentary. (Christian)
> * Record and use latest signaled timestamp in the 2nd loop too.
> * Consolidate zero or one fences fast paths.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
> Fixes: 245a4a7b531c ("dma-buf: generalize dma_fence unwrap & merging v3")
> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3617
> Cc: Christian König <christian.koenig(a)amd.com>
> Cc: Daniel Vetter <daniel.vetter(a)ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> Cc: Gustavo Padovan <gustavo(a)padovan.org>
> Cc: Friedrich Vock <friedrich.vock(a)gmx.de>
> Cc: linux-media(a)vger.kernel.org
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Cc: <stable(a)vger.kernel.org> # v6.0+
> ---
> drivers/dma-buf/dma-fence-unwrap.c | 129 ++++++++++++++---------------
> 1 file changed, 64 insertions(+), 65 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-fence-unwrap.c b/drivers/dma-buf/dma-fence-unwrap.c
> index 628af51c81af..26cad03340ce 100644
> --- a/drivers/dma-buf/dma-fence-unwrap.c
> +++ b/drivers/dma-buf/dma-fence-unwrap.c
> @@ -12,6 +12,7 @@
> #include <linux/dma-fence-chain.h>
> #include <linux/dma-fence-unwrap.h>
> #include <linux/slab.h>
> +#include <linux/sort.h>
>
> /* Internal helper to start new array iteration, don't use directly */
> static struct dma_fence *
> @@ -59,6 +60,25 @@ struct dma_fence *dma_fence_unwrap_next(struct dma_fence_unwrap *cursor)
> }
> EXPORT_SYMBOL_GPL(dma_fence_unwrap_next);
>
> +
> +static int fence_cmp(const void *_a, const void *_b)
> +{
> + struct dma_fence *a = *(struct dma_fence **)_a;
> + struct dma_fence *b = *(struct dma_fence **)_b;
> +
> + if (a->context < b->context)
> + return -1;
> + else if (a->context > b->context)
> + return 1;
> +
> + if (dma_fence_is_later(b, a))
> + return -1;
> + else if (dma_fence_is_later(a, b))
> + return 1;
> +
> + return 0;
> +}
> +
> /* Implementation for the dma_fence_merge() macro, don't use directly */
> struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
> struct dma_fence **fences,
> @@ -67,8 +87,7 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
> struct dma_fence_array *result;
> struct dma_fence *tmp, **array;
> ktime_t timestamp;
> - unsigned int i;
> - size_t count;
> + int i, j, count;
>
> count = 0;
> timestamp = ns_to_ktime(0);
> @@ -96,78 +115,58 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
> if (!array)
> return NULL;
>
> - /*
> - * This trashes the input fence array and uses it as position for the
> - * following merge loop. This works because the dma_fence_merge()
> - * wrapper macro is creating this temporary array on the stack together
> - * with the iterators.
> - */
> - for (i = 0; i < num_fences; ++i)
> - fences[i] = dma_fence_unwrap_first(fences[i], &iter[i]);
> -
> count = 0;
> - do {
> - unsigned int sel;
> -
> -restart:
> - tmp = NULL;
> - for (i = 0; i < num_fences; ++i) {
> - struct dma_fence *next;
> -
> - while (fences[i] && dma_fence_is_signaled(fences[i]))
> - fences[i] = dma_fence_unwrap_next(&iter[i]);
> -
> - next = fences[i];
> - if (!next)
> - continue;
> -
> - /*
> - * We can't guarantee that input fences are ordered by
> - * context, but it is still quite likely when this
> - * function is used multiple times. So attempt to order
> - * the fences by context as we pass over them and merge
> - * fences with the same context.
> - */
> - if (!tmp || tmp->context > next->context) {
> - tmp = next;
> - sel = i;
> -
> - } else if (tmp->context < next->context) {
> - continue;
> -
> - } else if (dma_fence_is_later(tmp, next)) {
> - fences[i] = dma_fence_unwrap_next(&iter[i]);
> - goto restart;
> + for (i = 0; i < num_fences; ++i) {
> + dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
> + if (!dma_fence_is_signaled(tmp)) {
> + array[count++] = dma_fence_get(tmp);
> } else {
> - fences[sel] = dma_fence_unwrap_next(&iter[sel]);
> - goto restart;
> + ktime_t t = dma_fence_timestamp(tmp);
> +
> + if (ktime_after(t, timestamp))
> + timestamp = t;
> }
> }
> + }
>
> - if (tmp) {
> - array[count++] = dma_fence_get(tmp);
> - fences[sel] = dma_fence_unwrap_next(&iter[sel]);
> + if (count == 0 || count == 1)
> + goto return_fastpath;
> +
> + sort(array, count, sizeof(*array), fence_cmp, NULL);
> +
> + /*
> + * Only keep the most recent fence for each context.
> + */
> + j = 0;
> + tmp = array[0];
> + for (i = 1; i < count; i++) {
> + if (array[i]->context != tmp->context)
> + array[j++] = tmp;
> + else
> + dma_fence_put(tmp);
If I'm not completely mistaken that can result in dropping the first
element but not assigning it again.
E.g. array[0] is potentially invalid after the loop.
> + tmp = array[i];
> + }
> + if (j == 0 || tmp->context != array[j - 1]->context) {
> + array[j++] = tmp;
> + }
Maybe adjust the sort criteria so that the highest seqno comes first.
This reduces the whole loop to something like this:
j = 0;
for (i = 1; i < count; i++) {
	if (array[i]->context == array[j]->context)
		dma_fence_put(array[i]);
	else
		array[++j] = array[i];
}
count = ++j;
Regards,
Christian.
> + count = j;
> +
> + if (count > 1) {
> + result = dma_fence_array_create(count, array,
> + dma_fence_context_alloc(1),
> + 1, false);
> + if (!result) {
> + tmp = NULL;
> + goto return_tmp;
> }
> - } while (tmp);
> -
> - if (count == 0) {
> - tmp = dma_fence_allocate_private_stub(ktime_get());
> - goto return_tmp;
> + return &result->base;
> }
>
> - if (count == 1) {
> +return_fastpath:
> + if (count == 0)
> + tmp = dma_fence_allocate_private_stub(timestamp);
> + else
> tmp = array[0];
> - goto return_tmp;
> - }
> -
> - result = dma_fence_array_create(count, array,
> - dma_fence_context_alloc(1),
> - 1, false);
> - if (!result) {
> - tmp = NULL;
> - goto return_tmp;
> - }
> - return &result->base;
>
> return_tmp:
> kfree(array);
On Mon, Nov 11, 2024 at 07:32:42PM +0530, Jyothi Kumar Seerapu wrote:
> GSI hardware generates an interrupt for each transfer completion.
> For multiple messages within a single transfer, this results
> in receiving N interrupts for N messages, which can introduce
> significant software interrupt latency.
Here's an excellent opportunity for splitting your problem description
and solution description in two easy to read paragraphs by adding some
newlines.
> To mitigate this latency,
> utilize Block Event Interrupt (BEI) only when an interrupt is necessary.
> When using BEI, consider splitting a single multi-message transfer into
> chunks of 8. This approach can enhance overall transfer time and
The reason for the number 8 must be documented.
"This approach..." wouldn't hurt from having it's own paragraph once
again.
> efficiency.
>
> Signed-off-by: Jyothi Kumar Seerapu <quic_jseerapu(a)quicinc.com>
> ---
>
> v1 -> v2:
> - Changed dma_addr type from array of pointers to array.
> - To support BEI functionality with the TRE size of 64 defined in GPI driver,
> updated QCOM_GPI_MAX_NUM_MSGS to 16 and NUM_MSGS_PER_IRQ to 8.
>
> drivers/dma/qcom/gpi.c | 49 ++++++++++++++++++++++++++++++++
> include/linux/dma/qcom-gpi-dma.h | 37 ++++++++++++++++++++++++
> 2 files changed, 86 insertions(+)
>
> diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c
> index 52a7c8f2498f..a98de3178764 100644
> --- a/drivers/dma/qcom/gpi.c
> +++ b/drivers/dma/qcom/gpi.c
> @@ -1693,6 +1693,9 @@ static int gpi_create_i2c_tre(struct gchan *chan, struct gpi_desc *desc,
>
> tre->dword[3] = u32_encode_bits(TRE_TYPE_DMA, TRE_FLAGS_TYPE);
> tre->dword[3] |= u32_encode_bits(1, TRE_FLAGS_IEOT);
> +
> + if (i2c->flags & QCOM_GPI_BLOCK_EVENT_IRQ)
> + tre->dword[3] |= u32_encode_bits(1, TRE_FLAGS_BEI);
> }
>
> for (i = 0; i < tre_idx; i++)
> @@ -2098,6 +2101,52 @@ static int gpi_find_avail_gpii(struct gpi_dev *gpi_dev, u32 seid)
> return -EIO;
> }
>
> +/**
> + * gpi_multi_desc_process() - Process received transfers from GSI HW
> + * @dev: pointer to the corresponding dev node
> + * @multi_xfer: pointer to the gpi_multi_xfer
> + * @num_xfers: total number of transfers
> + * @transfer_timeout_msecs: transfer timeout value
> + * @transfer_comp: completion object of the transfer
> + *
> + * This function is used to process the received transfers based on the
> + * completion events
As far as I can tell it doesn't "process" anything. All it does is
reinit the completion (n + 7) / 8 times, and for the first n / 8
iterations it will wait for an externally defined completion.
Why is this function even defined here, it solely operates on parameters
coming from the I2C driver?
> + *
> + * Return: On success returns 0, otherwise return error code
> + */
> +int gpi_multi_desc_process(struct device *dev, struct gpi_multi_xfer *multi_xfer,
> + u32 num_xfers, u32 transfer_timeout_msecs,
> + struct completion *transfer_comp)
> +{
> + int i;
> + u32 max_irq_cnt, time_left;
> +
> + max_irq_cnt = num_xfers / NUM_MSGS_PER_IRQ;
> + if (num_xfers % NUM_MSGS_PER_IRQ)
> + max_irq_cnt++;
> +
> + /*
> + * Wait for the interrupts of the processed transfers in multiple
> + * of 64 and for the last transfer. If the hardware is fast and
I'm confused, where does this 64 come from?
> + * already processed all the transfers then no need to wait.
> + */
> + for (i = 0; i < max_irq_cnt; i++) {
> + reinit_completion(transfer_comp);
I'm trying to convince myself that this isn't racey, but the split
ownership of updating and checking multi_xfer->irq_cnt between the GPI
and I2C drivers is just too hard for me to follow.
> + if (max_irq_cnt != multi_xfer->irq_cnt) {
> + time_left = wait_for_completion_timeout(transfer_comp,
> + transfer_timeout_msecs);
> + if (!time_left) {
> + dev_err(dev, "%s: Transfer timeout\n", __func__);
> + return -ETIMEDOUT;
> + }
> + }
> + if (num_xfers > multi_xfer->msg_idx_cnt)
> + return 0;
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(gpi_multi_desc_process);
The dmaengine framework is expected to provide an abstraction between
clients and DMA engines, so this doesn't look right.
> +
> /* gpi_of_dma_xlate: open client requested channel */
> static struct dma_chan *gpi_of_dma_xlate(struct of_phandle_args *args,
> struct of_dma *of_dma)
> diff --git a/include/linux/dma/qcom-gpi-dma.h b/include/linux/dma/qcom-gpi-dma.h
> index 6680dd1a43c6..1341ff0db808 100644
> --- a/include/linux/dma/qcom-gpi-dma.h
> +++ b/include/linux/dma/qcom-gpi-dma.h
> @@ -15,6 +15,12 @@ enum spi_transfer_cmd {
> SPI_DUPLEX,
> };
>
> +#define QCOM_GPI_BLOCK_EVENT_IRQ BIT(0)
> +
> +#define QCOM_GPI_MAX_NUM_MSGS 16
> +#define NUM_MSGS_PER_IRQ 8
> +#define MIN_NUM_OF_MSGS_MULTI_DESC 4
Prefixing these with QCOM_GPI_ seems like an excellent idea. Still puzzled
about the numbers 8 and 4 though: are they universal for all variants of
GPI, or just arbitrary numbers picked by experimentation?
> +
> /**
> * struct gpi_spi_config - spi config for peripheral
> *
> @@ -51,6 +57,29 @@ enum i2c_op {
> I2C_READ,
> };
>
> +/**
> + * struct gpi_multi_xfer - Used for multi transfer support
> + *
> + * @msg_idx_cnt: message index for the transfer
> + * @buf_idx: dma buffer index
> + * @unmap_msg_cnt: unampped transfer index
s/unampped/unmapped
> + * @freed_msg_cnt: freed transfer index
> + * @irq_cnt: received interrupt count
> + * @irq_msg_cnt: transfer message count for the received irqs
> + * @dma_buf: virtual address of the buffer
> + * @dma_addr: dma address of the buffer
"the buffer"? There's up to 16 of them...
As mentioned above, I'm skeptical about this custom API - but if we
were to go this route, the exact responsibilities and semantics should
be documented.
Regards,
Bjorn
> + */
> +struct gpi_multi_xfer {
> + u32 msg_idx_cnt;
> + u32 buf_idx;
> + u32 unmap_msg_cnt;
> + u32 freed_msg_cnt;
> + u32 irq_cnt;
> + u32 irq_msg_cnt;
> + void *dma_buf[QCOM_GPI_MAX_NUM_MSGS];
> + dma_addr_t dma_addr[QCOM_GPI_MAX_NUM_MSGS];
> +};
> +
> /**
> * struct gpi_i2c_config - i2c config for peripheral
> *
> @@ -65,6 +94,8 @@ enum i2c_op {
> * @rx_len: receive length for buffer
> * @op: i2c cmd
> * @muli-msg: is part of multi i2c r-w msgs
> + * @flags: true for block event interrupt support
> + * @multi_xfer: indicates transfer has multi messages
> */
> struct gpi_i2c_config {
> u8 set_config;
> @@ -78,6 +109,12 @@ struct gpi_i2c_config {
> u32 rx_len;
> enum i2c_op op;
> bool multi_msg;
> + u8 flags;
> + struct gpi_multi_xfer multi_xfer;
> };
>
> +int gpi_multi_desc_process(struct device *dev, struct gpi_multi_xfer *multi_xfer,
> + u32 num_xfers, u32 tranfer_timeout_msecs,
> + struct completion *transfer_comp);
> +
> #endif /* QCOM_GPI_DMA_H */
> --
> 2.17.1
>
>