On 12/03/24 15:50, Alexandre Mergnat wrote:
>
>
> On 26/02/2024 16:25, AngeloGioacchino Del Regno wrote:
>>> +	if (enable) {
>>> +		/* set gpio mosi mode */
>>> +		regmap_write(priv->regmap, MT6357_GPIO_MODE2_CLR, GPIO_MODE2_CLEAR_ALL);
>>> +		regmap_write(priv->regmap, MT6357_GPIO_MODE2_SET,
>>> +			     GPIO8_MODE_SET_AUD_CLK_MOSI |
>>> +			     GPIO9_MODE_SET_AUD_DAT_MOSI0 |
>>> +			     GPIO10_MODE_SET_AUD_DAT_MOSI1 |
>>> +			     GPIO11_MODE_SET_AUD_SYNC_MOSI);
>>
>> Are you sure that you need to write to MODE2_SET *and* to MODE2?!
>
> This is downstream code and these registers aren't in my documentation.
> I've removed the MODE2_SET write and tested the audio: it's still working.
>
> So I will keep the spurious write removed for v2. :)
>
Usually, MediaTek registers are laid out with "REG" being read/legacy-write and
"REG_SET"/"REG_CLR" setting and clearing bits in "REG" internally, which might
account for internal latencies and such.
Can you please try to remove the MODE2 write instead of the MODE2_SET one
and check if that works?
You're already using the SET/CLR way when manipulating registers in here,
so I would confidently expect that to work.
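For illustration, the SET/CLR-only sequence would look roughly like this (a
sketch based on the snippet quoted above, untested on hardware):

	if (enable) {
		/* Clear all GPIO mode bits via the CLR alias... */
		regmap_write(priv->regmap, MT6357_GPIO_MODE2_CLR,
			     GPIO_MODE2_CLEAR_ALL);
		/*
		 * ...then set the audio MOSI modes via the SET alias. No
		 * plain write to MT6357_GPIO_MODE2 should be needed: the
		 * SET/CLR aliases update the backing register internally.
		 */
		regmap_write(priv->regmap, MT6357_GPIO_MODE2_SET,
			     GPIO8_MODE_SET_AUD_CLK_MOSI |
			     GPIO9_MODE_SET_AUD_DAT_MOSI0 |
			     GPIO10_MODE_SET_AUD_DAT_MOSI1 |
			     GPIO11_MODE_SET_AUD_SYNC_MOSI);
	}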
Cheers,
Angelo
Hi Jonathan,
Here's the final(tm) version of the IIO DMABUF patchset.
This v8 fixes the remaining few issues that Christian reported.
I also updated the documentation patch as there have been changes to
index.rst.
This was based on next-20240308.
Changelog:
- [3/6]:
- Fix swapped fence direction
- Simplify fence wait mechanism
- Remove "Buffer closed with active transfers" print, as it was dead
code
- Un-export iio_buffer_dmabuf_{get,put}. They are not used anywhere
else so they can even be static.
- Prevent attaching already-attached DMABUFs
- [6/6]:
Renamed dmabuf_api.rst -> iio_dmabuf_api.rst, and updated index.rst
whose format changed in iio/togreg.
Cheers,
-Paul
Paul Cercueil (6):
dmaengine: Add API function dmaengine_prep_peripheral_dma_vec()
dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec
iio: core: Add new DMABUF interface infrastructure
iio: buffer-dma: Enable support for DMABUFs
iio: buffer-dmaengine: Support new DMABUF based userspace API
Documentation: iio: Document high-speed DMABUF based API
Documentation/iio/iio_dmabuf_api.rst | 54 ++
Documentation/iio/index.rst | 1 +
drivers/dma/dma-axi-dmac.c | 40 ++
drivers/iio/buffer/industrialio-buffer-dma.c | 181 ++++++-
.../buffer/industrialio-buffer-dmaengine.c | 59 ++-
drivers/iio/industrialio-buffer.c | 462 ++++++++++++++++++
include/linux/dmaengine.h | 27 +
include/linux/iio/buffer-dma.h | 31 ++
include/linux/iio/buffer_impl.h | 30 ++
include/uapi/linux/iio/buffer.h | 22 +
10 files changed, 890 insertions(+), 17 deletions(-)
create mode 100644 Documentation/iio/iio_dmabuf_api.rst
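For reference, using the new vectored prep call from patch [1/6] would look
roughly like this (a sketch assuming the signature from this series; the
addresses, lengths and function name are hypothetical):

	#include <linux/dmaengine.h>

	/* Queue a two-segment device-to-memory transfer. */
	static int queue_vec_xfer(struct dma_chan *chan,
				  dma_addr_t addr0, size_t len0,
				  dma_addr_t addr1, size_t len1)
	{
		struct dma_vec vecs[] = {
			{ .addr = addr0, .len = len0 },
			{ .addr = addr1, .len = len1 },
		};
		struct dma_async_tx_descriptor *desc;

		desc = dmaengine_prep_peripheral_dma_vec(chan, vecs,
							 ARRAY_SIZE(vecs),
							 DMA_DEV_TO_MEM,
							 DMA_PREP_INTERRUPT);
		if (!desc)
			return -ENOMEM;

		dmaengine_submit(desc);
		dma_async_issue_pending(chan);
		return 0;
	}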
--
2.43.0
On Tue, Mar 5, 2024 at 10:02 AM Ricardo B. Marliere
<ricardo(a)marliere.net> wrote:
>
> On 5 Mar 09:07, T.J. Mercier wrote:
> >
> > Reviewed-by: T.J. Mercier <tjmercier(a)google.com>
> >
> > Is this really a resend? I don't see anything on lore and I can't
> > recall seeing this patch in my inbox before.
>
> Hi T.J., thanks for reviewing!
>
> I'm sorry about that, I sent the series only to Greg before, but I
> thought I had Cc'ed the lists as well. Then I realized it was sent
> publicly only once. Double mistake :(
>
> Best regards,
> - Ricardo.
Cheers, glad I don't have to try to rework my email filters. :)
On Thu, Jan 18, 2024 at 7:33 PM Tommy Chiang <ototot(a)chromium.org> wrote:
>
> This patch tries to improve the display of the code listing
> on The Linux Kernel documentation website for dma-buf [1].
>
> Originally, it appears that it was attempting to escape
> the '*' character, but that no longer looks necessary,
> so we are seeing something like '\*' on the website.
>
> This patch removes these unnecessary backslashes and adds syntax
> highlighting to improve the readability of the code listing.
>
> [1] https://docs.kernel.org/driver-api/dma-buf.html
>
> Signed-off-by: Tommy Chiang <ototot(a)chromium.org>
> ---
> drivers/dma-buf/dma-buf.c | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 8fe5aa67b167..e083a0ab06d7 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1282,10 +1282,12 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
> * vmap interface is introduced. Note that on very old 32-bit architectures
> * vmalloc space might be limited and result in vmap calls failing.
> *
> - * Interfaces::
> + * Interfaces:
> *
> - * void \*dma_buf_vmap(struct dma_buf \*dmabuf, struct iosys_map \*map)
> - * void dma_buf_vunmap(struct dma_buf \*dmabuf, struct iosys_map \*map)
> + * .. code-block:: c
> + *
> + * void *dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> + * void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> *
> * The vmap call can fail if there is no vmap support in the exporter, or if
> * it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
> @@ -1342,10 +1344,11 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
> * enough, since adding interfaces to intercept pagefaults and allow pte
> * shootdowns would increase the complexity quite a bit.
> *
> - * Interface::
> + * Interface:
> + *
> + * .. code-block:: c
> *
> - * int dma_buf_mmap(struct dma_buf \*, struct vm_area_struct \*,
> - * unsigned long);
> + * int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long);
> *
> * If the importing subsystem simply provides a special-purpose mmap call to
> * set up a mapping in userspace, calling do_mmap with &dma_buf.file will
> --
> 2.43.0.381.gb435a96ce8-goog
Reviewed-by: T.J. Mercier <tjmercier(a)google.com>
The code block highlighting is nice.
> 3) From 2), the am65_cpsw_alloc_skb() function was removed and replaced by
> netdev_alloc_skb_ip_align(), as used by the driver before -> res = 506
> Conclusion: Here is where the loss comes from.
> IOW, my am65_cpsw_alloc_skb() function is not good.
>
> Initially, I mainly created this 'custom' am65_cpsw_alloc_skb() function
> because I thought that none of the XDP memory models could be used along
> with the netdev_alloc_skb_ip_align() function. Was I wrong?
> By creating this custom am65_cpsw_alloc_skb(), I also wanted to handle
> the way headroom is reserved differently.
What is special about your device? Why would
netdev_alloc_skb_ip_align() not work?
Andrew
On Tue, Mar 05, 2024 at 11:46:00AM +0100, Julien Panis wrote:
> On 3/1/24 17:38, Andrew Lunn wrote:
> > On Fri, Mar 01, 2024 at 04:02:53PM +0100, Julien Panis wrote:
> > > This patch adds XDP (eXpress Data Path) support to TI AM65 CPSW
> > > Ethernet driver. The following features are implemented:
> > > - NETDEV_XDP_ACT_BASIC (XDP_PASS, XDP_TX, XDP_DROP, XDP_ABORTED)
> > > - NETDEV_XDP_ACT_REDIRECT (XDP_REDIRECT)
> > > - NETDEV_XDP_ACT_NDO_XMIT (ndo_xdp_xmit callback)
> > >
> > > The page pool memory model is used to get better performance.
> > Do you have any benchmark numbers? It should help with none XDP
> > traffic as well. So maybe iperf numbers before and after?
> >
> > Andrew
>
> Argh...Houston, we have a problem. I checked my v3, which is ready for
> submission, with iperf3:
> 1) Before = without page pool -> 500 MBits/sec
> 2) After = with page pool -> 442 MBits/sec
> -> ~ 10% worse with page pool here.
>
> Unless the difference is not due to page pool. Maybe there's something else
> which is not good in my patch. I'm going to send the v3 which uses page pool,
> hopefully someone will find something suspicious. Meanwhile, I'll carry on
> investigating: I'll check the results with my patch, removing only the use
> of page pool.
You can also go the other way. First add page pool support. For the
FEC, that improved its performance. Then add XDP, which I think
decreased the performance a little. It is extra processing in the hot
path, so a little loss is not surprising.
What tends to be expensive with ARM is cache invalidation and
flush. So make sure you have the lengths correct. You don't want to
operate on more memory than necessary. No point flushing the full MTU
for a 64 byte TCP ACK, etc.
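For example, with the page pool API you can bound what gets synced for the
device at pool creation time, and sync only the received length on the CPU
side. A sketch, not from your driver; the descriptor count and packet
length variables are hypothetical:

	#include <net/page_pool/helpers.h>

	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order     = 0,
		.pool_size = RX_DESC_NUM,	/* hypothetical */
		.nid       = NUMA_NO_NODE,
		.dev       = dev,
		.dma_dir   = DMA_BIDIRECTIONAL,
		.max_len   = PAGE_SIZE,	/* upper bound synced for device */
		.offset    = XDP_PACKET_HEADROOM,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	/* On RX completion, sync only the bytes the hardware wrote: */
	dma_sync_single_for_cpu(dev,
				page_pool_get_dma_addr(page) + pp_params.offset,
				pkt_len, page_pool_get_dma_dir(pool));

	/* When recycling, tell the pool how much needs syncing for device: */
	page_pool_put_page(pool, page, pkt_len, true);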
Andrew
Hi Jonathan,
Le mardi 05 mars 2024 à 10:07 +0000, Jonathan Cameron a écrit :
> On Mon, 04 Mar 2024 08:59:47 +0100
> Nuno Sá <noname.nuno(a)gmail.com> wrote:
>
> > On Sun, 2024-03-03 at 17:42 +0000, Jonathan Cameron wrote:
> > > On Fri, 23 Feb 2024 13:13:58 +0100
> > > Nuno Sa <nuno.sa(a)analog.com> wrote:
> > >
> > > > Hi Jonathan, likely you're wondering why I'm sending v7. Well, to be
> > > > honest, we're hoping to get this merged for the 6.9 merge window.
> > > > The main reason is that the USB part is already in (so it would be
> > > > nice to get the whole thing in). Moreover, the changes asked for in
> > > > v6 were simple (even though I'm not quite sure about one of them) and
> > > > Paul has no access to his laptop, so he can't send v7 himself. So he
> > > > kind of said/asked for me to do it.
> > >
> > > So, we are cutting this very fine. If Linus hints strongly at an rc8,
> > > maybe we can sneak this in. However, I need an Ack from Vinod for the
> > > dmaengine changes first.
> > >
> > > Also I'd love a final 'looks ok' comment from DMABUF folk (Ack
> > > even better!)
> > >
> > > Seems that the other side got resolved in the USB gadget, but the last
> > > we heard from Daniel and Christian looks to have been back on v5. I'd
> > > like them to confirm they are fine with the changes made as a result.
> > >
> >
> > I can ask Christian or Daniel for some acks but my feeling (I still
> > need, at some point, to get really familiar with all of this) is that
> > this should be pretty similar to the USB series (from a DMABUF point of
> > view) as they are both importers.
> >
> > > I've been happy with the IIO parts for a few versions now but my
> > > ability to review the DMABUF and DMA engine bits is limited.
> > >
> > > A realistic path to get this in, if rc8 is happening: all Acks in
> > > place by Wednesday, I apply it and it hits linux-next Thursday, pull
> > > request to Greg on Saturday, and Greg is feeling particularly generous
> > > and takes one on the day he normally closes his trees.
> > >
> >
> > Well, it looks like we still have a shot. I'll try to see if Vinod is
> > fine with the DMAENGINE stuff.
> >
>
> Sadly, looks like rc7 was at the end of a quiet week, so it's almost
> certain there won't be an rc8 in the end. Let's aim to get this in at the
> start of the next cycle so we can build on it from there.
And it looks like I'll need a V8 for the few things noted by Christian.
Having it in 6.9 would have been great but having it eventually merged
is all that matters - so I'm fine to have it queued for 6.10 instead.
Cheers,
-Paul
On Mon, Mar 4, 2024 at 5:46 AM Maxime Ripard <mripard(a)redhat.com> wrote:
> On Wed, Feb 28, 2024 at 08:17:55PM -0800, John Stultz wrote:
> > On Wed, Feb 28, 2024 at 7:24 AM Maxime Ripard <mripard(a)redhat.com> wrote:
> > >
> > > I'm currently working on a platform that seems to have togglable RAM ECC
> > > support. Enabling ECC reduces the memory capacity and memory bandwidth,
> > > so while it's a good idea to protect most of the system, it's not worth
> > > it for things like framebuffers that won't really be affected by a
> > > bitflip.
> > >
> > > It's currently set up by enabling ECC on the entire memory, and then
> > > having a region of memory where ECC is disabled and where we're supposed
> > > to allocate from for allocations that don't need it.
> > >
> > > My first thought to support this was to create a reserved memory region
> > > for the !ECC memory, and to create a heap to allocate buffers from that
> > > region. That would leave the system protected by ECC, while enabling
> > > userspace to be nicer to the system by allocating buffers from the !ECC
> > > region if it doesn't need it.
> > >
> > > However, this creates basically a new combination compared to the one we
> > > already have (ie, physically contiguous vs virtually contiguous), and we
> > > probably would want to throw in cacheable vs non-cacheable too.
> > >
> > > If we had to provide new heaps for each variation, we would have 8 heaps
> > > (and 6 new ones), which could be fine I guess, but would still increase
> > > the number of heaps we have so far quite a lot.
> > >
> > > Is it something that would be a problem? If it is, do you see another
> > > way to support those kinds of allocations (like providing hints through
> > > the ioctl maybe?)?
> >
> > So, the dma-buf heaps interface uses chardevs so that we can have a
> > lot of flexibility in the types of heaps (and don't have the risk of
> > bitmask exhaustion like ION had). So I don't see adding many
> > differently named heaps as particularly problematic.
>
> Ok
>
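(For reference, userspace allocation from a named heap chardev looks roughly
like this; a sketch, with the heap name as a placeholder:)

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/dma-heap.h>

	/* Returns a dma-buf fd, or -1 on error. */
	int heap_alloc(const char *heap_name, size_t len)
	{
		struct dma_heap_allocation_data data = {
			.len = len,
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		char path[64];
		int heap_fd, ret;

		snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap_name);
		heap_fd = open(path, O_RDONLY | O_CLOEXEC);
		if (heap_fd < 0)
			return -1;

		ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
		close(heap_fd);
		return ret < 0 ? -1 : (int)data.fd;
	}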
> > That said, if there are truly generic properties (cacheable vs
> > non-cachable is maybe one of those) which apply to most heaps, I'm
> > open to making use of the flags. But I want to avoid having per-heap
> > flags, it really needs to be a generic attribute.
> >
> > And I personally don't mind the idea of having things added as heaps
> > initially, and potentially upgrading them to flags if needed (allowing
> > heap drivers to optionally enumerate the old chardevs behind a config
> > option for backwards compatibility).
> >
> > How common is the hardware that is going to provide this configurable
> > ECC option
>
> In terms of number of SoCs with the feature, it's probably a handful. In
> terms of number of units shipped, we're in the fairly common range :)
>
Sure, I guess I was trying to get a sense of whether this is a feature
we'll likely be seeing commonly across hardware (such that internal kernel
allocators would consider it as a flag), or whether it's more tied to a
single vendor, such that enabling/isolating it in a driver is the right
place in the abstraction to put it.
> > and will you really want the option on all of the heap types?
>
> Aside from the cacheable/uncacheable discussion, yes. We could probably
> get away with only physically contiguous allocations at the moment
> though, I'll double check.
Ok, that will be useful to know.
> > Will there be any hardware constraint limitations caused by the
> > ECC/!ECC flags? (ie: Devices that can't use !ECC allocated buffers?)
>
> My understanding is that there's no device restriction. It will be
> carved-out memory, so we will need to maintain a separate pool, and it
> will be limited in size, but that's pretty much the only one afaik.
Ok.
> > If not, I wonder if it would make sense to have something more along
> > the lines using a fcntl() like how F_SEAL_* is used with memfds?
> > With some of the discussion around "restricted"/"secure" heaps that
> > can change state, I've liked this idea of just allocating dmabufs from
> > normal heaps and then using fcntl or something similar to modify
> > properties of the buffer that are separate from the type of memory
> > that is needed to be allocated to satisfy device constraints.
>
> Sorry, I'm not super familiar with F_SEAL so I don't really follow what
> you have in mind here. Do you have any additional resources I could read
> to understand better what you're thinking about?
See the File Sealing section: https://man7.org/linux/man-pages/man2/fcntl.2.html
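A minimal illustration of the mechanism with memfds (dmabufs don't support
seals today; this just shows the fcntl pattern I'm referring to):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = memfd_create("example",
				      MFD_CLOEXEC | MFD_ALLOW_SEALING);

		ftruncate(fd, 4096);
		/* From here on, resizing the file or writing to it fails: */
		fcntl(fd, F_ADD_SEALS,
		      F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_WRITE);
		return 0;
	}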
> Also, if we were to modify the ECC attributes after the dma-buf has been
> allocated, and if the !ECC memory is carve-out only, then wouldn't that
> mean we would need to reallocate the backing buffer for that dma-buf?
So yea, having to work on a larger pool likely makes this not useful
here, so apologies for the tangent.
To explain myself, part of what I'm thinking of is that the dmabuf heaps
(and really ION before it) try to solve how to allocate a buffer type
that can be used across a number of devices that may have different
constraints. So the focus was on "types of memory" to satisfy the
constraint (contiguous, non-contiguous, secure/restricted, etc), which
come down to what pages actually get used. However, outside of the
"constraint type" the buffer may have, there are other "properties"
that may affect performance (like cacheability, and some variants of
"restricted buffers" which can change over their lifetime). With ION
vendors would mix these two together in their vendor heaps, and with
out-of-tree dmabuf heaps it is also common to tangle types and
properties together.
So I'm sort of stewing on how to best distinguish between heaps for
"types of memory/pages" (ie: what's *required* to share the buffer
between devices) vs these buffer properties (which affect performance)
that may apply to multiple memory types.
(And I'm also not 100% convinced that distinguishing between this is
necessary, but casually mixing them feels messy to me)
For buffers where those properties might change over time (like some
variants of restricted buffers), I think the fnctl/F_SEAL_* idea makes
sense to allow the buffer to become restricted.
For cacheability, it seems likely an allocation flag would be nicest,
but we don't have upstream users and not a lot of heap types yet, thus
the out-of-tree "system-uncached" heap which sort of mixes types and
properties.
With ECC I was trying to get a sense of where it would sit between
this "type of memory" vs a "buffer property". If internal allocators
are likely to consider it in a way similar to CMA (and with the pool
granular control, it sounds like it), then yeah, it probably should be
a type of memory, so a new heap name is likely the way to go - but
there is still the question of how to best support the various
combinations of (contiguous, cacheable) along with ECC. But if it
were something that was dynamically controllable at a finer-grained
level in the future, maybe it would be something like a buffer
property.
thanks
-john