Hi all!
Benjamin Herrenschmidt pointed a few issues in the proposed design of
device tree bindings for contiguous memory allocator and reserved memory
regions:
https://lkml.org/lkml/2013/9/15/151
http://www.spinics.net/lists/arm-kernel/msg273548.html
Some time has passed, but there is still no consensus on the bindings
for reserved memory, and various drawbacks of this solution have been
shown, so in my opinion the best I can do now is to revert them
completely and start from scratch again later.
This patch series reverts patches related to device tree bindings
proposed in the following thread:
http://thread.gmane.org/gmane.linux.ports.arm.kernel/263216
and merged by commit 64c353864e3f7ccba0ade1bd6f562f9a3bc7e68d ("Merge
branch 'for-v3.12' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping").
Best regards
Marek Szyprowski
Samsung R&D Institute Poland
Marek Szyprowski (2):
Revert "ARM: init: add support for reserved memory defined by device tree"
Revert "drivers: of: add initialization code for dma reserved memory"
Documentation/devicetree/bindings/memory.txt | 168 -------------------------
arch/arm/mm/init.c | 3 -
drivers/of/Kconfig | 6 -
drivers/of/Makefile | 1 -
drivers/of/of_reserved_mem.c | 173 --------------------------
drivers/of/platform.c | 4 -
include/linux/of_reserved_mem.h | 14 ---
7 files changed, 369 deletions(-)
delete mode 100644 Documentation/devicetree/bindings/memory.txt
delete mode 100644 drivers/of/of_reserved_mem.c
delete mode 100644 include/linux/of_reserved_mem.h
--
1.7.9.5
In this context, a "doomed" object is an object whose refcount has reached
zero, but that has not yet been freed.
To avoid mutual refcounting, vmwgfx needs to have a non-refcounted pointer to
a dma-buf in a lookup structure. The pointer is removed in the dma-buf
destructor. To allow lookup-structure private locks we need
get_dma_buf_unless_doomed(). This common refcounting scenario is described
with examples in detail in the kref documentation; the solution using local
locks is described under kref_get_unless_zero().
See also kobject_get_unless_zero() and its commit message.
Since dma-bufs use the attached file for refcounting,
get_dma_buf_unless_doomed() maps directly to get_file_unless_doomed().
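As an illustration (not part of this patch), a minimal sketch of the lookup
pattern this enables; the my_* names are hypothetical:

#include <linux/dma-buf.h>
#include <linux/spinlock.h>

struct my_lookup {
	spinlock_t lock;		/* private to the lookup structure */
	struct dma_buf *dmabuf;		/* non-refcounted, cleared on release */
};

static struct dma_buf *my_lookup_get_dmabuf(struct my_lookup *lu)
{
	struct dma_buf *dmabuf = NULL;

	spin_lock(&lu->lock);
	if (lu->dmabuf && get_dma_buf_unless_doomed(lu->dmabuf))
		dmabuf = lu->dmabuf;	/* we now hold our own reference */
	spin_unlock(&lu->lock);

	return dmabuf;			/* NULL if absent or doomed */
}

Because the dma-buf destructor clears lu->dmabuf under lu->lock before the
object is freed, a concurrent lookup either takes a reference or sees NULL.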
Signed-off-by: Thomas Hellstrom <thellstrom(a)vmware.com>
---
include/linux/dma-buf.h | 16 ++++++++++++++++
include/linux/fs.h | 15 +++++++++++++++
2 files changed, 31 insertions(+)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index dfac5ed..6e58144 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -162,6 +162,22 @@ static inline void get_dma_buf(struct dma_buf *dmabuf)
 	get_file(dmabuf->file);
 }
+/**
+ * get_dma_buf_unless_doomed - convenience wrapper for get_file_unless_doomed
+ *
+ * @dmabuf: [in] pointer to dma_buf
+ *
+ * Obtain a dma-buf reference from a lookup structure that doesn't refcount
+ * the dma-buf, but synchronizes with its release method to make sure it has
+ * not been freed yet. See for example kref_get_unless_zero documentation.
+ * Returns true if refcounting succeeds, false otherwise.
+ */
+static inline bool __must_check
+get_dma_buf_unless_doomed(struct dma_buf *dmabuf)
+{
+	return get_file_unless_doomed(dmabuf->file);
+}
+
 struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
 					  struct device *dev);
 void dma_buf_detach(struct dma_buf *dmabuf,
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3f40547..a96c333 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -824,6 +824,21 @@ static inline struct file *get_file(struct file *f)
 	atomic_long_inc(&f->f_count);
 	return f;
 }
+
+/**
+ * get_file_unless_doomed - get a file reference unless the file is doomed
+ * @f: [in] pointer to file
+ *
+ * Obtain a file reference from a lookup structure that doesn't refcount
+ * the file, but synchronizes with its release method to make sure it has
+ * not been freed yet. See for example kref_get_unless_zero documentation.
+ * Returns true if refcounting succeeds, false otherwise.
+ */
+static inline bool __must_check get_file_unless_doomed(struct file *f)
+{
+	return atomic_long_inc_not_zero(&f->f_count) != 0L;
+}
+
 #define fput_atomic(x)	atomic_long_add_unless(&(x)->f_count, -1, 1)
 #define file_count(x)	atomic_long_read(&(x)->f_count)
--
1.7.10.4
On 07-11-13 22:11, Rom Lemarchand wrote:
> Hi Maarten, I tested your changes and needed the attached patch: behavior
> now seems equivalent as android sync. I haven't tested performance.
>
> The issue resolved by this patch happens when i_b < b->num_fences and
> i_a >= a->num_fences (or vice versa). Then, pt_a is invalid and so
> dereferencing pt_a->context causes a crash.
>
Yeah, I pushed my original fix. I intended to keep the android userspace behavior the same, and I tried to keep the kernel-space API the same as much as I could. If performance is the same, or not noticeably worse, would there be any objections on the android side to renaming dma-fence to syncpoint and getting it into mainline?
~Maarten
On 07-11-13 22:11, Rom Lemarchand wrote:
> Hi Maarten, I tested your changes and needed the attached patch: behavior
> now seems equivalent as android sync. I haven't tested performance.
>
> The issue resolved by this patch happens when i_b < b->num_fences and
> i_a >= a->num_fences (or vice versa). Then, pt_a is invalid and so
> dereferencing pt_a->context causes a crash.
Oops, thinko. :) Originally I had it correct by doing this:
+	/*
+	 * Assume sync_fence a and b are both ordered and have no
+	 * duplicates with the same context.
+	 *
+	 * If a sync_fence can only be created with sync_fence_merge
+	 * and sync_fence_create, this is a reasonable assumption.
+	 */
+	for (i = i_a = i_b = 0; i_a < a->num_fences && i_b < b->num_fences; ) {
+		struct fence *pt_a = a->cbs[i_a].sync_pt;
+		struct fence *pt_b = b->cbs[i_b].sync_pt;
+
+		if (pt_a->context < pt_b->context) {
+			sync_fence_add_pt(fence, &i, pt_a);
+
+			i_a++;
+		} else if (pt_a->context > pt_b->context) {
+			sync_fence_add_pt(fence, &i, pt_b);
+
+			i_b++;
+		} else {
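+			/*
+			 * Same context: keep the later sync point; the
+			 * unsigned seqno subtraction makes the comparison
+			 * wraparound-safe.
+			 */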
+			if (pt_a->seqno - pt_b->seqno <= INT_MAX)
+				sync_fence_add_pt(fence, &i, pt_a);
+			else
+				sync_fence_add_pt(fence, &i, pt_b);
+
+			i_a++;
+			i_b++;
+		}
+	}
+
+	/* Add remaining fences from a or b */
+	for (; i_a < a->num_fences; i_a++)
+		sync_fence_add_pt(fence, &i, a->cbs[i_a].sync_pt);
+
+	for (; i_b < b->num_fences; i_b++)
+		sync_fence_add_pt(fence, &i, b->cbs[i_b].sync_pt);
Then I thought I could clean it up by merging the two loops, but that ended up
being more unreadable and crashing... so I guess I'll revert to this version. :)
Anyone else besides me who feels such a function could be useful?
My main use-case is that it would resolve the mutual refcounting problem:
1) drm buffer object caches a dma_buf pointer which it refcounts
2) The dma-buf holds a refcount to the buffer.
This is resolved today by having the user-space visible part of the
drm-buffer holding the refcount to the dma_buf. When user-space closes
the drm-buffer, the reference goes away, and the buffer is eventually
freed once all external dma-buf users are done with the dma-buf.
However, this also means that the dma-buf remains for the buffer
lifetime even when there are no external users, which bugs me a bit.
This can be resolved by viewing the drm buffer as a lookup structure
that doesn't hold a refcount to the dma-buf, but that means that the
lookup structure (the buffer) would need to share locks with the dma-buf
implementation. With a get_dma_buf_unless_zero(), however, we could use
locks local to the lookup structure, the drm buffer.
(See the last part of the kref documentation for a detailed discussion
of this).
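To make this concrete, here is a hedged sketch of the release side (the my_*
names are hypothetical; get_dma_buf_unless_zero is the helper proposed above):

static void my_dmabuf_release(struct dma_buf *dmabuf)
{
	struct my_drm_buffer *bo = dmabuf->priv;

	spin_lock(&bo->lock);		/* lock local to the drm buffer */
	bo->dmabuf = NULL;		/* lookups can no longer find it */
	spin_unlock(&bo->lock);

	my_drm_buffer_unref(bo);	/* drop the dma-buf's ref on the buffer */
}

A lookup would take bo->lock, and if bo->dmabuf is non-NULL call
get_dma_buf_unless_zero() on it, which succeeds only if the refcount hasn't
already hit zero.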
Now I don't think keeping the dma_buf for the drm buffer lifetime is a
HUGE problem, but I just wanted to get people's views of this.
Thanks,
Thomas
Hi!
I'm just looking over what's needed to implement drm Prime / dma-buf
exports + imports in the vmwgfx driver. It seems like most dma-buf ops
are quite straightforward to implement, except user-space mmap().
The reason is that vmwgfx dma-bufs will be using completely
non-coherent memory whenever CPU access is needed.
The accelerated contents reside in an opaque structure on the device
that we can DMA to and from, so for mmap to work we need to zap
ptes and DMA to the device when doing something accelerated, and on the
first page fault wait for idle and DMA the data back if the device did a
write to the dma-buf.
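As a rough sketch of that fault-side flow (the my_* types and helpers are
hypothetical and error handling is elided):

#include <linux/mm.h>
#include <linux/mutex.h>

struct my_buf {
	struct mutex lock;
	bool gpu_dirty;			/* device wrote since last readback */
	unsigned long pfn_base;		/* first pfn of the CPU-side pages */
};

void my_buf_wait_idle(struct my_buf *buf);		/* hypothetical */
void my_buf_dma_from_device(struct my_buf *buf);	/* hypothetical */

static int my_buf_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct my_buf *buf = vma->vm_private_data;
	unsigned long address = (unsigned long)vmf->virtual_address;
	int ret;

	mutex_lock(&buf->lock);
	if (buf->gpu_dirty) {
		my_buf_wait_idle(buf);		/* wait for device idle */
		my_buf_dma_from_device(buf);	/* DMA contents back to pages */
		buf->gpu_dirty = false;
	}
	ret = vm_insert_pfn(vma, address, buf->pfn_base +
			    ((address - vma->vm_start) >> PAGE_SHIFT));
	mutex_unlock(&buf->lock);

	return (ret == 0 || ret == -EBUSY) ? VM_FAULT_NOPAGE : VM_FAULT_SIGBUS;
}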
Now this shouldn't really be a problem if dma-bufs were only used for
cross-device sharing, but since people apparently want to use dma-buf
file handles to share CPU data between processes it really becomes a
serious problem.
Needless to say we'd want to limit the size of the DMAs, and have mmap
users request regions for read, and mark regions dirty for write,
something similar to gallium's texture transfer objects.
Any ideas?
/Thomas
Hey,
So I took a look at the sync stuff in android. In a lot of ways I believe that they're similar, yet subtly different.
Most of the stuff I looked at is from the sync.h header in drivers/staging, so maybe my knowledge is incomplete.
The timeline is similar to what I called a fence context. Each command stream on a gpu can have a context. Because
nvidia hardware can have 4095 separate timelines, I didn't want to keep the bookkeeping for each timeline, although
I guess that it's already done. Maybe it could be done in a unified way for each driver, making a transition to
timelines that can be used by android easier.
I did not have an explicit syncpoint addition, but I think that sync points + sync_fence are similar to what
I did with my dma-fence stuff, just slightly different.
In my approach the dma-fence is signaled after all sync_points are done AND the queued commands are executed.
In effect the dma-fence becomes the next syncpoint, depending on all previous dma-fence syncpoints.
An important thing to note is that dma-fence is kernelspace only, so it might be better to rename it to syncpoint,
and use fence for the userspace interface.
A big difference is locking: I assume in my code that most fences emitted are never waited on, so the fastpath
of fence_signal is a test_and_set_bit plus a test_bit. A single lock is used for the waitqueue and callbacks,
with the waitqueue being implemented internally as an asynchronous callback. The lock is provided by the driver,
which makes adding support for old hardware that has no reliable way of notifying completion of events easier.
I avoided using global locks, but I think for debugfs support I may end up having to add some.
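Roughly, the fastpath looks like this sketch (field and bit names are
illustrative, not the actual dma-fence code):

#include <linux/bitops.h>
#include <linux/spinlock.h>

#define MY_FENCE_SIGNALED_BIT	0
#define MY_FENCE_CB_PENDING_BIT	1

struct my_fence {
	unsigned long flags;
	spinlock_t *lock;		/* provided by the driver */
};

static int my_fence_signal(struct my_fence *fence)
{
	unsigned long irqflags;

	if (test_and_set_bit(MY_FENCE_SIGNALED_BIT, &fence->flags))
		return -EINVAL;		/* already signaled */

	/* take the driver lock only if a callback or waiter was added */
	if (test_bit(MY_FENCE_CB_PENDING_BIT, &fence->flags)) {
		spin_lock_irqsave(fence->lock, irqflags);
		/* ... run callbacks and wake up waiters ... */
		spin_unlock_irqrestore(fence->lock, irqflags);
	}
	return 0;
}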
The dma fence looks similar overall, except that I allow overriding some stuff and keep less track about state.
I do believe that I can create a userspace interface around dma_fence that works similarly to android's, and the
kernel-space interface could be done in a similar way too.
One thing though: is it really required to merge fences? It seems to me that if I add a poll callback userspace
could simply do a poll on a list of fences. This would give userspace all the information it needs about each
individual fence.
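For example, a hedged userspace sketch, assuming a fence fd that signals
POLLIN when done (which is not an existing interface):

#include <poll.h>

/* Wait until every fence fd in pfd[0..n-1] has signaled. */
static int wait_all_fences(struct pollfd *pfd, int n)
{
	int pending = n, i;

	for (i = 0; i < n; i++)
		pfd[i].events = POLLIN;

	while (pending > 0) {
		if (poll(pfd, n, -1) < 0)
			return -1;
		for (i = 0; i < n; i++) {
			if (pfd[i].revents & POLLIN) {
				pfd[i].fd = ~pfd[i].fd;	/* negative fds are ignored */
				pending--;
			}
		}
	}
	return 0;
}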
The thing about wait/wound mutexes can be ignored for this discussion. It's really just a method of adding a
fence to a dma-buf, and building a list of all dma-fences to wait on in the kernel before starting a command
buffer, and setting a new fence to all the dma-bufs to signal completion of the event. Regardless of the sync
mechanism we'll decide on, this stuff wouldn't change.
Depending on feedback I'll try reflashing my nexus 7 to stock android, and work on trying to convert android
syncpoints to dma-fence, which I'll probably rename to syncpoints.
~Maarten
Hi!
Considering that the linux DMA-API states that information in an sg-list
may be destroyed when it's mapped, it seems to me that at least one of
the drm prime functions makes invalid assumptions.
In particular, I don't think it's safe to assume that pages in a single
sg-list segment are contiguous after mapping, so if we want struct page
pointers we should use

pfn = dma_to_phys(dev, sg_dma_address(sg) + p_offset * PAGE_SIZE) >> PAGE_SHIFT;

and if the pfn is valid, convert it to a struct page.
(Incorrect code is, for example, in drm_prime_sg_to_page_addr_arrays)
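For comparison, a sketch of walking the mapped list by DMA address and length
only, without deriving struct pages (the my_* names are hypothetical):

#include <linux/scatterlist.h>

void my_device_program_segment(dma_addr_t daddr, unsigned int len);	/* hypothetical */

static void my_walk_dma_segments(struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i;

	/* nents is the number of mapped DMA segments; it may be smaller
	 * than orig_nents if the IOMMU coalesced entries */
	for_each_sg(sgt->sgl, sg, sgt->nents, i)
		my_device_program_segment(sg_dma_address(sg), sg_dma_len(sg));
}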
Or does dma-buf require that page info in sg-lists need to be kept
across the map operation?
BTW this brings up another question: It's stated that the above function
is needed by the TTM driver in order to do
correct fault handling. This seems odd, TTM shouldn't be able to mmap()
or fault an imported dma-buf, right?
Thanks,
Thomas
On Thu, 2013-10-17 at 13:37 -0500, Matt Sealey wrote:
> This may be late, but please can you consider re-using the CHRP
> reserved node (i.e. device_type = "reserved")?
>
> Since it does exactly the same thing, and has been well defined since the dark ages?
>
> It's CHRP 1.7 section 5.9 by the way (just before /chosen gets defined).
>
> It would solve a selection of the issues; and require zero binding
> work except describing perhaps a couple freakish Linux-specific
> properties that may be only as intrusive as, say, linux,initrd would
> be in /chosen.
>
> The most effective, multi-OS way of using it ("available" property not
> currently implemented in Linux for some reason, but it could come in
> so handy - not only because it matches the way Linux resource
> structures are handled)
First, the original /reserved on CHRP was supposed to be about reserved
bus space for things like hidden HW devices, but yes, it could be used for
that. However, it would be nice to enrich the binding to provide at least
some kind of specific identification of what a given reserved area is about,
see my comments about that in the previous threads.
The available property is of no use to us. It purely indicates what is
available while OFW is still running. Once we get rid of OFW its content
is utterly meaningless.
The original OFW was designed with the idea that OFW remains alive along
with the operating system, and that has been done on some platforms, but
that idea was ditched very early on in powerpc space for many
reasons, one of them being that most implementations of OFW around were
way too broken to bother.
> memory@0x70000000 {
> 	device_type = "memory";
> 	reg = <0x70000000 0x40000000>;
> 	available = <0x70000000 0x10000000
> 		     0x90000000 0x1ffc00000>; /* top 16KiB of memory is where
> 						 the secure firmware keeps its
> 						 mailboxes */
> };
>
> freaky-codec-memory: reserved@0x80000000 {
> 	device_type = "reserved";
> 	reg = <0x80000000 0x10000000>;
> 	available = <0x80000000 0x8000000
> 		     0x88000000 0x8000000>; /* two 128MiB buffers */
> 	non-objectionable-mark-as-contiguous-property-name-here;
> 	cacheable;
> };
>
> Any driver that has, previously, required a bunch of its own memory
> carved out of DDR *should* be gaining a phandle reference to that
> reserved node however it likes (it would be up to that devices'
> binding).
>
> On Linux under CMA, it may well be ignored and just stuffed into the
> generic CMA regions, and the driver MAY allocate anywhere it likes,
> but it COULD ask for memory based on a region phandle or, horribly, by
> name (since there's no other way to search for it, the OF "name" for
> reserved SHALL be "reserved") and be given memory in that region
> defined by the reserved node if it had any addressing restrictions.
>
> /videodecoder@0x43f01000 {
> 	compatible = "freaky,codec";
> 	:
> 	decode-buffer = &freaky-codec-memory;
> };
>
> On another OS it may manually map and use a custom allocator for that
> memory region, since otherwise the OS would not have even looked at
> it.
>
> Also this discussion of Jeremy Kerr's proposal seems to be 'missing'
> on Google. Do you happen to have a link to it?
>
> Thanks,
> Matt Sealey <neko(a)bakuhatsu.net>