On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > Contiguous memory allocation can be stalled by waiting
> > > > on page writeback and/or page lock, which causes unpredictable
> > > > delay. That is an unavoidable cost for the requestor to get *big*
> > > > contiguous memory, but it is too expensive for *small* contiguous
> > > > memory (e.g., order-4) because the caller could retry the request
> > > > in a different range that has easily migratable pages,
> > > > without stalling.
> > > >
> > > > This patch introduces __GFP_NORETRY as a compaction gfp_mask in
> > > > alloc_contig_range so it will fail fast without blocking
> > > > when it encounters pages that need waiting.
> > >
> > > I am not against controlling how hard this allocator tries with the gfp
> > > mask, but this changelog is rather void of any data and any user.
> > >
> > > It is also rather dubious to have retries when the caller says not to
> > > retry.
> >
> > Since max_tries is 1 with ++tries, it shouldn't retry.
>
> OK, I have missed that. This is tricky code. ASYNC mode should be
> completely orthogonal to the retries count. Those are different things.
> Page allocator does an explicit bail out based on __GFP_NORETRY. You
> should be doing the same.
Before sending the next revision, let me check this part again.
I want to use __GFP_NORETRY to indicate an "opportunistic-easy-to-fail
attempt", and I want to use the ASYNC migrate_mode to achieve that goal.
Do you see a problem with that?
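
To make it concrete, here is the shape of the mapping I have in mind (it
mirrors the patch later in this thread; just a sketch, not the final form):

	/* alloc_contig_range(): __GFP_NORETRY selects the failfast policy */
	cc.mode = (gfp_mask & __GFP_NORETRY) ? MIGRATE_ASYNC : MIGRATE_SYNC;

	/* __alloc_contig_migrate_range(): ASYNC means one opportunistic pass */
	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
		max_tries = 1;

So the tries counter is not a retry in the __GFP_NORETRY sense; it is capped
at a single pass when the caller asked for the opportunistic attempt.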
On Tue, Jan 26, 2021 at 08:38:08AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:42:34, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:07:01PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:54:59, Minchan Kim wrote:
> > > > The upcoming patch will introduce the __GFP_NORETRY semantic
> > > > in alloc_contig_range, which is a failfast mode of the API.
> > > > Instead of adding an additional parameter for gfp, replace
> > > > no_warn with a gfp flag.
> > > >
> > > > To keep the old behavior, it follows the rules below.
> > > >
> > > > no_warn gfp_flags
> > > >
> > > > false GFP_KERNEL
> > > > true GFP_KERNEL|__GFP_NOWARN
> > > > gfp & __GFP_NOWARN GFP_KERNEL | (gfp & __GFP_NOWARN)
> > > >
> > > > Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> > > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > [...]
> > > > diff --git a/mm/cma.c b/mm/cma.c
> > > > index 0ba69cd16aeb..d50627686fec 100644
> > > > --- a/mm/cma.c
> > > > +++ b/mm/cma.c
> > > > @@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
> > > > * @cma: Contiguous memory region for which the allocation is performed.
> > > > * @count: Requested number of pages.
> > > > * @align: Requested alignment of pages (in PAGE_SIZE order).
> > > > - * @no_warn: Avoid printing message about failed allocation
> > > > + * @gfp_mask: GFP mask to use during the cma allocation.
> > >
> > > Call out supported gfp flags explicitly. Have a look at kvmalloc_node
> > > for guidance.
> >
> > How about this?
> >
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index d50627686fec..b94727b694d6 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -423,6 +423,10 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
> > *
> > * This function allocates part of contiguous memory on specific
> > * contiguous memory area.
> > + *
> > + * For gfp_mask, GFP_KERNEL and __GFP_NORETRY are supported. __GFP_NORETRY
> > + * will avoid costly operations (e.g., waiting on page writeback and locking)
> > + * in the current implementation during page migration.
>
> rather than explicitly mentioning what the flag implies I think it would
> be more useful to state the intended usecase. See how kvmalloc_node says
> "__GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
> preferable to the vmalloc fallback, due to visible performance
> drawbacks.
> __GFP_NOWARN is also supported to suppress allocation failure messages."
>
> This would help people not familiar with internals to see whether this
> flag is a good fit for them.
>
> In this case I would go with
> "
> @flags: gfp mask. Must be compatible (superset) with GFP_KERNEL.
> [...]
> Reclaim modifiers (__GFP_RETRY_MAYFAIL, __GFP_NOFAIL) are not supported.
> __GFP_NORETRY is supported, and it should be used for opportunistic
> allocation attempts that should rather fail quickly when the caller has
> a fallback strategy.
> "
>
> Obviously for this patch you will go with a simple statement that
> Reclaim modifiers are not supported at all.
After the further discussion about gfp_flags in the thread of the next patch,
let me change it a bit more based on that.
Thanks for the suggestion, Michal.
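
For the record, roughly what I plan to fold into the cma_alloc kerneldoc
based on your wording (exact text may still change in the next revision):

 * @gfp_mask: GFP mask. Must be compatible (superset) with GFP_KERNEL.
 *	Reclaim modifiers (__GFP_RETRY_MAYFAIL, __GFP_NOFAIL) are not
 *	supported. __GFP_NORETRY is supported, and it should be used for
 *	opportunistic allocation attempts that should rather fail quickly
 *	when the caller has a fallback strategy.
 *	__GFP_NOWARN is also supported to suppress allocation failure
 *	messages.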
On Tue, Jan 26, 2021 at 08:46:05AM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:55:02, Minchan Kim wrote:
> > From: Hyesoo Yu <hyesoo.yu@samsung.com>
> >
> > This patch adds support for a chunk heap that allocates buffers
> > arranged as a list of fixed-size chunks taken from CMA.
> >
> > The chunk heap driver is bound directly to a reserved_memory
> > node by following Rob Herring's suggestion in [1].
> >
> > [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/…
>
> Who is using this allocator in the kernel?
Userspace uses the memory by mapping it via dmabuf.
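
For completeness, the userspace side looks like any other dma-buf heap
(a sketch; the heap node name here is only an example and depends on the
reserved-memory node):

	#include <stddef.h>
	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/dma-heap.h>

	static void *chunk_alloc_and_map(size_t len)
	{
		struct dma_heap_allocation_data data = {
			.len = len,
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		int heap = open("/dev/dma_heap/chunk-example", O_RDONLY | O_CLOEXEC);
		void *vaddr;

		if (heap < 0 || ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &data) < 0)
			return NULL;

		/* data.fd is a dma-buf fd; map it or pass it to a device. */
		vaddr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			     data.fd, 0);
		return vaddr == MAP_FAILED ? NULL : vaddr;
	}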
On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > Contiguous memory allocation can be stalled by waiting
> > > > on page writeback and/or page lock, which causes unpredictable
> > > > delay. That is an unavoidable cost for the requestor to get *big*
> > > > contiguous memory, but it is too expensive for *small* contiguous
> > > > memory (e.g., order-4) because the caller could retry the request
> > > > in a different range that has easily migratable pages,
> > > > without stalling.
> > > >
> > > > This patch introduces __GFP_NORETRY as a compaction gfp_mask in
> > > > alloc_contig_range so it will fail fast without blocking
> > > > when it encounters pages that need waiting.
> > >
> > > I am not against controlling how hard this allocator tries with the gfp
> > > mask, but this changelog is rather void of any data and any user.
> > >
> > > It is also rather dubious to have retries when the caller says not to
> > > retry.
> >
> > Since max_tries is 1 with ++tries, it shouldn't retry.
>
> OK, I have missed that. This is tricky code. ASYNC mode should be
> completely orthogonal to the retries count. Those are different things.
> Page allocator does an explicit bail out based on __GFP_NORETRY. You
> should be doing the same.
A concern with __GFP_NOWAIT is that, regardless of the flags passed to
cma_alloc, the internal implementation of alloc_contig_range will use
blockable operations. See __alloc_contig_migrate_range.
If we go with __GFP_NOWAIT, we should propagate the gfp_mask into
__alloc_contig_migrate_range to make cma_alloc consistent with alloc_pages.
(IIUC, that's what you want - making gfp_mask consistent between cma_alloc
and alloc_pages.) However, I worry that this direction will complicate
things, since CMA involves a migration context as well as a target page
allocation context. Sometimes a single gfp flag could have trouble
expressing both contexts at once.
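
To illustrate the two contexts with a rough sketch (this only shows what
propagating the caller's mask would imply; it is not a proposed patch):

	/* migration context: whether we may block on writeback/page lock */
	cc->mode = gfpflags_allow_blocking(gfp_mask) ?
					MIGRATE_SYNC : MIGRATE_ASYNC;

	/* target page allocation context: how the migration destination
	 * pages are allocated; today this is hardcoded, not derived from
	 * the caller's mask.
	 */
	mtc.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;

A single mask from the caller would have to drive both of those sanely,
which is where I worry it gets complicated.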
>
> > >
> > > Also, why didn't you consider the GFP_NOWAIT semantic for non-blocking mode?
> >
> > GFP_NOWAIT seems to be a low-level (specific) flag rather than the one I
> > want to express. Even though I mentioned only page writeback/lock in the
> > description, the goal is to avoid whatever costly operations we might find
> > later, so for such "failfast" behavior I thought __GFP_NORETRY would be a
> > good fit.
>
> I suspect you are too focused on implementation details here. Think
> about the intended semantic. Callers of this functionality will not
> think about those (I hope because if they rely on these details then the
> whole thing will become unmaintainable because any change would require
> an audit of all existing users). All you should be caring about is to
> control how expensive the call can be. GFP_NOWAIT is not really low
> level from that POV. It gives you a very lightweight non-sleeping
> attempt to allocate. GFP_NORETRY will give you potentially sleeping but
> an opportunistic-easy-to-fail attempt. And so on. See how that is
> absolutely free of any page writeback or any specific locking.
For the reason I mentioned above, I wanted to use __GFP_NORETRY to express
an "opportunistic-easy-to-fail attempt" so that cma_alloc can be "failfast"
in the migration context.
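
In other words, the caller pattern I am after is (illustration only; the
fallback path is just one example):

	/* opportunistic order-4 attempt: prefer failing fast over stalling */
	page = cma_alloc(cma, 1 << 4, 4,
			 GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
	if (!page)
		page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, 4);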
> --
> Michal Hocko
> SUSE Labs
On Mon, Jan 25, 2021 at 02:07:01PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:54:59, Minchan Kim wrote:
> > The upcoming patch will introduce the __GFP_NORETRY semantic
> > in alloc_contig_range, which is a failfast mode of the API.
> > Instead of adding an additional parameter for gfp, replace
> > no_warn with a gfp flag.
> >
> > To keep the old behavior, it follows the rules below.
> >
> > no_warn gfp_flags
> >
> > false GFP_KERNEL
> > true GFP_KERNEL|__GFP_NOWARN
> > gfp & __GFP_NOWARN GFP_KERNEL | (gfp & __GFP_NOWARN)
> >
> > Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> [...]
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 0ba69cd16aeb..d50627686fec 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
> > * @cma: Contiguous memory region for which the allocation is performed.
> > * @count: Requested number of pages.
> > * @align: Requested alignment of pages (in PAGE_SIZE order).
> > - * @no_warn: Avoid printing message about failed allocation
> > + * @gfp_mask: GFP mask to use during the cma allocation.
>
> Call out supported gfp flags explicitly. Have a look at kvmalloc_node
> for guidance.
How about this?
diff --git a/mm/cma.c b/mm/cma.c
index d50627686fec..b94727b694d6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -423,6 +423,10 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
*
* This function allocates part of contiguous memory on specific
* contiguous memory area.
+ *
+ * For gfp_mask, GFP_KERNEL and __GFP_NORETRY are supported. __GFP_NORETRY
+ * will avoid costly operations (e.g., waiting on page writeback and locking)
+ * in the current implementation during page migration.
*/
struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
gfp_t gfp_mask)
On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > Contiguous memory allocation can be stalled by waiting
> > on page writeback and/or page lock, which causes unpredictable
> > delay. That is an unavoidable cost for the requestor to get *big*
> > contiguous memory, but it is too expensive for *small* contiguous
> > memory (e.g., order-4) because the caller could retry the request
> > in a different range that has easily migratable pages,
> > without stalling.
> >
> > This patch introduces __GFP_NORETRY as a compaction gfp_mask in
> > alloc_contig_range so it will fail fast without blocking
> > when it encounters pages that need waiting.
>
> I am not against controlling how hard this allocator tries with the gfp
> mask, but this changelog is rather void of any data and any user.
>
> It is also rather dubious to have retries when the caller says not to
> retry.
Since max_tries is 1 with ++tries, it shouldn't retry.
>
> Also, why didn't you consider the GFP_NOWAIT semantic for non-blocking mode?
GFP_NOWAIT seems to be a low-level (specific) flag rather than the one I want
to express. Even though I mentioned only page writeback/lock in the
description, the goal is to avoid whatever costly operations we might find
later, so for such "failfast" behavior I thought __GFP_NORETRY would be a
good fit.
>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> > mm/page_alloc.c | 8 ++++++--
> > 1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index b031a5ae0bd5..1cdc3ee0b22e 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8491,12 +8491,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> > unsigned int nr_reclaimed;
> > unsigned long pfn = start;
> > unsigned int tries = 0;
> > + unsigned int max_tries = 5;
> > int ret = 0;
> > struct migration_target_control mtc = {
> > .nid = zone_to_nid(cc->zone),
> > .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> > };
> >
> > + if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
> > + max_tries = 1;
> > +
> > migrate_prep();
> >
> > while (pfn < end || !list_empty(&cc->migratepages)) {
> > @@ -8513,7 +8517,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> > break;
> > }
> > tries = 0;
> > - } else if (++tries == 5) {
> > + } else if (++tries == max_tries) {
> > ret = ret < 0 ? ret : -EBUSY;
> > break;
> > }
> > @@ -8564,7 +8568,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> > .nr_migratepages = 0,
> > .order = -1,
> > .zone = page_zone(pfn_to_page(start)),
> > - .mode = MIGRATE_SYNC,
> > + .mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
> > .ignore_skip_hint = true,
> > .no_set_skip_hint = true,
> > .gfp_mask = current_gfp_context(gfp_mask),
> > --
> > 2.30.0.296.g2bfb1c46d8-goog
>
> --
> Michal Hocko
> SUSE Labs
Requested by Thomas. I think it justifies a new level, since I tried
to make some forward progress on this last summer, and gave up (for
now). This is very tricky.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
Documentation/gpu/todo.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index dea9082c0e5f..f872d3d33218 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -23,6 +23,9 @@ Advanced: Tricky tasks that need fairly good understanding of the DRM subsystem
and graphics topics. Generally need the relevant hardware for development and
testing.
+Expert: Only attempt these if you've successfully completed some tricky
+refactorings already and are an expert in the specific area.
+
Subsystem-wide refactorings
===========================
@@ -168,6 +171,22 @@ Contact: Daniel Vetter, respective driver maintainers
Level: Advanced
+Move Buffer Object Locking to dma_resv_lock()
+---------------------------------------------
+
+Many drivers have their own per-object locking scheme, usually using
+mutex_lock(). This causes all kinds of trouble for buffer sharing, since
+depending on which driver is the exporter and importer, the locking hierarchy is
+reversed.
+
+To solve this we need one standard per-object locking mechanism, which is
+dma_resv_lock(). This lock needs to be called as the outermost lock, with all
+other driver-specific per-object locks removed. The problem is that rolling out
+the actual change to the locking contract is a flag day, due to struct dma_buf
+buffer sharing.
+
+Level: Expert
+
Convert logging to drm_* functions with drm_device paramater
------------------------------------------------------------
--
2.30.0
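
For the "Move Buffer Object Locking to dma_resv_lock()" entry above, the
target contract is roughly the following (a sketch only; the function name
is a placeholder and drivers will differ in detail):

	/* dma_resv_lock() becomes the outermost per-object lock; the old
	 * driver-private mutex goes away entirely.
	 */
	static int driver_bo_op(struct drm_gem_object *obj)
	{
		int ret;

		ret = dma_resv_lock(obj->resv, NULL);
		if (ret)
			return ret;

		/* ... touch the object's backing storage and fences ... */

		dma_resv_unlock(obj->resv);
		return 0;
	}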
This is rather overkill since currently all drivers call this from
hardirq (or at least timers). But maybe in the future we're going to
have threaded irq handlers and whatnot; it doesn't hurt to be prepared.
Plus this is an easy start for sprinkling these fence annotations into
shared code.
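
For reference, the shape of the annotation being added here (shown out of
context, just to document the pattern):

	bool fence_cookie = dma_fence_begin_signalling();

	/* everything in here is inside the fence signalling critical
	 * section and must not, directly or indirectly, wait on
	 * dma_fence completion
	 */

	dma_fence_end_signalling(fence_cookie);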
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-rdma@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
drivers/gpu/drm/drm_vblank.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 30912d8f82a5..f2aeb9bf325f 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -24,6 +24,7 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-fence.h>
#include <linux/export.h>
#include <linux/kthread.h>
#include <linux/moduleparam.h>
@@ -1922,7 +1923,7 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
unsigned long irqflags;
- bool disable_irq;
+ bool disable_irq, fence_cookie;
if (drm_WARN_ON_ONCE(dev, !drm_dev_has_vblank(dev)))
return false;
@@ -1930,6 +1931,8 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
return false;
+ fence_cookie = dma_fence_begin_signalling();
+
spin_lock_irqsave(&dev->event_lock, irqflags);
/* Need timestamp lock to prevent concurrent execution with
@@ -1942,6 +1945,7 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
if (!vblank->enabled) {
spin_unlock(&dev->vblank_time_lock);
spin_unlock_irqrestore(&dev->event_lock, irqflags);
+ dma_fence_end_signalling(fence_cookie);
return false;
}
@@ -1968,6 +1972,8 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
if (disable_irq)
vblank_disable_fn(&vblank->disable_timer);
+ dma_fence_end_signalling(fence_cookie);
+
return true;
}
EXPORT_SYMBOL(drm_handle_vblank);
--
2.30.0
This is a bit disappointing since we need to split the annotations
over all the different parts.
I was considering just leaking the critical section into the
->atomic_commit_tail callback of each driver. But that would mean we
need to pass the fence_cookie into each driver (there's a total of 13
implementations of this hook right now), so it would be a bad flag day. It
would also be a bit of a leaky abstraction.
Hence just do it function-by-function.
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-rdma@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
drivers/gpu/drm/drm_atomic_helper.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 4a66768b6057..69121d2925bd 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1567,6 +1567,7 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_flip_done);
void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
{
struct drm_device *dev = old_state->dev;
+ bool fence_cookie = dma_fence_begin_signalling();
drm_atomic_helper_commit_modeset_disables(dev, old_state);
@@ -1578,6 +1579,8 @@ void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
drm_atomic_helper_commit_hw_done(old_state);
+ dma_fence_end_signalling(fence_cookie);
+
drm_atomic_helper_wait_for_vblanks(dev, old_state);
drm_atomic_helper_cleanup_planes(dev, old_state);
@@ -1597,6 +1600,7 @@ EXPORT_SYMBOL(drm_atomic_helper_commit_tail);
void drm_atomic_helper_commit_tail_rpm(struct drm_atomic_state *old_state)
{
struct drm_device *dev = old_state->dev;
+ bool fence_cookie = dma_fence_begin_signalling();
drm_atomic_helper_commit_modeset_disables(dev, old_state);
@@ -1609,6 +1613,8 @@ void drm_atomic_helper_commit_tail_rpm(struct drm_atomic_state *old_state)
drm_atomic_helper_commit_hw_done(old_state);
+ dma_fence_end_signalling(fence_cookie);
+
drm_atomic_helper_wait_for_vblanks(dev, old_state);
drm_atomic_helper_cleanup_planes(dev, old_state);
@@ -1624,6 +1630,9 @@ static void commit_tail(struct drm_atomic_state *old_state)
ktime_t start;
s64 commit_time_ms;
unsigned int i, new_self_refresh_mask = 0;
+ bool fence_cookie;
+
+ fence_cookie = dma_fence_begin_signalling();
funcs = dev->mode_config.helper_private;
@@ -1652,6 +1661,8 @@ static void commit_tail(struct drm_atomic_state *old_state)
if (new_crtc_state->self_refresh_active)
new_self_refresh_mask |= BIT(i);
+ dma_fence_end_signalling(fence_cookie);
+
if (funcs && funcs->atomic_commit_tail)
funcs->atomic_commit_tail(old_state);
else
@@ -1810,6 +1821,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
bool nonblock)
{
int ret;
+ bool fence_cookie;
if (state->async_update) {
ret = drm_atomic_helper_prepare_planes(dev, state);
@@ -1832,6 +1844,8 @@ int drm_atomic_helper_commit(struct drm_device *dev,
if (ret)
return ret;
+ fence_cookie = dma_fence_begin_signalling();
+
if (!nonblock) {
ret = drm_atomic_helper_wait_for_fences(dev, state, true);
if (ret)
@@ -1869,6 +1883,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
*/
drm_atomic_state_get(state);
+ dma_fence_end_signalling(fence_cookie);
if (nonblock)
queue_work(system_unbound_wq, &state->commit_work);
else
@@ -1877,6 +1892,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
return 0;
err:
+ dma_fence_end_signalling(fence_cookie);
drm_atomic_helper_cleanup_planes(dev, state);
return ret;
}
--
2.30.0