Replace the freelist (O(n)) used for free block management with a red-black tree, providing more efficient O(log n) search, insert, and delete operations. This improves scalability and performance when managing large numbers of free blocks per order (e.g., hundreds or thousands).
In the VK-CTS memory stress subtest, the buddy manager merges fragmented memory and inserts freed blocks into the freelist. Since freelist insertion is O(n), this becomes a bottleneck as fragmentation increases. Benchmarking shows list_insert_sorted() consumes ~52.69% CPU with the freelist, compared to just 0.03% with the RB tree (rbtree_insert.isra.0), despite performing the same sorted insert.
This also improves performance in heavily fragmented workloads, such as games or graphics tests that stress memory.
As the buddy allocator evolves with new features such as clear-page tracking, the resulting fragmentation and complexity have grown. These RB-tree based design changes are introduced to address that growth and ensure the allocator continues to perform efficiently under fragmented conditions.
The RB tree implementation with separate clear/dirty trees provides:
- O(n log n) aggregate complexity for all operations instead of O(n^2)
- Elimination of soft lockups and system instability
- Improved code maintainability and clarity
- Better scalability for large memory systems
- Predictable performance under fragmentation
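For readers unfamiliar with the kernel rbtree API, the heart of the conversion condenses to the sketch below: a less() comparator keyed on the block's start offset, handed to rb_add() from linux/rbtree.h, which performs the O(log n) descent and rebalance that the O(n) sorted-list walk used to do. This is a condensed sketch of the code in the diff below, not a drop-in replacement for it:

    /* Order free blocks within an order by their start offset. */
    static bool rbtree_block_offset_less(struct rb_node *node,
                                         const struct rb_node *parent)
    {
        return drm_buddy_block_offset(rb_entry(node, struct drm_buddy_block, rb)) <
               drm_buddy_block_offset(rb_entry(parent, struct drm_buddy_block, rb));
    }

    static void rbtree_insert(struct drm_buddy *mm,
                              struct drm_buddy_block *block)
    {
        /* O(log n) insert, vs the O(n) list_insert_sorted() walk. */
        rb_add(&block->rb,
               &mm->free_tree[drm_buddy_block_order(block)],
               rbtree_block_offset_less);
    }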
v3 (Matthew):
- Remove the RB_EMPTY_NODE check in the force_merge function.
- Rename the rb for-loop macros to have less generic names and move them to the .c file.
- Make the rb node (rb) and list (link) fields a union.
v4 (Jani Nikula):
- The kernel-doc comment should start with "/**".
- Move all the rbtree macros to rbtree.h and add parens to ensure correct precedence.
v5:
- Remove the inline from a .c file (Jani Nikula).
v6 (Peter Zijlstra):
- Use the rb_add() function, replacing the open-coded rbtree_insert() logic.
v7:
- A full walk of an rbtree is slower than a list walk (Peter Zijlstra).
- The existing rbtree_postorder_for_each_entry_safe macro should be used where traversal order is not a critical factor (Christian).
Cc: stable@vger.kernel.org
Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
---
 drivers/gpu/drm/drm_buddy.c | 188 ++++++++++++++++++++++--------------
 include/drm/drm_buddy.h     |  11 ++-
 2 files changed, 124 insertions(+), 75 deletions(-)
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index a94061f373de..67aa67229cc3 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -14,6 +14,8 @@
 
 static struct kmem_cache *slab_blocks;
 
+#define rbtree_get_free_block(node) rb_entry((node), struct drm_buddy_block, rb)
+
 static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
                                                struct drm_buddy_block *parent,
                                                unsigned int order,
@@ -31,6 +33,8 @@ static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
     block->header |= order;
     block->parent = parent;
 
+    RB_CLEAR_NODE(&block->rb);
+
     BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED);
     return block;
 }
@@ -41,23 +45,49 @@ static void drm_block_free(struct drm_buddy *mm,
     kmem_cache_free(slab_blocks, block);
 }
 
-static void list_insert_sorted(struct drm_buddy *mm,
-                               struct drm_buddy_block *block)
+static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block,
+                                        const struct drm_buddy_block *node)
 {
-    struct drm_buddy_block *node;
-    struct list_head *head;
+    return drm_buddy_block_offset(block) < drm_buddy_block_offset(node);
+}
 
-    head = &mm->free_list[drm_buddy_block_order(block)];
-    if (list_empty(head)) {
-        list_add(&block->link, head);
-        return;
-    }
+static bool rbtree_block_offset_less(struct rb_node *block,
+                                     const struct rb_node *node)
+{
+    return drm_buddy_block_offset_less(rbtree_get_free_block(block),
+                                       rbtree_get_free_block(node));
+}
 
-    list_for_each_entry(node, head, link)
-        if (drm_buddy_block_offset(block) < drm_buddy_block_offset(node))
-            break;
+static void rbtree_insert(struct drm_buddy *mm,
+                          struct drm_buddy_block *block)
+{
+    rb_add(&block->rb,
+           &mm->free_tree[drm_buddy_block_order(block)],
+           rbtree_block_offset_less);
+}
+
+static void rbtree_remove(struct drm_buddy *mm,
+                          struct drm_buddy_block *block)
+{
+    struct rb_root *root;
+
+    root = &mm->free_tree[drm_buddy_block_order(block)];
+    rb_erase(&block->rb, root);
 
-    __list_add(&block->link, node->link.prev, &node->link);
+    RB_CLEAR_NODE(&block->rb);
+}
+
+static struct drm_buddy_block *
+rbtree_last_entry(struct drm_buddy *mm, unsigned int order)
+{
+    struct rb_node *node = rb_last(&mm->free_tree[order]);
+
+    return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
+}
+
+static bool rbtree_is_empty(struct drm_buddy *mm, unsigned int order)
+{
+    return RB_EMPTY_ROOT(&mm->free_tree[order]);
 }
 
 static void clear_reset(struct drm_buddy_block *block)
@@ -70,12 +100,13 @@ static void mark_cleared(struct drm_buddy_block *block)
     block->header |= DRM_BUDDY_HEADER_CLEAR;
 }
 
-static void mark_allocated(struct drm_buddy_block *block)
+static void mark_allocated(struct drm_buddy *mm,
+                           struct drm_buddy_block *block)
 {
     block->header &= ~DRM_BUDDY_HEADER_STATE;
     block->header |= DRM_BUDDY_ALLOCATED;
 
-    list_del(&block->link);
+    rbtree_remove(mm, block);
 }
 
 static void mark_free(struct drm_buddy *mm,
@@ -84,15 +115,16 @@ static void mark_free(struct drm_buddy *mm,
     block->header &= ~DRM_BUDDY_HEADER_STATE;
     block->header |= DRM_BUDDY_FREE;
 
-    list_insert_sorted(mm, block);
+    rbtree_insert(mm, block);
 }
 
-static void mark_split(struct drm_buddy_block *block)
+static void mark_split(struct drm_buddy *mm,
+                       struct drm_buddy_block *block)
 {
     block->header &= ~DRM_BUDDY_HEADER_STATE;
     block->header |= DRM_BUDDY_SPLIT;
 
-    list_del(&block->link);
+    rbtree_remove(mm, block);
 }
 
 static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2)
@@ -148,7 +180,7 @@ static unsigned int __drm_buddy_free(struct drm_buddy *mm,
             mark_cleared(parent);
         }
 
-        list_del(&buddy->link);
+        rbtree_remove(mm, buddy);
         if (force_merge && drm_buddy_block_is_clear(buddy))
             mm->clear_avail -= drm_buddy_block_size(mm, buddy);
 
@@ -179,13 +211,19 @@ static int __force_merge(struct drm_buddy *mm,
         return -EINVAL;
 
     for (i = min_order - 1; i >= 0; i--) {
-        struct drm_buddy_block *block, *prev;
+        struct rb_root *root = &mm->free_tree[i];
+        struct rb_node *iter;
+
+        iter = rb_last(root);
 
-        list_for_each_entry_safe_reverse(block, prev, &mm->free_list[i], link) {
-            struct drm_buddy_block *buddy;
+        while (iter) {
+            struct drm_buddy_block *block, *buddy;
             u64 block_start, block_end;
 
-            if (!block->parent)
+            block = rbtree_get_free_block(iter);
+            iter = rb_prev(iter);
+
+            if (!block || !block->parent)
                 continue;
 
             block_start = drm_buddy_block_offset(block);
@@ -201,15 +239,10 @@ static int __force_merge(struct drm_buddy *mm,
             WARN_ON(drm_buddy_block_is_clear(block) ==
                     drm_buddy_block_is_clear(buddy));
 
-            /*
-             * If the prev block is same as buddy, don't access the
-             * block in the next iteration as we would free the
-             * buddy block as part of the free function.
-             */
-            if (prev == buddy)
-                prev = list_prev_entry(prev, link);
+            if (iter == &buddy->rb)
+                iter = rb_prev(iter);
 
-            list_del(&block->link);
+            rbtree_remove(mm, block);
             if (drm_buddy_block_is_clear(block))
                 mm->clear_avail -= drm_buddy_block_size(mm, block);
 
@@ -258,14 +291,14 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 
     BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
 
-    mm->free_list = kmalloc_array(mm->max_order + 1,
-                                  sizeof(struct list_head),
+    mm->free_tree = kmalloc_array(mm->max_order + 1,
+                                  sizeof(struct rb_root),
                                   GFP_KERNEL);
-    if (!mm->free_list)
+    if (!mm->free_tree)
         return -ENOMEM;
 
     for (i = 0; i <= mm->max_order; ++i)
-        INIT_LIST_HEAD(&mm->free_list[i]);
+        mm->free_tree[i] = RB_ROOT;
 
     mm->n_roots = hweight64(size);
 
@@ -273,7 +306,7 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
                         sizeof(struct drm_buddy_block *),
                         GFP_KERNEL);
     if (!mm->roots)
-        goto out_free_list;
+        goto out_free_tree;
 
     offset = 0;
     i = 0;
@@ -312,8 +345,8 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
     while (i--)
         drm_block_free(mm, mm->roots[i]);
     kfree(mm->roots);
-out_free_list:
-    kfree(mm->free_list);
+out_free_tree:
+    kfree(mm->free_tree);
     return -ENOMEM;
 }
 EXPORT_SYMBOL(drm_buddy_init);
@@ -323,7 +356,7 @@ EXPORT_SYMBOL(drm_buddy_init);
  *
  * @mm: DRM buddy manager to free
  *
- * Cleanup memory manager resources and the freelist
+ * Cleanup memory manager resources and the freetree
  */
 void drm_buddy_fini(struct drm_buddy *mm)
 {
@@ -350,7 +383,7 @@ void drm_buddy_fini(struct drm_buddy *mm)
     WARN_ON(mm->avail != mm->size);
 
     kfree(mm->roots);
-    kfree(mm->free_list);
+    kfree(mm->free_tree);
 }
 EXPORT_SYMBOL(drm_buddy_fini);
 
@@ -383,7 +416,7 @@ static int split_block(struct drm_buddy *mm,
         clear_reset(block);
     }
 
-    mark_split(block);
+    mark_split(mm, block);
 
     return 0;
 }
@@ -412,7 +445,7 @@ EXPORT_SYMBOL(drm_get_buddy);
  * @is_clear: blocks clear state
  *
  * Reset the clear state based on @is_clear value for each block
- * in the freelist.
+ * in the freetree.
 */
 void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
 {
@@ -431,9 +464,9 @@ void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
     }
 
     for (i = 0; i <= mm->max_order; ++i) {
-        struct drm_buddy_block *block;
+        struct drm_buddy_block *block, *tmp;
 
-        list_for_each_entry_reverse(block, &mm->free_list[i], link) {
+        rbtree_postorder_for_each_entry_safe(block, tmp, &mm->free_tree[i], rb) {
             if (is_clear != drm_buddy_block_is_clear(block)) {
                 if (is_clear) {
                     mark_cleared(block);
@@ -639,14 +672,18 @@ get_maxblock(struct drm_buddy *mm, unsigned int order,
     unsigned int i;
 
     for (i = order; i <= mm->max_order; ++i) {
+        struct rb_node *iter = rb_last(&mm->free_tree[i]);
         struct drm_buddy_block *tmp_block;
 
-        list_for_each_entry_reverse(tmp_block, &mm->free_list[i], link) {
-            if (block_incompatible(tmp_block, flags))
-                continue;
+        while (iter) {
+            tmp_block = rbtree_get_free_block(iter);
 
-            block = tmp_block;
-            break;
+            if (!block_incompatible(tmp_block, flags)) {
+                block = tmp_block;
+                break;
+            }
+
+            iter = rb_prev(iter);
         }
 
         if (!block)
@@ -667,7 +704,7 @@ get_maxblock(struct drm_buddy *mm, unsigned int order,
 }
 
 static struct drm_buddy_block *
-alloc_from_freelist(struct drm_buddy *mm,
+alloc_from_freetree(struct drm_buddy *mm,
                     unsigned int order,
                     unsigned long flags)
 {
@@ -682,14 +719,18 @@ alloc_from_freelist(struct drm_buddy *mm,
         tmp = drm_buddy_block_order(block);
     } else {
         for (tmp = order; tmp <= mm->max_order; ++tmp) {
+            struct rb_node *iter = rb_last(&mm->free_tree[tmp]);
             struct drm_buddy_block *tmp_block;
 
-            list_for_each_entry_reverse(tmp_block, &mm->free_list[tmp], link) {
-                if (block_incompatible(tmp_block, flags))
-                    continue;
+            while (iter) {
+                tmp_block = rbtree_get_free_block(iter);
 
-                block = tmp_block;
-                break;
+                if (!block_incompatible(tmp_block, flags)) {
+                    block = tmp_block;
+                    break;
+                }
+
+                iter = rb_prev(iter);
             }
 
             if (block)
@@ -700,10 +741,8 @@ alloc_from_freelist(struct drm_buddy *mm,
     if (!block) {
         /* Fallback method */
         for (tmp = order; tmp <= mm->max_order; ++tmp) {
-            if (!list_empty(&mm->free_list[tmp])) {
-                block = list_last_entry(&mm->free_list[tmp],
-                                        struct drm_buddy_block,
-                                        link);
+            if (!rbtree_is_empty(mm, tmp)) {
+                block = rbtree_last_entry(mm, tmp);
                 if (block)
                     break;
             }
@@ -771,7 +810,7 @@ static int __alloc_range(struct drm_buddy *mm,
 
         if (contains(start, end, block_start, block_end)) {
             if (drm_buddy_block_is_free(block)) {
-                mark_allocated(block);
+                mark_allocated(mm, block);
                 total_allocated += drm_buddy_block_size(mm, block);
                 mm->avail -= drm_buddy_block_size(mm, block);
                 if (drm_buddy_block_is_clear(block))
@@ -849,8 +888,8 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 {
     u64 rhs_offset, lhs_offset, lhs_size, filled;
     struct drm_buddy_block *block;
-    struct list_head *list;
     LIST_HEAD(blocks_lhs);
+    struct rb_node *iter;
     unsigned long pages;
     unsigned int order;
     u64 modify_size;
@@ -862,11 +901,14 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
     if (order == 0)
         return -ENOSPC;
 
-    list = &mm->free_list[order];
-    if (list_empty(list))
+    if (rbtree_is_empty(mm, order))
         return -ENOSPC;
 
-    list_for_each_entry_reverse(block, list, link) {
+    iter = rb_last(&mm->free_tree[order]);
+
+    while (iter) {
+        block = rbtree_get_free_block(iter);
+
         /* Allocate blocks traversing RHS */
         rhs_offset = drm_buddy_block_offset(block);
         err = __drm_buddy_alloc_range(mm, rhs_offset, size,
@@ -891,6 +933,8 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
         }
         /* Free blocks for the next iteration */
         drm_buddy_free_list_internal(mm, blocks);
+
+        iter = rb_prev(iter);
     }
 
     return -ENOSPC;
@@ -976,7 +1020,7 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
     list_add(&block->tmp_link, &dfs);
     err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
     if (err) {
-        mark_allocated(block);
+        mark_allocated(mm, block);
         mm->avail -= drm_buddy_block_size(mm, block);
         if (drm_buddy_block_is_clear(block))
             mm->clear_avail -= drm_buddy_block_size(mm, block);
@@ -999,8 +1043,8 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
         return __drm_buddy_alloc_range_bias(mm, start, end,
                                             order, flags);
     else
-        /* Allocate from freelist */
-        return alloc_from_freelist(mm, order, flags);
+        /* Allocate from freetree */
+        return alloc_from_freetree(mm, order, flags);
 }
 
 /**
@@ -1017,8 +1061,8 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
  * alloc_range_bias() called on range limitations, which traverses
 * the tree and returns the desired block.
 *
- * alloc_from_freelist() called when *no* range restrictions
- * are enforced, which picks the block from the freelist.
+ * alloc_from_freetree() called when *no* range restrictions
+ * are enforced, which picks the block from the freetree.
 *
 * Returns:
 * 0 on success, error code on failure.
@@ -1120,7 +1164,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
         }
     } while (1);
 
-    mark_allocated(block);
+    mark_allocated(mm, block);
     mm->avail -= drm_buddy_block_size(mm, block);
     if (drm_buddy_block_is_clear(block))
         mm->clear_avail -= drm_buddy_block_size(mm, block);
@@ -1201,10 +1245,10 @@ void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
                mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
 
     for (order = mm->max_order; order >= 0; order--) {
-        struct drm_buddy_block *block;
+        struct drm_buddy_block *block, *tmp;
         u64 count = 0, free;
 
-        list_for_each_entry(block, &mm->free_list[order], link) {
+        rbtree_postorder_for_each_entry_safe(block, tmp, &mm->free_tree[order], rb) {
             BUG_ON(!drm_buddy_block_is_free(block));
             count++;
         }
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
index 513837632b7d..9ee105d4309f 100644
--- a/include/drm/drm_buddy.h
+++ b/include/drm/drm_buddy.h
@@ -10,6 +10,7 @@
 #include <linux/list.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
+#include <linux/rbtree.h>
 
 #include <drm/drm_print.h>
 
@@ -53,7 +54,11 @@ struct drm_buddy_block {
      * a list, if so desired. As soon as the block is freed with
      * drm_buddy_free* ownership is given back to the mm.
      */
-    struct list_head link;
+    union {
+        struct rb_node rb;
+        struct list_head link;
+    };
+
     struct list_head tmp_link;
 };
 
@@ -68,7 +73,7 @@ struct drm_buddy_block {
 */
struct drm_buddy {
     /* Maintain a free list for each order. */
-    struct list_head *free_list;
+    struct rb_root *free_tree;
 
     /*
      * Maintain explicit binary tree(s) to track the allocation of the
@@ -94,7 +99,7 @@ struct drm_buddy {
 };
 
 static inline u64
-drm_buddy_block_offset(struct drm_buddy_block *block)
+drm_buddy_block_offset(const struct drm_buddy_block *block)
 {
     return block->header & DRM_BUDDY_HEADER_OFFSET;
 }
base-commit: 3a9cf301794c1a49d95eeb13119ff490fb5cfe88
Maintain two separate RB trees per order - one for clear (zeroed) blocks and another for dirty (uncleared) blocks. This separation improves code clarity and makes it more obvious which tree is being searched during allocation. It also improves scalability and efficiency when searching for a specific type of block, avoiding unnecessary checks and making the allocator more predictable under fragmentation.
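Concretely, the single free_tree[order] array becomes free_trees[tree][order], and both the free and allocate paths first select a tree. The sketch below condenses that selection logic from the diff that follows; note that preferred_root() is a purely illustrative helper name (the patch open-codes this flag check inside alloc_from_freetree()):

    enum drm_buddy_free_tree {
        DRM_BUDDY_CLEAR_TREE = 0,
        DRM_BUDDY_DIRTY_TREE,
        DRM_BUDDY_MAX_FREE_TREES,
    };

    /* A freed block lands in the tree matching its clear state. */
    static enum drm_buddy_free_tree
    get_block_tree(struct drm_buddy_block *block)
    {
        return drm_buddy_block_is_clear(block) ?
               DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
    }

    /* An allocation starts from the tree matching its flags and only
     * falls back to the other tree when the preferred one runs dry.
     * (Illustrative helper, not part of the patch.)
     */
    static struct rb_root *
    preferred_root(struct drm_buddy *mm, unsigned int order,
                   unsigned long flags)
    {
        enum drm_buddy_free_tree tree;

        tree = (flags & DRM_BUDDY_CLEAR_ALLOCATION) ?
               DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;

        return &mm->free_trees[tree][order];
    }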
The changes have been validated using the existing drm_buddy_test KUnit test cases, along with selected graphics workloads, to ensure correctness and avoid regressions.
v2:
- Add the missing Suggested-by tag.
v3 (Matthew):
- Remove the double underscores from the internal functions.
- Rename the internal functions to have less generic names.
- Fix the error handling code.
- Pass a tree argument to the tree macro.
- Use the existing dirty/free bit instead of a new tree field.
- Use a free_trees[] array instead of separate clear_tree and dirty_tree fields, for a cleaner approach.
v4:
- Fix a bug reported by Intel CI (fixed by Matthew Auld).
- Replace the get_root function with &mm->free_trees[tree][order] (Matthew).
- Remove the unnecessary rbtree_is_empty() check (Matthew).
- Remove the unnecessary get_tree_for_flags() function.
- Rename get_tree_for_block() to get_block_tree() for more clarity.
v5 (Jani Nikula):
- Don't use static inline in .c files.
- The enum free_tree and its enumerator names are too generic for a header; the whole enum should be an implementation detail.
v6:
- Rewrite the __force_merge() function using rb_last() and rb_prev().
Cc: stable@vger.kernel.org
Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Suggested-by: Matthew Auld <matthew.auld@intel.com>
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4260
---
 drivers/gpu/drm/drm_buddy.c | 321 ++++++++++++++++++++----------------
 include/drm/drm_buddy.h     |   2 +-
 2 files changed, 182 insertions(+), 141 deletions(-)
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index 67aa67229cc3..6e12a0b5d5e3 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -12,9 +12,16 @@
 
 #include <drm/drm_buddy.h>
 
+enum drm_buddy_free_tree {
+    DRM_BUDDY_CLEAR_TREE = 0,
+    DRM_BUDDY_DIRTY_TREE,
+    DRM_BUDDY_MAX_FREE_TREES,
+};
+
 static struct kmem_cache *slab_blocks;
 
-#define rbtree_get_free_block(node) rb_entry((node), struct drm_buddy_block, rb)
+#define for_each_free_tree(tree) \
+    for ((tree) = 0; (tree) < DRM_BUDDY_MAX_FREE_TREES; (tree)++)
 
 static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
                                                struct drm_buddy_block *parent,
@@ -45,6 +52,30 @@ static void drm_block_free(struct drm_buddy *mm,
     kmem_cache_free(slab_blocks, block);
 }
 
+static enum drm_buddy_free_tree
+get_block_tree(struct drm_buddy_block *block)
+{
+    return drm_buddy_block_is_clear(block) ?
+           DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+}
+
+static struct drm_buddy_block *
+rbtree_get_free_block(const struct rb_node *node)
+{
+    return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
+}
+
+static struct drm_buddy_block *
+rbtree_last_free_block(struct rb_root *root)
+{
+    return rbtree_get_free_block(rb_last(root));
+}
+
+static bool rbtree_is_empty(struct rb_root *root)
+{
+    return RB_EMPTY_ROOT(root);
+}
+
 static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block,
                                         const struct drm_buddy_block *node)
 {
@@ -59,37 +90,28 @@ static bool rbtree_block_offset_less(struct rb_node *block,
 }
 
 static void rbtree_insert(struct drm_buddy *mm,
-                          struct drm_buddy_block *block)
+                          struct drm_buddy_block *block,
+                          enum drm_buddy_free_tree tree)
 {
     rb_add(&block->rb,
-           &mm->free_tree[drm_buddy_block_order(block)],
+           &mm->free_trees[tree][drm_buddy_block_order(block)],
            rbtree_block_offset_less);
 }
 
 static void rbtree_remove(struct drm_buddy *mm,
                           struct drm_buddy_block *block)
 {
+    unsigned int order = drm_buddy_block_order(block);
+    enum drm_buddy_free_tree tree;
     struct rb_root *root;
 
-    root = &mm->free_tree[drm_buddy_block_order(block)];
-    rb_erase(&block->rb, root);
+    tree = get_block_tree(block);
+    root = &mm->free_trees[tree][order];
 
+    rb_erase(&block->rb, root);
     RB_CLEAR_NODE(&block->rb);
 }
 
-static struct drm_buddy_block *
-rbtree_last_entry(struct drm_buddy *mm, unsigned int order)
-{
-    struct rb_node *node = rb_last(&mm->free_tree[order]);
-
-    return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
-}
-
-static bool rbtree_is_empty(struct drm_buddy *mm, unsigned int order)
-{
-    return RB_EMPTY_ROOT(&mm->free_tree[order]);
-}
-
 static void clear_reset(struct drm_buddy_block *block)
 {
     block->header &= ~DRM_BUDDY_HEADER_CLEAR;
@@ -112,10 +134,13 @@ static void mark_allocated(struct drm_buddy *mm,
 static void mark_free(struct drm_buddy *mm,
                       struct drm_buddy_block *block)
 {
+    enum drm_buddy_free_tree tree;
+
     block->header &= ~DRM_BUDDY_HEADER_STATE;
     block->header |= DRM_BUDDY_FREE;
 
-    rbtree_insert(mm, block);
+    tree = get_block_tree(block);
+    rbtree_insert(mm, block, tree);
 }
 
 static void mark_split(struct drm_buddy *mm,
@@ -201,6 +226,7 @@ static int __force_merge(struct drm_buddy *mm,
                          u64 end,
                          unsigned int min_order)
 {
+    enum drm_buddy_free_tree tree;
     unsigned int order;
     int i;
 
@@ -210,45 +236,48 @@ static int __force_merge(struct drm_buddy *mm,
     if (min_order > mm->max_order)
         return -EINVAL;
 
-    for (i = min_order - 1; i >= 0; i--) {
-        struct rb_root *root = &mm->free_tree[i];
-        struct rb_node *iter;
-
-        iter = rb_last(root);
-
-        while (iter) {
-            struct drm_buddy_block *block, *buddy;
-            u64 block_start, block_end;
-
-            block = rbtree_get_free_block(iter);
-            iter = rb_prev(iter);
-
-            if (!block || !block->parent)
-                continue;
-
-            block_start = drm_buddy_block_offset(block);
-            block_end = block_start + drm_buddy_block_size(mm, block) - 1;
-
-            if (!contains(start, end, block_start, block_end))
-                continue;
-
-            buddy = __get_buddy(block);
-            if (!drm_buddy_block_is_free(buddy))
-                continue;
-
-            WARN_ON(drm_buddy_block_is_clear(block) ==
-                    drm_buddy_block_is_clear(buddy));
-
-            if (iter == &buddy->rb)
-                iter = rb_prev(iter);
-
-            rbtree_remove(mm, block);
-            if (drm_buddy_block_is_clear(block))
-                mm->clear_avail -= drm_buddy_block_size(mm, block);
-
-            order = __drm_buddy_free(mm, block, true);
-            if (order >= min_order)
-                return 0;
+    for_each_free_tree(tree) {
+        for (i = min_order - 1; i >= 0; i--) {
+            struct rb_node *iter = rb_last(&mm->free_trees[tree][i]);
+
+            while (iter) {
+                struct drm_buddy_block *block, *buddy;
+                u64 block_start, block_end;
+
+                block = rbtree_get_free_block(iter);
+                iter = rb_prev(iter);
+
+                if (!block || !block->parent)
+                    continue;
+
+                block_start = drm_buddy_block_offset(block);
+                block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+
+                if (!contains(start, end, block_start, block_end))
+                    continue;
+
+                buddy = __get_buddy(block);
+                if (!drm_buddy_block_is_free(buddy))
+                    continue;
+
+                WARN_ON(drm_buddy_block_is_clear(block) ==
+                        drm_buddy_block_is_clear(buddy));
+
+                /*
+                 * Advance to the next node when the current node is the buddy,
+                 * as freeing the block will also remove its buddy from the tree.
+                 */
+                if (iter == &buddy->rb)
+                    iter = rb_prev(iter);
+
+                rbtree_remove(mm, block);
+                if (drm_buddy_block_is_clear(block))
+                    mm->clear_avail -= drm_buddy_block_size(mm, block);
+
+                order = __drm_buddy_free(mm, block, true);
+                if (order >= min_order)
+                    return 0;
+            }
         }
     }
 
@@ -269,7 +298,7 @@ static int __force_merge(struct drm_buddy *mm,
 */
 int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 {
-    unsigned int i;
+    unsigned int i, j;
     u64 offset;
 
     if (size < chunk_size)
@@ -291,14 +320,22 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 
     BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
 
-    mm->free_tree = kmalloc_array(mm->max_order + 1,
-                                  sizeof(struct rb_root),
-                                  GFP_KERNEL);
-    if (!mm->free_tree)
+    mm->free_trees = kmalloc_array(DRM_BUDDY_MAX_FREE_TREES,
                                   sizeof(*mm->free_trees),
+                                   GFP_KERNEL);
+    if (!mm->free_trees)
         return -ENOMEM;
 
-    for (i = 0; i <= mm->max_order; ++i)
-        mm->free_tree[i] = RB_ROOT;
+    for (i = 0; i < DRM_BUDDY_MAX_FREE_TREES; i++) {
+        mm->free_trees[i] = kmalloc_array(mm->max_order + 1,
+                                          sizeof(struct rb_root),
+                                          GFP_KERNEL);
+        if (!mm->free_trees[i])
+            goto out_free_tree;
+
+        for (j = 0; j <= mm->max_order; ++j)
+            mm->free_trees[i][j] = RB_ROOT;
+    }
 
     mm->n_roots = hweight64(size);
 
@@ -346,7 +383,9 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
         drm_block_free(mm, mm->roots[i]);
     kfree(mm->roots);
 out_free_tree:
-    kfree(mm->free_tree);
+    while (i--)
+        kfree(mm->free_trees[i]);
+    kfree(mm->free_trees);
     return -ENOMEM;
 }
 EXPORT_SYMBOL(drm_buddy_init);
@@ -382,8 +421,9 @@ void drm_buddy_fini(struct drm_buddy *mm)
 
     WARN_ON(mm->avail != mm->size);
 
+    for (i = 0; i < DRM_BUDDY_MAX_FREE_TREES; i++)
+        kfree(mm->free_trees[i]);
     kfree(mm->roots);
-    kfree(mm->free_tree);
 }
 EXPORT_SYMBOL(drm_buddy_fini);
 
@@ -407,8 +447,7 @@ static int split_block(struct drm_buddy *mm,
             return -ENOMEM;
     }
 
-    mark_free(mm, block->left);
-    mark_free(mm, block->right);
+    mark_split(mm, block);
 
     if (drm_buddy_block_is_clear(block)) {
         mark_cleared(block->left);
@@ -416,7 +455,8 @@ static int split_block(struct drm_buddy *mm,
         clear_reset(block);
     }
 
-    mark_split(mm, block);
+    mark_free(mm, block->left);
+    mark_free(mm, block->right);
 
     return 0;
 }
@@ -449,6 +489,7 @@ EXPORT_SYMBOL(drm_get_buddy);
 */
 void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
 {
+    enum drm_buddy_free_tree src_tree, dst_tree;
     u64 root_size, size, start;
     unsigned int order;
     int i;
@@ -463,19 +504,24 @@ void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
         size -= root_size;
     }
 
+    src_tree = is_clear ? DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
+    dst_tree = is_clear ? DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+
     for (i = 0; i <= mm->max_order; ++i) {
+        struct rb_root *root = &mm->free_trees[src_tree][i];
         struct drm_buddy_block *block, *tmp;
 
-        rbtree_postorder_for_each_entry_safe(block, tmp, &mm->free_tree[i], rb) {
-            if (is_clear != drm_buddy_block_is_clear(block)) {
-                if (is_clear) {
-                    mark_cleared(block);
-                    mm->clear_avail += drm_buddy_block_size(mm, block);
-                } else {
-                    clear_reset(block);
-                    mm->clear_avail -= drm_buddy_block_size(mm, block);
-                }
+        rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
+            rbtree_remove(mm, block);
+            if (is_clear) {
+                mark_cleared(block);
+                mm->clear_avail += drm_buddy_block_size(mm, block);
+            } else {
+                clear_reset(block);
+                mm->clear_avail -= drm_buddy_block_size(mm, block);
             }
+
+            rbtree_insert(mm, block, dst_tree);
         }
     }
 }
@@ -665,27 +711,17 @@ __drm_buddy_alloc_range_bias(struct drm_buddy *mm,
 }
 
 static struct drm_buddy_block *
-get_maxblock(struct drm_buddy *mm, unsigned int order,
-             unsigned long flags)
+get_maxblock(struct drm_buddy *mm,
+             unsigned int order,
+             enum drm_buddy_free_tree tree)
 {
     struct drm_buddy_block *max_block = NULL, *block = NULL;
+    struct rb_root *root;
     unsigned int i;
 
     for (i = order; i <= mm->max_order; ++i) {
-        struct rb_node *iter = rb_last(&mm->free_tree[i]);
-        struct drm_buddy_block *tmp_block;
-
-        while (iter) {
-            tmp_block = rbtree_get_free_block(iter);
-
-            if (!block_incompatible(tmp_block, flags)) {
-                block = tmp_block;
-                break;
-            }
-
-            iter = rb_prev(iter);
-        }
-
+        root = &mm->free_trees[tree][i];
+        block = rbtree_last_free_block(root);
         if (!block)
             continue;
 
@@ -709,43 +745,39 @@ alloc_from_freetree(struct drm_buddy *mm,
                     unsigned long flags)
 {
     struct drm_buddy_block *block = NULL;
+    enum drm_buddy_free_tree tree;
+    struct rb_root *root;
     unsigned int tmp;
     int err;
 
+    tree = (flags & DRM_BUDDY_CLEAR_ALLOCATION) ?
+           DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+
     if (flags & DRM_BUDDY_TOPDOWN_ALLOCATION) {
-        block = get_maxblock(mm, order, flags);
+        block = get_maxblock(mm, order, tree);
         if (block)
             /* Store the obtained block order */
             tmp = drm_buddy_block_order(block);
     } else {
         for (tmp = order; tmp <= mm->max_order; ++tmp) {
-            struct rb_node *iter = rb_last(&mm->free_tree[tmp]);
-            struct drm_buddy_block *tmp_block;
-
-            while (iter) {
-                tmp_block = rbtree_get_free_block(iter);
-
-                if (!block_incompatible(tmp_block, flags)) {
-                    block = tmp_block;
-                    break;
-                }
-
-                iter = rb_prev(iter);
-            }
-
+            /* Get RB tree root for this order and tree */
+            root = &mm->free_trees[tree][tmp];
+            block = rbtree_last_free_block(root);
             if (block)
                 break;
         }
     }
 
     if (!block) {
-        /* Fallback method */
+        /* Try allocating from the other tree */
+        tree = (tree == DRM_BUDDY_CLEAR_TREE) ?
+               DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
+
         for (tmp = order; tmp <= mm->max_order; ++tmp) {
-            if (!rbtree_is_empty(mm, tmp)) {
-                block = rbtree_last_entry(mm, tmp);
-                if (block)
-                    break;
-            }
+            root = &mm->free_trees[tree][tmp];
+            block = rbtree_last_free_block(root);
+            if (block)
+                break;
         }
 
         if (!block)
@@ -887,9 +919,9 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
                                      struct list_head *blocks)
 {
     u64 rhs_offset, lhs_offset, lhs_size, filled;
+    enum drm_buddy_free_tree tree;
     struct drm_buddy_block *block;
     LIST_HEAD(blocks_lhs);
-    struct rb_node *iter;
     unsigned long pages;
     unsigned int order;
     u64 modify_size;
@@ -901,40 +933,43 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
     if (order == 0)
         return -ENOSPC;
 
-    if (rbtree_is_empty(mm, order))
+    if (rbtree_is_empty(&mm->free_trees[DRM_BUDDY_CLEAR_TREE][order]) &&
+        rbtree_is_empty(&mm->free_trees[DRM_BUDDY_DIRTY_TREE][order]))
         return -ENOSPC;
 
-    iter = rb_last(&mm->free_tree[order]);
-
-    while (iter) {
-        block = rbtree_get_free_block(iter);
-
-        /* Allocate blocks traversing RHS */
-        rhs_offset = drm_buddy_block_offset(block);
-        err = __drm_buddy_alloc_range(mm, rhs_offset, size,
-                                      &filled, blocks);
-        if (!err || err != -ENOSPC)
-            return err;
-
-        lhs_size = max((size - filled), min_block_size);
-        if (!IS_ALIGNED(lhs_size, min_block_size))
-            lhs_size = round_up(lhs_size, min_block_size);
-
-        /* Allocate blocks traversing LHS */
-        lhs_offset = drm_buddy_block_offset(block) - lhs_size;
-        err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size,
-                                      NULL, &blocks_lhs);
-        if (!err) {
-            list_splice(&blocks_lhs, blocks);
-            return 0;
-        } else if (err != -ENOSPC) {
+    for_each_free_tree(tree) {
+        struct rb_node *iter = rb_last(&mm->free_trees[tree][order]);
+
+        while (iter) {
+            block = rbtree_get_free_block(iter);
+
+            /* Allocate blocks traversing RHS */
+            rhs_offset = drm_buddy_block_offset(block);
+            err = __drm_buddy_alloc_range(mm, rhs_offset, size,
+                                          &filled, blocks);
+            if (!err || err != -ENOSPC)
+                return err;
+
+            lhs_size = max((size - filled), min_block_size);
+            if (!IS_ALIGNED(lhs_size, min_block_size))
+                lhs_size = round_up(lhs_size, min_block_size);
+
+            /* Allocate blocks traversing LHS */
+            lhs_offset = drm_buddy_block_offset(block) - lhs_size;
+            err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size,
+                                          NULL, &blocks_lhs);
+            if (!err) {
+                list_splice(&blocks_lhs, blocks);
+                return 0;
+            } else if (err != -ENOSPC) {
+                drm_buddy_free_list_internal(mm, blocks);
+                return err;
+            }
+            /* Free blocks for the next iteration */
             drm_buddy_free_list_internal(mm, blocks);
-            return err;
-        }
-        /* Free blocks for the next iteration */
-        drm_buddy_free_list_internal(mm, blocks);
 
-        iter = rb_prev(iter);
+            iter = rb_prev(iter);
+        }
     }
 
     return -ENOSPC;
@@ -1239,6 +1274,7 @@ EXPORT_SYMBOL(drm_buddy_block_print);
 */
 void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
 {
+    enum drm_buddy_free_tree tree;
     int order;
 
     drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
@@ -1246,11 +1282,16 @@ void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
 
     for (order = mm->max_order; order >= 0; order--) {
         struct drm_buddy_block *block, *tmp;
+        struct rb_root *root;
         u64 count = 0, free;
 
-        rbtree_postorder_for_each_entry_safe(block, tmp, &mm->free_tree[order], rb) {
-            BUG_ON(!drm_buddy_block_is_free(block));
-            count++;
+        for_each_free_tree(tree) {
+            root = &mm->free_trees[tree][order];
+
+            rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
+                BUG_ON(!drm_buddy_block_is_free(block));
+                count++;
+            }
         }
 
         drm_printf(p, "order-%2d ", order);
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
index 9ee105d4309f..d7891d08f67a 100644
--- a/include/drm/drm_buddy.h
+++ b/include/drm/drm_buddy.h
@@ -73,7 +73,7 @@ struct drm_buddy_block {
 */
struct drm_buddy {
     /* Maintain a free list for each order. */
-    struct rb_root *free_tree;
+    struct rb_root **free_trees;
 
     /*
      * Maintain explicit binary tree(s) to track the allocation of the
On 23/09/2025 10:02, Arunpravin Paneer Selvam wrote:
[...]
> +#define for_each_free_tree(tree) \
> +	for ((tree) = 0; (tree) < DRM_BUDDY_MAX_FREE_TREES; (tree)++)
Some places seem to open code this? Maybe just drop this or use it everywhere?
[...]
> +	for_each_free_tree(tree) {
> +		for (i = min_order - 1; i >= 0; i--) {
> +			struct rb_node *iter = rb_last(&mm->free_trees[tree][i]);
Do we need all these changes in __force_merge? Can't we just always pick the dirty tree and keep everything else the same? If something is non-merged there must be a dirty block preventing that, and when force merging everything unmerged will be relabelled as dirty anyway, so the second loop through the clean tree should always yield nothing?
[...]
>  	kfree(mm->roots);
>  out_free_tree:
> -	kfree(mm->free_tree);
> +	while (i--)
> +		kfree(mm->free_trees[i]);
> +	kfree(mm->free_trees);
out_free_roots is also decrementing 'i' here it seems, so looks like this might blow up?
[...]
>  	if (!block) {
> -		/* Fallback method */
> +		/* Try allocating from the other tree */
> +		tree = (tree == DRM_BUDDY_CLEAR_TREE) ?
> +		       DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
> +
>  		for (tmp = order; tmp <= mm->max_order; ++tmp) {
> -			if (!rbtree_is_empty(mm, tmp)) {
Did you mean to drop the is_empty() check here? If it's overkill I think better just not add it in the previous patch?
[...]
> -	if (rbtree_is_empty(mm, order))
> +	if (rbtree_is_empty(&mm->free_trees[DRM_BUDDY_CLEAR_TREE][order]) &&
> +	    rbtree_is_empty(&mm->free_trees[DRM_BUDDY_DIRTY_TREE][order]))
>  		return -ENOSPC;
Could potentially merge this with the for_each_tree loop below?
Otherwise, Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Hi Matthew,
Ping?
Regards, Arun.
On 9/23/2025 2:32 PM, Arunpravin Paneer Selvam wrote:
Replace the freelist (O(n)) used for free block management with a red-black tree, providing more efficient O(log n) search, insert, and delete operations. This improves scalability and performance when managing large numbers of free blocks per order (e.g., hundreds or thousands).
In the VK-CTS memory stress subtest, the buddy manager merges fragmented memory and inserts freed blocks into the freelist. Since freelist insertion is O(n), this becomes a bottleneck as fragmentation increases. Benchmarking shows list_insert_sorted() consumes ~52.69% CPU with the freelist, compared to just 0.03% with the RB tree (rbtree_insert.isra.0), despite performing the same sorted insert.
This also improves performance in heavily fragmented workloads, such as games or graphics tests that stress memory.
As the buddy allocator evolves with new features such as clear-page tracking, the resulting fragmentation and complexity have grown. These RB-tree based design changes are introduced to address that growth and ensure the allocator continues to perform efficiently under fragmented conditions.
The RB tree implementation with separate clear/dirty trees provides:
- O(n log n) aggregate complexity for all operations instead of O(n^2)
- Elimination of soft lockups and system instability
- Improved code maintainability and clarity
- Better scalability for large memory systems
- Predictable performance under fragmentation
v3(Matthew):
- Remove RB_EMPTY_NODE check in force_merge function.
- Rename rb for loop macros to have less generic names and move to .c file.
- Make the rb node rb and link field as union.
v4(Jani Nikula):
- The kernel-doc comment should be "/**"
- Move all the rbtree macros to rbtree.h and add parens to ensure correct precedence.
v5:
- Remove the inline in a .c file (Jani Nikula).
v6(Peter Zijlstra):
- Add rb_add() function replacing the existing rbtree_insert() code.
v7:
- A full walk iteration in rbtree is slower than the list (Peter Zijlstra).
- The existing rbtree_postorder_for_each_entry_safe macro should be used in scenarios where traversal order is not a critical factor (Christian).
Cc: stable@vger.kernel.org Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality") Signed-off-by: Arunpravin Paneer Selvam Arunpravin.PaneerSelvam@amd.com
drivers/gpu/drm/drm_buddy.c | 188 ++++++++++++++++++++++-------------- include/drm/drm_buddy.h | 11 ++- 2 files changed, 124 insertions(+), 75 deletions(-)
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c index a94061f373de..67aa67229cc3 100644 --- a/drivers/gpu/drm/drm_buddy.c +++ b/drivers/gpu/drm/drm_buddy.c @@ -14,6 +14,8 @@ static struct kmem_cache *slab_blocks; +#define rbtree_get_free_block(node) rb_entry((node), struct drm_buddy_block, rb)
- static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm, struct drm_buddy_block *parent, unsigned int order,
@@ -31,6 +33,8 @@ static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm, block->header |= order; block->parent = parent;
- RB_CLEAR_NODE(&block->rb);
- BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED); return block; }
@@ -41,23 +45,49 @@ static void drm_block_free(struct drm_buddy *mm, kmem_cache_free(slab_blocks, block); } -static void list_insert_sorted(struct drm_buddy *mm,
struct drm_buddy_block *block)
+static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block,
{const struct drm_buddy_block *node)
- struct drm_buddy_block *node;
- struct list_head *head;
- return drm_buddy_block_offset(block) < drm_buddy_block_offset(node);
+}
- head = &mm->free_list[drm_buddy_block_order(block)];
- if (list_empty(head)) {
list_add(&block->link, head);
return;
- }
+static bool rbtree_block_offset_less(struct rb_node *block,
const struct rb_node *node)
+{
- return drm_buddy_block_offset_less(rbtree_get_free_block(block),
rbtree_get_free_block(node));
+}
- list_for_each_entry(node, head, link)
if (drm_buddy_block_offset(block) < drm_buddy_block_offset(node))
break;
+static void rbtree_insert(struct drm_buddy *mm,
struct drm_buddy_block *block)
+{
- rb_add(&block->rb,
&mm->free_tree[drm_buddy_block_order(block)],
rbtree_block_offset_less);
+}
+static void rbtree_remove(struct drm_buddy *mm,
struct drm_buddy_block *block)
+{
- struct rb_root *root;
- root = &mm->free_tree[drm_buddy_block_order(block)];
- rb_erase(&block->rb, root);
- __list_add(&block->link, node->link.prev, &node->link);
- RB_CLEAR_NODE(&block->rb);
+}
+static struct drm_buddy_block * +rbtree_last_entry(struct drm_buddy *mm, unsigned int order) +{
- struct rb_node *node = rb_last(&mm->free_tree[order]);
- return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
+}
+static bool rbtree_is_empty(struct drm_buddy *mm, unsigned int order) +{
- return RB_EMPTY_ROOT(&mm->free_tree[order]); }
static void clear_reset(struct drm_buddy_block *block) @@ -70,12 +100,13 @@ static void mark_cleared(struct drm_buddy_block *block) block->header |= DRM_BUDDY_HEADER_CLEAR; } -static void mark_allocated(struct drm_buddy_block *block) +static void mark_allocated(struct drm_buddy *mm,
{ block->header &= ~DRM_BUDDY_HEADER_STATE; block->header |= DRM_BUDDY_ALLOCATED;struct drm_buddy_block *block)
- list_del(&block->link);
- rbtree_remove(mm, block); }
static void mark_free(struct drm_buddy *mm, @@ -84,15 +115,16 @@ static void mark_free(struct drm_buddy *mm, block->header &= ~DRM_BUDDY_HEADER_STATE; block->header |= DRM_BUDDY_FREE;
- list_insert_sorted(mm, block);
- rbtree_insert(mm, block); }
-static void mark_split(struct drm_buddy_block *block) +static void mark_split(struct drm_buddy *mm,
{ block->header &= ~DRM_BUDDY_HEADER_STATE; block->header |= DRM_BUDDY_SPLIT;struct drm_buddy_block *block)
- list_del(&block->link);
- rbtree_remove(mm, block); }
static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2) @@ -148,7 +180,7 @@ static unsigned int __drm_buddy_free(struct drm_buddy *mm, mark_cleared(parent); }
list_del(&buddy->link);
if (force_merge && drm_buddy_block_is_clear(buddy)) mm->clear_avail -= drm_buddy_block_size(mm, buddy);rbtree_remove(mm, buddy);
@@ -179,13 +211,19 @@ static int __force_merge(struct drm_buddy *mm, return -EINVAL; for (i = min_order - 1; i >= 0; i--) {
struct drm_buddy_block *block, *prev;
struct rb_root *root = &mm->free_tree[i];
struct rb_node *iter;
iter = rb_last(root);
list_for_each_entry_safe_reverse(block, prev, &mm->free_list[i], link) {
struct drm_buddy_block *buddy;
while (iter) {
struct drm_buddy_block *block, *buddy; u64 block_start, block_end;
if (!block->parent)
block = rbtree_get_free_block(iter);
iter = rb_prev(iter);
if (!block || !block->parent) continue;
block_start = drm_buddy_block_offset(block); @@ -201,15 +239,10 @@ static int __force_merge(struct drm_buddy *mm, WARN_ON(drm_buddy_block_is_clear(block) == drm_buddy_block_is_clear(buddy));
/*
* If the prev block is same as buddy, don't access the
* block in the next iteration as we would free the
* buddy block as part of the free function.
*/
if (prev == buddy)
prev = list_prev_entry(prev, link);
if (iter == &buddy->rb)
iter = rb_prev(iter);
list_del(&block->link);
rbtree_remove(mm, block); if (drm_buddy_block_is_clear(block)) mm->clear_avail -= drm_buddy_block_size(mm, block);
@@ -258,14 +291,14 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 
 	BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
 
-	mm->free_list = kmalloc_array(mm->max_order + 1,
-				      sizeof(struct list_head),
-				      GFP_KERNEL);
-	if (!mm->free_list)
+	mm->free_tree = kmalloc_array(mm->max_order + 1,
+				      sizeof(struct rb_root),
+				      GFP_KERNEL);
+	if (!mm->free_tree)
 		return -ENOMEM;
 
 	for (i = 0; i <= mm->max_order; ++i)
-		INIT_LIST_HEAD(&mm->free_list[i]);
+		mm->free_tree[i] = RB_ROOT;
 
 	mm->n_roots = hweight64(size);
 
@@ -273,7 +306,7 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 			    sizeof(struct drm_buddy_block *),
 			    GFP_KERNEL);
 	if (!mm->roots)
-		goto out_free_list;
+		goto out_free_tree;
 
 	offset = 0;
 	i = 0;
@@ -312,8 +345,8 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 	while (i--)
 		drm_block_free(mm, mm->roots[i]);
 	kfree(mm->roots);
-out_free_list:
-	kfree(mm->free_list);
+out_free_tree:
+	kfree(mm->free_tree);
 	return -ENOMEM;
 }
 EXPORT_SYMBOL(drm_buddy_init);
@@ -323,7 +356,7 @@ EXPORT_SYMBOL(drm_buddy_init);
  *
  * @mm: DRM buddy manager to free
  *
- * Cleanup memory manager resources and the freelist
+ * Cleanup memory manager resources and the freetree
  */
 void drm_buddy_fini(struct drm_buddy *mm)
 {
@@ -350,7 +383,7 @@ void drm_buddy_fini(struct drm_buddy *mm)
 	WARN_ON(mm->avail != mm->size);
 
 	kfree(mm->roots);
-	kfree(mm->free_list);
+	kfree(mm->free_tree);
 }
 EXPORT_SYMBOL(drm_buddy_fini);
@@ -383,7 +416,7 @@ static int split_block(struct drm_buddy *mm,
 		clear_reset(block);
 	}
 
-	mark_split(block);
+	mark_split(mm, block);
 
 	return 0;
 }
@@ -412,7 +445,7 @@ EXPORT_SYMBOL(drm_get_buddy);
  * @is_clear: blocks clear state
  *
 * Reset the clear state based on @is_clear value for each block
- * in the freelist.
+ * in the freetree.
  */
 void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
 {
@@ -431,9 +464,9 @@ void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
 	}
 
 	for (i = 0; i <= mm->max_order; ++i) {
-		struct drm_buddy_block *block;
+		struct drm_buddy_block *block, *tmp;
 
-		list_for_each_entry_reverse(block, &mm->free_list[i], link) {
+		rbtree_postorder_for_each_entry_safe(block, tmp, &mm->free_tree[i], rb) {
 			if (is_clear != drm_buddy_block_is_clear(block)) {
 				if (is_clear) {
 					mark_cleared(block);
@@ -639,14 +672,18 @@ get_maxblock(struct drm_buddy *mm, unsigned int order,
 	unsigned int i;
 
 	for (i = order; i <= mm->max_order; ++i) {
+		struct rb_node *iter = rb_last(&mm->free_tree[i]);
 		struct drm_buddy_block *tmp_block;
 
-		list_for_each_entry_reverse(tmp_block, &mm->free_list[i], link) {
-			if (block_incompatible(tmp_block, flags))
-				continue;
+		while (iter) {
+			tmp_block = rbtree_get_free_block(iter);
 
-			block = tmp_block;
-			break;
+			if (!block_incompatible(tmp_block, flags)) {
+				block = tmp_block;
+				break;
+			}
+
+			iter = rb_prev(iter);
 		}
 
 		if (!block)
@@ -667,7 +704,7 @@ get_maxblock(struct drm_buddy *mm, unsigned int order,
 }
 
 static struct drm_buddy_block *
-alloc_from_freelist(struct drm_buddy *mm,
+alloc_from_freetree(struct drm_buddy *mm,
 		    unsigned int order,
 		    unsigned long flags)
 {
@@ -682,14 +719,18 @@ alloc_from_freelist(struct drm_buddy *mm,
 		tmp = drm_buddy_block_order(block);
 	} else {
 		for (tmp = order; tmp <= mm->max_order; ++tmp) {
+			struct rb_node *iter = rb_last(&mm->free_tree[tmp]);
 			struct drm_buddy_block *tmp_block;
 
-			list_for_each_entry_reverse(tmp_block, &mm->free_list[tmp], link) {
-				if (block_incompatible(tmp_block, flags))
-					continue;
+			while (iter) {
+				tmp_block = rbtree_get_free_block(iter);
 
-				block = tmp_block;
-				break;
+				if (!block_incompatible(tmp_block, flags)) {
+					block = tmp_block;
+					break;
+				}
+
+				iter = rb_prev(iter);
 			}
 
 			if (block)
@@ -700,10 +741,8 @@ alloc_from_freelist(struct drm_buddy *mm,
 	if (!block) {
 		/* Fallback method */
 		for (tmp = order; tmp <= mm->max_order; ++tmp) {
-			if (!list_empty(&mm->free_list[tmp])) {
-				block = list_last_entry(&mm->free_list[tmp],
-							struct drm_buddy_block,
-							link);
+			if (!rbtree_is_empty(mm, tmp)) {
+				block = rbtree_last_entry(mm, tmp);
 				if (block)
 					break;
 			}
@@ -771,7 +810,7 @@ static int __alloc_range(struct drm_buddy *mm,
 
 		if (contains(start, end, block_start, block_end)) {
 			if (drm_buddy_block_is_free(block)) {
-				mark_allocated(block);
+				mark_allocated(mm, block);
 				total_allocated += drm_buddy_block_size(mm, block);
 				mm->avail -= drm_buddy_block_size(mm, block);
 				if (drm_buddy_block_is_clear(block))
@@ -849,8 +888,8 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 {
 	u64 rhs_offset, lhs_offset, lhs_size, filled;
 	struct drm_buddy_block *block;
-	struct list_head *list;
 	LIST_HEAD(blocks_lhs);
+	struct rb_node *iter;
 	unsigned long pages;
 	unsigned int order;
 	u64 modify_size;
@@ -862,11 +901,14 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 	if (order == 0)
 		return -ENOSPC;
 
-	list = &mm->free_list[order];
-	if (list_empty(list))
+	if (rbtree_is_empty(mm, order))
 		return -ENOSPC;
 
-	list_for_each_entry_reverse(block, list, link) {
+	iter = rb_last(&mm->free_tree[order]);
+
+	while (iter) {
+		block = rbtree_get_free_block(iter);
+
 		/* Allocate blocks traversing RHS */
 		rhs_offset = drm_buddy_block_offset(block);
 		err = __drm_buddy_alloc_range(mm, rhs_offset, size,
@@ -891,6 +933,8 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 		}
 		/* Free blocks for the next iteration */
 		drm_buddy_free_list_internal(mm, blocks);
+
+		iter = rb_prev(iter);
 	}
 
 	return -ENOSPC;
@@ -976,7 +1020,7 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
 	list_add(&block->tmp_link, &dfs);
 	err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
 	if (err) {
-		mark_allocated(block);
+		mark_allocated(mm, block);
 		mm->avail -= drm_buddy_block_size(mm, block);
 		if (drm_buddy_block_is_clear(block))
 			mm->clear_avail -= drm_buddy_block_size(mm, block);
@@ -999,8 +1043,8 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
 		return __drm_buddy_alloc_range_bias(mm, start, end,
 						    order, flags);
 	else
-		/* Allocate from freelist */
-		return alloc_from_freelist(mm, order, flags);
+		/* Allocate from freetree */
+		return alloc_from_freetree(mm, order, flags);
 }
 
 /**
@@ -1017,8 +1061,8 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
  * alloc_range_bias() called on range limitations, which traverses
  * the tree and returns the desired block.
  *
- * alloc_from_freelist() called when *no* range restrictions
- * are enforced, which picks the block from the freelist.
+ * alloc_from_freetree() called when *no* range restrictions
+ * are enforced, which picks the block from the freetree.
  *
  * Returns:
  * 0 on success, error code on failure.
@@ -1120,7 +1164,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 			}
 		} while (1);
 
-		mark_allocated(block);
+		mark_allocated(mm, block);
 		mm->avail -= drm_buddy_block_size(mm, block);
 		if (drm_buddy_block_is_clear(block))
 			mm->clear_avail -= drm_buddy_block_size(mm, block);
@@ -1201,10 +1245,10 @@ void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
 		   mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
 
 	for (order = mm->max_order; order >= 0; order--) {
-		struct drm_buddy_block *block;
+		struct drm_buddy_block *block, *tmp;
 		u64 count = 0, free;
 
-		list_for_each_entry(block, &mm->free_list[order], link) {
+		rbtree_postorder_for_each_entry_safe(block, tmp, &mm->free_tree[order], rb) {
 			BUG_ON(!drm_buddy_block_is_free(block));
 			count++;
 		}
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
index 513837632b7d..9ee105d4309f 100644
--- a/include/drm/drm_buddy.h
+++ b/include/drm/drm_buddy.h
@@ -10,6 +10,7 @@
 #include <linux/list.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
+#include <linux/rbtree.h>
 
 #include <drm/drm_print.h>
 
@@ -53,7 +54,11 @@ struct drm_buddy_block {
 	 * a list, if so desired. As soon as the block is freed with
 	 * drm_buddy_free* ownership is given back to the mm.
 	 */
-	struct list_head link;
+	union {
+		struct rb_node rb;
+		struct list_head link;
+	};
+
 	struct list_head tmp_link;
 };
 
@@ -68,7 +73,7 @@ struct drm_buddy_block {
  */
 struct drm_buddy {
 	/* Maintain a free list for each order. */
-	struct list_head *free_list;
+	struct rb_root *free_tree;
 
 	/*
 	 * Maintain explicit binary tree(s) to track the allocation of the
@@ -94,7 +99,7 @@ struct drm_buddy {
 };
 
 static inline u64
-drm_buddy_block_offset(struct drm_buddy_block *block)
+drm_buddy_block_offset(const struct drm_buddy_block *block)
 {
 	return block->header & DRM_BUDDY_HEADER_OFFSET;
 }
base-commit: 3a9cf301794c1a49d95eeb13119ff490fb5cfe88
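
For readers less familiar with the rb_add() helper this patch builds on, here is a minimal, illustrative sketch of the comparator-driven O(log n) sorted insert and the rb_last()/rb_prev() descending walk used in the hunks above. The example_* names are made up for illustration and are not part of the drm_buddy API:

#include <linux/rbtree.h>
#include <linux/types.h>
#include <linux/printk.h>

struct example_node {
	u64 offset;
	struct rb_node rb;
};

/* Comparator for rb_add(): keep nodes sorted by ascending offset */
static bool example_offset_less(struct rb_node *a, const struct rb_node *b)
{
	return rb_entry(a, struct example_node, rb)->offset <
	       rb_entry(b, struct example_node, rb)->offset;
}

static void example_insert(struct rb_root *tree, struct example_node *n)
{
	/* O(log n): rb_add() walks down with the comparator, then rebalances */
	rb_add(&n->rb, tree, example_offset_less);
}

static void example_walk_descending(struct rb_root *tree)
{
	struct rb_node *iter;

	/* Highest offset first, mirroring the rb_last()/rb_prev() loops above */
	for (iter = rb_last(tree); iter; iter = rb_prev(iter)) {
		struct example_node *n = rb_entry(iter, struct example_node, rb);

		pr_info("offset=%llu\n", (unsigned long long)n->offset);
	}
}

This is the same shape as rbtree_insert()/__force_merge() in the patch: the comparator replaces the O(n) scan that list_insert_sorted() needed to find the insertion point.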
On 23/09/2025 10:02, Arunpravin Paneer Selvam wrote:
<snip>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
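
A footnote on the traversal macro adopted in v7: rbtree_postorder_for_each_entry_safe() visits entries in postorder, so the current entry may be freed or re-linked inside the loop body (the next entry is cached up front), but entries are not delivered in sorted order. That trade-off suits the reset and print paths touched by this patch. A minimal illustrative sketch, again with made-up example_* names rather than drm_buddy API:

#include <linux/rbtree.h>
#include <linux/types.h>
#include <linux/printk.h>

struct example_node {
	u64 offset;
	struct rb_node rb;
};

static void example_visit_all(struct rb_root *tree)
{
	struct example_node *pos, *tmp;

	/*
	 * Postorder walk: safe against modification of pos in the body,
	 * but NOT sorted by offset -- use rb_last()/rb_prev() instead
	 * when ordering matters.
	 */
	rbtree_postorder_for_each_entry_safe(pos, tmp, tree, rb) {
		pr_info("offset=%llu\n", (unsigned long long)pos->offset);
	}
}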