Henry's bug report[1] and fix[2] prompted some further inspection by
Jean.
This series provides fixes for the remaining issues Jean identified, as
well as reworking the channel paths to reduce cleanup required in error
paths. It is based on v6.16-rc1.
Lightly tested under QEMU and on an AST2600 EVB. Further testing on
platforms designed around the snoop device would be appreciated.
[1]: https://bugzilla.kernel.org/show_bug.cgi?id=219934
[2]: https://lore.kernel.org/all/20250401074647.21300-1-bsdhenrymartin@gmail.com/
Signed-off-by: Andrew Jeffery <andrew(a)codeconstruct.com.au>
---
Changes in v2:
- Address comments on v1 from Jean
- Implement channel indexing using enums to avoid unnecessary tests
- Switch to devm_clk_get_enabled()
- Use dev_err_probe() where possible (a brief sketch of both changes follows after this list)
- Link to v1: https://patch.msgid.link/20250411-aspeed-lpc-snoop-fixes-v1-0-64f522e3ad6f@…
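For reference, a minimal probe sketch of the two changes named above (the
function name and message text are hypothetical, not the driver's actual
code):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int example_snoop_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct clk *clk;

	/* Acquires and enables the clock; devres disables and releases it
	 * automatically on detach, trimming the error and remove paths. */
	clk = devm_clk_get_enabled(dev, NULL);
	if (IS_ERR(clk))
		/* Logs the error (quietly for -EPROBE_DEFER) and returns it. */
		return dev_err_probe(dev, PTR_ERR(clk),
				     "couldn't get and enable clock\n");

	return 0;
}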
---
Andrew Jeffery (10):
soc: aspeed: lpc-snoop: Cleanup resources in stack-order
soc: aspeed: lpc-snoop: Don't disable channels that aren't enabled
soc: aspeed: lpc-snoop: Ensure model_data is valid
soc: aspeed: lpc-snoop: Constrain parameters in channel paths
soc: aspeed: lpc-snoop: Rename 'channel' to 'index' in channel paths
soc: aspeed: lpc-snoop: Rearrange channel paths
soc: aspeed: lpc-snoop: Switch to devm_clk_get_enabled()
soc: aspeed: lpc-snoop: Use dev_err_probe() where possible
soc: aspeed: lpc-snoop: Consolidate channel initialisation
soc: aspeed: lpc-snoop: Lift channel config to const structs
drivers/soc/aspeed/aspeed-lpc-snoop.c | 224 +++++++++++++++++-----------------
1 file changed, 110 insertions(+), 114 deletions(-)
---
base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
change-id: 20250401-aspeed-lpc-snoop-fixes-e5d2883da3a3
Best regards,
--
Andrew Jeffery <andrew(a)codeconstruct.com.au>
Use the common wrappers operating directly on struct sg_table objects to
fix incorrect use of scatterlist-related calls. The dma_unmap_sg() function
has to be called with the number of elements originally passed to
dma_map_sg(), not the count returned in the sg_table's nents.
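A minimal sketch of the wrapper pattern (hypothetical helper names, not
zynq-fpga code): dma_map_sgtable() records the mapped segment count in
sgt->nents, while dma_unmap_sgtable() always unmaps sgt->orig_nents
entries, so the caller can no longer pass the wrong count by accident.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_for_device(struct device *dev, struct sg_table *sgt)
{
	/* Fills sgt->nents with the number of mapped DMA segments. */
	return dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
}

static void example_unmap_for_device(struct device *dev, struct sg_table *sgt)
{
	/* Internally uses sgt->orig_nents, the element count originally
	 * passed in, which is what the open-coded dma_unmap_sg() got wrong. */
	dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
}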
CC: stable(a)vger.kernel.org
Fixes: 425902f5c8e3 ("fpga zynq: Use the scatterlist interface")
Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
---
v2:
- fixed build break (missing flags parameter)
---
drivers/fpga/zynq-fpga.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git drivers/fpga/zynq-fpga.c drivers/fpga/zynq-fpga.c
index f7e08f7ea9ef..0be0d569589d 100644
--- drivers/fpga/zynq-fpga.c
+++ drivers/fpga/zynq-fpga.c
@@ -406,7 +406,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
}
priv->dma_nelms =
- dma_map_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
+ dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
if (priv->dma_nelms == 0) {
dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n");
return -ENOMEM;
@@ -478,7 +478,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
clk_disable(priv->clk);
out_free:
- dma_unmap_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
+ dma_unmap_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
return err;
}
--
2.34.1
On Sun, 15 Jun 2025, "Nautiyal, Ankit K" <ankit.k.nautiyal(a)intel.com> wrote:
> On 6/13/2025 3:06 PM, Jani Nikula wrote:
>> On Fri, 13 Jun 2025, Ankit Nautiyal<ankit.k.nautiyal(a)intel.com> wrote:
>>> *ana_cp_int = max(1, min(ana_cp_int_temp, 127));
>> Unrelated to this patch, but this should be:
>>
>> *ana_cp_int = clamp(ana_cp_int_temp, 1, 127);
>>
>> There's a similar issue with ana_cp_prop also in the file.
>>
> Agreed. Should there be a separate patch for this?
Yes. That's why I emphasized "unrelated to this patch". ;)
BR,
Jani.
--
Jani Nikula, Intel
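As an aside for readers following along, a minimal sketch of the clamp()
form suggested above (the variable names come from the quoted snippet; the
wrapper function and its types are hypothetical):

#include <linux/minmax.h>

static void example_set_ana_cp_int(int ana_cp_int_temp, int *ana_cp_int)
{
	/* Equivalent to max(1, min(ana_cp_int_temp, 127)), but states the
	 * intended [1, 127] range in a single call. */
	*ana_cp_int = clamp(ana_cp_int_temp, 1, 127);
}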
Use the common wrappers operating directly on struct sg_table objects to
fix incorrect use of scatterlist-related calls. The dma_unmap_sg() function
has to be called with the number of elements originally passed to
dma_map_sg(), not the count returned in the sg_table's nents.
CC: stable(a)vger.kernel.org
Fixes: 425902f5c8e3 ("fpga zynq: Use the scatterlist interface")
Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
---
drivers/fpga/zynq-fpga.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/fpga/zynq-fpga.c b/drivers/fpga/zynq-fpga.c
index f7e08f7ea9ef..9bd39d1d4048 100644
--- a/drivers/fpga/zynq-fpga.c
+++ b/drivers/fpga/zynq-fpga.c
@@ -406,7 +406,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
}
priv->dma_nelms =
- dma_map_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
+ dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE);
if (priv->dma_nelms == 0) {
dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n");
return -ENOMEM;
@@ -478,7 +478,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
clk_disable(priv->clk);
out_free:
- dma_unmap_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
+ dma_unmap_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE);
return err;
}
--
2.34.1
This patch series attempts to enable the use of the xe DRM driver on
non-4KiB kernel page platforms. This involves fixing the ttm/bo interface,
as well as parts of the userspace API, to use the kernel `PAGE_SIZE' for
alignment instead of the assumed `SZ_4K'. It also fixes incorrect usage of
`PAGE_SIZE' in the GuC and ring buffer interface code to make sure all
instructions/commands are aligned to 4KiB boundaries (per the Programmer's
Manual for the GPUs covered by this DRM driver).
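As an illustration of that distinction (a sketch only, with hypothetical
function names, not code from this series): objects managed from the CPU
side follow the kernel's PAGE_SIZE, while structures the GPU parses keep
their hardware-mandated 4KiB alignment regardless of the kernel page size.

#include <linux/align.h>
#include <linux/mm.h>
#include <linux/sizes.h>

static unsigned long example_bo_alignment(unsigned long size)
{
	/* Kernel-page-sized alignment, e.g. 16KiB on the LoongArch
	 * configurations tested below. */
	return ALIGN(size, PAGE_SIZE);
}

static unsigned long example_ring_alignment(unsigned long size)
{
	/* Hardware command/ring structures expect 4KiB boundaries
	 * independent of the kernel page size. */
	return ALIGN(size, SZ_4K);
}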
This issue was first discovered and reported by members of the LoongArch
user communities, whose hardware commonly runs 16KiB-page kernels. The
patch series began on an unassuming branch of a downstream kernel tree
maintained by Shang Yatsen.[^1]
It worked well but remained sparsely documented; a lot of the work done
here relies on Shang Yatsen's original patch.
AOSC OS then picked it up[^2] to provide Intel Xe/Arc support for users of
its LoongArch port, which I worked on extensively. After months of
positive user feedback and encouragement from Kexy Biscuit, my colleague
in the community, I decided to examine its potential for upstreaming and
to cross-reference kernel and Intel documentation to better document and
revise this patch.
This series has now been tested (for boot-up, OpenGL, and playback of a
standardised set of video samples[^3]) on the following platforms
(motherboard + GPU model):
- x86-64, 4KiB kernel page:
- MS-7D42 + Intel Arc A580
- COLORFIRE B760M-MEOW WIFI D5 + Intel Arc B580
- LoongArch, 16KiB kernel page:
- XA61200 + GUNNIR DG1 Blue Halberd (Intel DG1)
- XA61200 + GUNNIR Iris Xe Index 4 (Intel DG1)
- XA61200 + GUNNIR Intel Iris Xe Max Index V2 (Intel DG1)
- XA61200 + GUNNIR Intel Arc A380 Index 6G (Intel Arc A380)
- XA61200 + ASRock Arc A380 Challenger ITX OC (Intel Arc A380)
- XA61200 + Intel Arc A580
- XA61200 + GUNNIR Intel Arc A750 Photon 8G OC (Intel Arc A750)
- XA61200 + Intel Arc B580
- XB612B0 + GUNNIR Intel Iris Xe Max Index V2 (Intel DG1)
- XB612B0 + GUNNIR Intel Arc A380 Index 6G (Intel Arc A380)
- ASUS XC-LS3A6M + GUNNIR Intel Arc B580 INDEX 12G (Intel Arc B580)
On the following platforms, basic functionality tested well but the driver
was unstable, with occasional resets (I do suspect, however, that this
platform suffers from PCIe coherence issues, as the instability only
occurs under heavy VRAM I/O load):
- AArch64, 4KiB/64KiB kernel pages:
- ERUN-FD3000 (Phytium D3000) + GUNNIR Intel Iris Xe Max Index V2
(Intel DG1)
- ERUN-FD3000 (Phytium D3000) + GUNNIR Intel Arc A380 Index 6G
(Intel Arc A380)
- ERUN-FD3000 (Phytium D3000) + GUNNIR Intel Arc A750 Photon 8G OC
(Intel Arc A750)
I think that this patch series is now ready for your comments and review.
Please forgive me if I made any simple mistakes or used the wrong
terminology; I have never worked on a patch for the DRM subsystem and my
experience is still quite thin.
But anyway, just letting you all know that Intel Xe/Arc works on non-4KiB
kernel page platforms (and honestly, it's great to use, especially for
games and media playback)!
[^1]: https://github.com/FanFansfan/loongson-linux/tree/loongarch-xe
[^2]: We maintained Shang Yatsen's patch until our v6.13.3 tree, until
we decided to test and send this series upstream,
https://github.com/AOSC-Tracking/linux/tree/aosc/v6.13.3
[^3]: Delicious hot pot!
https://repo.aosc.io/ahvl/sample-videos-20250223.tar.zst
---
Matthew(s), Lucas, and Francois:
Thanks again for your patience and review.
I recently changed jobs and it kept me away from this series for months,
but I'm back (and should be a lot more responsive now) - sorry! Let's get
this ball rolling again.
I was unfortunately unable to revise 1/5 from v1 as you requested; neither
of your suggestions to allow allocation of VRAM smaller than the page size
worked... So I kept that part as is.
As for your comment on 5/5, I'm not sure what the right approach for
implementing a SZ_64K >= PAGE_SIZE assert would be, as there are many
other instances of similar ternary conditional operators in the xe code.
Correct me if I'm wrong, but I feel that it might be better handled in a
separate patch series?
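(One possible shape of that assert, offered purely as a sketch and not as
a decision on where it should live: fail the build if the kernel page size
ever exceeds the 64KiB granule the existing ternaries assume.)

#include <linux/build_bug.h>
#include <linux/mm.h>
#include <linux/sizes.h>

static inline void example_page_size_sanity_check(void)
{
	BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
}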
---
Changes in v2:
- Define `GUC_ALIGN' and use it in GuC code to improve clarity.
- Update documentation on `DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT'.
- Rebase, and other minor changes.
- Link to v1:
https://lore.kernel.org/all/20250226-xe-non-4k-fix-v1-0-80f23b5ee40e@aosc.i…
To: Lucas De Marchi <lucas.demarchi(a)intel.com>
To: Thomas Hellström <thomas.hellstrom(a)linux.intel.com>
To: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
To: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
To: Maxime Ripard <mripard(a)kernel.org>
To: Thomas Zimmermann <tzimmermann(a)suse.de>
To: David Airlie <airlied(a)gmail.com>
To: Simona Vetter <simona(a)ffwll.ch>
To: José Roberto de Souza <jose.souza(a)intel.com>
To: Francois Dugast <francois.dugast(a)intel.com>
To: Matthew Brost <matthew.brost(a)intel.com>
To: Alan Previn <alan.previn.teres.alexis(a)intel.com>
To: Zhanjun Dong <zhanjun.dong(a)intel.com>
To: Matt Roper <matthew.d.roper(a)intel.com>
To: Mateusz Naklicki <mateusz.naklicki(a)intel.com>
Cc: Mauro Carvalho Chehab <mauro.chehab(a)linux.intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski(a)intel.com>
Cc: intel-xe(a)lists.freedesktop.org
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-kernel(a)vger.kernel.org
Suggested-by: Kexy Biscuit <kexybiscuit(a)aosc.io>
Co-developed-by: Shang Yatsen <429839446(a)qq.com>
Signed-off-by: Shang Yatsen <429839446(a)qq.com>
Signed-off-by: Mingcong Bai <jeffbai(a)aosc.io>
---
Mingcong Bai (5):
drm/xe/bo: fix alignment with non-4KiB kernel page sizes
drm/xe/guc: use GUC_SIZE (SZ_4K) for alignment
drm/xe/regs: fix RING_CTL_SIZE(size) calculation
drm/xe: use 4KiB alignment for cursor jumps
drm/xe/query: use PAGE_SIZE as the minimum page alignment
drivers/gpu/drm/xe/regs/xe_engine_regs.h | 2 +-
drivers/gpu/drm/xe/xe_bo.c | 8 ++++----
drivers/gpu/drm/xe/xe_guc.c | 4 ++--
drivers/gpu/drm/xe/xe_guc.h | 3 +++
drivers/gpu/drm/xe/xe_guc_ads.c | 32 ++++++++++++++++----------------
drivers/gpu/drm/xe/xe_guc_capture.c | 8 ++++----
drivers/gpu/drm/xe/xe_guc_ct.c | 2 +-
drivers/gpu/drm/xe/xe_guc_log.c | 5 +++--
drivers/gpu/drm/xe/xe_guc_pc.c | 4 ++--
drivers/gpu/drm/xe/xe_migrate.c | 4 ++--
drivers/gpu/drm/xe/xe_query.c | 2 +-
include/uapi/drm/xe_drm.h | 7 +++++--
12 files changed, 44 insertions(+), 37 deletions(-)
---
base-commit: 546b1c9e93c2bb8cf5ed24e0be1c86bb089b3253
change-id: 20250603-upstream-xe-non-4k-v2-4acf253c9bfd
Best regards,
--
Mingcong Bai <jeffbai(a)aosc.io>
The patch titled
Subject: bcache: remove unnecessary select MIN_HEAP
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
bcache-remove-unnecessary-select-min_heap.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Kuan-Wei Chiu <visitorckw(a)gmail.com>
Subject: bcache: remove unnecessary select MIN_HEAP
Date: Sun, 15 Jun 2025 04:23:53 +0800
After reverting the transition to the generic min heap library, bcache no
longer depends on MIN_HEAP. The select entry can be removed to reduce
code size and shrink the kernel's attack surface.
This change effectively reverts the bcache-related part of commit
92a8b224b833 ("lib/min_heap: introduce non-inline versions of min heap API
functions").
This is part of a series of changes to address a performance regression
caused by the use of the generic min_heap implementation.
As reported by Robert, bcache now suffers from latency spikes, with P100
(max) latency increasing from 600 ms to 2.4 seconds every 5 minutes.
These regressions degrade bcache's effectiveness as a low-latency cache
layer and lead to frequent timeouts and application stalls in production
environments.
Link: https://lore.kernel.org/lkml/CAJhEC05+0S69z+3+FB2Cd0hD+pCRyWTKLEOsc8BOmH73p…
Link: https://lkml.kernel.org/r/20250614202353.1632957-4-visitorckw@gmail.com
Fixes: 866898efbb25 ("bcache: remove heap-related macros and switch to generic min_heap")
Fixes: 92a8b224b833 ("lib/min_heap: introduce non-inline versions of min heap API functions")
Signed-off-by: Kuan-Wei Chiu <visitorckw(a)gmail.com>
Reported-by: Robert Pang <robertpang(a)google.com>
Closes: https://lore.kernel.org/linux-bcache/CAJhEC06F_AtrPgw2-7CvCqZgeStgCtitbD-ry…
Acked-by: Coly Li <colyli(a)kernel.org>
Cc: Ching-Chun (Jim) Huang <jserv(a)ccns.ncku.edu.tw>
Cc: Kent Overstreet <kent.overstreet(a)linux.dev>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
drivers/md/bcache/Kconfig | 1 -
1 file changed, 1 deletion(-)
--- a/drivers/md/bcache/Kconfig~bcache-remove-unnecessary-select-min_heap
+++ a/drivers/md/bcache/Kconfig
@@ -5,7 +5,6 @@ config BCACHE
select BLOCK_HOLDER_DEPRECATED if SYSFS
select CRC64
select CLOSURES
- select MIN_HEAP
help
Allows a block device to be used as cache for other devices; uses
a btree for indexing and the layout is optimized for SSDs.
_
Patches currently in -mm which might be from visitorckw(a)gmail.com are
revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap.patch
revert-bcache-remove-heap-related-macros-and-switch-to-generic-min_heap.patch
bcache-remove-unnecessary-select-min_heap.patch
lib-math-gcd-use-static-key-to-select-implementation-at-runtime.patch
riscv-optimize-gcd-code-size-when-config_riscv_isa_zbb-is-disabled.patch
riscv-optimize-gcd-performance-on-risc-v-without-zbb-extension.patch
The patch titled
Subject: Revert "bcache: update min_heap_callbacks to use default builtin swap"
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Kuan-Wei Chiu <visitorckw(a)gmail.com>
Subject: Revert "bcache: update min_heap_callbacks to use default builtin swap"
Date: Sun, 15 Jun 2025 04:23:51 +0800
Patch series "bcache: Revert min_heap migration due to performance
regression".
This patch series reverts the migration of bcache from its original heap
implementation to the generic min_heap library. While the original change
aimed to simplify the code and improve maintainability, it introduced a
severe performance regression in real-world scenarios.
As reported by Robert, systems using bcache now suffer from periodic
latency spikes, with P100 (max) latency increasing from 600 ms to 2.4
seconds every 5 minutes. This degrades bcache's value as a low-latency
caching layer, and leads to frequent timeouts and application stalls in
production environments.
The primary cause of this regression is the behavior of the generic
min_heap implementation's bottom-up sift_down, which performs up to 2 *
log2(n) comparisons when many elements are equal. The original top-down
variant used by bcache only required O(1) comparisons in such cases. The
issue was further exacerbated by commit 92a8b224b833 ("lib/min_heap:
introduce non-inline versions of min heap API functions"), which
introduced non-inlined versions of the min_heap API, adding function call
overhead to a performance-critical hot path.
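As an illustration only (simplified C, not bcache or lib/min_heap code),
the early-exit behaviour of the top-down variant described above looks
roughly like this:

#include <stddef.h>

/* Top-down sift_down over an int min-heap. When heap[i] is already no
 * larger than both children (e.g. many equal elements), the loop exits
 * after a constant number of comparisons; the generic bottom-up variant
 * instead descends to a leaf first, costing about 2 * log2(n) comparisons
 * before sifting the element back up. */
static void sift_down_top_down(int *heap, size_t nr, size_t i)
{
	for (;;) {
		size_t left = 2 * i + 1;
		size_t right = left + 1;
		size_t smallest = i;
		int tmp;

		if (left < nr && heap[left] < heap[smallest])
			smallest = left;
		if (right < nr && heap[right] < heap[smallest])
			smallest = right;
		if (smallest == i)
			return;	/* early exit */

		tmp = heap[i];
		heap[i] = heap[smallest];
		heap[smallest] = tmp;
		i = smallest;
	}
}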
This patch (of 3):
This reverts commit 3d8a9a1c35227c3f1b0bd132c9f0a80dbda07b65.
Although removing the custom swap function simplified the code, this
change is part of a broader migration to the generic min_heap API that
introduced significant performance regressions in bcache.
As reported by Robert, bcache now suffers from latency spikes, with P100
(max) latency increasing from 600 ms to 2.4 seconds every 5 minutes.
These regressions degrade bcache's effectiveness as a low-latency cache
layer and lead to frequent timeouts and application stalls in production
environments.
This revert is part of a series of changes to restore previous performance
by undoing the min_heap transition.
Link: https://lkml.kernel.org/r/20250614202353.1632957-1-visitorckw@gmail.com
Link: https://lore.kernel.org/lkml/CAJhEC05+0S69z+3+FB2Cd0hD+pCRyWTKLEOsc8BOmH73p…
Link: https://lkml.kernel.org/r/20250614202353.1632957-2-visitorckw@gmail.com
Fixes: 866898efbb25 ("bcache: remove heap-related macros and switch to generic min_heap")
Fixes: 92a8b224b833 ("lib/min_heap: introduce non-inline versions of min heap API functions")
Signed-off-by: Kuan-Wei Chiu <visitorckw(a)gmail.com>
Reported-by: Robert Pang <robertpang(a)google.com>
Closes: https://lore.kernel.org/linux-bcache/CAJhEC06F_AtrPgw2-7CvCqZgeStgCtitbD-ry…
Acked-by: Coly Li <colyli(a)kernel.org>
Cc: Ching-Chun (Jim) Huang <jserv(a)ccns.ncku.edu.tw>
Cc: Kent Overstreet <kent.overstreet(a)linux.dev>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
drivers/md/bcache/alloc.c | 11 +++++++++--
drivers/md/bcache/bset.c | 14 +++++++++++---
drivers/md/bcache/extents.c | 10 +++++++++-
drivers/md/bcache/movinggc.c | 10 +++++++++-
4 files changed, 38 insertions(+), 7 deletions(-)
--- a/drivers/md/bcache/alloc.c~revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap
+++ a/drivers/md/bcache/alloc.c
@@ -189,16 +189,23 @@ static inline bool new_bucket_min_cmp(co
return new_bucket_prio(ca, *lhs) < new_bucket_prio(ca, *rhs);
}
+static inline void new_bucket_swap(void *l, void *r, void __always_unused *args)
+{
+ struct bucket **lhs = l, **rhs = r;
+
+ swap(*lhs, *rhs);
+}
+
static void invalidate_buckets_lru(struct cache *ca)
{
struct bucket *b;
const struct min_heap_callbacks bucket_max_cmp_callback = {
.less = new_bucket_max_cmp,
- .swp = NULL,
+ .swp = new_bucket_swap,
};
const struct min_heap_callbacks bucket_min_cmp_callback = {
.less = new_bucket_min_cmp,
- .swp = NULL,
+ .swp = new_bucket_swap,
};
ca->heap.nr = 0;
--- a/drivers/md/bcache/bset.c~revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap
+++ a/drivers/md/bcache/bset.c
@@ -1093,6 +1093,14 @@ static inline bool new_btree_iter_cmp(co
return bkey_cmp(_l->k, _r->k) <= 0;
}
+static inline void new_btree_iter_swap(void *iter1, void *iter2, void __always_unused *args)
+{
+ struct btree_iter_set *_iter1 = iter1;
+ struct btree_iter_set *_iter2 = iter2;
+
+ swap(*_iter1, *_iter2);
+}
+
static inline bool btree_iter_end(struct btree_iter *iter)
{
return !iter->heap.nr;
@@ -1103,7 +1111,7 @@ void bch_btree_iter_push(struct btree_it
{
const struct min_heap_callbacks callbacks = {
.less = new_btree_iter_cmp,
- .swp = NULL,
+ .swp = new_btree_iter_swap,
};
if (k != end)
@@ -1149,7 +1157,7 @@ static inline struct bkey *__bch_btree_i
struct bkey *ret = NULL;
const struct min_heap_callbacks callbacks = {
.less = cmp,
- .swp = NULL,
+ .swp = new_btree_iter_swap,
};
if (!btree_iter_end(iter)) {
@@ -1223,7 +1231,7 @@ static void btree_mergesort(struct btree
: bch_ptr_invalid;
const struct min_heap_callbacks callbacks = {
.less = b->ops->sort_cmp,
- .swp = NULL,
+ .swp = new_btree_iter_swap,
};
/* Heapify the iterator, using our comparison function */
--- a/drivers/md/bcache/extents.c~revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap
+++ a/drivers/md/bcache/extents.c
@@ -266,12 +266,20 @@ static bool new_bch_extent_sort_cmp(cons
return !(c ? c > 0 : _l->k < _r->k);
}
+static inline void new_btree_iter_swap(void *iter1, void *iter2, void __always_unused *args)
+{
+ struct btree_iter_set *_iter1 = iter1;
+ struct btree_iter_set *_iter2 = iter2;
+
+ swap(*_iter1, *_iter2);
+}
+
static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
struct bkey *tmp)
{
const struct min_heap_callbacks callbacks = {
.less = new_bch_extent_sort_cmp,
- .swp = NULL,
+ .swp = new_btree_iter_swap,
};
while (iter->heap.nr > 1) {
struct btree_iter_set *top = iter->heap.data, *i = top + 1;
--- a/drivers/md/bcache/movinggc.c~revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap
+++ a/drivers/md/bcache/movinggc.c
@@ -190,6 +190,14 @@ static bool new_bucket_cmp(const void *l
return GC_SECTORS_USED(*_l) >= GC_SECTORS_USED(*_r);
}
+static void new_bucket_swap(void *l, void *r, void __always_unused *args)
+{
+ struct bucket **_l = l;
+ struct bucket **_r = r;
+
+ swap(*_l, *_r);
+}
+
static unsigned int bucket_heap_top(struct cache *ca)
{
struct bucket *b;
@@ -204,7 +212,7 @@ void bch_moving_gc(struct cache_set *c)
unsigned long sectors_to_move, reserve_sectors;
const struct min_heap_callbacks callbacks = {
.less = new_bucket_cmp,
- .swp = NULL,
+ .swp = new_bucket_swap,
};
if (!c->copy_gc_enabled)
_
Patches currently in -mm which might be from visitorckw(a)gmail.com are
revert-bcache-update-min_heap_callbacks-to-use-default-builtin-swap.patch
revert-bcache-remove-heap-related-macros-and-switch-to-generic-min_heap.patch
bcache-remove-unnecessary-select-min_heap.patch
lib-math-gcd-use-static-key-to-select-implementation-at-runtime.patch
riscv-optimize-gcd-code-size-when-config_riscv_isa_zbb-is-disabled.patch
riscv-optimize-gcd-performance-on-risc-v-without-zbb-extension.patch