[BUG]
When testing btrfs with a subpage block size (block size < page size) on
x86_64, using the experimental 2K block size support, I hit the
following spin lock hang:
<TASK>
_raw_spin_lock_irq+0x2f/0x40
wait_subpage_spinlock+0x69/0x80 [btrfs]
btrfs_release_folio+0x46/0x70 [btrfs]
folio_unmap_invalidate+0xcb/0x250
folio_end_writeback+0x127/0x1b0
btrfs_subpage_clear_writeback+0xef/0x140 [btrfs]
end_bbio_data_write+0x13a/0x3c0 [btrfs]
btrfs_bio_end_io+0x6f/0xc0 [btrfs]
process_one_work+0x156/0x310
worker_thread+0x252/0x390
? __pfx_worker_thread+0x10/0x10
kthread+0xef/0x250
? finish_task_switch.isra.0+0x8a/0x250
? __pfx_kthread+0x10/0x10
ret_from_fork+0x34/0x50
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
[CAUSE]
It's a self-deadlock with the following sequence:
btrfs_subpage_clear_writeback()
|- spin_lock_irqsave(&subpage->lock);
|- folio_end_writeback()
|- folio_end_dropbehind_write()
|- folio_unmap_invalidate()
|- btrfs_release_folio()
|- wait_subpage_spinlock()
|- spin_lock_irq(&subpage->lock);
!! DEADLOCK !!
We're trying to acquire the same spin lock that we already hold.
This has never been reproduced on aarch64; it looks like some
x86_64-specific folio reclaim behavior is involved.
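For illustration, the pre-patch shape of btrfs_subpage_clear_writeback()
(condensed from the removed lines in the diff below) shows where the
re-entry happens:

	spin_lock_irqsave(&subpage->lock, flags);
	bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
	if (subpage_test_bitmap_all_zero(fs_info, folio, writeback)) {
		ASSERT(folio_test_writeback(folio));
		/* Can recurse into btrfs_release_folio(), whose
		 * wait_subpage_spinlock() takes subpage->lock again. */
		folio_end_writeback(folio);
	}
	spin_unlock_irqrestore(&subpage->lock, flags);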
[FIX]
Move the folio_end_writeback() call out of the spin lock critical
section.
And since the bitmap clearing and the folio writeback flag clearing no
longer both happen inside the same critical section, we must add an
extra check to ensure that only the last caller to clear the writeback
bitmap clears the folio writeback flag.
Fixes: 3470da3b7d87 ("btrfs: subpage: introduce helpers for writeback status")
Cc: stable(a)vger.kernel.org # 5.15+
Signed-off-by: Qu Wenruo <wqu(a)suse.com>
---
fs/btrfs/subpage.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index 4d1bf1124ba0..3ce3d7093ddb 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -466,15 +466,21 @@ void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info,
struct btrfs_subpage *subpage = folio_get_private(folio);
unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
writeback, start, len);
+ bool was_writeback;
+ bool last = false;
unsigned long flags;
spin_lock_irqsave(&subpage->lock, flags);
+ was_writeback = !subpage_test_bitmap_all_zero(fs_info, folio, writeback);
bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
- if (subpage_test_bitmap_all_zero(fs_info, folio, writeback)) {
+ if (subpage_test_bitmap_all_zero(fs_info, folio, writeback) &&
+ was_writeback) {
ASSERT(folio_test_writeback(folio));
- folio_end_writeback(folio);
+ last = true;
}
spin_unlock_irqrestore(&subpage->lock, flags);
+ if (last)
+ folio_end_writeback(folio);
}
void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info,
--
2.48.1
From: Barry Song <v-songbaohua(a)oppo.com>
userfaultfd_move() checks whether the PTE entry is present or a
swap entry.
- If the PTE entry is present, move_present_pte() handles folio
migration by setting:
src_folio->index = linear_page_index(dst_vma, dst_addr);
- If the PTE entry is a swap entry, move_swap_pte() simply copies
the PTE to the new dst_addr.
This approach is incorrect because, even if the PTE is a swap entry,
it can still reference a folio that remains in the swap cache.
This creates a race window between steps 2 and 4 below:
1. add_to_swap: The folio is added to the swapcache.
2. try_to_unmap: PTEs are converted to swap entries.
3. pageout: The folio is written back.
4. Swapcache is cleared.
If userfaultfd_move() occurs in the window between steps 2 and 4,
after the swap PTE has been moved to the destination, accessing the
destination triggers do_swap_page(), which may locate the folio in
the swapcache. However, since the folio's index has not been updated
to match the destination VMA, do_swap_page() will detect a mismatch.
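Sketched with heavily simplified control flow (illustrative only, not
the verbatim kernel code; the swapcache lookup and the rmap check are
collapsed into one place), the destination fault runs into exactly the
condition quoted in the log below:

	/* do_swap_page(), simplified: the destination address faults */
	entry = pte_to_swp_entry(vmf->orig_pte);
	folio = swap_cache_get_folio(entry, vma, vmf->address); /* swapcache hit */
	/* ... */
	/* folio->index still matches the source VMA, so the check in
	 * folio_add_anon_rmap_ptes() -> __page_check_anon_rmap() fires: */
	VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address), page);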
This can result in two critical issues depending on the system
configuration.
If KSM is disabled, both small and large folios can trigger a BUG
during the add_rmap operation due to:
page_pgoff(folio, page) != linear_page_index(vma, address)
[ 13.336953] page: refcount:6 mapcount:1 mapping:00000000f43db19c index:0xffffaf150 pfn:0x4667c
[ 13.337520] head: order:2 mapcount:1 entire_mapcount:0 nr_pages_mapped:1 pincount:0
[ 13.337716] memcg:ffff00000405f000
[ 13.337849] anon flags: 0x3fffc0000020459(locked|uptodate|dirty|owner_priv_1|head|swapbacked|node=0|zone=0|lastcpupid=0xffff)
[ 13.338630] raw: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
[ 13.338831] raw: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
[ 13.339031] head: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
[ 13.339204] head: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
[ 13.339375] head: 03fffc0000000202 fffffdffc0199f01 ffffffff00000000 0000000000000001
[ 13.339546] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[ 13.339736] page dumped because: VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address))
[ 13.340190] ------------[ cut here ]------------
[ 13.340316] kernel BUG at mm/rmap.c:1380!
[ 13.340683] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
[ 13.340969] Modules linked in:
[ 13.341257] CPU: 1 UID: 0 PID: 107 Comm: a.out Not tainted 6.14.0-rc3-gcf42737e247a-dirty #299
[ 13.341470] Hardware name: linux,dummy-virt (DT)
[ 13.341671] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 13.341815] pc : __page_check_anon_rmap+0xa0/0xb0
[ 13.341920] lr : __page_check_anon_rmap+0xa0/0xb0
[ 13.342018] sp : ffff80008752bb20
[ 13.342093] x29: ffff80008752bb20 x28: fffffdffc0199f00 x27: 0000000000000001
[ 13.342404] x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001
[ 13.342575] x23: 0000ffffaf0d0000 x22: 0000ffffaf0d0000 x21: fffffdffc0199f00
[ 13.342731] x20: fffffdffc0199f00 x19: ffff000006210700 x18: 00000000ffffffff
[ 13.342881] x17: 6c203d2120296567 x16: 6170202c6f696c6f x15: 662866666f67705f
[ 13.343033] x14: 6567617028454741 x13: 2929737365726464 x12: ffff800083728ab0
[ 13.343183] x11: ffff800082996bf8 x10: 0000000000000fd7 x9 : ffff80008011bc40
[ 13.343351] x8 : 0000000000017fe8 x7 : 00000000fffff000 x6 : ffff8000829eebf8
[ 13.343498] x5 : c0000000fffff000 x4 : 0000000000000000 x3 : 0000000000000000
[ 13.343645] x2 : 0000000000000000 x1 : ffff0000062db980 x0 : 000000000000005f
[ 13.343876] Call trace:
[ 13.344045] __page_check_anon_rmap+0xa0/0xb0 (P)
[ 13.344234] folio_add_anon_rmap_ptes+0x22c/0x320
[ 13.344333] do_swap_page+0x1060/0x1400
[ 13.344417] __handle_mm_fault+0x61c/0xbc8
[ 13.344504] handle_mm_fault+0xd8/0x2e8
[ 13.344586] do_page_fault+0x20c/0x770
[ 13.344673] do_translation_fault+0xb4/0xf0
[ 13.344759] do_mem_abort+0x48/0xa0
[ 13.344842] el0_da+0x58/0x130
[ 13.344914] el0t_64_sync_handler+0xc4/0x138
[ 13.345002] el0t_64_sync+0x1ac/0x1b0
[ 13.345208] Code: aa1503e0 f000f801 910f6021 97ff5779 (d4210000)
[ 13.345504] ---[ end trace 0000000000000000 ]---
[ 13.345715] note: a.out[107] exited with irqs disabled
[ 13.345954] note: a.out[107] exited with preempt_count 2
If KSM is enabled, Peter Xu also discovered that do_swap_page() may
trigger an unexpected CoW operation for small folios because
ksm_might_need_to_copy() allocates a new folio when the folio index
does not match linear_page_index(vma, addr).
This patch also checks the swapcache when handling swap entries. If a
match is found in the swapcache, it processes it similarly to a present
PTE.
However, there are some differences. For example, the folio is no longer
exclusive because folio_try_share_anon_rmap_pte() is performed during
unmapping.
Furthermore, in the case of swapcache, the folio has already been
unmapped, eliminating the risk of concurrent rmap walks and removing the
need to acquire src_folio's anon_vma or lock.
Note that for large folios, the swapcache handling path directly
returns -EBUSY, since split_folio() will return -EBUSY regardless of
whether the folio is under writeback or unmapped. This is not an urgent
issue, so a follow-up patch may address it separately.
Fixes: adef440691bab ("userfaultfd: UFFDIO_MOVE uABI")
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Cc: Axel Rasmussen <axelrasmussen(a)google.com>
Cc: Brian Geffon <bgeffon(a)google.com>
Cc: Christian Brauner <brauner(a)kernel.org>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Jann Horn <jannh(a)google.com>
Cc: Kalesh Singh <kaleshsingh(a)google.com>
Cc: Liam R. Howlett <Liam.Howlett(a)oracle.com>
Cc: Lokesh Gidra <lokeshgidra(a)google.com>
Cc: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport (IBM) <rppt(a)kernel.org>
Cc: Nicolas Geoffray <ngeoffray(a)google.com>
Cc: Peter Xu <peterx(a)redhat.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: ZhangPeng <zhangpeng362(a)huawei.com>
Cc: Tangquan Zheng <zhengtangquan(a)oppo.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Barry Song <v-songbaohua(a)oppo.com>
---
mm/userfaultfd.c | 76 ++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 67 insertions(+), 9 deletions(-)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 867898c4e30b..2df5d100e76d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -18,6 +18,7 @@
#include <asm/tlbflush.h>
#include <asm/tlb.h>
#include "internal.h"
+#include "swap.h"
static __always_inline
bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
@@ -1072,16 +1073,14 @@ static int move_present_pte(struct mm_struct *mm,
return err;
}
-static int move_swap_pte(struct mm_struct *mm,
+static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
unsigned long dst_addr, unsigned long src_addr,
pte_t *dst_pte, pte_t *src_pte,
pte_t orig_dst_pte, pte_t orig_src_pte,
pmd_t *dst_pmd, pmd_t dst_pmdval,
- spinlock_t *dst_ptl, spinlock_t *src_ptl)
+ spinlock_t *dst_ptl, spinlock_t *src_ptl,
+ struct folio *src_folio)
{
- if (!pte_swp_exclusive(orig_src_pte))
- return -EBUSY;
-
double_pt_lock(dst_ptl, src_ptl);
if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
@@ -1090,10 +1089,20 @@ static int move_swap_pte(struct mm_struct *mm,
return -EAGAIN;
}
+ /*
+ * The src_folio resides in the swapcache, requiring an update to its
+ * index and mapping to align with the dst_vma, where a swap-in may
+ * occur and hit the swapcache after moving the PTE.
+ */
+ if (src_folio) {
+ folio_move_anon_rmap(src_folio, dst_vma);
+ src_folio->index = linear_page_index(dst_vma, dst_addr);
+ }
+
orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
set_pte_at(mm, dst_addr, dst_pte, orig_src_pte);
- double_pt_unlock(dst_ptl, src_ptl);
+ double_pt_unlock(dst_ptl, src_ptl);
return 0;
}
@@ -1137,6 +1146,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
__u64 mode)
{
swp_entry_t entry;
+ struct swap_info_struct *si = NULL;
pte_t orig_src_pte, orig_dst_pte;
pte_t src_folio_pte;
spinlock_t *src_ptl, *dst_ptl;
@@ -1318,6 +1328,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
orig_dst_pte, orig_src_pte, dst_pmd,
dst_pmdval, dst_ptl, src_ptl, src_folio);
} else {
+ struct folio *folio = NULL;
+
entry = pte_to_swp_entry(orig_src_pte);
if (non_swap_entry(entry)) {
if (is_migration_entry(entry)) {
@@ -1331,9 +1343,53 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
goto out;
}
- err = move_swap_pte(mm, dst_addr, src_addr, dst_pte, src_pte,
- orig_dst_pte, orig_src_pte, dst_pmd,
- dst_pmdval, dst_ptl, src_ptl);
+ if (!pte_swp_exclusive(orig_src_pte)) {
+ err = -EBUSY;
+ goto out;
+ }
+
+ si = get_swap_device(entry);
+ if (unlikely(!si)) {
+ err = -EAGAIN;
+ goto out;
+ }
+ /*
+ * Verify the existence of the swapcache. If present, the folio's
+ * index and mapping must be updated even when the PTE is a swap
+ * entry. The anon_vma lock is not taken during this process since
+ * the folio has already been unmapped, and the swap entry is
+ * exclusive, preventing rmap walks.
+ *
+ * For large folios, return -EBUSY immediately, as split_folio()
+ * also returns -EBUSY when attempting to split unmapped large
+ * folios in the swapcache. This issue needs to be resolved
+ * separately to allow proper handling.
+ */
+ if (!src_folio)
+ folio = filemap_get_folio(swap_address_space(entry),
+ swap_cache_index(entry));
+ if (!IS_ERR_OR_NULL(folio)) {
+ if (folio && folio_test_large(folio)) {
+ err = -EBUSY;
+ folio_put(folio);
+ goto out;
+ }
+ src_folio = folio;
+ src_folio_pte = orig_src_pte;
+ if (!folio_trylock(src_folio)) {
+ pte_unmap(&orig_src_pte);
+ pte_unmap(&orig_dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ folio_lock(src_folio);
+ put_swap_device(si);
+ si = NULL;
+ goto retry;
+ }
+ }
+ err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
+ orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
+ dst_ptl, src_ptl, src_folio);
}
out:
@@ -1350,6 +1406,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
if (src_pte)
pte_unmap(src_pte);
mmu_notifier_invalidate_range_end(&range);
+ if (si)
+ put_swap_device(si);
return err;
}
--
2.39.3 (Apple Git-146)
Here are two unrelated fixes, plus an extra patch:
- Patch 1: prevent a warning by removing an unneeded and incorrect small
optimisation in the path-manager. A fix for v5.10.
- Patch 2: reset a subflow when MPTCP opts have been dropped after
having correctly added a new path. A fix for v5.19.
- Patch 3: add a safety check to prevent issues like the one fixed by
the second patch.
Signed-off-by: Matthieu Baerts (NGI0) <matttbe(a)kernel.org>
---
Matthieu Baerts (NGI0) (2):
mptcp: reset when MPTCP opts are dropped after join
mptcp: safety check before fallback
Paolo Abeni (1):
mptcp: always handle address removal under msk socket lock
net/mptcp/pm_netlink.c | 5 -----
net/mptcp/protocol.h | 2 ++
net/mptcp/subflow.c | 15 +--------------
3 files changed, 3 insertions(+), 19 deletions(-)
---
base-commit: f15176b8b6e72ac30e14fd273282d2b72562d26b
change-id: 20250224-net-mptcp-misc-fixes-8af87640dfef
Best regards,
--
Matthieu Baerts (NGI0) <matttbe(a)kernel.org>
There are some issues with the enetc driver: some are specific to the
LS1028A platform, and some were introduced recently when i.MX95 ENETC
support was added. This patch set aims to clean up those issues.
---
v1 link: https://lore.kernel.org/imx/20250217093906.506214-1-wei.fang@nxp.com/
v2 changes:
1. Remove the unneeded semicolon from patch 1
2. Modify the commit message of patch 1
3. Add new patch 9 to fix another off-by-one issue
v2 link: https://lore.kernel.org/imx/20250219054247.733243-1-wei.fang@nxp.com/
v3 changes:
1. Remove the patch "net: enetc: correct the EMDIO base offset for ENETC v4"
2. Add a helper function enetc_unwind_tx_frame()
3. Change the subject of patch 2, and refactor the implementation
4. Use enetc_unwind_tx_frame() in patch 8, and roll back 'i' when
enetc_map_tx_tso_data() returns an error
5. Collect Reviewed-by and Tested-by tags
---
Wei Fang (8):
net: enetc: fix the off-by-one issue in enetc_map_tx_buffs()
net: enetc: keep track of correct Tx BD count in
enetc_map_tx_tso_buffs()
net: enetc: correct the xdp_tx statistics
net: enetc: VFs do not support HWTSTAMP_TX_ONESTEP_SYNC
net: enetc: update UDP checksum when updating originTimestamp field
net: enetc: add missing enetc4_link_deinit()
net: enetc: remove the mm_lock from the ENETC v4 driver
net: enetc: fix the off-by-one issue in enetc_map_tx_tso_buffs()
drivers/net/ethernet/freescale/enetc/enetc.c | 103 +++++++++++++-----
.../net/ethernet/freescale/enetc/enetc4_pf.c | 2 +-
.../ethernet/freescale/enetc/enetc_ethtool.c | 7 +-
3 files changed, 80 insertions(+), 32 deletions(-)
--
2.34.1
The patch titled
Subject: mm: zswap: fix crypto_free_acomp deadlock in zswap_cpu_comp_dead
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-zswap-fix-crypto_free_acomp-deadlock-in-zswap_cpu_comp_dead.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Herbert Xu <herbert(a)gondor.apana.org.au>
Subject: mm: zswap: fix crypto_free_acomp deadlock in zswap_cpu_comp_dead
Date: Tue, 25 Feb 2025 16:53:58 +0800
Call crypto_free_acomp() outside of the mutex in zswap_cpu_comp_dead();
otherwise this could deadlock, as the allocation path may lead back
into zswap while holding the same lock. Zap the pointers to acomp and
buffer after freeing.
Also move the NULL check on acomp_ctx so that it takes place before
the mutex dereference.
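Condensed from the diff below, the fix detaches the pointer while
holding the lock and frees it only after the unlock, the usual pattern
for avoiding re-entrant locking:

	struct crypto_acomp *acomp = NULL;

	mutex_lock(&acomp_ctx->mutex);
	acomp = acomp_ctx->acomp;	/* detach under the lock */
	acomp_ctx->acomp = NULL;
	mutex_unlock(&acomp_ctx->mutex);

	crypto_free_acomp(acomp);	/* free with the lock dropped */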
Link: https://lkml.kernel.org/r/Z72FJnbA39zWh4zS@gondor.apana.org.au
Fixes: 12dcb0ef5406 ("mm: zswap: properly synchronize freeing resources during CPU hotunplug")
Reported-by: syzbot+1a517ccfcbc6a7ab0f82(a)syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Cc: David S. Miller <davem(a)davemloft.net>
Cc: Yosry Ahmed <yosry.ahmed(a)linux.dev>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/zswap.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
--- a/mm/zswap.c~mm-zswap-fix-crypto_free_acomp-deadlock-in-zswap_cpu_comp_dead
+++ a/mm/zswap.c
@@ -881,18 +881,23 @@ static int zswap_cpu_comp_dead(unsigned
{
struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
+ struct crypto_acomp *acomp = NULL;
+
+ if (IS_ERR_OR_NULL(acomp_ctx))
+ return 0;
mutex_lock(&acomp_ctx->mutex);
- if (!IS_ERR_OR_NULL(acomp_ctx)) {
- if (!IS_ERR_OR_NULL(acomp_ctx->req))
- acomp_request_free(acomp_ctx->req);
- acomp_ctx->req = NULL;
- if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
- crypto_free_acomp(acomp_ctx->acomp);
- kfree(acomp_ctx->buffer);
- }
+ if (!IS_ERR_OR_NULL(acomp_ctx->req))
+ acomp_request_free(acomp_ctx->req);
+ acomp_ctx->req = NULL;
+ acomp = acomp_ctx->acomp;
+ acomp_ctx->acomp = NULL;
+ kfree(acomp_ctx->buffer);
+ acomp_ctx->buffer = NULL;
mutex_unlock(&acomp_ctx->mutex);
+ crypto_free_acomp(acomp);
+
return 0;
}
Commit b79e8fd954c4 ("drm/xe: Remove dependency on intel_engine_regs.h")
introduced an internal set of engine registers; however, as part of this
change, it also introduced two duplicate `define' lines for
`RING_CTL_SIZE(size)'. This commit first appeared in the tree in v6.8-rc1.
This is harmless, as the two definitions are identical, so no compiler
warning was observed.
Drop the duplicate line anyway for the sake of correctness.
Cc: <stable(a)vger.kernel.org> # v6.8-rc1+
Fixes: b79e8fd954c4 ("drm/xe: Remove dependency on intel_engine_regs.h")
Signed-off-by: Mingcong Bai <jeffbai(a)aosc.io>
---
drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
index d86219dedde2a..b732c89816dff 100644
--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
@@ -53,7 +53,6 @@
#define RING_CTL(base) XE_REG((base) + 0x3c)
#define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
-#define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
#define RING_START_UDW(base) XE_REG((base) + 0x48)
--
2.48.1