From: Eric Biggers <ebiggers(a)google.com>
Make fscrypt no longer use Crypto API drivers for non-inline crypto
accelerators, even when the Crypto API prioritizes them over CPU-based
code (which unfortunately it often does). These drivers tend to be
really problematic, especially for fscrypt's synchronous workload.
Specifically, exclude drivers that have CRYPTO_ALG_KERN_DRIVER_ONLY or
CRYPTO_ALG_ALLOCATES_MEMORY set. (Later, CRYPTO_ALG_ASYNC should be
excluded too. That's omitted for now to keep this commit backportable,
since until recently some CPU-based code had CRYPTO_ALG_ASYNC set.)
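Concretely, the exclusion is just a matter of passing these flags in the
mask argument of the crypto_alloc_*() functions, as the diff below does.
A minimal sketch (algorithm name chosen for illustration):

	tfm = crypto_alloc_skcipher("xts(aes)", 0,
				    CRYPTO_ALG_KERN_DRIVER_ONLY |
				    CRYPTO_ALG_ALLOCATES_MEMORY);

With type == 0 and those bits set in the mask, the Crypto API only
considers implementations that have both flags clear.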
There are two major issues with these drivers: bugs and performance.
First, these drivers tend to be buggy. They're fundamentally much more
error-prone and harder to test than the CPU-based code, and they often
don't get tested before kernel releases. Released drivers have
en/decrypted data incorrectly. These bugs cause real issues for fscrypt
users who often didn't even want to use these drivers, for example:
- https://github.com/google/fscryptctl/issues/32
- https://github.com/google/fscryptctl/issues/9
- https://lore.kernel.org/r/PH0PR02MB731916ECDB6C613665863B6CFFAA2@PH0PR02MB7…
These drivers have also caused issues for dm-crypt users, including data
corruption and deadlocks. Since Linux v5.10, dm-crypt has disabled most
of these drivers by excluding CRYPTO_ALG_ALLOCATES_MEMORY.
Second, the CPU-based crypto tends to be faster, often *much* faster.
This may seem counterintuitive, but benchmarks clearly show it. There's
a *lot* of overhead associated with going to a hardware driver, off the
CPU, and back again. Measuring synchronous AES-256-XTS encryption of
4096-byte messages (fscrypt's workload) on two platforms with non-inline
crypto accelerators that I have access to:
Intel Emerald Rapids server:
xts-aes-vaes-avx512: 16171 MB/s [CPU-based, Vector AES]
xts(ecb(aes-generic)): 305 MB/s [CPU-based, generic C code]
qat_aes_xts: 289 MB/s [Offload, Intel QuickAssist]
Qualcomm SM8650 HDK:
xts-aes-ce: 4301 MB/s [CPU-based, ARMv8 Crypto Extensions]
xts(ecb(aes-generic)): 265 MB/s [CPU-based, generic C code]
xts-aes-qce: 73 MB/s [Offload, Qualcomm Crypto Engine]
So, using the "accelerators" is over 50 times slower than just using the
CPU. Not only that, it's even slower than the generic C code, which
suggests that even on platforms whose CPUs lack AES instructions the
performance benefit of any accelerator would be marginal at best.
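For anyone wanting to reproduce numbers like these, a rough sketch of
such a measurement using the Crypto API directly (not the actual
benchmark tool used for the figures above; error handling omitted):

	struct crypto_skcipher *tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	struct skcipher_request *req = skcipher_request_alloc(tfm, GFP_KERNEL);
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist sg;
	u8 *buf = kzalloc(4096, GFP_KERNEL);
	u8 key[64], iv[16] = {};	/* AES-256-XTS uses a 512-bit key */
	ktime_t t0;
	int i;

	get_random_bytes(key, sizeof(key));
	crypto_skcipher_setkey(tfm, key, sizeof(key));
	sg_init_one(&sg, buf, 4096);
	skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
	t0 = ktime_get();
	for (i = 0; i < 100000; i++) {
		skcipher_request_set_crypt(req, &sg, &sg, 4096, iv);
		crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	}
	pr_info("avg %lld ns per 4096-byte op\n",
		ktime_to_ns(ktime_sub(ktime_get(), t0)) / 100000);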
The usefulness of the accelerators could be improved with a different
software architecture that allows blocks to be efficiently en/decrypted
in parallel. But fscrypt does not do that today, and even the async
support in the Crypto API isn't really all that efficient. And even if
the accelerator was used perfectly efficiently, it seems unlikely to
help on small I/O requests, for which latency is really important.
As of this writing, the Crypto API prioritizes qat_aes_xts over
xts-aes-vaes-avx512. Therefore, this commit greatly improves fscrypt
performance on Intel servers that have QAT and the QAT driver enabled.
qat_aes_xts is going to be deprioritized in the Crypto API (like I did
for xts-aes-qce recently too). But as this seems to be a common pattern
with all the "accelerators", fscrypt should just disable all of them.
An argument that has been given in favor of non-inline crypto
accelerators is that they can protect keys in hardware. But fscrypt
does not take advantage of that, so it is irrelevant. (Also, it would
be quite difficult for fscrypt to do that.)
Note that fscrypt does support inline encryption engines, using raw or
hardware-wrapped keys. These actually do work well and are widely used.
These do not use the "Crypto API" and are unaffected by this commit.
Fixes: b30ab0e03407 ("ext4 crypto: add ext4 encryption facilities")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
Changed in v2:
- Improved commit message and comment
- Dropped CRYPTO_ALG_ASYNC from the mask, to make this patch
backport-friendly
- Added Fixes and Cc stable
fs/crypto/fscrypt_private.h | 16 ++++++++++++++++
fs/crypto/hkdf.c | 2 +-
fs/crypto/keysetup.c | 3 ++-
fs/crypto/keysetup_v1.c | 3 ++-
4 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index c1d92074b65c5..0e95c7a095d49 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -43,10 +43,26 @@
* hardware-wrapped keys has made it misleading as it's only for raw keys.
* Don't use it in kernel code; use one of the above constants instead.
*/
#undef FSCRYPT_MAX_KEY_SIZE
+/*
+ * This mask is passed as the third argument to the crypto_alloc_*() functions
+ * to prevent fscrypt from using the Crypto API drivers for non-inline crypto
+ * accelerators. Those drivers have been problematic for fscrypt. fscrypt
+ * users have reported hangs and even incorrect en/decryption with these
+ * drivers. Since going to the driver, off CPU, and back again is really slow,
+ * such drivers can be over 50 times slower than the CPU-based code for
+ * fscrypt's synchronous workload. Even on platforms that lack AES instructions
+ * on the CPU, any performance benefit is likely to be marginal at best.
+ *
+ * Note that fscrypt also supports inline encryption engines. Those don't use
+ * the Crypto API and work much better than non-inline accelerators.
+ */
+#define FSCRYPT_CRYPTOAPI_MASK \
+ (CRYPTO_ALG_ALLOCATES_MEMORY | CRYPTO_ALG_KERN_DRIVER_ONLY)
+
#define FSCRYPT_CONTEXT_V1 1
#define FSCRYPT_CONTEXT_V2 2
/* Keep this in sync with include/uapi/linux/fscrypt.h */
#define FSCRYPT_MODE_MAX FSCRYPT_MODE_AES_256_HCTR2
diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
index 0f3028adc9c72..5b9c21cfe2b45 100644
--- a/fs/crypto/hkdf.c
+++ b/fs/crypto/hkdf.c
@@ -56,11 +56,11 @@ int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
struct crypto_shash *hmac_tfm;
static const u8 default_salt[HKDF_HASHLEN];
u8 prk[HKDF_HASHLEN];
int err;
- hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
+ hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, FSCRYPT_CRYPTOAPI_MASK);
if (IS_ERR(hmac_tfm)) {
fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
PTR_ERR(hmac_tfm));
return PTR_ERR(hmac_tfm);
}
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
index 0d71843af9469..d8113a7196979 100644
--- a/fs/crypto/keysetup.c
+++ b/fs/crypto/keysetup.c
@@ -101,11 +101,12 @@ fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
const struct inode *inode)
{
struct crypto_skcipher *tfm;
int err;
- tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
+ tfm = crypto_alloc_skcipher(mode->cipher_str, 0,
+ FSCRYPT_CRYPTOAPI_MASK);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT) {
fscrypt_warn(inode,
"Missing crypto API support for %s (API name: \"%s\")",
mode->friendly_name, mode->cipher_str);
diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
index b70521c55132b..158ceae8a5bce 100644
--- a/fs/crypto/keysetup_v1.c
+++ b/fs/crypto/keysetup_v1.c
@@ -50,11 +50,12 @@ static int derive_key_aes(const u8 *master_key,
{
int res = 0;
struct skcipher_request *req = NULL;
DECLARE_CRYPTO_WAIT(wait);
struct scatterlist src_sg, dst_sg;
- struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
+ struct crypto_skcipher *tfm =
+ crypto_alloc_skcipher("ecb(aes)", 0, FSCRYPT_CRYPTOAPI_MASK);
if (IS_ERR(tfm)) {
res = PTR_ERR(tfm);
tfm = NULL;
goto out;
base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
--
2.49.0
[cc += Joel Mathew Thomas]
On Tue, Jun 10, 2025 at 08:16:05AM -0400, Sasha Levin wrote:
> This is a note to let you know that I've just added the patch titled
>
> PCI: pciehp: Ignore Link Down/Up caused by Secondary Bus Reset
>
> to the 6.15-stable tree which can be found at:
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
>
> The filename of the patch is:
> pci-pciehp-ignore-link-down-up-caused-by-secondary-b.patch
> and it can be found in the queue-6.15 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable(a)vger.kernel.org> know about it.
Hi Sasha, thanks for selecting the above (which is 2af781a9edc4 upstream)
as a 6.15 backport.
A small feature request: could you amend the stable tooling to cc
people tagged as Reported-by and Tested-by? I think they're the
ones most interested in seeing something backported.
Thanks!
Lukas
> commit 161a7237de69f65ccfe68da318343f3719149480
> Author: Lukas Wunner <lukas(a)wunner.de>
> Date: Thu Apr 10 17:27:12 2025 +0200
>
> PCI: pciehp: Ignore Link Down/Up caused by Secondary Bus Reset
>
> [ Upstream commit 2af781a9edc4ef5f6684c0710cc3542d9be48b31 ]
>
> When a Secondary Bus Reset is issued at a hotplug port, it causes a Data
> Link Layer State Changed event as a side effect. On hotplug ports using
> in-band presence detect, it additionally causes a Presence Detect Changed
> event.
>
> These spurious events should not result in teardown and re-enumeration of
> the device in the slot. Hence commit 2e35afaefe64 ("PCI: pciehp: Add
> reset_slot() method") masked the Presence Detect Changed Enable bit in the
> Slot Control register during a Secondary Bus Reset. Commit 06a8d89af551
> ("PCI: pciehp: Disable link notification across slot reset") additionally
> masked the Data Link Layer State Changed Enable bit.
>
> However masking those bits only disables interrupt generation (PCIe r6.2
> sec 6.7.3.1). The events are still visible in the Slot Status register
> and picked up by the IRQ handler if it runs during a Secondary Bus Reset.
> This can happen if the interrupt is shared or if an unmasked hotplug event
> occurs, e.g. Attention Button Pressed or Power Fault Detected.
>
> The likelihood of this happening used to be small, so it wasn't much of a
> problem in practice. That has changed with the recent introduction of
> bandwidth control in v6.13-rc1 with commit 665745f27487 ("PCI/bwctrl:
> Re-add BW notification portdrv as PCIe BW controller"):
>
> Bandwidth control shares the interrupt with PCIe hotplug. A Secondary Bus
> Reset causes a Link Bandwidth Notification, so the hotplug IRQ handler
> runs, picks up the masked events and tears down the device in the slot.
>
> As a result, Joel reports VFIO passthrough failure of a GPU, which Ilpo
> root-caused to the incorrect handling of masked hotplug events.
>
> Clearly, a more reliable way is needed to ignore spurious hotplug events.
>
> For Downstream Port Containment, a new ignore mechanism was introduced by
> commit a97396c6eb13 ("PCI: pciehp: Ignore Link Down/Up caused by DPC").
> It has been working reliably for the past four years.
>
> Adapt it for Secondary Bus Resets.
>
> Introduce two helpers to annotate code sections which cause spurious link
> changes: pci_hp_ignore_link_change() and pci_hp_unignore_link_change().
> Use those helpers in lieu of masking interrupts in the Slot Control
> register.
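> As a minimal sketch (illustrative only; "bridge" stands for the hotplug
> port whose secondary bus is reset), the annotation pattern looks like:
>
>	pci_hp_ignore_link_change(bridge);
>	pci_bridge_secondary_bus_reset(bridge);	/* triggers DLLSC/PDC events */
>	pci_hp_unignore_link_change(bridge);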
>
> Introduce a helper to check whether such a code section is executing
> concurrently and if so, await it: pci_hp_spurious_link_change().
> Invoke the helper in the hotplug IRQ thread pciehp_ist(). Re-use the
> IRQ thread's existing code which ignores DPC-induced link changes unless
> the link is unexpectedly down after reset recovery or the device was
> replaced during the bus reset.
>
> That code block in pciehp_ist() was previously only executed if a Data
> Link Layer State Changed event has occurred. Additionally execute it for
> Presence Detect Changed events. That's necessary for compatibility with
> PCIe r1.0 hotplug ports because Data Link Layer State Changed didn't exist
> before PCIe r1.1. DPC was added with PCIe r3.1 and thus DPC-capable
> hotplug ports always support Data Link Layer State Changed events.
> But the same cannot be assumed for Secondary Bus Reset, which already
> existed in PCIe r1.0.
>
> Secondary Bus Reset is only one of many causes of spurious link changes.
> Others include runtime suspend to D3cold, firmware updates or FPGA
> reconfiguration. The new pci_hp_{,un}ignore_link_change() helpers may be
> used by all kinds of drivers to annotate such code sections, hence their
> declarations are publicly visible in <linux/pci.h>. A case in point is
> the Mellanox Ethernet driver which disables a firmware reset feature if
> the Ethernet card is attached to a hotplug port, see commit 3d7a3f2612d7
> ("net/mlx5: Nack sync reset request when HotPlug is enabled"). Going
> forward, PCIe hotplug will be able to cope gracefully with all such use
> cases once the code sections are properly annotated.
>
> The new helpers internally use two bits in struct pci_dev's priv_flags as
> well as a wait_queue. This mirrors what was done for DPC by commit
> a97396c6eb13 ("PCI: pciehp: Ignore Link Down/Up caused by DPC"). That may
> be insufficient if spurious link changes are caused by multiple sources
> simultaneously. An example might be a Secondary Bus Reset issued by AER
> during FPGA reconfiguration. If this turns out to happen in real life,
> support for it can easily be added by replacing the PCI_LINK_CHANGING flag
> with an atomic_t counter incremented by pci_hp_ignore_link_change() and
> decremented by pci_hp_unignore_link_change(). Instead of awaiting a zero
> PCI_LINK_CHANGING flag, the pci_hp_spurious_link_change() helper would
> then simply await a zero counter.
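> A sketch of that hypothetical counter-based variant (not part of this
> commit; the counter field and wait queue names are invented for
> illustration):
>
>	void pci_hp_ignore_link_change(struct pci_dev *pdev)
>	{
>		atomic_inc(&pdev->link_change_ignores);	/* assumed field */
>	}
>
>	void pci_hp_unignore_link_change(struct pci_dev *pdev)
>	{
>		if (atomic_dec_and_test(&pdev->link_change_ignores))
>			wake_up_all(&link_change_wq);	/* assumed wait queue */
>	}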
>
> Fixes: 665745f27487 ("PCI/bwctrl: Re-add BW notification portdrv as PCIe BW controller")
> Reported-by: Joel Mathew Thomas <proxy0(a)tutamail.com>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219765
> Signed-off-by: Lukas Wunner <lukas(a)wunner.de>
> Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
> Tested-by: Joel Mathew Thomas <proxy0(a)tutamail.com>
> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy(a)linux.intel.com>
> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen(a)linux.intel.com>
> Link: https://patch.msgid.link/d04deaf49d634a2edf42bf3c06ed81b4ca54d17b.174429823…
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Hello again,
This is a series of fixes from 6.11 for 6.1.y. It corresponds to the
6.6.y series here:
https://lore.kernel.org/linux-xfs/20241218191725.63098-1-catherine.hoang@or…
During porting, I noticed 6.1.y was missing a fix series from 6.5
that is a dependency of the fixes from 6.11, so I included those
first.
These were tested via the auto group on 9 configs with no regressions
seen. These were also already ack'd on the xfs-stable mailing list.
series from 6.5:
https://lore.kernel.org/linux-xfs/168506055189.3727958.722711918040129046.s…
63ef7a35912d xfs: fix interval filtering in multi-step fsmap queries
7975aba19cba xfs: fix integer overflows in the fsmap rtbitmap and logdev backends
d898137d789c xfs: fix getfsmap reporting past the last rt extent
f045dd00328d xfs: clean up the rtbitmap fsmap backend
a949a1c2a198 xfs: fix logdev fsmap query result filtering
3ee9351e7490 xfs: validate fsmap offsets specified in the query keys
75dc03453122 xfs: fix xfs_btree_query_range callers to initialize btree rec fully
fix of 63ef7a35912dd ("xfs: fix interval filtering in multi-step fsmap queries")
https://lore.kernel.org/linux-xfs/169335025661.3518128.12423331693506002020…
cfa2df68b7ce xfs: fix an agbno overflow in __xfs_getfsmap_datadev
6.6.y series of fixes from 6.11:
https://lore.kernel.org/linux-xfs/20241218191725.63098-1-catherine.hoang@or…
85d0947db262 xfs: fix the contact address for the sysfs ABI documentation
c08d03996cea xfs: verify buffer, inode, and dquot items every tx commit
ff627196ddc1 xfs: use consistent uid/gid when grabbing dquots for inodes
7531c9ab2e55 xfs: declare xfs_file.c symbols in xfs_file.h
c070b8802159 xfs: create a new helper to return a file's allocation unit
2e63ed9b0175 xfs: Fix xfs_flush_unmap_range() range for RT
fe962ab3c4f1 xfs: Fix xfs_prepare_shift() range for RT
ca96d83c9307 xfs: don't walk off the end of a directory data block
27336a327b40 xfs: remove unused parameter in macro XFS_DQUOT_LOGRES
b2dcbd8a928c xfs: attr forks require attr, not attr2
4a82db7a4b73 xfs: conditionally allow FS_XFLAG_REALTIME changes if S_DAX is set
9fadc53d793c xfs: Fix the owner setting issue for rmap query in xfs fsmap
35bd108619c2 xfs: use XFS_BUF_DADDR_NULL for daddrs in getfsmap code
29fcb5fef608 xfs: take m_growlock when running growfsrt
e5d1ae2d4d0b xfs: reset rootdir extent size hint after growfsrt
[skipped for 6.1 as scrub is not supported in 6.1:]
cb95cb2450e3 xfs: convert comma to semicolon
1bee32f33c0a xfs: fix file_path handling in tracepoints
- Leah
Christoph Hellwig (1):
xfs: fix the contact address for the sysfs ABI documentation
Darrick J. Wong (17):
xfs: fix interval filtering in multi-step fsmap queries
xfs: fix integer overflows in the fsmap rtbitmap and logdev backends
xfs: fix getfsmap reporting past the last rt extent
xfs: clean up the rtbitmap fsmap backend
xfs: fix logdev fsmap query result filtering
xfs: validate fsmap offsets specified in the query keys
xfs: fix xfs_btree_query_range callers to initialize btree rec fully
xfs: fix an agbno overflow in __xfs_getfsmap_datadev
xfs: verify buffer, inode, and dquot items every tx commit
xfs: use consistent uid/gid when grabbing dquots for inodes
xfs: declare xfs_file.c symbols in xfs_file.h
xfs: create a new helper to return a file's allocation unit
xfs: attr forks require attr, not attr2
xfs: conditionally allow FS_XFLAG_REALTIME changes if S_DAX is set
xfs: use XFS_BUF_DADDR_NULL for daddrs in getfsmap code
xfs: take m_growlock when running growfsrt
xfs: reset rootdir extent size hint after growfsrt
John Garry (2):
xfs: Fix xfs_flush_unmap_range() range for RT
xfs: Fix xfs_prepare_shift() range for RT
Julian Sun (1):
xfs: remove unused parameter in macro XFS_DQUOT_LOGRES
Zizhi Wo (1):
xfs: Fix the owner setting issue for rmap query in xfs fsmap
lei lu (1):
xfs: don't walk off the end of a directory data block
Documentation/ABI/testing/sysfs-fs-xfs | 8 +-
fs/xfs/Kconfig | 12 ++
fs/xfs/libxfs/xfs_alloc.c | 10 +-
fs/xfs/libxfs/xfs_dir2_data.c | 31 ++-
fs/xfs/libxfs/xfs_dir2_priv.h | 7 +
fs/xfs/libxfs/xfs_quota_defs.h | 2 +-
fs/xfs/libxfs/xfs_refcount.c | 13 +-
fs/xfs/libxfs/xfs_rmap.c | 10 +-
fs/xfs/libxfs/xfs_trans_resv.c | 28 +--
fs/xfs/scrub/bmap.c | 8 +-
fs/xfs/xfs.h | 4 +
fs/xfs/xfs_bmap_util.c | 22 +-
fs/xfs/xfs_buf_item.c | 32 +++
fs/xfs/xfs_dquot_item.c | 31 +++
fs/xfs/xfs_file.c | 33 ++-
fs/xfs/xfs_file.h | 15 ++
fs/xfs/xfs_fsmap.c | 266 ++++++++++++++-----------
fs/xfs/xfs_inode.c | 29 ++-
fs/xfs/xfs_inode.h | 2 +
fs/xfs/xfs_inode_item.c | 32 +++
fs/xfs/xfs_ioctl.c | 12 ++
fs/xfs/xfs_iops.c | 1 +
fs/xfs/xfs_iops.h | 3 -
fs/xfs/xfs_rtalloc.c | 78 ++++++--
fs/xfs/xfs_symlink.c | 8 +-
fs/xfs/xfs_trace.h | 25 +++
26 files changed, 505 insertions(+), 217 deletions(-)
create mode 100644 fs/xfs/xfs_file.h
--
2.50.0.rc1.591.g9c95f17f64-goog
Hi, stable team,
Please backport the following commits to 6.12.y:
1/ 239f11209e5f ("serial: sh-sci: Move runtime PM enable to
sci_probe_single()")
2/ 5f1017069933 ("serial: sh-sci: Clean sci_ports[0] after at earlycon
exit")
3/ 651dee03696e ("serial: sh-sci: Increment the runtime usage counter for
the earlycon device")
These apply cleanly on top of 6.12.y (if applied in the order provided
above) and fix the debug console on Renesas devices.
Thank you,
Claudiu Beznea
From: Sergio Perez Gonzalez <sperezglz(a)gmail.com>
[ Upstream commit 3920a758800762917177a6b5ab39707d8e376fe6 ]
Issue flagged by coverity. Add a safety check for the return value
of dma_set_mask_and_coherent, go to a safe exit if it returns error.
Link: https://scan7.scan.coverity.com/#/project-view/53936/11354?selectedIssue=16…
Signed-off-by: Sergio Perez Gonzalez <sperezglz(a)gmail.com>
Reviewed-by: Claudiu Beznea <claudiu.beznea(a)tuxon.dev>
Link: https://patch.msgid.link/20250526032034.84900-1-sperezglz@gmail.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
Based on my analysis of the commit and the codebase context, here's my
assessment:

**YES**

This commit should be backported to stable kernel trees. Here's my
extensive analysis:

## Code Analysis

The commit adds proper error checking for `dma_set_mask_and_coherent()`
in the macb network driver. Looking at the specific changes:

**Before the fix:**

```c
if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
	dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
	bp->hw_dma_cap |= HW_DMA_CAP_64B;
}
```

**After the fix:**

```c
if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
	if (err) {
		dev_err(&pdev->dev, "failed to set DMA mask\n");
		goto err_out_free_netdev;
	}
	bp->hw_dma_cap |= HW_DMA_CAP_64B;
}
```

## Why this should be backported:

1. **Fixes a Real Bug**: The original code ignores the return value of
   `dma_set_mask_and_coherent()`. If this function fails (which can
   happen on systems with restricted DMA capabilities), the driver would
   continue execution assuming 44-bit DMA addressing is available when
   it's not, leading to potential memory corruption or system crashes.

2. **Small, Contained Fix**: This is a minimal change that only adds
   proper error checking without changing any core logic or introducing
   new features. It follows the stable tree principle of being a small,
   targeted bugfix.

3. **Follows Established Patterns**: Looking at the similar commits
   provided, this matches exactly the pattern of commits marked "YES"
   for backporting (commits #1, #2, and #5) which all add proper error
   checking for DMA mask functions.

4. **Critical Network Driver**: The macb driver is used in production
   systems, and DMA-related bugs can cause data corruption or system
   instability, making this a high-priority fix for stable trees.

5. **Proper Error Handling Path**: The fix correctly uses the existing
   `err_out_free_netdev` label, which properly cleans up allocated
   resources (line 5373-5383 in macb_main.c), preventing resource leaks
   when DMA setup fails.

6. **Static Analysis Tool Flagged**: The commit message mentions this
   was flagged by Coverity, indicating it's a legitimate code quality
   issue that could manifest as a runtime bug.

7. **Matches Stable Tree Criteria**:
   - Important bugfix ✓
   - Minimal risk of regression ✓
   - No architectural changes ✓
   - Confined to one driver/subsystem ✓
   - Clear side effects (proper cleanup on failure) ✓

This commit addresses the same class of vulnerability as the other "YES"
examples: ignoring return values of critical DMA functions that can fail
and lead to undefined behavior. The fix is surgical, safe, and addresses
a genuine runtime issue in a widely-used network driver.
drivers/net/ethernet/cadence/macb_main.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index e1e8bd2ec155b..d1f1ae5ea161c 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5283,7 +5283,11 @@ static int macb_probe(struct platform_device *pdev)
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
- dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
+ err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
+ if (err) {
+ dev_err(&pdev->dev, "failed to set DMA mask\n");
+ goto err_out_free_netdev;
+ }
bp->hw_dma_cap |= HW_DMA_CAP_64B;
}
#endif
--
2.39.5
This patch series introduces equality-aware variants of the min heap
API that use a top-down heapify strategy to improve performance when
many elements are equal under the comparison function. It also updates
the documentation accordingly and modifies bcache to use the new APIs
to fix a performance regression caused by the switch to the generic min
heap library.
In particular, invalidate_buckets_lru() in bcache suffered from
increased comparison overhead due to the bottom-up strategy introduced
in commit 866898efbb25 ("bcache: remove heap-related macros and switch
to generic min_heap"). The regression is addressed by switching to the
equality-aware variants and using the inline versions to avoid function
call overhead in this hot path.
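For intuition, a minimal top-down sift-down in plain C (an illustrative
sketch, not the kernel's min_heap code): the descent stops as soon as
the parent is no greater than its smaller child, so runs of equal
elements terminate early, whereas the bottom-up strategy first walks all
the way down to a leaf before sifting back up.

	static void sift_down_topdown(int *heap, int n, int i)
	{
		while (2 * i + 1 < n) {
			int child = 2 * i + 1;
			int tmp;

			/* pick the smaller of the two children */
			if (child + 1 < n && heap[child + 1] < heap[child])
				child++;
			/* equal elements already satisfy the heap property */
			if (heap[i] <= heap[child])
				break;
			tmp = heap[i];
			heap[i] = heap[child];
			heap[child] = tmp;
			i = child;
		}
	}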
Cc: stable(a)vger.kernel.org
---
To avoid duplicated effort and expedite resolution, Robert kindly
agreed that I should submit my already-completed series instead. Many
thanks to him for his cooperation and support.
Kuan-Wei Chiu (8):
lib min_heap: Add equal-elements-aware sift_down variant
lib min_heap: Add typedef for sift_down function pointer
lib min_heap: add eqaware variant of min_heapify_all()
lib min_heap: add eqaware variant of min_heap_pop()
lib min_heap: add eqaware variant of min_heap_pop_push()
lib min_heap: add eqaware variant of min_heap_del()
Documentation/core-api: min_heap: Document _eqaware variants of
min-heap APIs
bcache: Fix the tail IO latency regression by using equality-aware min
heap API
Documentation/core-api/min_heap.rst | 20 +++++
drivers/md/bcache/alloc.c | 15 ++--
include/linux/min_heap.h | 131 +++++++++++++++++++++++-----
lib/min_heap.c | 23 +++--
4 files changed, 154 insertions(+), 35 deletions(-)
--
2.34.1
The patch titled
Subject: mm/huge_memory: don't ignore queried cachemode in vmf_insert_pfn_pud()
has been added to the -mm mm-unstable branch. Its filename is
mm-huge_memory-dont-ignore-queried-cachemode-in-vmf_insert_pfn_pud.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: David Hildenbrand <david(a)redhat.com>
Subject: mm/huge_memory: don't ignore queried cachemode in vmf_insert_pfn_pud()
Date: Fri, 13 Jun 2025 11:27:00 +0200
Patch series "mm/huge_memory: vmf_insert_folio_*() and
vmf_insert_pfn_pud() fixes", v3.
While working on improving vm_normal_page() and friends, I stumbled over
this issue: refcounted "normal" folios must not be marked using
pmd_special() / pud_special(). Otherwise, we're effectively telling the
system that these folios are not "normal", violating the rules we
documented for vm_normal_page().
Fortunately, there are not many pmd_special()/pud_special() users yet. So
far there doesn't seem to be serious damage.
Tested using the ndctl tests ("ndctl:dax" suite).
This patch (of 3):
We set up the cache mode but ... don't forward the updated pgprot to
insert_pfn_pud().
Only a problem on x86-64 PAT when mapping PFNs using PUDs that require a
special cachemode.
Fix it by using the proper pgprot where the cachemode was setup.
It is unclear in which configurations we would get the cachemode wrong:
through vfio seems possible. Getting cachemodes wrong is usually ...
bad. As the fix is easy, let's backport it to stable.
Identified by code inspection.
Link: https://lkml.kernel.org/r/20250613092702.1943533-1-david@redhat.com
Link: https://lkml.kernel.org/r/20250613092702.1943533-2-david@redhat.com
Fixes: 7b806d229ef1 ("mm: remove vmf_insert_pfn_xxx_prot() for huge page-table entries")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reviewed-by: Dan Williams <dan.j.williams(a)intel.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes(a)oracle.com>
Reviewed-by: Jason Gunthorpe <jgg(a)nvidia.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Tested-by: Dan Williams <dan.j.williams(a)intel.com>
Cc: Alistair Popple <apopple(a)nvidia.com>
Cc: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Cc: Dev Jain <dev.jain(a)arm.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Mariano Pache <npache(a)redhat.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Zi Yan <ziy(a)nvidia.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/huge_memory.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
--- a/mm/huge_memory.c~mm-huge_memory-dont-ignore-queried-cachemode-in-vmf_insert_pfn_pud
+++ a/mm/huge_memory.c
@@ -1516,10 +1516,9 @@ static pud_t maybe_pud_mkwrite(pud_t pud
}
static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, pfn_t pfn, bool write)
+ pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
{
struct mm_struct *mm = vma->vm_mm;
- pgprot_t prot = vma->vm_page_prot;
pud_t entry;
if (!pud_none(*pud)) {
@@ -1581,7 +1580,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_
pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
ptl = pud_lock(vma->vm_mm, vmf->pud);
- insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
+ insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
spin_unlock(ptl);
return VM_FAULT_NOPAGE;
@@ -1625,7 +1624,7 @@ vm_fault_t vmf_insert_folio_pud(struct v
add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
}
insert_pfn_pud(vma, addr, vmf->pud, pfn_to_pfn_t(folio_pfn(folio)),
- write);
+ vma->vm_page_prot, write);
spin_unlock(ptl);
return VM_FAULT_NOPAGE;
_
Patches currently in -mm which might be from david(a)redhat.com are
mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_longterm_locked.patch
mm-gup-remove-vm_bug_ons.patch
mm-gup-remove-vm_bug_ons-fix.patch
mm-huge_memory-dont-ignore-queried-cachemode-in-vmf_insert_pfn_pud.patch
mm-huge_memory-dont-mark-refcounted-folios-special-in-vmf_insert_folio_pmd.patch
mm-huge_memory-dont-mark-refcounted-folios-special-in-vmf_insert_folio_pud.patch
There's been a mistake when extracting the geometry of the W35N02 and
W35N04 chips from the datasheet. There is a single plane, but
respectively 2 and 4 LUNs. They are actually referred to in the
datasheet as dies (the equivalent of targets), but as there is no die
select operation and the chips only feature a single configuration
register for the entire chip (instead of one per die), we can reasonably
assume we are talking about LUNs and not dies.
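For reference, the NAND_MEMORG() argument order as I read it from
include/linux/mtd/nand.h (worth double-checking against the header); the
fix below moves the die count from the planes_per_lun slot to the
luns_per_target slot:

	NAND_MEMORG(bits_per_cell, pagesize, oobsize, pages_per_eraseblock,
		    eraseblocks_per_lun, max_bad_eraseblocks_per_lun,
		    planes_per_lun, luns_per_target, ntargets)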
Reported-by: Andreas Dannenberg <dannenberg(a)ti.com>
Suggested-by: Vignesh Raghavendra <vigneshr(a)ti.com>
Fixes: 25e08bf66660 ("mtd: spinand: winbond: Add support for W35N02JW and W35N04JW chips")
Cc: stable(a)vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal(a)bootlin.com>
---
drivers/mtd/nand/spi/winbond.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
index 19f8dd4a6370..2808bbd7a16e 100644
--- a/drivers/mtd/nand/spi/winbond.c
+++ b/drivers/mtd/nand/spi/winbond.c
@@ -289,7 +289,7 @@ static const struct spinand_info winbond_spinand_table[] = {
SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
SPINAND_INFO("W35N02JW", /* 1.8V */
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x22),
- NAND_MEMORG(1, 4096, 128, 64, 512, 10, 2, 1, 1),
+ NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 2, 1),
NAND_ECCREQ(1, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
&write_cache_octal_variants,
@@ -298,7 +298,7 @@ static const struct spinand_info winbond_spinand_table[] = {
SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
SPINAND_INFO("W35N04JW", /* 1.8V */
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x23),
- NAND_MEMORG(1, 4096, 128, 64, 512, 10, 4, 1, 1),
+ NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 4, 1),
NAND_ECCREQ(1, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
&write_cache_octal_variants,
--
2.48.1
In virtio-net, multi-buffer XDP packets are not yet supported in
zerocopy mode when an XDP program is attached. However, in that case,
when a multi-buffer XDP packet is received, we currently skip the XDP
program and return XDP_PASS. As a result, the packet is passed to the
normal network stack, which is incorrect behavior. This commit instead
returns XDP_DROP in that case.
Fixes: 99c861b44eb1 ("virtio_net: xsk: rx: support recv merge mode")
Cc: stable(a)vger.kernel.org
Signed-off-by: Bui Quang Minh <minhquangbui99(a)gmail.com>
---
drivers/net/virtio_net.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e53ba600605a..4c35324d6e5b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1309,9 +1309,14 @@ static struct sk_buff *virtnet_receive_xsk_merge(struct net_device *dev, struct
ret = XDP_PASS;
rcu_read_lock();
prog = rcu_dereference(rq->xdp_prog);
- /* TODO: support multi buffer. */
- if (prog && num_buf == 1)
- ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats);
+ if (prog) {
+ /* TODO: support multi buffer. */
+ if (num_buf == 1)
+ ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit,
+ stats);
+ else
+ ret = XDP_DROP;
+ }
rcu_read_unlock();
switch (ret) {
--
2.43.0