From: yangge <yangge1116(a)126.com>
If a large amount of CMA memory is configured in the system (for example,
CMA accounts for 50% of system memory), starting a virtual machine with
device passthrough will call
pin_user_pages_remote(..., FOLL_LONGTERM, ...) to pin memory.
Normally, if a page is present and located in a CMA area,
pin_user_pages_remote() migrates the page from the CMA area to a non-CMA
area because of the FOLL_LONGTERM flag. But the current code can make this
migration fail due to unexpected page refcounts, which eventually causes
the virtual machine to fail to start.
When a page is added to an LRU batch its refcount is increased by one, and
removing it from the LRU batch decreases the refcount by one again. Page
migration requires that the page not be referenced by anything other than
its page mappings. Before migrating a page we should therefore try to drain
it from the LRU batches in case it is sitting in one, but folio_test_lru()
is not sufficient to tell whether the page is in an LRU batch or not; if
the page is still in an LRU batch, the migration will fail.
To solve this, change the logic for adding a page to an LRU batch: clear
the page's LRU flag before adding it, so that folio_test_lru(page) can be
used to check whether the page is currently sitting in an LRU batch. This
is quite valuable, because we likely do not want to blindly drain the LRU
batches simply because there is some unexpected reference on a page, as
described above.
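As a rough consumer-side illustration (not part of this patch; the wrapper
name is made up), a migration path can now use folio_test_lru() to decide
whether draining the per-CPU batches is needed at all, instead of draining
blindly on every unexpected reference:

  /*
   * Hypothetical sketch only: once the LRU flag is cleared while a folio
   * sits in a per-CPU LRU batch, a caller about to migrate the folio can
   * use folio_test_lru() as the "is it hiding in a batch?" hint.
   */
  static void maybe_drain_before_migrate(struct folio *folio)
  {
          if (!folio_test_lru(folio))
                  lru_add_drain_all();    /* folio may be parked in a per-CPU batch */
  }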
This change keeps the LRU flag of a page invisible for longer, which may
impact some callers. For example, as long as a page is in an LRU batch we
cannot isolate it, and we cannot check whether it is an LRU page. Further,
a page can now only be in exactly one LRU batch at a time. This does not
seem to matter much, because when a new page is allocated from the buddy
allocator and added to an LRU batch, or when a page is isolated, its LRU
flag may also stay invisible for a long time.
Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: yangge <yangge1116(a)126.com>
---
mm/swap.c | 43 +++++++++++++++++++++++++++++++------------
1 file changed, 31 insertions(+), 12 deletions(-)
V4:
Adjust commit message according to David's comments
V3:
Add fixes tag
V2:
Adjust code and commit message according to David's comments
diff --git a/mm/swap.c b/mm/swap.c
index dc205bd..9caf6b0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -211,10 +211,6 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
for (i = 0; i < folio_batch_count(fbatch); i++) {
struct folio *folio = fbatch->folios[i];
- /* block memcg migration while the folio moves between lru */
- if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
- continue;
-
folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
move_fn(lruvec, folio);
@@ -255,11 +251,16 @@ static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
void folio_rotate_reclaimable(struct folio *folio)
{
if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
- !folio_test_unevictable(folio) && folio_test_lru(folio)) {
+ !folio_test_unevictable(folio)) {
struct folio_batch *fbatch;
unsigned long flags;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock_irqsave(&lru_rotate.lock, flags);
fbatch = this_cpu_ptr(&lru_rotate.fbatch);
folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
@@ -352,11 +353,15 @@ static void folio_activate_drain(int cpu)
void folio_activate(struct folio *folio)
{
- if (folio_test_lru(folio) && !folio_test_active(folio) &&
- !folio_test_unevictable(folio)) {
+ if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
struct folio_batch *fbatch;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.activate);
folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
@@ -700,6 +705,11 @@ void deactivate_file_folio(struct folio *folio)
return;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate_file);
folio_batch_add_and_move(fbatch, folio, lru_deactivate_file_fn);
@@ -716,11 +726,16 @@ void deactivate_file_folio(struct folio *folio)
*/
void folio_deactivate(struct folio *folio)
{
- if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
- (folio_test_active(folio) || lru_gen_enabled())) {
+ if (!folio_test_unevictable(folio) && (folio_test_active(folio) ||
+ lru_gen_enabled())) {
struct folio_batch *fbatch;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
@@ -737,12 +752,16 @@ void folio_deactivate(struct folio *folio)
*/
void folio_mark_lazyfree(struct folio *folio)
{
- if (folio_test_lru(folio) && folio_test_anon(folio) &&
- folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
- !folio_test_unevictable(folio)) {
+ if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
+ !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
struct folio_batch *fbatch;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);
--
2.7.4
In the function "wled_probe", the "wled->name" is dynamically allocated
(wled_probe -> wled_configure -> devm_kasprintf), which is possible
to be null.
In the call trace: wled_probe -> devm_backlight_device_register
-> backlight_device_register, this "name" variable is directly
dereferenced without checking. We add a null-check statement.
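For reference, the allocation in the wled driver looks roughly like the
sketch below (illustrative only; the exact format string and variable names
are assumptions). An equivalent fix would be to check the devm_kasprintf()
return value at the call site, but adding the check in
backlight_device_register() covers all callers:

  /* Illustrative caller-side check, not part of this patch: */
  wled->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node);
  if (!wled->name)
          return -ENOMEM; /* never hand a NULL name to backlight_device_register() */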
Fixes: f86b77583d88 ("backlight: pm8941: Convert to using %pOFn instead of device_node.name")
Signed-off-by: Haoyu Li <lihaoyu499(a)gmail.com>
Cc: stable(a)vger.kernel.org
---
drivers/video/backlight/backlight.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/video/backlight/backlight.c b/drivers/video/backlight/backlight.c
index f699e5827ccb..b21670bd86de 100644
--- a/drivers/video/backlight/backlight.c
+++ b/drivers/video/backlight/backlight.c
@@ -414,6 +414,8 @@ struct backlight_device *backlight_device_register(const char *name,
struct backlight_device *new_bd;
int rc;
+ if (!name)
+ return ERR_PTR(-EINVAL);
pr_debug("backlight_device_register: name=%s\n", name);
new_bd = kzalloc(sizeof(struct backlight_device), GFP_KERNEL);
--
2.34.1
When I converted rk808 to device-managed resources, I converted the
rk808-specific pm_power_off handler to devm_register_sys_off_handler()
using SYS_OFF_MODE_POWER_OFF_PREPARE, which is allowed to sleep. I did this
because the driver's poweroff function makes use of regmap, whose backend
might sleep.
But the PMIC poweroff function kills the board power, and the kernel still
performs some extra steps after the prepare handler, so the prepare handler
should not be used for the PMIC's poweroff routine. Instead, the normal
SYS_OFF_MODE_POWER_OFF phase should be used. The old pm_power_off method is
also called from there, so this would have been a cleaner conversion
anyway.
It still makes sense to investigate the sleep handling and check for
issues. It turns out the Rockchip and Meson I2C drivers (the only platforms
using the PMICs handled by this driver) both support atomic transfers and
can thus be called from the proper poweroff context.
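For context, an I2C bus driver opts in to such atomic transfers by
providing the optional master_xfer_atomic callback in its
struct i2c_algorithm; a minimal sketch with made-up driver functions:

  #include <linux/i2c.h>

  /* Hypothetical bus-driver stubs, shown only to illustrate the callback. */
  static int example_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num);
  static int example_xfer_atomic(struct i2c_adapter *adap, struct i2c_msg *msgs, int num);
  static u32 example_func(struct i2c_adapter *adap);

  static const struct i2c_algorithm example_i2c_algo = {
          .master_xfer        = example_xfer,        /* normal path, may sleep */
          .master_xfer_atomic = example_xfer_atomic, /* polled, usable with IRQs off */
          .functionality      = example_func,
  };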
Things are different on the SPI side. It is so far only used by the rk806,
which in turn is only used on the Rockchip RK3588. Unfortunately, the
Rockchip SPI driver does not support atomic transfers. That means using the
normal POWER_OFF handler would introduce the following error splat during
shutdown on all RK3588 boards currently supported upstream:
[ 13.761353] ------------[ cut here ]------------
[ 13.761764] Voluntary context switch within RCU read-side critical section!
[ 13.761776] WARNING: CPU: 0 PID: 1 at kernel/rcu/tree_plugin.h:330 rcu_note_context_switch+0x3ac/0x404
[ 13.763219] Modules linked in:
[ 13.763498] CPU: 0 UID: 0 PID: 1 Comm: systemd-shutdow Not tainted 6.10.0-12284-g2818a9a19514 #1499
[ 13.764297] Hardware name: Rockchip RK3588 EVB1 V10 Board (DT)
[ 13.764812] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 13.765427] pc : rcu_note_context_switch+0x3ac/0x404
[ 13.765871] lr : rcu_note_context_switch+0x3ac/0x404
[ 13.766314] sp : ffff800084f4b5b0
[ 13.766609] x29: ffff800084f4b5b0 x28: ffff00040139b800 x27: 00007dfb4439ae80
[ 13.767245] x26: ffff00040139bc80 x25: 0000000000000000 x24: ffff800082118470
[ 13.767880] x23: 0000000000000000 x22: ffff000400300000 x21: ffff000400300000
[ 13.768515] x20: ffff800083a9d600 x19: ffff0004fee48600 x18: fffffffffffed448
[ 13.769151] x17: 000000040044ffff x16: 005000f2b5503510 x15: 0000000000000048
[ 13.769787] x14: fffffffffffed490 x13: ffff80008473b3c0 x12: 0000000000000900
[ 13.770421] x11: 0000000000000300 x10: ffff800084797bc0 x9 : ffff80008473b3c0
[ 13.771057] x8 : 00000000ffffefff x7 : ffff8000847933c0 x6 : 0000000000000300
[ 13.771692] x5 : 0000000000000301 x4 : 40000000fffff300 x3 : 0000000000000000
[ 13.772328] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff000400300000
[ 13.772964] Call trace:
[ 13.773184] rcu_note_context_switch+0x3ac/0x404
[ 13.773598] __schedule+0x94/0xb0c
[ 13.773907] schedule+0x34/0x104
[ 13.774198] schedule_timeout+0x84/0xfc
[ 13.774544] wait_for_completion_timeout+0x78/0x14c
[ 13.774980] spi_transfer_one_message+0x588/0x690
[ 13.775403] __spi_pump_transfer_message+0x19c/0x4ec
[ 13.775846] __spi_sync+0x2a8/0x3c4
[ 13.776161] spi_write_then_read+0x120/0x208
[ 13.776543] rk806_spi_bus_read+0x54/0x88
[ 13.776905] _regmap_raw_read+0xec/0x16c
[ 13.777257] _regmap_bus_read+0x44/0x7c
[ 13.777601] _regmap_read+0x60/0xd8
[ 13.777915] _regmap_update_bits+0xf4/0x13c
[ 13.778289] regmap_update_bits_base+0x64/0x98
[ 13.778686] rk808_power_off+0x70/0xfc
[ 13.779024] sys_off_notify+0x40/0x6c
[ 13.779356] atomic_notifier_call_chain+0x60/0x90
[ 13.779776] do_kernel_power_off+0x54/0x6c
[ 13.780146] machine_power_off+0x18/0x24
[ 13.780499] kernel_power_off+0x70/0x7c
[ 13.780845] __do_sys_reboot+0x210/0x270
[ 13.781198] __arm64_sys_reboot+0x24/0x30
[ 13.781558] invoke_syscall+0x48/0x10c
[ 13.781897] el0_svc_common+0x3c/0xe8
[ 13.782228] do_el0_svc+0x20/0x2c
[ 13.782528] el0_svc+0x34/0xd8
[ 13.782806] el0t_64_sync_handler+0x120/0x12c
[ 13.783197] el0t_64_sync+0x190/0x194
[ 13.783527] ---[ end trace 0000000000000000 ]---
To avoid this, keep the SYS_OFF_MODE_POWER_OFF_PREPARE handler for the SPI
backend. This is not great, but it at least avoids regressions, and the fix
should be small enough to allow backporting.
As a side effect, this also works around a shutdown problem on the Asus
C201. For reasons unknown, it skips calling the prepare handler and
directly calls the final shutdown handler.
Fixes: 4fec8a5a85c49 ("mfd: rk808: Convert to device managed resources")
Cc: stable(a)vger.kernel.org
Reported-by: Urja <urja(a)urja.dev>
Signed-off-by: Sebastian Reichel <sebastian.reichel(a)collabora.com>
---
drivers/mfd/rk8xx-core.c | 15 +++++++++++++--
drivers/mfd/rk8xx-i2c.c | 2 +-
drivers/mfd/rk8xx-spi.c | 2 +-
include/linux/mfd/rk808.h | 2 +-
4 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/drivers/mfd/rk8xx-core.c b/drivers/mfd/rk8xx-core.c
index 5eda3c0dbbdf..757ef8181328 100644
--- a/drivers/mfd/rk8xx-core.c
+++ b/drivers/mfd/rk8xx-core.c
@@ -692,10 +692,11 @@ void rk8xx_shutdown(struct device *dev)
}
EXPORT_SYMBOL_GPL(rk8xx_shutdown);
-int rk8xx_probe(struct device *dev, int variant, unsigned int irq, struct regmap *regmap)
+int rk8xx_probe(struct device *dev, int variant, unsigned int irq, struct regmap *regmap, bool is_spi)
{
struct rk808 *rk808;
const struct rk808_reg_data *pre_init_reg;
+ enum sys_off_mode pwr_off_mode = SYS_OFF_MODE_POWER_OFF;
const struct mfd_cell *cells;
int dual_support = 0;
int nr_pre_init_regs;
@@ -785,10 +786,20 @@ int rk8xx_probe(struct device *dev, int variant, unsigned int irq, struct regmap
if (ret)
return dev_err_probe(dev, ret, "failed to add MFD devices\n");
+ /*
+ * Currently the Rockchip SPI driver always sleeps when doing SPI
+ * transfers. This is not allowed in the SYS_OFF_MODE_POWER_OFF
+ * handler, so we are using the prepare handler as a workaround.
+ * This should be removed once the Rockchip SPI driver has been
+ * adapted.
+ */
+ if (is_spi)
+ pwr_off_mode = SYS_OFF_MODE_POWER_OFF_PREPARE;
+
if (device_property_read_bool(dev, "rockchip,system-power-controller") ||
device_property_read_bool(dev, "system-power-controller")) {
ret = devm_register_sys_off_handler(dev,
- SYS_OFF_MODE_POWER_OFF_PREPARE, SYS_OFF_PRIO_HIGH,
+ pwr_off_mode, SYS_OFF_PRIO_HIGH,
&rk808_power_off, rk808);
if (ret)
return dev_err_probe(dev, ret,
diff --git a/drivers/mfd/rk8xx-i2c.c b/drivers/mfd/rk8xx-i2c.c
index 69a6b297d723..a2029decd654 100644
--- a/drivers/mfd/rk8xx-i2c.c
+++ b/drivers/mfd/rk8xx-i2c.c
@@ -189,7 +189,7 @@ static int rk8xx_i2c_probe(struct i2c_client *client)
return dev_err_probe(&client->dev, PTR_ERR(regmap),
"regmap initialization failed\n");
- return rk8xx_probe(&client->dev, data->variant, client->irq, regmap);
+ return rk8xx_probe(&client->dev, data->variant, client->irq, regmap, false);
}
static void rk8xx_i2c_shutdown(struct i2c_client *client)
diff --git a/drivers/mfd/rk8xx-spi.c b/drivers/mfd/rk8xx-spi.c
index 3405fb82ff9f..20f9428f94bb 100644
--- a/drivers/mfd/rk8xx-spi.c
+++ b/drivers/mfd/rk8xx-spi.c
@@ -94,7 +94,7 @@ static int rk8xx_spi_probe(struct spi_device *spi)
return dev_err_probe(&spi->dev, PTR_ERR(regmap),
"Failed to init regmap\n");
- return rk8xx_probe(&spi->dev, RK806_ID, spi->irq, regmap);
+ return rk8xx_probe(&spi->dev, RK806_ID, spi->irq, regmap, true);
}
static const struct of_device_id rk8xx_spi_of_match[] = {
diff --git a/include/linux/mfd/rk808.h b/include/linux/mfd/rk808.h
index 69cbea78b430..be15b84cff9e 100644
--- a/include/linux/mfd/rk808.h
+++ b/include/linux/mfd/rk808.h
@@ -1349,7 +1349,7 @@ struct rk808 {
};
void rk8xx_shutdown(struct device *dev);
-int rk8xx_probe(struct device *dev, int variant, unsigned int irq, struct regmap *regmap);
+int rk8xx_probe(struct device *dev, int variant, unsigned int irq, struct regmap *regmap, bool is_spi);
int rk8xx_suspend(struct device *dev);
int rk8xx_resume(struct device *dev);
--
2.43.0
During High-Speed isochronous audio transfers, the xHCI
controller on certain AMD platforms experiences momentary data
loss. This results in Missed Service Errors (MSEs) being
generated by the xHCI controller.
The root cause of the MSEs is the ISOC OUT endpoint being
omitted from scheduling. This can happen either when an IN
endpoint with a 64 ms service interval is scheduled ahead of
the ISOC OUT endpoint, or when the interval of the ISOC OUT
endpoint is shorter than that of the IN endpoint. In either
case, the OUT service is neglected whenever an IN endpoint with
a service interval exceeding 32 ms is scheduled concurrently
(every 64 ms in this scenario).
This issue is particularly seen on certain older AMD platforms.
To mitigate it, limit the service interval of the IN endpoint
to at most 32 ms (interval 8). This ensures that the OUT
endpoint is not bypassed, even if a smaller interval value is
used.
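For reference, the endpoint context encodes the period as
2^interval * 125 us, so interval 9 corresponds to 64 ms and the clamp to
interval 8 yields 32 ms; a throwaway sketch of that arithmetic:

  /* Sketch only: xHCI endpoint-context interval encoding. */
  static unsigned int xhci_interval_to_us(unsigned int interval)
  {
          return (1U << interval) * 125;  /* interval 8 -> 32000 us, 9 -> 64000 us */
  }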
Cc: stable(a)vger.kernel.org
Signed-off-by: Raju Rangoju <Raju.Rangoju(a)amd.com>
---
Changes since v3:
- Bump up the enum number XHCI_LIMIT_ENDPOINT_INTERVAL_9
Changes since v2:
- added stable tag to backport to all stable kernels
Changes since v1:
- replaced hex values with pci device names
- corrected the commit message
drivers/usb/host/xhci-mem.c | 5 +++++
drivers/usb/host/xhci-pci.c | 25 +++++++++++++++++++++++++
drivers/usb/host/xhci.h | 1 +
3 files changed, 31 insertions(+)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 92703efda1f7..d3182ba98788 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1420,6 +1420,11 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
/* Periodic endpoint bInterval limit quirk */
if (usb_endpoint_xfer_int(&ep->desc) ||
usb_endpoint_xfer_isoc(&ep->desc)) {
+ if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_9) &&
+ usb_endpoint_xfer_int(&ep->desc) &&
+ interval >= 9) {
+ interval = 8;
+ }
if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_7) &&
udev->speed >= USB_SPEED_HIGH &&
interval >= 7) {
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 2d1e205c14c6..d23884afdf3f 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -69,12 +69,22 @@
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI 0x15ec
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI 0x15f0
+#define PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI 0x13ed
+#define PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI 0x13ee
+#define PCI_DEVICE_ID_AMD_STARSHIP_XHCI 0x148c
+#define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI 0x15d4
+#define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI 0x15d5
+#define PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI 0x15e0
+#define PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI 0x15e1
+#define PCI_DEVICE_ID_AMD_RAVEN2_XHCI 0x15e5
#define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639
#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
#define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
#define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb
#define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc
+#define PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI 0x7316
+
#define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
#define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
#define PCI_DEVICE_ID_ASMEDIA_1142_XHCI 0x1242
@@ -278,6 +288,21 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
if (pdev->vendor == PCI_VENDOR_ID_NEC)
xhci->quirks |= XHCI_NEC_HOST;
+ if (pdev->vendor == PCI_VENDOR_ID_AMD &&
+ (pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_STARSHIP_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI ||
+ pdev->device == PCI_DEVICE_ID_AMD_RAVEN2_XHCI))
+ xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ATI &&
+ pdev->device == PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI)
+ xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9;
+
if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version == 0x96)
xhci->quirks |= XHCI_AMD_0x96_HOST;
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 4914f0a10cff..36b77d3c0e7b 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1633,6 +1633,7 @@ struct xhci_hcd {
#define XHCI_WRITE_64_HI_LO BIT_ULL(47)
#define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
#define XHCI_ETRON_HOST BIT_ULL(49)
+#define XHCI_LIMIT_ENDPOINT_INTERVAL_9 BIT_ULL(50)
unsigned int num_active_eps;
unsigned int limit_active_eps;
--
2.34.1
From: Kairui Song <kasong(a)tencent.com>
This series fixes the page cache corruption issue reported by Christian
Theune [1]. The issue is reported to affect kernels back to 5.19.
The currently maintained affected branches are 6.1 and 6.6; the fix is
already included in 6.10.
This series can be applied to both 6.1 and 6.6.
Patch 3/3 is the actual fix. It was initially submitted and merged as
an optimization, but was found to fix the corruption by handling the race
correctly.
Patches 1/3 and 2/3 are required for 3/3.
Patch 3/3 includes some unit-test code, which makes the LOC of the backport
a bit higher, but it should be fine to keep, since it is just test code.
Note that there still seems to be some unresolved problem in link [1], but
that should be a different issue; the commits being backported have been
well tested and fix the corruption issue just fine.
Link: https://lore.kernel.org/linux-mm/A5A976CB-DB57-4513-A700-656580488AB6@flyin… [1]
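For reviewers unfamiliar with the helper introduced in patch 2/3:
xas_get_order() returns the order of the entry at the current xa_state
position without re-walking the tree. A rough usage sketch (the wrapper
below is hypothetical and assumes the int xas_get_order(struct xa_state *)
signature from this series):

  /* Hypothetical wrapper, for illustration only. */
  static unsigned int entry_order_at(struct xarray *xa, unsigned long index)
  {
          XA_STATE(xas, xa, index);
          unsigned int order = 0;

          rcu_read_lock();
          if (xas_load(&xas))
                  order = xas_get_order(&xas);
          rcu_read_unlock();

          return order;
  }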
Kairui Song (3):
mm/filemap: return early if failed to allocate memory for split
lib/xarray: introduce a new helper xas_get_order
mm/filemap: optimize filemap folio adding
include/linux/xarray.h | 6 +++
lib/test_xarray.c | 93 ++++++++++++++++++++++++++++++++++++++++++
lib/xarray.c | 49 ++++++++++++++--------
mm/filemap.c | 50 ++++++++++++++++++-----
4 files changed, 169 insertions(+), 29 deletions(-)
--
2.46.1