The patch below does not apply to the 6.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.14.y
git checkout FETCH_HEAD
git cherry-pick -x 90abee6d7895d5eef18c91d870d8168be4e76e9d
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2025042149-busily-amaretto-e684@gregkh' --subject-prefix 'PATCH 6.14.y' HEAD^..
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 90abee6d7895d5eef18c91d870d8168be4e76e9d Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes(a)cmpxchg.org>
Date: Mon, 7 Apr 2025 14:01:53 -0400
Subject: [PATCH] mm: page_alloc: speed up fallbacks in rmqueue_bulk()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The test robot identified c2f6ea38fc1b ("mm: page_alloc: don't steal
single pages from biggest buddy") as the root cause of a 56.4% regression
in vm-scalability::lru-file-mmap-read.
Carlos reports an earlier patch, c0cd6f557b90 ("mm: page_alloc: fix
freelist movement during block conversion"), as the root cause of a
regression in worst-case zone->lock+irqoff hold times.
Both of these patches modify the page allocator's fallback path to be less
greedy in an effort to stave off fragmentation. The flip side of this is
that fallbacks are also less productive each time around, which means the
fallback search can run much more frequently.
Carlos' traces point to rmqueue_bulk() specifically, which tries to refill
the percpu cache by allocating a large batch of pages in a loop. It
highlights how once the native freelists are exhausted, the fallback code
first scans orders top-down for whole blocks to claim, then falls back to
a bottom-up search for the smallest buddy to steal. For the next page in
the batch, it runs through the same search all over again.
This can be made more efficient. Since rmqueue_bulk() holds the
zone->lock over the entire batch, the freelists are not subject to outside
changes; when the search for a block to claim has already failed, there is
no point in trying again for the next page.
Modify __rmqueue() to remember the last successful fallback mode, and
restart directly from there on the next rmqueue_bulk() iteration.
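In outline, the scheme looks like the following (a simplified,
hypothetical sketch in kernel-style C -- take_smallest(),
claim_whole_block() and steal_single_page() are stand-ins for the real
helpers, the CMA case is omitted for brevity, and the actual
implementation is in the patch below):

enum fallback_mode { MODE_NORMAL, MODE_CLAIM, MODE_STEAL };

struct zone;
struct page;

/* Stand-ins for __rmqueue_smallest(), __rmqueue_claim(), __rmqueue_steal() */
extern struct page *take_smallest(struct zone *zone, int order, int mt);
extern struct page *claim_whole_block(struct zone *zone, int order, int mt);
extern struct page *steal_single_page(struct zone *zone, int order, int mt);

/* Called repeatedly under zone->lock; *mode persists across the batch. */
static struct page *alloc_one(struct zone *zone, int order, int mt,
                              enum fallback_mode *mode)
{
        struct page *page;

        switch (*mode) {
        case MODE_NORMAL:
                page = take_smallest(zone, order, mt);
                if (page)
                        return page;
                /* fall through: native freelists are empty */
        case MODE_CLAIM:
                page = claim_whole_block(zone, order, mt);
                if (page) {
                        *mode = MODE_NORMAL;    /* preferred list refilled */
                        return page;
                }
                /* fall through: no whole block could be claimed */
        case MODE_STEAL:
                page = steal_single_page(zone, order, mt);
                if (page) {
                        *mode = MODE_STEAL;     /* skip the futile scans */
                        return page;
                }
        }
        return NULL;
}

Because the lock is held for the whole batch, a claim search that failed
once cannot start succeeding mid-batch, so resuming in the cached mode
only skips work that is guaranteed to fail.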
Oliver confirms that this improves beyond the regression that the test
robot reported against c2f6ea38fc1b:
commit:
f3b92176f4 ("tools/selftests: add guard region test for /proc/$pid/pagemap")
c2f6ea38fc ("mm: page_alloc: don't steal single pages from biggest buddy")
acc4d5ff0b ("Merge tag 'net-6.15-rc0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")
2c847f27c3 ("mm: page_alloc: speed up fallbacks in rmqueue_bulk()") <--- your patch
f3b92176f4f7100f c2f6ea38fc1b640aa7a2e155cc1 acc4d5ff0b61eb1715c498b6536 2c847f27c37da65a93d23c237c5
---------------- --------------------------- --------------------------- ---------------------------
         %stddev     %change        %stddev     %change        %stddev     %change        %stddev
             \          |                \          |                \          |                \
  25525364 ± 3%      -56.4%   11135467           -57.8%   10779336           +31.6%   33581409        vm-scalability.throughput
Carlos confirms that worst-case times are almost fully recovered
compared to before the earlier culprit patch:
  2dd482ba627d (before freelist hygiene):    1ms
  c0cd6f557b90  (after freelist hygiene):   90ms
  next-20250319   (steal smallest buddy): 280ms
  this patch                            :    8ms
[jackmanb(a)google.com: comment updates]
Link: https://lkml.kernel.org/r/D92AC0P9594X.3BML64MUKTF8Z@google.com
[hannes(a)cmpxchg.org: reset rmqueue_mode in rmqueue_buddy() error loop, per Yunsheng Lin]
Link: https://lkml.kernel.org/r/20250409140023.GA2313@cmpxchg.org
Link: https://lkml.kernel.org/r/20250407180154.63348-1-hannes@cmpxchg.org
Fixes: c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block conversion")
Fixes: c2f6ea38fc1b ("mm: page_alloc: don't steal single pages from biggest buddy")
Signed-off-by: Johannes Weiner <hannes(a)cmpxchg.org>
Signed-off-by: Brendan Jackman <jackmanb(a)google.com>
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Reported-by: Carlos Song <carlos.song(a)nxp.com>
Tested-by: Carlos Song <carlos.song(a)nxp.com>
Tested-by: kernel test robot <oliver.sang(a)intel.com>
Closes: https://lore.kernel.org/oe-lkp/202503271547.fc08b188-lkp@intel.com
Reviewed-by: Brendan Jackman <jackmanb(a)google.com>
Tested-by: Shivank Garg <shivankg(a)amd.com>
Acked-by: Zi Yan <ziy(a)nvidia.com>
Reviewed-by: Vlastimil Babka <vbabka(a)suse.cz>
Cc: <stable(a)vger.kernel.org> [6.10+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9a219fe8e130..1715e34b91af 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2183,23 +2183,15 @@ try_to_claim_block(struct zone *zone, struct page *page,
}
/*
- * Try finding a free buddy page on the fallback list.
- *
- * This will attempt to claim a whole pageblock for the requested type
- * to ensure grouping of such requests in the future.
- *
- * If a whole block cannot be claimed, steal an individual page, regressing to
- * __rmqueue_smallest() logic to at least break up as little contiguity as
- * possible.
+ * Try to allocate from some fallback migratetype by claiming the entire block,
+ * i.e. converting it to the allocation's start migratetype.
*
* The use of signed ints for order and current_order is a deliberate
* deviation from the rest of this file, to make the for loop
* condition simpler.
- *
- * Return the stolen page, or NULL if none can be found.
*/
static __always_inline struct page *
-__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
+__rmqueue_claim(struct zone *zone, int order, int start_migratetype,
unsigned int alloc_flags)
{
struct free_area *area;
@@ -2237,14 +2229,29 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
page = try_to_claim_block(zone, page, current_order, order,
start_migratetype, fallback_mt,
alloc_flags);
- if (page)
- goto got_one;
+ if (page) {
+ trace_mm_page_alloc_extfrag(page, order, current_order,
+ start_migratetype, fallback_mt);
+ return page;
+ }
}
- if (alloc_flags & ALLOC_NOFRAGMENT)
- return NULL;
+ return NULL;
+}
+
+/*
+ * Try to steal a single page from some fallback migratetype. Leave the rest of
+ * the block as its current migratetype, potentially causing fragmentation.
+ */
+static __always_inline struct page *
+__rmqueue_steal(struct zone *zone, int order, int start_migratetype)
+{
+ struct free_area *area;
+ int current_order;
+ struct page *page;
+ int fallback_mt;
+ bool claim_block;
- /* No luck claiming pageblock. Find the smallest fallback page */
for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
area = &(zone->free_area[current_order]);
fallback_mt = find_suitable_fallback(area, current_order,
@@ -2254,25 +2261,28 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
page = get_page_from_free_area(area, fallback_mt);
page_del_and_expand(zone, page, order, current_order, fallback_mt);
- goto got_one;
+ trace_mm_page_alloc_extfrag(page, order, current_order,
+ start_migratetype, fallback_mt);
+ return page;
}
return NULL;
-
-got_one:
- trace_mm_page_alloc_extfrag(page, order, current_order,
- start_migratetype, fallback_mt);
-
- return page;
}
+enum rmqueue_mode {
+ RMQUEUE_NORMAL,
+ RMQUEUE_CMA,
+ RMQUEUE_CLAIM,
+ RMQUEUE_STEAL,
+};
+
/*
* Do the hard work of removing an element from the buddy allocator.
* Call me with the zone->lock already held.
*/
static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype,
- unsigned int alloc_flags)
+ unsigned int alloc_flags, enum rmqueue_mode *mode)
{
struct page *page;
@@ -2291,16 +2301,48 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
}
}
- page = __rmqueue_smallest(zone, order, migratetype);
- if (unlikely(!page)) {
- if (alloc_flags & ALLOC_CMA)
+ /*
+ * First try the freelists of the requested migratetype, then try
+ * fallbacks modes with increasing levels of fragmentation risk.
+ *
+ * The fallback logic is expensive and rmqueue_bulk() calls in
+ * a loop with the zone->lock held, meaning the freelists are
+ * not subject to any outside changes. Remember in *mode where
+ * we found pay dirt, to save us the search on the next call.
+ */
+ switch (*mode) {
+ case RMQUEUE_NORMAL:
+ page = __rmqueue_smallest(zone, order, migratetype);
+ if (page)
+ return page;
+ fallthrough;
+ case RMQUEUE_CMA:
+ if (alloc_flags & ALLOC_CMA) {
page = __rmqueue_cma_fallback(zone, order);
-
- if (!page)
- page = __rmqueue_fallback(zone, order, migratetype,
- alloc_flags);
+ if (page) {
+ *mode = RMQUEUE_CMA;
+ return page;
+ }
+ }
+ fallthrough;
+ case RMQUEUE_CLAIM:
+ page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
+ if (page) {
+ /* Replenished preferred freelist, back to normal mode. */
+ *mode = RMQUEUE_NORMAL;
+ return page;
+ }
+ fallthrough;
+ case RMQUEUE_STEAL:
+ if (!(alloc_flags & ALLOC_NOFRAGMENT)) {
+ page = __rmqueue_steal(zone, order, migratetype);
+ if (page) {
+ *mode = RMQUEUE_STEAL;
+ return page;
+ }
+ }
}
- return page;
+ return NULL;
}
/*
@@ -2312,6 +2354,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
unsigned long count, struct list_head *list,
int migratetype, unsigned int alloc_flags)
{
+ enum rmqueue_mode rmqm = RMQUEUE_NORMAL;
unsigned long flags;
int i;
@@ -2323,7 +2366,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
}
for (i = 0; i < count; ++i) {
struct page *page = __rmqueue(zone, order, migratetype,
- alloc_flags);
+ alloc_flags, &rmqm);
if (unlikely(page == NULL))
break;
@@ -2948,7 +2991,9 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
if (alloc_flags & ALLOC_HIGHATOMIC)
page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
if (!page) {
- page = __rmqueue(zone, order, migratetype, alloc_flags);
+ enum rmqueue_mode rmqm = RMQUEUE_NORMAL;
+
+ page = __rmqueue(zone, order, migratetype, alloc_flags, &rmqm);
/*
* If the allocation fails, allow OOM handling and
This is the start of the stable review cycle for the 5.15.182 release.
There are 55 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Fri, 09 May 2025 18:37:41 +0000.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.182-r…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 5.15.182-rc1
Nicolin Chen <nicolinc(a)nvidia.com>
iommu/arm-smmu-v3: Fix iommu_device_probe bug due to duplicated stream ids
Jason Gunthorpe <jgg(a)ziepe.ca>
iommu/arm-smmu-v3: Use the new rb tree helpers
Björn Töpel <bjorn(a)rivosinc.com>
riscv: uprobes: Add missing fence.i after building the XOL buffer
Stephan Gerhold <stephan.gerhold(a)linaro.org>
serial: msm: Configure correct working mode before starting earlycon
Suzuki K Poulose <suzuki.poulose(a)arm.com>
irqchip/gic-v2m: Prevent use after free of gicv2m_get_fwnode()
Thomas Gleixner <tglx(a)linutronix.de>
irqchip/gic-v2m: Mark a few functions __init
Xiang wangx <wangxiang(a)cdjrlc.com>
irqchip/gic-v2m: Add const to of_device_id
Christian Hewitt <christianshewitt(a)gmail.com>
Revert "drm/meson: vclk: fix calculation of 59.94 fractional rates"
Fiona Klute <fiona.klute(a)gmx.de>
net: phy: microchip: force IRQ polling mode for lan88xx
Sébastien Szymanski <sebastien.szymanski(a)armadeus.com>
ARM: dts: opos6ul: add ksz8081 phy properties
Cristian Marussi <cristian.marussi(a)arm.com>
firmware: arm_scmi: Balance device refcount when destroying devices
Yonglong Liu <liuyonglong(a)huawei.com>
net: hns3: fix deadlock issue when externel_lb and reset are executed together
Sergey Shtylyov <s.shtylyov(a)omp.ru>
of: module: add buffer overflow check in of_modalias()
Richard Zhu <hongxing.zhu(a)nxp.com>
PCI: imx6: Skip controller_id generation logic for i.MX7D
Jian Shen <shenjian15(a)huawei.com>
net: hns3: defer calling ptp_clock_register()
Hao Lan <lanhao(a)huawei.com>
net: hns3: fixed debugfs tm_qset size
Yonglong Liu <liuyonglong(a)huawei.com>
net: hns3: fix an interrupt residual problem
Yonglong Liu <liuyonglong(a)huawei.com>
net: hns3: add support for external loopback test
Jian Shen <shenjian15(a)huawei.com>
net: hns3: store rx VLAN tag offload state for VF
Mattias Barthel <mattias.barthel(a)atlascopco.com>
net: fec: ERR007885 Workaround for conventional TX
Thangaraj Samynathan <thangaraj.s(a)microchip.com>
net: lan743x: Fix memleak issue when GSO enabled
Michael Liang <mliang(a)purestorage.com>
nvme-tcp: fix premature queue removal and I/O failover
Michael Chan <michael.chan(a)broadcom.com>
bnxt_en: Fix ethtool -d byte order for 32-bit values
Shruti Parab <shruti.parab(a)broadcom.com>
bnxt_en: Fix out-of-bound memcpy() during ethtool -w
Shruti Parab <shruti.parab(a)broadcom.com>
bnxt_en: Fix coredump logic to free allocated buffer
Felix Fietkau <nbd(a)nbd.name>
net: ipv6: fix UDPv6 GSO segmentation with NAT
Simon Horman <horms(a)kernel.org>
net: dlink: Correct endianness handling of led_mode
Xuanqiang Luo <luoxuanqiang(a)kylinos.cn>
ice: Check VF VSI Pointer Value in ice_vc_add_fdir_fltr()
Brett Creeley <brett.creeley(a)intel.com>
ice: Refactor promiscuous functions
Victor Nogueira <victor(a)mojatatu.com>
net_sched: qfq: Fix double list add in class with netem as child qdisc
Victor Nogueira <victor(a)mojatatu.com>
net_sched: ets: Fix double list add in class with netem as child qdisc
Victor Nogueira <victor(a)mojatatu.com>
net_sched: hfsc: Fix a UAF vulnerability in class with netem as child qdisc
Victor Nogueira <victor(a)mojatatu.com>
net_sched: drr: Fix double list add in class with netem as child qdisc
Louis-Alexis Eyraud <louisalexis.eyraud(a)collabora.com>
net: ethernet: mtk-star-emac: rearm interrupts in rx_poll only when advised
Louis-Alexis Eyraud <louisalexis.eyraud(a)collabora.com>
net: ethernet: mtk-star-emac: fix spinlock recursion issues on rx/tx poll
Biao Huang <biao.huang(a)mediatek.com>
net: ethernet: mtk-star-emac: separate tx/rx handling with two NAPIs
Chris Mi <cmi(a)nvidia.com>
net/mlx5: E-switch, Fix error handling for enabling roce
Maor Gottlieb <maorg(a)nvidia.com>
net/mlx5: E-Switch, Initialize MAC Address for Default GID
Jakub Kicinski <kuba(a)kernel.org>
net/sched: act_mirred: don't override retval if we already lost the skb
Sean Christopherson <seanjc(a)google.com>
KVM: x86: Load DR6 with guest value only before entering .vcpu_run() loop
Jeongjun Park <aha310510(a)gmail.com>
tracing: Fix oob write in trace_seq_to_buffer()
Mingcong Bai <jeffbai(a)aosc.io>
iommu/vt-d: Apply quirk_iommu_igfx for 8086:0044 (QM57/QS57)
Pavel Paklov <Pavel.Paklov(a)cyberprotect.ru>
iommu/amd: Fix potential buffer overflow in parse_ivrs_acpihid
Benjamin Marzinski <bmarzins(a)redhat.com>
dm: always update the array size in realloc_argv on success
Mikulas Patocka <mpatocka(a)redhat.com>
dm-integrity: fix a warning on invalid table line
Wentao Liang <vulab(a)iscas.ac.cn>
wifi: brcm80211: fmac: Add error handling for brcmf_usb_dl_writeimage()
Ruslan Piasetskyi <ruslan.piasetskyi(a)gmail.com>
mmc: renesas_sdhi: Fix error handling in renesas_sdhi_probe
Vishal Badole <Vishal.Badole(a)amd.com>
amd-xgbe: Fix to ensure dependent features are toggled with RX checksum offload
Helge Deller <deller(a)gmx.de>
parisc: Fix double SIGFPE crash
Will Deacon <will(a)kernel.org>
arm64: errata: Add missing sentinels to Spectre-BHB MIDR arrays
Clark Wang <xiaoning.wang(a)nxp.com>
i2c: imx-lpi2c: Fix clock count when probe defers
Niravkumar L Rabara <niravkumar.l.rabara(a)altera.com>
EDAC/altera: Set DDR and SDMMC interrupt mask before registration
Niravkumar L Rabara <niravkumar.l.rabara(a)altera.com>
EDAC/altera: Test the correct error reg offset
Philipp Stanner <phasta(a)kernel.org>
drm/nouveau: Fix WARN_ON in nouveau_fence_context_kill()
Joachim Priesner <joachim.priesner(a)web.de>
ALSA: usb-audio: Add second USB ID for Jabra Evolve 65 headset
-------------
Diffstat:
Makefile | 4 +-
arch/arm/boot/dts/imx6ul-imx6ull-opos6ul.dtsi | 3 +
arch/arm64/kernel/proton-pack.c | 2 +
arch/parisc/math-emu/driver.c | 16 +-
arch/riscv/kernel/probes/uprobes.c | 10 +-
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/svm.c | 13 +-
arch/x86/kvm/vmx/vmx.c | 11 +-
arch/x86/kvm/x86.c | 3 +
drivers/edac/altera_edac.c | 9 +-
drivers/edac/altera_edac.h | 2 +
drivers/firmware/arm_scmi/bus.c | 3 +
drivers/gpu/drm/meson/meson_vclk.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_fence.c | 2 +-
drivers/i2c/busses/i2c-imx-lpi2c.c | 4 +-
drivers/iommu/amd/init.c | 8 +
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 79 ++---
drivers/iommu/intel/iommu.c | 4 +-
drivers/irqchip/irq-gic-v2m.c | 8 +-
drivers/md/dm-integrity.c | 2 +-
drivers/md/dm-table.c | 5 +-
drivers/mmc/host/renesas_sdhi_core.c | 10 +-
drivers/net/ethernet/amd/xgbe/xgbe-desc.c | 9 +-
drivers/net/ethernet/amd/xgbe/xgbe-dev.c | 24 +-
drivers/net/ethernet/amd/xgbe/xgbe-drv.c | 11 +-
drivers/net/ethernet/amd/xgbe/xgbe.h | 4 +
drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c | 30 +-
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 36 ++-
drivers/net/ethernet/dlink/dl2k.c | 2 +-
drivers/net/ethernet/dlink/dl2k.h | 2 +-
drivers/net/ethernet/freescale/fec_main.c | 7 +-
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 2 +
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 119 ++++++--
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 3 +
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 61 ++--
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 26 +-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c | 13 +-
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 25 +-
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 1 +
drivers/net/ethernet/intel/ice/ice_fltr.c | 58 ++++
drivers/net/ethernet/intel/ice/ice_fltr.h | 12 +
drivers/net/ethernet/intel/ice/ice_main.c | 49 +--
drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c | 5 +
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 139 ++++-----
drivers/net/ethernet/mediatek/mtk_star_emac.c | 339 ++++++++++++---------
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 5 +-
drivers/net/ethernet/mellanox/mlx5/core/rdma.c | 11 +-
drivers/net/ethernet/mellanox/mlx5/core/rdma.h | 4 +-
drivers/net/ethernet/microchip/lan743x_main.c | 8 +-
drivers/net/ethernet/microchip/lan743x_main.h | 1 +
drivers/net/phy/microchip.c | 46 +--
.../net/wireless/broadcom/brcm80211/brcmfmac/usb.c | 6 +-
drivers/nvme/host/tcp.c | 31 +-
drivers/of/device.c | 7 +-
drivers/pci/controller/dwc/pci-imx6.c | 5 +-
drivers/tty/serial/msm_serial.c | 6 +
kernel/trace/trace.c | 5 +-
net/ipv4/udp_offload.c | 61 +++-
net/sched/act_mirred.c | 22 +-
net/sched/sch_drr.c | 9 +-
net/sched/sch_ets.c | 9 +-
net/sched/sch_hfsc.c | 2 +-
net/sched/sch_qfq.c | 11 +-
sound/usb/format.c | 3 +-
66 files changed, 932 insertions(+), 505 deletions(-)
Hi All,
Changes since v4:
- unused pages leak is avoided
Changes since v3:
- pfn_to_virt() changed to page_to_virt() due to compile error
Changes since v2:
- page allocation moved out of the atomic context
Changes since v1:
- Fixes: and -stable tags added to the patch description
Thanks!
Alexander Gordeev (1):
kasan: Avoid sleepable page allocation from atomic context
mm/kasan/shadow.c | 77 ++++++++++++++++++++++++++++++++++++++---------
1 file changed, 63 insertions(+), 14 deletions(-)
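To illustrate what "page allocation moved out of the atomic context"
means in practice, here is a hypothetical sketch of the general pattern
(populate_range() and everything in it are illustrative stand-ins, not
the actual mm/kasan/shadow.c code): allocate with GFP_KERNEL before
taking the lock, consume the pages under the lock, and free any
leftovers afterwards so none of them leak.

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/spinlock.h>

static void populate_range(spinlock_t *lock, unsigned long nr_needed)
{
        struct page *page, *tmp;
        unsigned long i;
        LIST_HEAD(pages);

        /* Sleepable allocations happen here, outside the lock. */
        for (i = 0; i < nr_needed; i++) {
                page = alloc_page(GFP_KERNEL);
                if (!page)
                        break;
                list_add(&page->lru, &pages);
        }

        spin_lock(lock);
        /* Atomic context: only consume what was pre-allocated. */
        while (!list_empty(&pages)) {
                page = list_first_entry(&pages, struct page, lru);
                /* ... stop early if the range is already populated ... */
                list_del(&page->lru);
                /* ... wire the page into the shadow mapping ... */
        }
        spin_unlock(lock);

        /* Free whatever was not consumed, avoiding the leak. */
        list_for_each_entry_safe(page, tmp, &pages, lru) {
                list_del(&page->lru);
                __free_page(page);
        }
}

The final free loop is what the v4 note above refers to: any page still
on the local list after the locked section is handed back rather than
dropped.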
--
2.45.2