This series of patches contains 3 separate changes that fix some bugs in
the qla2xxx driver.
---
v3:
- Fix build issue in patch 1
v2:
- Change a spinlock wrap to a WRITE_ONCE() in patch 1
- Add Reviewed-by tags on patches 2 and 3
---
Anastasia Kovaleva (3):
scsi: qla2xxx: Drop starvation counter on success
scsi: qla2xxx: Make target send correct LOGO
scsi: qla2xxx: Remove incorrect trap
drivers/scsi/qla2xxx/qla_iocb.c | 11 +++++++++++
drivers/scsi/qla2xxx/qla_isr.c | 4 ++++
drivers/scsi/qla2xxx/qla_target.c | 16 +++++++---------
3 files changed, 22 insertions(+), 9 deletions(-)
--
2.40.1
From: Yu Kuai <yukuai3(a)huawei.com>
The fix is patch 27; the patches it relies on come from:
- set [1], which adds helpers to maple_tree; its last patch, which
  improves fork() performance, is not backported;
- set [2], which changes maple_tree, plus follow-up fixes;
- set [3], which converts offset_ctx from an xarray to a maple_tree.
Please note that I'm not an expert in this area and am reluctant to
make manual changes. That's why patch 16 reverts the commit that differs
from mainline and would cause conflicts when backporting the new patches,
and patch 28 picks up the original mainline patch again.
(And this is what we did to fix the CVE in our downstream kernels.)
[1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.c…
[2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.co…
[3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91…
Andrew Morton (1):
lib/maple_tree.c: fix build error due to hotfix alteration
Chuck Lever (5):
libfs: Re-arrange locking in offset_iterate_dir()
libfs: Define a minimum directory offset
libfs: Add simple_offset_empty()
maple_tree: Add mtree_alloc_cyclic()
libfs: Convert simple directory offsets to use a Maple Tree
Liam R. Howlett (12):
maple_tree: remove unnecessary default labels from switch statements
maple_tree: make mas_erase() more robust
maple_tree: move debug check to __mas_set_range()
maple_tree: add end of node tracking to the maple state
maple_tree: use cached node end in mas_next()
maple_tree: use cached node end in mas_destroy()
maple_tree: clean up inlines for some functions
maple_tree: separate ma_state node from status
maple_tree: remove mas_searchable()
maple_tree: use maple state end for write operations
maple_tree: don't find node end in mtree_lookup_walk()
maple_tree: mtree_range_walk() clean up
Lorenzo Stoakes (1):
maple_tree: correct tree corruption on spanning store
Peng Zhang (7):
maple_tree: add mt_free_one() and mt_attr() helpers
maple_tree: introduce {mtree,mas}_lock_nested()
maple_tree: introduce interfaces __mt_dup() and mtree_dup()
maple_tree: skip other tests when BENCH is enabled
maple_tree: preserve the tree attributes when destroying maple tree
maple_tree: add test for mtree_dup()
maple_tree: avoid checking other gaps after getting the largest gap
Yu Kuai (1):
Revert "maple_tree: correct tree corruption on spanning store"
yangerkun (1):
libfs: fix infinite directory reads for offset dir
fs/libfs.c | 129 ++-
include/linux/fs.h | 6 +-
include/linux/maple_tree.h | 356 +++---
include/linux/mm_types.h | 3 +-
lib/maple_tree.c | 1096 +++++++++++++------
lib/test_maple_tree.c | 218 ++--
mm/internal.h | 10 +-
mm/shmem.c | 4 +-
tools/include/linux/spinlock.h | 1 +
tools/testing/radix-tree/linux/maple_tree.h | 2 +-
tools/testing/radix-tree/maple.c | 390 ++++++-
11 files changed, 1564 insertions(+), 651 deletions(-)
--
2.39.2
The InvenSense MPU-6050 accelerometer is mounted on the user-facing side
of the Pine64 PinePhone mainboard rotated 90 degrees counter-clockwise, [1]
which requires the accelerometer's x- and y-axes to be swapped and the
direction of its y-axis to be inverted.
Rectify this by adding a mount-matrix to the accelerometer definition in the
Pine64 PinePhone dtsi file.
[1] https://files.pine64.org/doc/PinePhone/PinePhone%20mainboard%20bottom%20pla…
Fixes: 91f480d40942 ("arm64: dts: allwinner: Add initial support for Pine64 PinePhone")
Cc: stable(a)vger.kernel.org
Helped-by: Ondrej Jirman <megi(a)xff.cz>
Helped-by: Andrey Skvortsov <andrej.skvortzov(a)gmail.com>
Signed-off-by: Dragan Simic <dsimic(a)manjaro.org>
---
Notes:
See also the linux-sunxi thread [2] that led to this patch, which
provides a rather detailed analysis with additional background and pictures.
This patch effectively replaces the patch submitted in that thread.
[2] https://lore.kernel.org/linux-sunxi/20240916204521.2033218-1-andrej.skvortz…
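For reference, assuming the usual IIO mount-matrix convention (each row
maps one device axis onto the sensor axes, i.e. out = M * raw), the matrix
added below reads:

    x_device =  y_sensor
    y_device = -x_sensor
    z_device =  z_sensor

which is exactly the swap-and-invert described above for a 90-degree
counter-clockwise rotation.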
arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
index 6eab61a12cd8..b844759f52c0 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
@@ -212,6 +212,9 @@ accelerometer@68 {
interrupts = <7 5 IRQ_TYPE_EDGE_RISING>; /* PH5 */
vdd-supply = <&reg_dldo1>;
vddio-supply = <&reg_dldo1>;
+ mount-matrix = "0", "1", "0",
+ "-1", "0", "0",
+ "0", "0", "1";
};
};
Fixes file corruption issues when reading contents via the ceph client.
Call netfs_reset_subreq_iter() to align subreq->io_iter before
calling netfs_clear_unread() to clear the tail, as the subreq->io_iter
count and subreq->transferred might not be aligned after an incomplete
I/O that has the subreq's NETFS_SREQ_CLEAR_TAIL flag set.
Based on ee4cdf7b ("netfs: Speed up buffered reading"), which
introduces a fix for the issue in mainline.
Fixes: 92b6cc5d ("netfs: Add iov_iters to (sub)requests to describe various buffers")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219237
Signed-off-by: Christian Ebner <c.ebner(a)proxmox.com>
---
Sending this patch in an attempt to backport the fix introduced by
commit ee4cdf7b ("netfs: Speed up buffered reading"), which however
cannot be cherry-picked for older kernels, as that patch is not
independent of other commits and touches a lot of code unrelated to
the fix.
fs/netfs/io.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index d6ada4eba744..500119285346 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -528,6 +528,7 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
incomplete:
if (test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags)) {
+ netfs_reset_subreq_iter(rreq, subreq);
netfs_clear_unread(subreq);
subreq->transferred = subreq->len;
goto complete;
--
2.39.5
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 6098475d4cb48d821bdf453c61118c56e26294f0 ]
Currently we have a global spi_add_lock which we take when adding new
devices so that we can check that we're not trying to reuse a chip
select that's already controlled. This means that if the SPI device is
itself a SPI controller and triggers the instantiation of further SPI
devices, we trigger a deadlock as we try to register and instantiate
those devices while in the process of doing so for the parent controller
and hence already holding the global spi_add_lock. Since we only care
about concurrency within a single SPI bus, move the lock to be per
controller, avoiding the deadlock.
This can be easily triggered in the case of spi-mux.
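As a rough illustration (hypothetical call chain, not taken from the
commit), the recursion with the old global lock looks roughly like this:

    spi_add_device(mux)                         /* parent bus */
        mutex_lock(&spi_add_lock);              /* global lock taken */
        device_add(&mux->dev);                  /* probes spi-mux ... */
            spi_register_controller(child);     /* ... which registers a child bus */
                spi_add_device(child_dev);
                    mutex_lock(&spi_add_lock);  /* same global lock -> deadlock */

With a per-controller add_lock, the inner spi_add_device() takes the child
controller's lock instead, so the nested registration no longer deadlocks.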
Reported-by: Uwe Kleine-König <u.kleine-koenig(a)pengutronix.de>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Hardik Gohil <hgohil(a)mvista.com>
---
This fix was not backported to v5.4 and v5.10.
Along with this fix, please also apply the following fix on top of it:
spi: fix use-after-free of the add_lock mutex
commit 6c53b45c71b4920b5e62f0ea8079a1da382b9434 upstream.
Commit 6098475d4cb4 ("spi: Fix deadlock when adding SPI controllers on
SPI buses") introduced a per-controller mutex. But mutex_unlock() of
said lock is called after the controller is already freed:
drivers/spi/spi.c | 15 +++++----------
include/linux/spi/spi.h | 3 +++
2 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 0f9410e..58f1947 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -472,12 +472,6 @@ static LIST_HEAD(spi_controller_list);
*/
static DEFINE_MUTEX(board_lock);
-/*
- * Prevents addition of devices with same chip select and
- * addition of devices below an unregistering controller.
- */
-static DEFINE_MUTEX(spi_add_lock);
-
/**
* spi_alloc_device - Allocate a new SPI device
* @ctlr: Controller to which device is connected
@@ -580,7 +574,7 @@ int spi_add_device(struct spi_device *spi)
* chipselect **BEFORE** we call setup(), else we'll trash
* its configuration. Lock against concurrent add() calls.
*/
- mutex_lock(&spi_add_lock);
+ mutex_lock(&ctlr->add_lock);
status = bus_for_each_dev(&spi_bus_type, NULL, spi, spi_dev_check);
if (status) {
@@ -624,7 +618,7 @@ int spi_add_device(struct spi_device *spi)
}
done:
- mutex_unlock(&spi_add_lock);
+ mutex_unlock(&ctlr->add_lock);
return status;
}
EXPORT_SYMBOL_GPL(spi_add_device);
@@ -2512,6 +2506,7 @@ int spi_register_controller(struct spi_controller *ctlr)
spin_lock_init(&ctlr->bus_lock_spinlock);
mutex_init(&ctlr->bus_lock_mutex);
mutex_init(&ctlr->io_mutex);
+ mutex_init(&ctlr->add_lock);
ctlr->bus_lock_flag = 0;
init_completion(&ctlr->xfer_completion);
if (!ctlr->max_dma_len)
@@ -2657,7 +2652,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
/* Prevent addition of new devices, unregister existing ones */
if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
- mutex_lock(&spi_add_lock);
+ mutex_lock(&ctlr->add_lock);
device_for_each_child(&ctlr->dev, NULL, __unregister);
@@ -2688,7 +2683,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
mutex_unlock(&board_lock);
if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
- mutex_unlock(&spi_add_lock);
+ mutex_unlock(&ctlr->add_lock);
}
EXPORT_SYMBOL_GPL(spi_unregister_controller);
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index ca39b33..1b9cb90 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -483,6 +483,9 @@ struct spi_controller {
/* I/O mutex */
struct mutex io_mutex;
+ /* Used to avoid adding the same CS twice */
+ struct mutex add_lock;
+
/* lock and mutex for SPI bus locking */
spinlock_t bus_lock_spinlock;
struct mutex bus_lock_mutex;
--
2.7.4
Only the first device that is passed when the domain is set up will
have its reserved regions reserved in the iova address space. So if
there are other devices in the group with unique reserved regions,
those regions will not get reserved in the iova address space. All of
the ranges do get set up in the IO page tables via calls to
iommu_create_device_direct_mappings for all devices in a group.
On VT-d systems this resulted in messages like the following:
[ 1632.693264] DMAR: ERROR: DMA PTE for vPFN 0xf1f7e already set (to f1f7e003 not 173025001)
To make sure iova ranges are reserved for the reserved regions of all of
the devices, call iova_reserve_iommu_regions in iommu_dma_init_domain
prior to exiting in the case where the domain is already initialized.
Cc: Robin Murphy <robin.murphy(a)arm.com>
Cc: Joerg Roedel <joro(a)8bytes.org>
Cc: Will Deacon <will(a)kernel.org>
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Fixes: 7c1b058c8b5a ("iommu/dma: Handle IOMMU API reserved regions")
Signed-off-by: Jerry Snitselaar <jsnitsel(a)redhat.com>
---
Robin: I wasn't positive if this is the correct solution, or if it should be
done for the entire group at once.
drivers/iommu/dma-iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 2a9fa0c8cc00..5fd3cccbb233 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -707,7 +707,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
goto done_unlock;
}
- ret = 0;
+ ret = iova_reserve_iommu_regions(dev, domain);
goto done_unlock;
}
--
2.44.0
Hi all,
I'm writing as a bystander who works with the 6.1.y stable branch and
possibly lacks some context on the established DRM -> stable patch flow,
so I'm Cc'ing a large number of people.
The commit being reverted from 6.1.y duplicates changes that were already
backported to that branch via another commit. It is essentially a
"similar" commit that was cherry-picked at some point during the DRM
development process.
The duplicate has no runtime effect but should not actually remain in the
stable trees. It was already reverted [1] from 6.6/6.10/6.11 but later
still made its way into 6.1.
[1]: https://lore.kernel.org/stable/20241007035711.46624-1-jsg@jsg.id.au/T/#u
In [1], Greg KH also stated that such problems are quite common when
backporting DRM patches to stable trees. The current duplicate patch has
a purely cosmetic impact, but under other circumstances and for other
patches this could have gone wrong.
So, is there any way to adjust this process?
BTW, a question for the stable team: what Git magic (a 3-way merge?) let
the duplicate patch be applied successfully? The patch context in the
stable trees was different by that point, so shouldn't the duplicate have
been expected to fail to apply?
--
Fedor
This series prepares the powerpc Kconfig and Kbuild files for clang's
per-task stack protector support. clang requires
'-mstack-protector-guard-offset' to always be passed with the other
'-mstack-protector-guard' flags, which does not always happen with the
powerpc implementation, unlike arm, arm64, and riscv implementations.
This series brings powerpc in line with those other architectures, which
allows clang's support to work right away when it is merged.
Additionally, there is one other fix needed for the Kconfig test to work
correctly when targeting 32-bit.
I have tested this series in QEMU against LKDTM's REPORT_STACK_CANARY
with ppc64le_guest_defconfig and pmac32_defconfig built with a toolchain
that contains Keith's in-progress pull request, which should land for
LLVM 20:
https://github.com/llvm/llvm-project/pull/110928
---
Changes in v2:
- Combined patch 1 and 3, as they are fixing the same test for similar
reasons; adjust commit message accordingly (Christophe)
- Moved stack protector guard flags on one line in Makefile (Christophe)
- Add 'Cc: stable' targeting 6.1 and newer for the sake of simplicity,
as it is the oldest stable release where this series applies cleanly
(folks who want it on earlier releases can request or perform a
backport separately).
- Pick up Keith's Reviewed-by and Tested-by on both patches.
- Add a blurb to commit message of patch 1 explaining why clang's
register selection behavior differs from GCC.
- Link to v1: https://lore.kernel.org/r/20241007-powerpc-fix-stackprotector-test-clang-v1…
---
Nathan Chancellor (2):
powerpc: Fix stack protector Kconfig test for clang
powerpc: Adjust adding stack protector flags to KBUILD_CFLAGS for clang
arch/powerpc/Kconfig | 4 ++--
arch/powerpc/Makefile | 13 ++++---------
2 files changed, 6 insertions(+), 11 deletions(-)
---
base-commit: 8cf0b93919e13d1e8d4466eb4080a4c4d9d66d7b
change-id: 20241004-powerpc-fix-stackprotector-test-clang-84e67ed82f62
Best regards,
--
Nathan Chancellor <nathan(a)kernel.org>
From: "Uladzislau Rezki (Sony)" <urezki(a)gmail.com>
commit 3c5d61ae919cc377c71118ccc76fa6e8518023f8 upstream.
Add a kvfree_rcu_barrier() function. It waits until all
in-flight pointers are freed over the RCU machinery. It does
not wait for any GP completion and is within its rights to
return immediately if there are no outstanding pointers.
This function is useful when there is a need to guarantee
that memory is fully freed before destroying memory caches,
for example when unloading a kernel module.
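As a minimal usage sketch (hypothetical module-exit path, not part of this
patch), the intended pattern is:

    static void __exit foo_exit(void)
    {
            /* Wait until every object queued via kvfree_rcu() has been freed. */
            kvfree_rcu_barrier();

            /* Only now is it safe to destroy the backing cache. */
            kmem_cache_destroy(foo_cachep);
    }

Note that the kernel-doc caveat below about single-argument kvfree_rcu()
calls still applies.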
Signed-off-by: Uladzislau Rezki (Sony) <urezki(a)gmail.com>
Signed-off-by: Vlastimil Babka <vbabka(a)suse.cz>
---
include/linux/rcutiny.h | 5 ++
include/linux/rcutree.h | 1 +
kernel/rcu/tree.c | 109 +++++++++++++++++++++++++++++++++++++---
3 files changed, 107 insertions(+), 8 deletions(-)
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index d9ac7b136aea..522123050ff8 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -111,6 +111,11 @@ static inline void __kvfree_call_rcu(struct rcu_head *head, void *ptr)
kvfree(ptr);
}
+static inline void kvfree_rcu_barrier(void)
+{
+ rcu_barrier();
+}
+
#ifdef CONFIG_KASAN_GENERIC
void kvfree_call_rcu(struct rcu_head *head, void *ptr);
#else
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 254244202ea9..58e7db80f3a8 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -35,6 +35,7 @@ static inline void rcu_virt_note_context_switch(void)
void synchronize_rcu_expedited(void);
void kvfree_call_rcu(struct rcu_head *head, void *ptr);
+void kvfree_rcu_barrier(void);
void rcu_barrier(void);
void rcu_momentary_dyntick_idle(void);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e641cc681901..be00aac5f4e7 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3584,18 +3584,15 @@ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
}
/*
- * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
+ * Return: %true if a work is queued, %false otherwise.
*/
-static void kfree_rcu_monitor(struct work_struct *work)
+static bool
+kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
{
- struct kfree_rcu_cpu *krcp = container_of(work,
- struct kfree_rcu_cpu, monitor_work.work);
unsigned long flags;
+ bool queued = false;
int i, j;
- // Drain ready for reclaim.
- kvfree_rcu_drain_ready(krcp);
-
raw_spin_lock_irqsave(&krcp->lock, flags);
// Attempt to start a new batch.
@@ -3634,11 +3631,27 @@ static void kfree_rcu_monitor(struct work_struct *work)
// be that the work is in the pending state when
// channels have been detached following by each
// other.
- queue_rcu_work(system_wq, &krwp->rcu_work);
+ queued = queue_rcu_work(system_wq, &krwp->rcu_work);
}
}
raw_spin_unlock_irqrestore(&krcp->lock, flags);
+ return queued;
+}
+
+/*
+ * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
+ */
+static void kfree_rcu_monitor(struct work_struct *work)
+{
+ struct kfree_rcu_cpu *krcp = container_of(work,
+ struct kfree_rcu_cpu, monitor_work.work);
+
+ // Drain ready for reclaim.
+ kvfree_rcu_drain_ready(krcp);
+
+ // Queue a batch for a rest.
+ kvfree_rcu_queue_batch(krcp);
// If there is nothing to detach, it means that our job is
// successfully done here. In case of having at least one
@@ -3859,6 +3872,86 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
}
EXPORT_SYMBOL_GPL(kvfree_call_rcu);
+/**
+ * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
+ *
+ * Note that a single argument of kvfree_rcu() call has a slow path that
+ * triggers synchronize_rcu() following by freeing a pointer. It is done
+ * before the return from the function. Therefore for any single-argument
+ * call that will result in a kfree() to a cache that is to be destroyed
+ * during module exit, it is developer's responsibility to ensure that all
+ * such calls have returned before the call to kmem_cache_destroy().
+ */
+void kvfree_rcu_barrier(void)
+{
+ struct kfree_rcu_cpu_work *krwp;
+ struct kfree_rcu_cpu *krcp;
+ bool queued;
+ int i, cpu;
+
+ /*
+ * Firstly we detach objects and queue them over an RCU-batch
+ * for all CPUs. Finally queued works are flushed for each CPU.
+ *
+ * Please note. If there are outstanding batches for a particular
+ * CPU, those have to be finished first following by queuing a new.
+ */
+ for_each_possible_cpu(cpu) {
+ krcp = per_cpu_ptr(&krc, cpu);
+
+ /*
+ * Check if this CPU has any objects which have been queued for a
+ * new GP completion. If not(means nothing to detach), we are done
+ * with it. If any batch is pending/running for this "krcp", below
+ * per-cpu flush_rcu_work() waits its completion(see last step).
+ */
+ if (!need_offload_krc(krcp))
+ continue;
+
+ while (1) {
+ /*
+ * If we are not able to queue a new RCU work it means:
+ * - batches for this CPU are still in flight which should
+ * be flushed first and then repeat;
+ * - no objects to detach, because of concurrency.
+ */
+ queued = kvfree_rcu_queue_batch(krcp);
+
+ /*
+ * Bail out, if there is no need to offload this "krcp"
+ * anymore. As noted earlier it can run concurrently.
+ */
+ if (queued || !need_offload_krc(krcp))
+ break;
+
+ /* There are ongoing batches. */
+ for (i = 0; i < KFREE_N_BATCHES; i++) {
+ krwp = &(krcp->krw_arr[i]);
+ flush_rcu_work(&krwp->rcu_work);
+ }
+ }
+ }
+
+ /*
+ * Now we guarantee that all objects are flushed.
+ */
+ for_each_possible_cpu(cpu) {
+ krcp = per_cpu_ptr(&krc, cpu);
+
+ /*
+ * A monitor work can drain ready to reclaim objects
+ * directly. Wait its completion if running or pending.
+ */
+ cancel_delayed_work_sync(&krcp->monitor_work);
+
+ for (i = 0; i < KFREE_N_BATCHES; i++) {
+ krwp = &(krcp->krw_arr[i]);
+ flush_rcu_work(&krwp->rcu_work);
+ }
+ }
+}
+EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
+
static unsigned long
kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
base-commit: 17365d66f1c6aa6bf4f4cb9842f5edeac027bcfb
--
2.47.0.105.g07ac214952-goog