Hi
[This is an automated email]
This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.5.11, v5.4.27, v4.19.112, v4.14.174, v4.9.217, v4.4.217.
v5.5.11: Build OK!
v5.4.27: Build OK!
v4.19.112: Failed to apply! Possible dependencies:
03512ceb60ae ("ieee80211: remove redundant leading zeroes")
09b4a4faf9d0 ("mac80211: introduce capability flags for VHT EXT NSS support")
0eeb2b674f05 ("mac80211: add an option for station management TXQ")
1c9559734eca ("mac80211: remove unnecessary key condition")
57a3a454f303 ("iwlwifi: split HE capabilities between AP and STA")
62872a9b9a10 ("mac80211: Fix PTK rekey freezes and clear text leak")
77f7ffdc335d ("mac80211: minstrel_ht: add flag to indicate missing/inaccurate tx A-MPDU length")
77ff2c6b4984 ("mac80211: update HE IEs to D3.3")
80aaa9c16415 ("mac80211: Add he_capa debugfs entry")
96fc6efb9ad9 ("mac80211: IEEE 802.11 Extended Key ID support")
add7453ad62f ("wireless: align to draft 11ax D3.0")
adf8ed01e4fd ("mac80211: add an optional TXQ for other PS-buffered frames")
caf56338c22f ("mac80211: indicate support for multiple BSSID")
daa5b83513a7 ("mac80211: update HE operation fields to D3.0")
v4.14.174: Failed to apply! Possible dependencies:
110b32f065f3 ("iwlwifi: mvm: rs: add basic implementation of the new RS API handlers")
1c73acf58bd6 ("iwlwifi: acpi: move ACPI method definitions to acpi.h")
28e9c00fe1f0 ("iwlwifi: remove upper case letters in sku_capa_band_*_enable")
4ae80f6c8d86 ("iwlwifi: support api ver2 of NVM_GET_INFO resp")
4b82455ca51e ("iwlwifi: use flags to denote modifiers for the channel maps")
4c625c564ba2 ("iwlwifi: get rid of fw/nvm.c")
514c30696fbc ("iwlwifi: add support for IEEE802.11ax")
57a3a454f303 ("iwlwifi: split HE capabilities between AP and STA")
77ff2c6b4984 ("mac80211: update HE IEs to D3.3")
813df5cef3bb ("iwlwifi: acpi: add common code to read from ACPI")
8a6171a7b601 ("iwlwifi: fw: add FW APIs for HE")
9c4f7d512740 ("iwlwifi: move all NVM parsing code to the common files")
9f66a397c877 ("iwlwifi: mvm: rs: add ops for the new rate scaling in the FW")
e7a3b8d87910 ("iwlwifi: acpi: move ACPI-related definitions to acpi.h")
v4.9.217: Failed to apply! Possible dependencies:
01796ff2fa6e ("iwlwifi: mvm: always free inactive queue when moving ownership")
0aaece81114e ("iwlwifi: split firmware API from iwl-trans.h")
1ea423b0e047 ("iwlwifi: remove unnecessary dev_cmd_headroom parameter")
310181ec34e2 ("iwlwifi: move to TVQM mode")
5594d80e9bf4 ("iwlwifi: support two phys for a000 devices")
623e7766be90 ("iwlwifi: pcie: introduce split point to a000 devices")
65e254821cee ("iwlwifi: mvm: use firmware station PM notification for AP_LINK_PS")
6b35ff91572f ("iwlwifi: pcie: introduce a000 TX queues management")
727c02dfb848 ("iwlwifi: pcie: cleanup rfkill checks")
77ff2c6b4984 ("mac80211: update HE IEs to D3.3")
8236f7db2724 ("iwlwifi: mvm: assign cab queue to the correct station")
87d0e1af9db3 ("iwlwifi: mvm: separate queue mapping from queue enablement")
bb49701b41de ("iwlwifi: mvm: support a000 SCD queue configuration")
cf90da352a32 ("iwlwifi: mvm: use mvm_disable_queue instead of sharing logic")
d172a5eff629 ("iwlwifi: reorganize firmware API")
df88c08d5c7e ("iwlwifi: mvm: release static queues on bcast release")
eda50cde58de ("iwlwifi: pcie: add context information support")
v4.4.217: Failed to apply! Possible dependencies:
0aaece81114e ("iwlwifi: split firmware API from iwl-trans.h")
13555e8ba2f4 ("iwlwifi: mvm: add 9000-series RX API")
1a616dd2f171 ("iwlwifi: dump prph registers in a common place for all transports")
1e0bbebaae66 ("mac80211: enable starting BA session with custom timeout")
2f89a5d7d377 ("iwlwifi: mvm: move fw-dbg code to separate file")
39bdb17ebb5b ("iwlwifi: update host command messages to new format")
41837ca962ec ("iwlwifi: pcie: allow to pretend to have Tx CSUM for debug")
4707fde5cdef ("iwlwifi: mvm: use build-time assertion for fw trigger ID")
5b88792cd850 ("iwlwifi: move to wide ID for all commands")
6c4fbcbc1c95 ("iwlwifi: add support for 12K Receive Buffers")
77ff2c6b4984 ("mac80211: update HE IEs to D3.3")
92fe83430b89 ("iwlwifi: uninline iwl_trans_send_cmd")
d172a5eff629 ("iwlwifi: reorganize firmware API")
dcbb4746286a ("iwlwifi: trans: support a callback for ASYNC commands")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
--
Thanks
Sasha
From: Saeed Mirzamohammadi <saeed.mirzamohammadi(a)oracle.com>
[ Upstream commit 327d5b2fee91c404a3956c324193892cf2cc9528 ]
The IOMMU driver calculates the guest addressability for a DMA request
based on the value of the MGAW reported by the IOMMU. However, this
is a fused value and, as mentioned in the spec, the guest width
should be calculated based on the minimum of the supported adjusted guest
address width (SAGAW) and the MGAW.
This is from the specification:
"Guest addressability for a given DMA request is limited to the
minimum of the value reported through this field and the adjusted
guest address width of the corresponding page-table structure.
(Adjusted guest address widths supported by hardware are reported
through the SAGAW field)."
This causes domain initialization to fail, and the following
errors appear for the EHCI PCI driver:
[ 2.486393] ehci-pci 0000:01:00.4: EHCI Host Controller
[ 2.486624] ehci-pci 0000:01:00.4: new USB bus registered, assigned bus number 1
[ 2.489127] ehci-pci 0000:01:00.4: DMAR: Allocating domain failed
[ 2.489350] ehci-pci 0000:01:00.4: DMAR: 32bit DMA uses non-identity mapping
[ 2.489359] ehci-pci 0000:01:00.4: can't setup: -12
[ 2.489531] ehci-pci 0000:01:00.4: USB bus 1 deregistered
[ 2.490023] ehci-pci 0000:01:00.4: init 0000:01:00.4 fail, -12
[ 2.490358] ehci-pci: probe of 0000:01:00.4 failed with error -12
This issue happens when the value of the SAGAW corresponds to a
48-bit AGAW. This fix updates the calculation of the AGAW to be based on
the minimum of the IOMMU's SAGAW value and the MGAW.
This issue happens on the code path of getting a private domain for a
device. A private domain was needed when the domain of an iommu group
couldn't meet the requirements of a device. The IOMMU core has
evolved to eliminate the need for private domains, hence this code path
has already been removed upstream as of commit 327d5b2fee91c
("iommu/vt-d: Allow 32bit devices to uses DMA domain"). Instead of
backporting all the patches required for removing the private domain,
this simply fixes it in the affected stable kernels between v4.16 and v5.7.
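For illustration only (not the in-tree code), a minimal sketch of the capped-width
calculation; it assumes the driver's agaw_to_width() convention of 30 + agaw * 9,
and the helper name below is made up:

	/*
	 * Sketch of the fix: cap the requested guest width by both the MGAW
	 * and the width implied by the IOMMU's supported AGAW (from SAGAW).
	 */
	static int capped_guest_width(int guest_width, int mgaw, int iommu_agaw)
	{
		int sagaw_width = 30 + iommu_agaw * 9;	/* agaw_to_width(iommu_agaw) */
		int cap_width = mgaw < sagaw_width ? mgaw : sagaw_width;

		return guest_width > cap_width ? cap_width : guest_width;
	}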
[baolu: The original patch can be found here:
https://lore.kernel.org/linux-iommu/20210412202736.70765-1-saeed.mirzamoham….
I added the commit message according to Greg's comments at
https://lore.kernel.org/linux-iommu/YHZ%2FT9x7Xjf1r6fI@kroah.com/.]
Cc: Joerg Roedel <joro(a)8bytes.org>
Cc: Ashok Raj <ashok.raj(a)intel.com>
Cc: stable(a)vger.kernel.org #v4.16+
Signed-off-by: Saeed Mirzamohammadi <saeed.mirzamohammadi(a)oracle.com>
Tested-by: Camille Lu <camille.lu(a)hpe.com>
Signed-off-by: Lu Baolu <baolu.lu(a)linux.intel.com>
---
drivers/iommu/intel-iommu.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 953d86ca6d2b..a2a03df97704 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1853,7 +1853,7 @@ static inline int guestwidth_to_adjustwidth(int gaw)
static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
int guest_width)
{
- int adjust_width, agaw;
+ int adjust_width, agaw, cap_width;
unsigned long sagaw;
int err;
@@ -1867,8 +1867,9 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
domain_reserve_special_ranges(domain);
/* calculate AGAW */
- if (guest_width > cap_mgaw(iommu->cap))
- guest_width = cap_mgaw(iommu->cap);
+ cap_width = min_t(int, cap_mgaw(iommu->cap), agaw_to_width(iommu->agaw));
+ if (guest_width > cap_width)
+ guest_width = cap_width;
domain->gaw = guest_width;
adjust_width = guestwidth_to_adjustwidth(guest_width);
agaw = width_to_agaw(adjust_width);
--
2.25.1
The patch below does not apply to the 5.13-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 20c0b380f971e7d48f5d978bc27d827f7eabb21a Mon Sep 17 00:00:00 2001
From: Nadav Amit <namit(a)vmware.com>
Date: Sat, 7 Aug 2021 17:13:42 -0700
Subject: [PATCH] io_uring: Use WRITE_ONCE() when writing to sq_flags
The compiler should be forbidden from performing any strange optimizations on
async writes to user-visible data structures. Without proper protection, the
compiler can cause write-tearing or invent writes that would confuse
userspace.
However, there are writes to sq_flags that are not protected by
WRITE_ONCE(). Use WRITE_ONCE() for these writes.
This is purely a theoretical issue. Presumably, any compiler is very
unlikely to perform such optimizations.
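For context, a minimal sketch of the pattern (not taken from the patch;
"shared" and "SOME_FLAG" are placeholder names):

	/* A plain read-modify-write the compiler is free to tear or replay: */
	shared->sq_flags |= SOME_FLAG;

	/*
	 * The pattern used here instead: compute the new value and publish it
	 * with a single WRITE_ONCE(), so the store to the userspace-visible
	 * word is a single access that cannot be torn or duplicated.
	 */
	WRITE_ONCE(shared->sq_flags, shared->sq_flags | SOME_FLAG);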
Fixes: 75b28affdd6a ("io_uring: allocate the two rings together")
Cc: Jens Axboe <axboe(a)kernel.dk>
Cc: Pavel Begunkov <asml.silence(a)gmail.com>
Signed-off-by: Nadav Amit <namit(a)vmware.com>
Link: https://lore.kernel.org/r/20210808001342.964634-3-namit@vmware.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1093df3977b8..ca064486cb41 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1500,7 +1500,8 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
all_flushed = list_empty(&ctx->cq_overflow_list);
if (all_flushed) {
clear_bit(0, &ctx->check_cq_overflow);
- ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags & ~IORING_SQ_CQ_OVERFLOW);
}
if (posted)
@@ -1579,7 +1580,9 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
}
if (list_empty(&ctx->cq_overflow_list)) {
set_bit(0, &ctx->check_cq_overflow);
- ctx->rings->sq_flags |= IORING_SQ_CQ_OVERFLOW;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags | IORING_SQ_CQ_OVERFLOW);
+
}
ocqe->cqe.user_data = user_data;
ocqe->cqe.res = res;
@@ -6804,14 +6807,16 @@ static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
{
/* Tell userspace we may need a wakeup call */
spin_lock_irq(&ctx->completion_lock);
- ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags | IORING_SQ_NEED_WAKEUP);
spin_unlock_irq(&ctx->completion_lock);
}
static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
{
spin_lock_irq(&ctx->completion_lock);
- ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags & ~IORING_SQ_NEED_WAKEUP);
spin_unlock_irq(&ctx->completion_lock);
}
The patch below does not apply to the 5.10-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 20c0b380f971e7d48f5d978bc27d827f7eabb21a Mon Sep 17 00:00:00 2001
From: Nadav Amit <namit(a)vmware.com>
Date: Sat, 7 Aug 2021 17:13:42 -0700
Subject: [PATCH] io_uring: Use WRITE_ONCE() when writing to sq_flags
The compiler should be forbidden from performing any strange optimizations on
async writes to user-visible data structures. Without proper protection, the
compiler can cause write-tearing or invent writes that would confuse
userspace.
However, there are writes to sq_flags that are not protected by
WRITE_ONCE(). Use WRITE_ONCE() for these writes.
This is purely a theoretical issue. Presumably, any compiler is very
unlikely to perform such optimizations.
Fixes: 75b28affdd6a ("io_uring: allocate the two rings together")
Cc: Jens Axboe <axboe(a)kernel.dk>
Cc: Pavel Begunkov <asml.silence(a)gmail.com>
Signed-off-by: Nadav Amit <namit(a)vmware.com>
Link: https://lore.kernel.org/r/20210808001342.964634-3-namit@vmware.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1093df3977b8..ca064486cb41 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1500,7 +1500,8 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
all_flushed = list_empty(&ctx->cq_overflow_list);
if (all_flushed) {
clear_bit(0, &ctx->check_cq_overflow);
- ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags & ~IORING_SQ_CQ_OVERFLOW);
}
if (posted)
@@ -1579,7 +1580,9 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
}
if (list_empty(&ctx->cq_overflow_list)) {
set_bit(0, &ctx->check_cq_overflow);
- ctx->rings->sq_flags |= IORING_SQ_CQ_OVERFLOW;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags | IORING_SQ_CQ_OVERFLOW);
+
}
ocqe->cqe.user_data = user_data;
ocqe->cqe.res = res;
@@ -6804,14 +6807,16 @@ static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
{
/* Tell userspace we may need a wakeup call */
spin_lock_irq(&ctx->completion_lock);
- ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags | IORING_SQ_NEED_WAKEUP);
spin_unlock_irq(&ctx->completion_lock);
}
static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
{
spin_lock_irq(&ctx->completion_lock);
- ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags & ~IORING_SQ_NEED_WAKEUP);
spin_unlock_irq(&ctx->completion_lock);
}
The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 20c0b380f971e7d48f5d978bc27d827f7eabb21a Mon Sep 17 00:00:00 2001
From: Nadav Amit <namit(a)vmware.com>
Date: Sat, 7 Aug 2021 17:13:42 -0700
Subject: [PATCH] io_uring: Use WRITE_ONCE() when writing to sq_flags
The compiler should be forbidden from performing any strange optimizations on
async writes to user-visible data structures. Without proper protection, the
compiler can cause write-tearing or invent writes that would confuse
userspace.
However, there are writes to sq_flags that are not protected by
WRITE_ONCE(). Use WRITE_ONCE() for these writes.
This is purely a theoretical issue. Presumably, any compiler is very
unlikely to perform such optimizations.
Fixes: 75b28affdd6a ("io_uring: allocate the two rings together")
Cc: Jens Axboe <axboe(a)kernel.dk>
Cc: Pavel Begunkov <asml.silence(a)gmail.com>
Signed-off-by: Nadav Amit <namit(a)vmware.com>
Link: https://lore.kernel.org/r/20210808001342.964634-3-namit@vmware.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1093df3977b8..ca064486cb41 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1500,7 +1500,8 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
all_flushed = list_empty(&ctx->cq_overflow_list);
if (all_flushed) {
clear_bit(0, &ctx->check_cq_overflow);
- ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags & ~IORING_SQ_CQ_OVERFLOW);
}
if (posted)
@@ -1579,7 +1580,9 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
}
if (list_empty(&ctx->cq_overflow_list)) {
set_bit(0, &ctx->check_cq_overflow);
- ctx->rings->sq_flags |= IORING_SQ_CQ_OVERFLOW;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags | IORING_SQ_CQ_OVERFLOW);
+
}
ocqe->cqe.user_data = user_data;
ocqe->cqe.res = res;
@@ -6804,14 +6807,16 @@ static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
{
/* Tell userspace we may need a wakeup call */
spin_lock_irq(&ctx->completion_lock);
- ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags | IORING_SQ_NEED_WAKEUP);
spin_unlock_irq(&ctx->completion_lock);
}
static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
{
spin_lock_irq(&ctx->completion_lock);
- ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
+ WRITE_ONCE(ctx->rings->sq_flags,
+ ctx->rings->sq_flags & ~IORING_SQ_NEED_WAKEUP);
spin_unlock_irq(&ctx->completion_lock);
}