On systems with SME, access to the SMPRI_EL1 priority management register
is controlled by the nSMPRI_EL1 fine-grained trap, and access to
TPIDR2_EL0 is controlled by nTPIDR2_EL0. We manage these traps in nVHE
mode but do not do so in VHE mode; add the required management.

Without this, these registers could be used as side channels where they
are implemented.
Fixes: 861262ab8627 ("KVM: arm64: Handle SME host state when running guests")
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
---
arch/arm64/kvm/hyp/vhe/switch.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 7acb87eaa092..9dac3a1a85f7 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -63,10 +63,20 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
__activate_traps_fpsimd32(vcpu);
}
- if (cpus_have_final_cap(ARM64_SME))
+ if (cpus_have_final_cap(ARM64_SME)) {
write_sysreg(read_sysreg(sctlr_el2) & ~SCTLR_ELx_ENTP2,
sctlr_el2);
+ sysreg_clear_set_s(SYS_HFGRTR_EL2,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK,
+ 0);
+ sysreg_clear_set_s(SYS_HFGWTR_EL2,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK,
+ 0);
+ }
+
write_sysreg(val, cpacr_el1);
write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el1);
@@ -88,9 +98,21 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
*/
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
- if (cpus_have_final_cap(ARM64_SME))
+ if (cpus_have_final_cap(ARM64_SME)) {
+ /*
+ * Enable access to SMPRI_EL1 - we don't need to
+ * control nTPIDR2_EL0 in VHE mode.
+ */
+ sysreg_clear_set_s(SYS_HFGRTR_EL2, 0,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK);
+ sysreg_clear_set_s(SYS_HFGWTR_EL2, 0,
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK);
+
write_sysreg(read_sysreg(sctlr_el2) | SCTLR_ELx_ENTP2,
sctlr_el2);
+ }
write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
--
2.30.2
Hello,
Please can you consider my "ALSA: usb-audio: Add quirks for MacroSilicon
MS2100/MS2106 devices" patch, with upstream commit ID
6e2c9105e0b743c92a157389d40f00b81bdd09fe, for inclusion in all -stable
kernels. Apart from the device IDs, it is a copy of the similar existing
patch for MS2109 devices, which is already present in -stable kernels.
John Veness
The quilt patch titled
Subject: hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing
has been removed from the -mm tree. Its filename was
hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing.patch
This patch was dropped because an updated version will be merged
------------------------------------------------------
From: Mike Kravetz <mike.kravetz(a)oracle.com>
Subject: hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing
Date: Mon, 31 Oct 2022 15:34:40 -0700
madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the page
tables associated with the address range. For hugetlb vmas,
zap_page_range will call __unmap_hugepage_range_final. However,
__unmap_hugepage_range_final assumes the passed vma is about to be removed
and deletes the vma_lock to prevent pmd sharing as the vma is on the way
out. In the case of madvise(MADV_DONTNEED) the vma remains, but the
missing vma_lock prevents pmd sharing and could potentially lead to issues
with truncation/fault races.
This issue was originally reported here [1] as a BUG triggered in
page_try_dup_anon_rmap. Prior to the introduction of the hugetlb
vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to
prevent pmd sharing. Subsequent faults on this vma were confused:
VM_MAYSHARE indicates a sharable vma, but since the flag was no longer
set, page_mapping was not set in new pages added to the page table. This
resulted in pages that
appeared anonymous in a VM_SHARED vma and triggered the BUG.
Create a new routine clear_hugetlb_page_range() that can be called from
madvise(MADV_DONTNEED) for hugetlb vmas. It has the same setup as
zap_page_range, but does not delete the vma_lock. Also, add a new zap
flag ZAP_FLAG_UNMAP to indicate an unmap call from unmap_vmas(). This is
used to indicate the 'final' unmapping of a vma. The routine
__unmap_hugepage_range no longer sets up and tears down the mmu notifier
range itself; its callers now do so, which prevents duplicate
notifications.
[1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6J…
Link: https://lkml.kernel.org/r/20221031223440.285187-1-mike.kravetz@oracle.com
Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Reported-by: Wei Chen <harperchen1110(a)gmail.com>
Cc: Axel Rasmussen <axelrasmussen(a)google.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Mina Almasry <almasrymina(a)google.com>
Cc: Nadav Amit <nadav.amit(a)gmail.com>
Cc: Naoya Horiguchi <naoya.horiguchi(a)linux.dev>
Cc: Peter Xu <peterx(a)redhat.com>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/hugetlb.h | 7 +++
include/linux/mm.h | 3 +
mm/hugetlb.c | 80 ++++++++++++++++++++++++++++----------
mm/memory.c | 18 +++++---
4 files changed, 82 insertions(+), 26 deletions(-)
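For context (not part of the patch), here is a minimal userspace sketch of
the scenario being fixed: a shared hugetlb mapping is zapped with
madvise(MADV_DONTNEED) and then faulted again, so the vma (and its
vma_lock) must survive the zap. It assumes the default 2MB hugepage size,
at least one free hugepage (e.g. vm.nr_hugepages=1), and a kernel that
accepts MADV_DONTNEED on hugetlb mappings (the commit named in Fixes:).

/*
 * Userspace illustration only; not kernel code.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)		/* one 2MB hugepage */

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	memset(p, 0xaa, LEN);			/* populate the hugepage */

	/* Zap the page tables; the vma stays mapped and live. */
	if (madvise(p, LEN, MADV_DONTNEED)) {
		perror("madvise(MADV_DONTNEED)");
		return 1;
	}

	memset(p, 0x55, LEN);			/* fault the range in again */
	puts("re-fault after MADV_DONTNEED ok");

	munmap(p, LEN);
	return 0;
}

With the vma_lock deleted by the old code, later faults on such a
still-live vma could race with truncation as described above.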
--- a/include/linux/hugetlb.h~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/include/linux/hugetlb.h
@@ -156,6 +156,8 @@ long follow_hugetlb_page(struct mm_struc
void unmap_hugepage_range(struct vm_area_struct *,
unsigned long, unsigned long, struct page *,
zap_flags_t);
+void clear_hugetlb_page_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
void __unmap_hugepage_range_final(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
@@ -460,6 +462,11 @@ static inline void __unmap_hugepage_rang
BUG();
}
+static void __maybe_unused clear_hugetlb_page_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+}
+
static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
struct vm_area_struct *vma, unsigned long address,
unsigned int flags)
--- a/include/linux/mm.h~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/include/linux/mm.h
@@ -3475,4 +3475,7 @@ madvise_set_anon_name(struct mm_struct *
*/
#define ZAP_FLAG_DROP_MARKER ((__force zap_flags_t) BIT(0))
+/* Set in unmap_vmas() to indicate an unmap call. Only used by hugetlb */
+#define ZAP_FLAG_UNMAP ((__force zap_flags_t) BIT(1))
+
#endif /* _LINUX_MM_H */
--- a/mm/hugetlb.c~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/mm/hugetlb.c
@@ -5064,7 +5064,6 @@ static void __unmap_hugepage_range(struc
struct page *page;
struct hstate *h = hstate_vma(vma);
unsigned long sz = huge_page_size(h);
- struct mmu_notifier_range range;
unsigned long last_addr_mask;
bool force_flush = false;
@@ -5079,13 +5078,6 @@ static void __unmap_hugepage_range(struc
tlb_change_page_size(tlb, sz);
tlb_start_vma(tlb, vma);
- /*
- * If sharing possible, alert mmu notifiers of worst case.
- */
- mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start,
- end);
- adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
- mmu_notifier_invalidate_range_start(&range);
last_addr_mask = hugetlb_mask_last_page(h);
address = start;
for (; address < end; address += sz) {
@@ -5174,7 +5166,6 @@ static void __unmap_hugepage_range(struc
if (ref_page)
break;
}
- mmu_notifier_invalidate_range_end(&range);
tlb_end_vma(tlb, vma);
/*
@@ -5194,37 +5185,86 @@ static void __unmap_hugepage_range(struc
tlb_flush_mmu_tlbonly(tlb);
}
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static void __unmap_hugepage_range_locking(struct mmu_gather *tlb,
struct vm_area_struct *vma, unsigned long start,
unsigned long end, struct page *ref_page,
zap_flags_t zap_flags)
{
+ bool final = zap_flags & ZAP_FLAG_UNMAP;
+
hugetlb_vma_lock_write(vma);
i_mmap_lock_write(vma->vm_file->f_mapping);
__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
- /*
- * Unlock and free the vma lock before releasing i_mmap_rwsem. When
- * the vma_lock is freed, this makes the vma ineligible for pmd
- * sharing. And, i_mmap_rwsem is required to set up pmd sharing.
- * This is important as page tables for this unmapped range will
- * be asynchrously deleted. If the page tables are shared, there
- * will be issues when accessed by someone else.
- */
- __hugetlb_vma_unlock_write_free(vma);
+ if (final) {
+ /*
+ * Unlock and free the vma lock before releasing i_mmap_rwsem.
+ * When the vma_lock is freed, this makes the vma ineligible
+ * for pmd sharing. And, i_mmap_rwsem is required to set up
+ * pmd sharing. This is important as page tables for this
+ * unmapped range will be asynchrously deleted. If the page
+ * tables are shared, there will be issues when accessed by
+ * someone else.
+ */
+ __hugetlb_vma_unlock_write_free(vma);
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
+ } else {
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
+ hugetlb_vma_unlock_write(vma);
+ }
+}
+
+void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+ struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, struct page *ref_page,
+ zap_flags_t zap_flags)
+{
+ __unmap_hugepage_range_locking(tlb, vma, start, end, ref_page,
+ zap_flags);
+}
+
+#ifdef CONFIG_ADVISE_SYSCALLS
+/*
+ * Similar setup as in zap_page_range(). madvise(MADV_DONTNEED) can not call
+ * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete
+ * the associated vma_lock.
+ */
+void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+{
+ struct mmu_notifier_range range;
+ struct mmu_gather tlb;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+ start, end);
+ adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+ tlb_gather_mmu(&tlb, vma->vm_mm);
+ update_hiwater_rss(vma->vm_mm);
+ mmu_notifier_invalidate_range_start(&range);
- i_mmap_unlock_write(vma->vm_file->f_mapping);
+ __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0);
+
+ mmu_notifier_invalidate_range_end(&range);
+ tlb_finish_mmu(&tlb);
}
+#endif
void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end, struct page *ref_page,
zap_flags_t zap_flags)
{
+ struct mmu_notifier_range range;
struct mmu_gather tlb;
+ mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
+ start, end);
+ adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
tlb_gather_mmu(&tlb, vma->vm_mm);
+
__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+
+ mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
}
--- a/mm/memory.c~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/mm/memory.c
@@ -1720,7 +1720,7 @@ void unmap_vmas(struct mmu_gather *tlb,
{
struct mmu_notifier_range range;
struct zap_details details = {
- .zap_flags = ZAP_FLAG_DROP_MARKER,
+ .zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP,
/* Careful - we need to zap private pages too! */
.even_cows = true,
};
@@ -1753,15 +1753,21 @@ void zap_page_range(struct vm_area_struc
MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
lru_add_drain();
- mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
- start, start + size);
tlb_gather_mmu(&tlb, vma->vm_mm);
update_hiwater_rss(vma->vm_mm);
- mmu_notifier_invalidate_range_start(&range);
do {
- unmap_single_vma(&tlb, vma, start, range.end, NULL);
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
+ vma->vm_mm,
+ max(start, vma->vm_start),
+ min(start + size, vma->vm_end));
+ if (is_vm_hugetlb_page(vma))
+ adjust_range_if_pmd_sharing_possible(vma,
+ &range.start,
+ &range.end);
+ mmu_notifier_invalidate_range_start(&range);
+ unmap_single_vma(&tlb, vma, start, start + size, NULL);
+ mmu_notifier_invalidate_range_end(&range);
} while ((vma = mas_find(&mas, end - 1)) != NULL);
- mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
}
_
Patches currently in -mm which might be from mike.kravetz(a)oracle.com are
hugetlb-simplify-hugetlb-handling-in-follow_page_mask.patch
hugetlb-simplify-hugetlb-handling-in-follow_page_mask-v4.patch
hugetlb-simplify-hugetlb-handling-in-follow_page_mask-v5.patch
commit 702de2c21eed04c67cefaaedc248ef16e5f6b293 upstream.
We are seeing an IRQ storm on the global receive IRQ line under heavy
CAN bus load conditions with both CAN channels enabled.
Conditions:

The global receive IRQ line is shared between can0 and can1; either
channel can trigger the interrupt while the other channel's RX FIFO
interrupt is disabled (RFIE).

When a global receive IRQ occurs, we mask the interrupt in the IRQ
handler; clearing and unmasking of the interrupt happen in rx_poll().
There is a race condition where rx_poll() unmasks the interrupt, but the
next IRQ handler does not mask it because of the NAPIF_STATE_MISSED flag
(e.g. can0's RX FIFO interrupt is disabled while can1 keeps triggering RX
interrupts, and the delay in rx_poll() processing results in the
NAPIF_STATE_MISSED flag being set), leading to an IRQ storm.

Fix the issue by checking that the interrupt is both active and enabled
before handling it on a particular channel.
Fixes: dd3bd23eb438 ("can: rcar_canfd: Add Renesas R-Car CAN FD driver")
Suggested-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
Signed-off-by: Biju Das <biju.das.jz(a)bp.renesas.com>
Link: https://lore.kernel.org/all/20221025155657.1426948-2-biju.das.jz@bp.renesas…
Cc: stable(a)vger.kernel.org#5.15.y
[mkl: adjust commit message]
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
[biju: removed gpriv from RCANFD_RFCC_RFIE macro]
Signed-off-by: Biju Das <biju.das.jz(a)bp.renesas.com>
---
Resending to 5.15 with conflicts [1] fixed
[1] https://lore.kernel.org/stable/1667194204110137@kroah.com/T/#u
---
drivers/net/can/rcar/rcar_canfd.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
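As an aside (not driver code), a tiny simplified model of the check the
patch adds below: a channel's RX work is only handled when its FIFO has
data pending (RFIF-like status) and its FIFO interrupt is still enabled
(RFIE-like control), so a channel that is already being polled with its
interrupt masked no longer keeps re-entering the shared handler. The
struct and field names are made up for illustration only.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the RFSTS.RFIF and RFCC.RFIE bits. */
struct channel {
	bool rx_pending;	/* FIFO not empty (status)          */
	bool rx_irq_enabled;	/* FIFO interrupt enabled (control) */
};

/* Handle a channel only if it is both pending and still enabled. */
static bool should_handle(const struct channel *ch)
{
	return ch->rx_pending && ch->rx_irq_enabled;
}

int main(void)
{
	/* can0 is already being polled, so its FIFO interrupt is masked. */
	struct channel can0 = { .rx_pending = true, .rx_irq_enabled = false };
	struct channel can1 = { .rx_pending = true, .rx_irq_enabled = true };

	printf("can0: %s\n", should_handle(&can0) ? "handle" : "skip");
	printf("can1: %s\n", should_handle(&can1) ? "handle" : "skip");
	return 0;
}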
diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
index 2f44c567ebd7..9991bb475ae1 100644
--- a/drivers/net/can/rcar/rcar_canfd.c
+++ b/drivers/net/can/rcar/rcar_canfd.c
@@ -1106,11 +1106,13 @@ static void rcar_canfd_handle_global_receive(struct rcar_canfd_global *gpriv, u3
{
struct rcar_canfd_channel *priv = gpriv->ch[ch];
u32 ridx = ch + RCANFD_RFFIFO_IDX;
- u32 sts;
+ u32 sts, cc;
/* Handle Rx interrupts */
sts = rcar_canfd_read(priv->base, RCANFD_RFSTS(ridx));
- if (likely(sts & RCANFD_RFSTS_RFIF)) {
+ cc = rcar_canfd_read(priv->base, RCANFD_RFCC(ridx));
+ if (likely(sts & RCANFD_RFSTS_RFIF &&
+ cc & RCANFD_RFCC_RFIE)) {
if (napi_schedule_prep(&priv->napi)) {
/* Disable Rx FIFO interrupts */
rcar_canfd_clear_bit(priv->base,
--
2.25.1
This is the start of the stable review cycle for the 4.14.297 release.
There are 34 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Wed, 02 Nov 2022 07:01:32 +0000.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.297-r…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 4.14.297-rc1
Daniel Sneddon <daniel.sneddon(a)linux.intel.com>
x86/speculation: Add RSB VM Exit protections
Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
x86/bugs: Warn when "ibrs" mitigation is selected on Enhanced IBRS parts
Nathan Chancellor <nathan(a)kernel.org>
x86/speculation: Use DECLARE_PER_CPU for x86_spec_ctrl_current
Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
x86/speculation: Disable RRSBA behavior
Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
x86/bugs: Add Cannon lake to RETBleed affected CPU list
Andrew Cooper <andrew.cooper3(a)citrix.com>
x86/cpu/amd: Enumerate BTC_NO
Peter Zijlstra <peterz(a)infradead.org>
x86/common: Stamp out the stepping madness
Josh Poimboeuf <jpoimboe(a)kernel.org>
x86/speculation: Fill RSB on vmexit for IBRS
Josh Poimboeuf <jpoimboe(a)kernel.org>
KVM: VMX: Fix IBRS handling after vmexit
Josh Poimboeuf <jpoimboe(a)kernel.org>
KVM: VMX: Prevent guest RSB poisoning attacks with eIBRS
Josh Poimboeuf <jpoimboe(a)kernel.org>
x86/speculation: Remove x86_spec_ctrl_mask
Josh Poimboeuf <jpoimboe(a)kernel.org>
x86/speculation: Use cached host SPEC_CTRL value for guest entry/exit
Josh Poimboeuf <jpoimboe(a)kernel.org>
x86/speculation: Fix SPEC_CTRL write on SMT state change
Josh Poimboeuf <jpoimboe(a)kernel.org>
x86/speculation: Fix firmware entry SPEC_CTRL handling
Josh Poimboeuf <jpoimboe(a)kernel.org>
x86/speculation: Fix RSB filling with CONFIG_RETPOLINE=n
Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
x86/speculation: Add LFENCE to RSB fill sequence
Peter Zijlstra <peterz(a)infradead.org>
x86/speculation: Change FILL_RETURN_BUFFER to work with objtool
Peter Zijlstra <peterz(a)infradead.org>
intel_idle: Disable IBRS during long idle
Peter Zijlstra <peterz(a)infradead.org>
x86/bugs: Report Intel retbleed vulnerability
Peter Zijlstra <peterz(a)infradead.org>
x86/bugs: Split spectre_v2_select_mitigation() and spectre_v2_user_select_mitigation()
Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS
Peter Zijlstra <peterz(a)infradead.org>
x86/bugs: Optimize SPEC_CTRL MSR writes
Thadeu Lima de Souza Cascardo <cascardo(a)canonical.com>
x86/entry: Add kernel IBRS implementation
Peter Zijlstra <peterz(a)infradead.org>
x86/bugs: Keep a per-CPU IA32_SPEC_CTRL value
Alexandre Chartre <alexandre.chartre(a)oracle.com>
x86/bugs: Add AMD retbleed= boot parameter
Alexandre Chartre <alexandre.chartre(a)oracle.com>
x86/bugs: Report AMD retbleed vulnerability
Peter Zijlstra <peterz(a)infradead.org>
x86/cpufeatures: Move RETPOLINE flags to word 11
Peter Zijlstra <peterz(a)infradead.org>
x86/entry: Remove skip_r11rcx
Mark Gross <mgross(a)linux.intel.com>
x86/cpu: Add a steppings field to struct x86_cpu_id
Thomas Gleixner <tglx(a)linutronix.de>
x86/cpu: Add consistent CPU match macros
Thomas Gleixner <tglx(a)linutronix.de>
x86/devicetable: Move x86 specific macro out of generic code
Ingo Molnar <mingo(a)kernel.org>
x86/cpufeature: Fix various quality problems in the <asm/cpu_device_id.h> header
Kan Liang <kan.liang(a)linux.intel.com>
x86/cpufeature: Add facility to check for min microcode revisions
Suraj Jitindar Singh <surajjs(a)amazon.com>
Revert "x86/cpu: Add a steppings field to struct x86_cpu_id"
-------------
Diffstat:
Documentation/admin-guide/hw-vuln/spectre.rst | 8 +
Documentation/admin-guide/kernel-parameters.txt | 13 +
Makefile | 4 +-
arch/x86/entry/calling.h | 68 +++-
arch/x86/entry/entry_32.S | 2 -
arch/x86/entry/entry_64.S | 38 ++-
arch/x86/entry/entry_64_compat.S | 12 +-
arch/x86/include/asm/cpu_device_id.h | 168 +++++++++-
arch/x86/include/asm/cpufeatures.h | 16 +-
arch/x86/include/asm/intel-family.h | 6 +
arch/x86/include/asm/msr-index.h | 14 +
arch/x86/include/asm/nospec-branch.h | 48 +--
arch/x86/kernel/cpu/amd.c | 21 +-
arch/x86/kernel/cpu/bugs.c | 415 ++++++++++++++++++++----
arch/x86/kernel/cpu/common.c | 68 ++--
arch/x86/kernel/cpu/match.c | 44 ++-
arch/x86/kernel/cpu/scattered.c | 1 +
arch/x86/kernel/process.c | 2 +-
arch/x86/kvm/svm.c | 1 +
arch/x86/kvm/vmx.c | 51 ++-
drivers/base/cpu.c | 8 +
drivers/cpufreq/acpi-cpufreq.c | 1 +
drivers/cpufreq/amd_freq_sensitivity.c | 1 +
drivers/idle/intel_idle.c | 45 ++-
include/linux/cpu.h | 2 +
include/linux/mod_devicetable.h | 4 +-
tools/arch/x86/include/asm/cpufeatures.h | 1 +
27 files changed, 899 insertions(+), 163 deletions(-)