ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map. The following preconditions are met on entry:
- All pte entries for a target pud/pmd address range have been cleared.
- System-wide TLB purges have been performed for a target pud/pmd address
range.
These preconditions ensure that there is no stale TLB entry for the range.
Speculation cannot create TLB entries for the range, since that requires
all levels of page entries, including ptes, to have the P and A bits set
for the associated address. However, speculation may cache pud/pmd entries
(paging-structure caches) when they have the P-bit set.
Add a system-wide TLB purge (INVLPG) of a single page after clearing the
pud/pmd entry's P-bit.
SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
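For a single-page range like the one used here, flush_tlb_kernel_range()
boils down to an INVLPG on every CPU. Roughly, from arch/x86/mm/tlb.c in
kernels of this vintage (a sketch for reference only, not part of this
patch):

static void do_kernel_range_flush(void *info)
{
        struct flush_tlb_info *f = info;
        unsigned long addr;

        /* Flush the range one page at a time with INVLPG */
        for (addr = f->start; addr < f->end; addr += PAGE_SIZE)
                __flush_tlb_one_kernel(addr);
}

void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
        /* Small ranges get per-page INVLPG; large ones a full TLB flush */
        if (end == TLB_FLUSH_ALL ||
            (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
                on_each_cpu(do_flush_tlb_all, NULL, 1);
        } else {
                struct flush_tlb_info info;

                info.start = start;
                info.end = end;
                on_each_cpu(do_kernel_range_flush, &info, 1);
        }
}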
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani <toshi.kani(a)hpe.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Joerg Roedel <joro(a)8bytes.org>
Cc: <stable(a)vger.kernel.org>
---
arch/x86/mm/pgtable.c | 34 ++++++++++++++++++++++++++++------
1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f60fdf411103..7e96594c7e97 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -721,24 +721,42 @@ int pmd_clear_huge(pmd_t *pmd)
* @pud: Pointer to a PUD.
* @addr: Virtual address associated with pud.
*
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
* Return: 1 if clearing the entry succeeded. 0 otherwise.
*/
int pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
- pmd_t *pmd;
+ pmd_t *pmd, *pmd_sv;
+ pte_t *pte;
int i;
if (pud_none(*pud))
return 1;
pmd = (pmd_t *)pud_page_vaddr(*pud);
+ pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+ if (!pmd_sv)
+ return 0;
- for (i = 0; i < PTRS_PER_PMD; i++)
- if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
- return 0;
+ for (i = 0; i < PTRS_PER_PMD; i++) {
+ pmd_sv[i] = pmd[i];
+ if (!pmd_none(pmd[i]))
+ pmd_clear(&pmd[i]);
+ }
pud_clear(pud);
+
+ /* INVLPG to clear all paging-structure caches */
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+ for (i = 0; i < PTRS_PER_PMD; i++) {
+ if (!pmd_none(pmd_sv[i])) {
+ pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+ free_page((unsigned long)pte);
+ }
+ }
+
+ free_page((unsigned long)pmd_sv);
free_page((unsigned long)pmd);
return 1;
@@ -749,7 +767,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
* @pmd: Pointer to a PMD.
* @addr: Virtual address associated with pmd.
*
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
* Return: 1 if clearing the entry succeeded. 0 otherwise.
*/
int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -761,6 +779,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
pte = (pte_t *)pmd_page_vaddr(*pmd);
pmd_clear(pmd);
+
+ /* INVLPG to clear all paging-structure caches */
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
free_page((unsigned long)pte);
return 1;
The patch titled
Subject: mm: don't allow deferred pages with NEED_PER_CPU_KM
has been added to the -mm tree. Its filename is
mm-dont-allow-deferred-pages-with-need_per_cpu_km.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-dont-allow-deferred-pages-with-…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-dont-allow-deferred-pages-with-…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Pavel Tatashin <pasha.tatashin(a)oracle.com>
Subject: mm: don't allow deferred pages with NEED_PER_CPU_KM
It is unsafe to do virtual to physical translations before mm_init() is
called if struct page is needed in order to determine the memory section
number (see SECTION_IN_PAGE_FLAGS). This is because, when deferred struct
pages are used, struct pages for all the allocated memory are initialized
only in mm_init().
My recent fix c9e97a1997 ("mm: initialize pages on demand during boot")
exposed this problem, because it greatly reduced the number of pages that
are initialized before mm_init(), but the problem existed even before that
fix, as Fengguang Wu found.
Below is a more detailed explanation of the problem.
We initialize struct pages in four places:
1. Early in boot, a small set of struct pages is initialized to fill the
   first section and the lower zones.
2. During mm_init(), we initialize struct pages for all the memory that
   is allocated, i.e. reserved in memblock.
3. Using on-demand logic, when pages are allocated after the mm_init()
   call (once memblock is finished).
4. After smp_init(), when the rest of the free deferred pages are
   initialized.
The problem occurs if we try to do a va to phys translation of memory
between steps 1 and 2. Because we have not yet initialized struct pages
for all the reserved pages, it is inherently unsafe to do a va to phys
translation if the translation itself requires access to "struct page",
as is the case with this combination: CONFIG_SPARSEMEM &&
!CONFIG_SPARSEMEM_VMEMMAP
The following path exposes the problem:
start_kernel()
 trap_init()
  setup_cpu_entry_areas()
   setup_cpu_entry_area(cpu)
    get_cpu_gdt_paddr(cpu)
     per_cpu_ptr_to_phys(addr)
      pcpu_addr_to_page(addr)
       virt_to_page(addr)
        pfn_to_page(__pa(addr) >> PAGE_SHIFT)
We disable this path by not allowing NEED_PER_CPU_KM together with the
deferred struct pages feature.
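For reference, the reason the translation needs struct page with
CONFIG_SPARSEMEM && !CONFIG_SPARSEMEM_VMEMMAP: page_to_phys() (used by
per_cpu_ptr_to_phys() above) goes through page_to_pfn(), which reads the
section number out of page->flags, and page->flags is garbage until the
struct page is initialized. Roughly, from
include/asm-generic/memory_model.h (a sketch for reference only):

#define __page_to_pfn(pg)                                       \
({      const struct page *__pg = (pg);                         \
        int __sec = page_to_section(__pg); /* reads __pg->flags */ \
        (unsigned long)(__pg -                                  \
                __section_mem_map_addr(__nr_to_section(__sec))); \
})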
The problems are discussed in these threads:
http://lkml.kernel.org/r/20180418135300.inazvpxjxowogyge@wfg-t540p.sh.intel…
http://lkml.kernel.org/r/20180419013128.iurzouiqxvcnpbvz@wfg-t540p.sh.intel…
http://lkml.kernel.org/r/20180426202619.2768-1-pasha.tatashin@oracle.com
Link: http://lkml.kernel.org/r/20180515175124.1770-1-pasha.tatashin@oracle.com
Fixes: 3a80a7fa7989 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
Signed-off-by: Pavel Tatashin <pasha.tatashin(a)oracle.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Steven Sistare <steven.sistare(a)oracle.com>
Cc: Daniel Jordan <daniel.m.jordan(a)oracle.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Fengguang Wu <fengguang.wu(a)intel.com>
Cc: Dennis Zhou <dennisszhou(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff -puN mm/Kconfig~mm-dont-allow-deferred-pages-with-need_per_cpu_km mm/Kconfig
--- a/mm/Kconfig~mm-dont-allow-deferred-pages-with-need_per_cpu_km
+++ a/mm/Kconfig
@@ -636,6 +636,7 @@ config DEFERRED_STRUCT_PAGE_INIT
default n
depends on NO_BOOTMEM
depends on !FLATMEM
+ depends on !NEED_PER_CPU_KM
help
Ordinarily all struct pages are initialised during early boot in a
single thread. On very large machines this can take a considerable
_
Patches currently in -mm which might be from pasha.tatashin(a)oracle.com are
mm-dont-allow-deferred-pages-with-need_per_cpu_km.patch
sparc64-ng4-memset-32-bits-overflow.patch
ToT commit 97f3c0a4b0579b646b6b10ae5a3d59f0441cc12c
(ACPICA: acpi: acpica: fix acpi operand cache leak in nseval.c)
was assigned CVE-2017-13695
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-13695
and has been public since August 25, 2017.
Please apply to 3.18, 4.4 and 4.9 stable kernels for the reasons
outlined in the body of the patch:
"This cache leak causes a security threat because an old kernel (<= 4.9)
shows memory locations of kernel functions in stack dump. Some malicious
users could use this information to neutralize kernel ASLR."
Bonus points: since the patch is in ToT upstream and fixes the bug that
results in the memory leak, it may be advisable to include it in 4.14.y
stable as well, despite the non-CVE security status for <=4.12 kernels.
Sincerely -- Mark Salyzyn
Hi Doug and Jason,
Here are some patches to go to for-next. These include the couple of
patches that needed rework that were posted before the OFA conference.
Well, actually, the patches that had issues were just dropped, with the
exception of the one from Alex that adds handling of kernel restart to
hfi1 and qib. Patch 8 is his v2.
Nothing else too scary or exciting in here. Well, OK, that's not quite
right: the CQ completion vector patch is rather interesting. It adds
support for completion vectors for hfi1 and helps improve performance in
things like IPoIB.
There is a significant patch from Mitko that redoes a lot of our fault
injection stuff. It's a big patch, but I'm not sure it lends itself to
being broken up further.
One other thing of note is the "Create common functions" patch from Sebastian
depends on one of the patches that I sent for the -rc. It won't apply cleanly
without that.
---
Alex Estrin (2):
IB/hfi1: Complete check for locally terminated smp
IB/{hfi1,qib}: Add handling of kernel restart
Brian Welty (1):
IB/{hfi1,qib,rdmavt}: Move logic to allocate receive WQE into rdmavt
Kamenee Arumugam (1):
IB/Hfi1: Read CCE Revision register to verify the device is responsive
Michael J. Ruhl (4):
IB/hfi1: Return actual error value from program_rcvarray()
IB/hfi1: Use after free race condition in send context error path
IB/hfi1: Return correct value for device state
IB/hfi1: Reorder incorrect send context disable
Mike Marciniszyn (1):
IB/hfi1: Fix fault injection init/exit issues
Mitko Haralanov (1):
IB/hfi1: Rework fault injection machinery
Sebastian Sanchez (4):
IB/hfi1: Prevent LNI hang when LCB can't obtain lanes
IB/hfi1: Optimize kthread pointer locking when queuing CQ entries
IB/hfi1: Create common functions for affinity CPU mask operations
IB/{hfi1,rdmavt,qib}: Implement CQ completion vector support
drivers/infiniband/hw/hfi1/Makefile | 10 -
drivers/infiniband/hw/hfi1/affinity.c | 497 +++++++++++++++++++++++++--
drivers/infiniband/hw/hfi1/affinity.h | 10 -
drivers/infiniband/hw/hfi1/chip.c | 74 +++-
drivers/infiniband/hw/hfi1/chip.h | 15 +
drivers/infiniband/hw/hfi1/chip_registers.h | 7
drivers/infiniband/hw/hfi1/debugfs.c | 292 ----------------
drivers/infiniband/hw/hfi1/debugfs.h | 93 +++--
drivers/infiniband/hw/hfi1/driver.c | 20 +
drivers/infiniband/hw/hfi1/fault.c | 375 ++++++++++++++++++++
drivers/infiniband/hw/hfi1/fault.h | 109 ++++++
drivers/infiniband/hw/hfi1/file_ops.c | 2
drivers/infiniband/hw/hfi1/hfi.h | 14 +
drivers/infiniband/hw/hfi1/init.c | 28 +-
drivers/infiniband/hw/hfi1/mad.c | 36 +-
drivers/infiniband/hw/hfi1/pcie.c | 8
drivers/infiniband/hw/hfi1/pio.c | 44 ++
drivers/infiniband/hw/hfi1/rc.c | 8
drivers/infiniband/hw/hfi1/ruc.c | 154 --------
drivers/infiniband/hw/hfi1/trace.c | 3
drivers/infiniband/hw/hfi1/trace_dbg.h | 3
drivers/infiniband/hw/hfi1/uc.c | 4
drivers/infiniband/hw/hfi1/ud.c | 4
drivers/infiniband/hw/hfi1/user_exp_rcv.c | 1
drivers/infiniband/hw/hfi1/verbs.c | 20 -
drivers/infiniband/hw/hfi1/verbs.h | 8
drivers/infiniband/hw/qib/qib.h | 1
drivers/infiniband/hw/qib/qib_init.c | 13 +
drivers/infiniband/hw/qib/qib_rc.c | 8
drivers/infiniband/hw/qib/qib_ruc.c | 154 --------
drivers/infiniband/hw/qib/qib_uc.c | 4
drivers/infiniband/hw/qib/qib_ud.c | 4
drivers/infiniband/hw/qib/qib_verbs.c | 6
drivers/infiniband/hw/qib/qib_verbs.h | 2
drivers/infiniband/sw/rdmavt/cq.c | 74 ++--
drivers/infiniband/sw/rdmavt/cq.h | 6
drivers/infiniband/sw/rdmavt/qp.c | 149 ++++++++
drivers/infiniband/sw/rdmavt/trace_cq.h | 35 ++
drivers/infiniband/sw/rdmavt/vt.c | 35 +-
include/rdma/rdma_vt.h | 7
include/rdma/rdmavt_cq.h | 5
include/rdma/rdmavt_qp.h | 1
42 files changed, 1491 insertions(+), 852 deletions(-)
create mode 100644 drivers/infiniband/hw/hfi1/fault.c
create mode 100644 drivers/infiniband/hw/hfi1/fault.h
--
-Denny
This is a note to let you know that I've just added the patch titled
staging: android: ion: Switch to pr_warn_once in ion_buffer_destroy
to my staging git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
in the staging-next branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will also be merged in the next major kernel release
during the merge window.
If you have any questions about this process, please let me know.
From 45ad559a29629cb1c64ee636563c69b71524f077 Mon Sep 17 00:00:00 2001
From: Laura Abbott <labbott(a)redhat.com>
Date: Mon, 14 May 2018 14:35:09 -0700
Subject: staging: android: ion: Switch to pr_warn_once in ion_buffer_destroy
Syzbot reported yet another warning with Ion:
WARNING: CPU: 0 PID: 1467 at drivers/staging/android/ion/ion.c:122
ion_buffer_destroy+0xd4/0x190 drivers/staging/android/ion/ion.c:122
Kernel panic - not syncing: panic_on_warn set ...
This is catching that a buffer was freed with an existing kernel mapping
still present. This can easily be triggered from userspace by calling
DMA_BUF_SYNC_START without calling DMA_BUF_SYNC_END. Switch to a single
pr_warn_once to indicate the error without being disruptive.
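A minimal userspace sketch of the trigger (for illustration only; it
assumes 'fd' is a dma-buf fd exported by ion, e.g. obtained via
ION_IOC_ALLOC):

#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-buf.h>

static void trigger(int fd)
{
        struct dma_buf_sync sync = {
                .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
        };

        /* begin_cpu_access: ion kmaps the buffer, kmap_cnt becomes 1 */
        ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

        /*
         * No matching DMA_BUF_SYNC_END, so the kernel mapping is never
         * dropped; closing the last reference frees the buffer with
         * kmap_cnt > 0 and ion_buffer_destroy() complains.
         */
        close(fd);
}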
Reported-by: syzbot+cd8bcd40cb049efa2770(a)syzkaller.appspotmail.com
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Laura Abbott <labbott(a)redhat.com>
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/staging/android/ion/ion.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index af682cbde767..9d1109e43ed4 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -111,8 +111,11 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
void ion_buffer_destroy(struct ion_buffer *buffer)
{
- if (WARN_ON(buffer->kmap_cnt > 0))
+ if (buffer->kmap_cnt > 0) {
+ pr_warn_once("%s: buffer still mapped in the kernel\n",
+ __func__);
buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
+ }
buffer->heap->ops->free(buffer);
kfree(buffer);
}
--
2.17.0
The code does monolithic reads for all chunks except the last one, which
is wrong, since a monolithic read issues the READ0+ADDRS+READ_START
sequence. Not only does this take longer, because it forces the NAND chip
to reload the page content into its internal cache, but it also resets
the column pointer to 0, which means we always read the first chunk
instead of moving on to the next one.
Rework the code to do a monolithic read only for the first chunk,
then switch to naked reads for all intermediate chunks and finally
issue a last naked read for the last chunk.
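In other words, for a page split into nchunks, the intended flow is (a
sketch of my reading of the XTYPE semantics, not driver code):

/*
 * chunk 0:             XTYPE_MONOLITHIC_RW - full READ0 + address
 *                      cycles + READ_START; loads the page into the
 *                      chip's internal cache and resets the column
 *                      pointer to 0.
 * chunks 1..nchunks-2: XTYPE_NAKED_RW - data-only transfers; the
 *                      column pointer keeps advancing, no page reload.
 * chunk nchunks-1:     XTYPE_LAST_NAKED_RW - final naked transfer,
 *                      terminating the read operation.
 */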
Fixes: 02f26ecf8c77 ("mtd: nand: add reworked Marvell NAND controller driver")
Cc: stable(a)vger.kernel.org
Reported-by: Chris Packham <chris.packham(a)alliedtelesis.co.nz>
Signed-off-by: Boris Brezillon <boris.brezillon(a)bootlin.com>
Tested-by: Chris Packham <chris.packham(a)alliedtelesis.co.nz>
---
drivers/mtd/nand/raw/marvell_nand.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
index db5ec4e8bde9..ebb1d141b900 100644
--- a/drivers/mtd/nand/raw/marvell_nand.c
+++ b/drivers/mtd/nand/raw/marvell_nand.c
@@ -1194,11 +1194,13 @@ static void marvell_nfc_hw_ecc_bch_read_chunk(struct nand_chip *chip, int chunk,
NDCB0_CMD2(NAND_CMD_READSTART);
/*
- * Trigger the naked read operation only on the last chunk.
- * Otherwise, use monolithic read.
+ * Trigger the monolithic read on the first chunk, then naked read on
+ * intermediate chunks and finally a last naked read on the last chunk.
*/
- if (lt->nchunks == 1 || (chunk < lt->nchunks - 1))
+ if (chunk == 0)
nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW);
+ else if (chunk < lt->nchunks - 1)
+ nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_NAKED_RW);
else
nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_LAST_NAKED_RW);
--
2.14.1
This patch set is based on v4.16.
Changes from v1:
- Add Reviewed-by tags in patches 1, 2, 3 and 4.
- Fix a typo in patch 4.
- Add new patches as patch 5 and 6.
Yoshihiro Shimoda (6):
usb: gadget: udc: renesas_usb3: fix double phy_put()
usb: gadget: udc: renesas_usb3: should remove debugfs
usb: gadget: udc: renesas_usb3: should call pm_runtime_enable() before
add udc
usb: gadget: udc: renesas_usb3: should call devm_phy_get() before add
udc
usb: gadget: udc: renesas_usb3: should fail if devm_phy_get() returns
error
usb: gadget: udc: renesas_usb3: disable the controller's irqs for
reconnecting
drivers/usb/gadget/udc/renesas_usb3.c | 37 +++++++++++++++++++++++------------
1 file changed, 25 insertions(+), 12 deletions(-)
--
1.9.1