This barrier applies only to read-modify-write atomic operations; in
particular, it does not apply to the atomic_set() primitive.
Replace the barrier with an smp_mb().
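For illustration, a minimal sketch of the intended pattern (a fragment
based on sbitmap_queue_update_wake_batch(); the comments are mine, not
from the code):

	sbq->wake_batch = wake_batch;			/* plain store */
	smp_mb();					/* order batch update before wait counts */
	for (i = 0; i < SBQ_WAIT_QUEUES; i++)
		atomic_set(&sbq->ws[i].wait_cnt, 1);	/* plain store, not an RMW */

smp_mb__before_atomic() pairs only with atomic read-modify-write
primitives such as atomic_inc(); on architectures where those are
already fully ordered it degrades to a compiler barrier, so placing it
before a plain store like atomic_set() provides no ordering guarantee.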
Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
Cc: stable(a)vger.kernel.org
Reported-by: "Paul E. McKenney" <paulmck(a)linux.ibm.com>
Reported-by: Peter Zijlstra <peterz(a)infradead.org>
Signed-off-by: Andrea Parri <andrea.parri(a)amarulasolutions.com>
Cc: Jens Axboe <axboe(a)kernel.dk>
Cc: Omar Sandoval <osandov(a)fb.com>
Cc: linux-block(a)vger.kernel.org
---
lib/sbitmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 155fe38756ecf..4a7fc4915dfc6 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
* to ensure that the batch size is updated before the wait
* counts.
*/
- smp_mb__before_atomic();
+ smp_mb();
for (i = 0; i < SBQ_WAIT_QUEUES; i++)
atomic_set(&sbq->ws[i].wait_cnt, 1);
}
--
2.7.4
This barrier applies only to read-modify-write atomic operations; in
particular, it does not apply to the atomic_set() primitive.
Replace the barrier with an smp_mb().
Fixes: dac56212e8127 ("bio: skip atomic inc/dec of ->bi_cnt for most use cases")
Cc: stable(a)vger.kernel.org
Reported-by: "Paul E. McKenney" <paulmck(a)linux.ibm.com>
Reported-by: Peter Zijlstra <peterz(a)infradead.org>
Signed-off-by: Andrea Parri <andrea.parri(a)amarulasolutions.com>
Cc: Jens Axboe <axboe(a)kernel.dk>
Cc: linux-block(a)vger.kernel.org
---
include/linux/bio.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index e584673c18814..5becbafb84e8a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -224,7 +224,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
{
if (count != 1) {
bio->bi_flags |= (1 << BIO_REFFED);
- smp_mb__before_atomic();
+ smp_mb();
}
atomic_set(&bio->__bi_cnt, count);
}
--
2.7.4
On Fri, Apr 26, 2019 at 06:52:45PM +0000, Sasha Levin wrote:
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a -stable tag.
> The stable tag indicates that it's relevant for the following trees: all
>
> The bot has tested the following trees: v5.0.9, v4.19.36, v4.14.113, v4.9.170, v4.4.178, v3.18.138.
>
> v5.0.9: Build OK!
> v4.19.36: Build OK!
> v4.14.113: Build OK!
> v4.9.170: Build OK!
> v4.4.178: Failed to apply! Possible dependencies:
> 2bd5bd15a518 ("ASoC: Intel: add bytct-rt5651 machine driver")
> 2dcffcee23a2 ("ASoC: Intel: Create independent acpi match module")
> 595788e475d0 ("ASoC: Intel: tag byt-rt5640 machine driver as deprecated")
> 95f098014815 ("ASoC: Intel: Move apci find machine routines")
> a395bdd6b24b ("ASoC: intel: Fix sst-dsp dependency on dw stuff")
> a92ea59b74e2 ("ASoC: Intel: sst: only select sst-firmware when DW DMAC is built-in")
> cfffcc66a89a ("ASoC: Intel: Load the atom DPCM driver only")
>
> v3.18.138: Failed to apply! Possible dependencies:
> 0d2135ecadb0 ("ASoC: Intel: Work around to fix HW D3 potential crash issue")
> 13735d1cecec ("ASoC: intel - kconfig: remove SND_SOC_INTEL_SST prompt")
> 161aa49ef1b9 ("ASoC: Intel: Add new dependency for Haswell machine")
> 2106241a6803 ("ASoC: Intel: create common folder and move common files in")
> 282a331fe25c ("ASoC: Intel: Add new dependency for Broadwell machine")
> 2e4f75919e5a ("ASoC: Intel: Add PM support to HSW/BDW PCM driver")
> 34084a436703 ("ASoC: intel: Remove superfluous backslash in Kconfig")
> 544c55c810a5 ("ASoC: Intel: Delete an unnecessary check before the function call "sst_dma_free"")
> 63ae1fe7739e ("ASoC: Intel: Add dependency on DesignWare DMA controller")
> 7dd6bd8926f3 ("ASoC: intel: kconfig - Move DW_DMAC_CORE dependency to machines")
> 85b88a8dd0c7 ("ASoC: Intel: Store the entry_point read from FW file")
> 9449d39b990d ("ASoC: Intel: add function to load firmware image")
> a395bdd6b24b ("ASoC: intel: Fix sst-dsp dependency on dw stuff")
> a92ea59b74e2 ("ASoC: Intel: sst: only select sst-firmware when DW DMAC is built-in")
> aed3c7b77c85 ("ASoC: Intel: Add PM support to HSW/BDW IPC driver")
> d96c53a193dd ("ASoC: Intel: Add generic support for DSP wake, sleep and stall")
> e9600bc166d5 ("ASoC: Intel: Make ADSP memory block allocation more generic")
>
>
> How should we proceed with this patch?
After reviews, I'll send backport patches for v4.4.x and v3.18.x as necessary.
From: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
On HSW the pipe A panel fitter lives inside the display power well,
and the input MUX for the EDP transcoder needs to be configured
appropriately to route the data through the power well as needed.
Changing the MUX setting is not allowed while the pipe is active,
so we need to force a full modeset whenever we need to change it.
Currently we may end up doing a fastset which won't change the
MUX settings, but it will drop the power well reference, and that
kills the pipe.
Cc: stable(a)vger.kernel.org
Cc: Hans de Goede <hdegoede(a)redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Fixes: d19f958db23c ("drm/i915: Enable fastset for non-boot modesets.")
Signed-off-by: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
---
drivers/gpu/drm/i915/intel_display.c | 9 +++++++++
drivers/gpu/drm/i915/intel_pipe_crc.c | 13 ++++++++++---
2 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index c67f165b466c..691c9a929164 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -12133,6 +12133,7 @@ intel_pipe_config_compare(struct drm_i915_private *dev_priv,
struct intel_crtc_state *pipe_config,
bool adjust)
{
+ struct intel_crtc *crtc = to_intel_crtc(current_config->base.crtc);
bool ret = true;
bool fixup_inherited = adjust &&
(current_config->base.mode.private_flags & I915_MODE_FLAG_INHERITED) &&
@@ -12354,6 +12355,14 @@ intel_pipe_config_compare(struct drm_i915_private *dev_priv,
PIPE_CONF_CHECK_X(gmch_pfit.pgm_ratios);
PIPE_CONF_CHECK_X(gmch_pfit.lvds_border_bits);
+ /*
+ * Changing the EDP transcoder input mux
+ * (A_ONOFF vs. A_ON) requires a full modeset.
+ */
+ if (IS_HASWELL(dev_priv) && crtc->pipe == PIPE_A &&
+ current_config->cpu_transcoder == TRANSCODER_EDP)
+ PIPE_CONF_CHECK_BOOL(pch_pfit.enabled);
+
if (!adjust) {
PIPE_CONF_CHECK_I(pipe_src_w);
PIPE_CONF_CHECK_I(pipe_src_h);
diff --git a/drivers/gpu/drm/i915/intel_pipe_crc.c b/drivers/gpu/drm/i915/intel_pipe_crc.c
index e94b5b1bc1b7..e7c7be4911c1 100644
--- a/drivers/gpu/drm/i915/intel_pipe_crc.c
+++ b/drivers/gpu/drm/i915/intel_pipe_crc.c
@@ -311,10 +311,17 @@ intel_crtc_crc_setup_workarounds(struct intel_crtc *crtc, bool enable)
pipe_config->base.mode_changed = pipe_config->has_psr;
pipe_config->crc_enabled = enable;
- if (IS_HASWELL(dev_priv) && crtc->pipe == PIPE_A) {
+ if (IS_HASWELL(dev_priv) &&
+ pipe_config->base.active && crtc->pipe == PIPE_A &&
+ pipe_config->cpu_transcoder == TRANSCODER_EDP) {
+ bool old_need_power_well = pipe_config->pch_pfit.enabled ||
+ pipe_config->pch_pfit.force_thru;
+ bool new_need_power_well = pipe_config->pch_pfit.enabled ||
+ enable;
+
pipe_config->pch_pfit.force_thru = enable;
- if (pipe_config->cpu_transcoder == TRANSCODER_EDP &&
- pipe_config->pch_pfit.enabled != enable)
+
+ if (old_need_power_well != new_need_power_well)
pipe_config->base.connectors_changed = true;
}
--
2.21.0
Changes from v1:
* Fix compile errors on UML and non-x86 arches
* Clarify the commit message and Fixes: tag about the origin of the
bug, and describe the impact on powerpc / uml / unicore32
--
This is a bit of a mess, to put it mildly. But, it's a bug
that seems to have shown up only in 4.20 and wasn't noticed
until now because nobody uses MPX.
MPX has the arch_unmap() hook inside of munmap() because MPX
uses bounds tables that protect other areas of memory. When
memory is unmapped, there is also a need to unmap the MPX
bounds tables. Barring this, unused bounds tables can eat 80%
of the address space.
But, the recursive do_munmap() that gets called via arch_unmap()
wreaks havoc with __do_munmap()'s state. It can result in
freeing populated page tables, accessing bogus VMA state,
double-freed VMAs and more.
To fix this, call arch_unmap() before __do_munmap() has a chance
to do anything meaningful. Also, remove the 'vma' argument
and force the MPX code to do its own, independent VMA lookup.
== UML / unicore32 impact ==
Remove unused 'vma' argument to arch_unmap(). No functional
change.
I compile-tested this on UML but not unicore32.
== powerpc impact ==
powerpc uses arch_unmap() to watch for munmap() on the
VDSO and zeroes out 'current->mm->context.vdso_base'. Moving
arch_unmap() makes this happen earlier in __do_munmap(). But,
'vdso_base' seems to only be used in perf and in the signal
delivery that happens near the return to userspace. I can not
find any likely impact to powerpc, other than the zeroing
happening a little earlier.
powerpc does not use the 'vma' argument and is unaffected by
its removal.
I compile-tested a 64-bit powerpc defconfig.
== x86 impact ==
For the common success case this is functionally identical to
what was there before. For the munmap() failure case, it's
possible that some MPX tables will be zapped for memory that
continues to be in use. But, this is an extraordinarily
unlikely scenario and the harm would be that MPX provides no
protection since the bounds table got reset (zeroed).
I can't imagine anyone doing this:
ptr = mmap();
// use ptr
ret = munmap(ptr);
if (ret)
// oh, there was an error, I'll
// keep using ptr.
Because if you're doing munmap(), you are *done* with the
memory. There's probably no good data in there _anyway_.
This passes the original reproducer from Richard Biener as
well as the existing MPX selftests.
====
The long story:
munmap() has a couple of pieces:
1. Find the affected VMA(s)
2. Split the start/end one(s) if necessary
3. Pull the VMAs out of the rbtree
4. Actually zap the memory via unmap_region(), including
freeing page tables (or queueing them to be freed).
5. Fixup some of the accounting (like fput()) and actually
free the VMA itself.
This specific ordering was actually introduced by:
dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
during the 4.20 merge window. The previous __do_munmap() code
was actually safe because the only thing after arch_unmap() was
remove_vma_list(). arch_unmap() could not see 'vma' in the
rbtree because it had already been detached, so it could not
do anything unsafe with respect to remove_vma_list()'s use of 'vma'.
Richard Biener reported a test that shows this in dmesg:
[1216548.787498] BUG: Bad rss-counter state mm:0000000017ce560b idx:1 val:551
[1216548.787500] BUG: non-zero pgtables_bytes on freeing mm: 24576
What triggered this was the recursive do_munmap() called via
arch_unmap(). It was freeing page tables that had not been
properly zapped.
But, the problem was bigger than this. For one, arch_unmap()
can free VMAs. But, the calling __do_munmap() has variables
that *point* to VMAs and obviously can't handle them just
getting freed while the pointer is still in use.
I tried a couple of things here. First, I tried to fix the page
table freeing problem in isolation, but I then found the VMA
issue. I also tried having the MPX code return a flag if it
modified the rbtree, which would force __do_munmap() to restart
its walk. That spiralled out of control in complexity pretty
fast.
Just moving arch_unmap() and accepting that the bonkers failure
case might eat some bounds tables seems like the simplest viable
fix.
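In outline, the resulting flow looks like this (a condensed sketch
with simplified locals, not the literal hunks below):

	int __do_munmap(struct mm_struct *mm, unsigned long start,
			size_t len, ...)
	{
		unsigned long end;

		len = PAGE_ALIGN(len);
		end = start + len;

		/*
		 * Any unmaps arch_unmap() does itself (MPX bounds
		 * tables) must finish before we look up, split or
		 * detach VMAs.
		 */
		arch_unmap(mm, start, end);

		vma = find_vma(mm, start);	/* step 1 of the list above */
		...				/* steps 2-5 unchanged */
	}

Because arch_unmap() now runs before the VMA lookup, a recursive
do_munmap() from the MPX code can no longer free or detach VMAs
that __do_munmap() is still holding pointers to.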
This was also reported in the following kernel bugzilla entry:
https://bugzilla.kernel.org/show_bug.cgi?id=203123
There are some reports that dd2283f2605 ("mm: mmap: zap pages
with read mmap_sem in munmap") triggered this issue. While that
commit certainly made the issue easier to hit, I believe the
fundamental issue has been with us as long as MPX itself, thus
the Fixes: tag below is for one of the original MPX commits.
Reported-by: Richard Biener <rguenther(a)suse.de>
Reported-by: H.J. Lu <hjl.tools(a)gmail.com>
Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Cc: Yang Shi <yang.shi(a)linux.alibaba.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Andy Lutomirski <luto(a)amacapital.net>
Cc: x86(a)kernel.org
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: linux-kernel(a)vger.kernel.org
Cc: linux-mm(a)kvack.org
Cc: stable(a)vger.kernel.org
Cc: linuxppc-dev(a)lists.ozlabs.org
Cc: linux-um(a)lists.infradead.org
Cc: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
Cc: Paul Mackerras <paulus(a)samba.org>
Cc: Michael Ellerman <mpe(a)ellerman.id.au>
Cc: linux-arch(a)vger.kernel.org
Cc: Guan Xuetao <gxt(a)pku.edu.cn>
Cc: Jeff Dike <jdike(a)addtoit.com>
Cc: Richard Weinberger <richard(a)nod.at>
Cc: Anton Ivanov <anton.ivanov(a)cambridgegreys.com>
---
b/arch/powerpc/include/asm/mmu_context.h | 1 -
b/arch/um/include/asm/mmu_context.h | 1 -
b/arch/unicore32/include/asm/mmu_context.h | 1 -
b/arch/x86/include/asm/mmu_context.h | 6 +++---
b/arch/x86/include/asm/mpx.h | 5 ++---
b/arch/x86/mm/mpx.c | 10 ++++++----
b/include/asm-generic/mm_hooks.h | 1 -
b/mm/mmap.c | 15 ++++++++-------
8 files changed, 19 insertions(+), 21 deletions(-)
diff -puN mm/mmap.c~mpx-rss-pass-no-vma mm/mmap.c
--- a/mm/mmap.c~mpx-rss-pass-no-vma 2019-04-19 09:31:09.851509404 -0700
+++ b/mm/mmap.c 2019-04-19 09:31:09.864509404 -0700
@@ -2730,9 +2730,17 @@ int __do_munmap(struct mm_struct *mm, un
return -EINVAL;
len = PAGE_ALIGN(len);
+ end = start + len;
if (len == 0)
return -EINVAL;
+ /*
+ * arch_unmap() might do unmaps itself. It must be called
+ * and finish any rbtree manipulation before this code
+ * runs and also starts to manipulate the rbtree.
+ */
+ arch_unmap(mm, start, end);
+
/* Find the first overlapping VMA */
vma = find_vma(mm, start);
if (!vma)
@@ -2741,7 +2749,6 @@ int __do_munmap(struct mm_struct *mm, un
/* we have start < vma->vm_end */
/* if it doesn't overlap, we have nothing.. */
- end = start + len;
if (vma->vm_start >= end)
return 0;
@@ -2811,12 +2818,6 @@ int __do_munmap(struct mm_struct *mm, un
/* Detach vmas from rbtree */
detach_vmas_to_be_unmapped(mm, vma, prev, end);
- /*
- * mpx unmap needs to be called with mmap_sem held for write.
- * It is safe to call it before unmap_region().
- */
- arch_unmap(mm, vma, start, end);
-
if (downgrade)
downgrade_write(&mm->mmap_sem);
diff -puN arch/x86/include/asm/mmu_context.h~mpx-rss-pass-no-vma arch/x86/include/asm/mmu_context.h
--- a/arch/x86/include/asm/mmu_context.h~mpx-rss-pass-no-vma 2019-04-19 09:31:09.853509404 -0700
+++ b/arch/x86/include/asm/mmu_context.h 2019-04-19 09:31:09.865509404 -0700
@@ -277,8 +277,8 @@ static inline void arch_bprm_mm_init(str
mpx_mm_init(mm);
}
-static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
- unsigned long start, unsigned long end)
+static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
+ unsigned long end)
{
/*
* mpx_notify_unmap() goes and reads a rarely-hot
@@ -298,7 +298,7 @@ static inline void arch_unmap(struct mm_
* consistently wrong.
*/
if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
- mpx_notify_unmap(mm, vma, start, end);
+ mpx_notify_unmap(mm, start, end);
}
/*
diff -puN include/asm-generic/mm_hooks.h~mpx-rss-pass-no-vma include/asm-generic/mm_hooks.h
--- a/include/asm-generic/mm_hooks.h~mpx-rss-pass-no-vma 2019-04-19 09:31:09.856509404 -0700
+++ b/include/asm-generic/mm_hooks.h 2019-04-19 09:31:09.865509404 -0700
@@ -18,7 +18,6 @@ static inline void arch_exit_mmap(struct
}
static inline void arch_unmap(struct mm_struct *mm,
- struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
}
diff -puN arch/x86/mm/mpx.c~mpx-rss-pass-no-vma arch/x86/mm/mpx.c
--- a/arch/x86/mm/mpx.c~mpx-rss-pass-no-vma 2019-04-19 09:31:09.858509404 -0700
+++ b/arch/x86/mm/mpx.c 2019-04-19 09:31:09.866509404 -0700
@@ -881,9 +881,10 @@ static int mpx_unmap_tables(struct mm_st
* the virtual address region start...end have already been split if
* necessary, and the 'vma' is the first vma in this range (start -> end).
*/
-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
- unsigned long start, unsigned long end)
+void mpx_notify_unmap(struct mm_struct *mm, unsigned long start,
+ unsigned long end)
{
+ struct vm_area_struct *vma;
int ret;
/*
@@ -902,11 +903,12 @@ void mpx_notify_unmap(struct mm_struct *
* which should not occur normally. Being strict about it here
* helps ensure that we do not have an exploitable stack overflow.
*/
- do {
+ vma = find_vma(mm, start);
+ while (vma && vma->vm_start < end) {
if (vma->vm_flags & VM_MPX)
return;
vma = vma->vm_next;
- } while (vma && vma->vm_start < end);
+ }
ret = mpx_unmap_tables(mm, start, end);
if (ret)
diff -puN arch/x86/include/asm/mpx.h~mpx-rss-pass-no-vma arch/x86/include/asm/mpx.h
--- a/arch/x86/include/asm/mpx.h~mpx-rss-pass-no-vma 2019-04-19 09:31:09.860509404 -0700
+++ b/arch/x86/include/asm/mpx.h 2019-04-19 09:31:09.866509404 -0700
@@ -78,8 +78,8 @@ static inline void mpx_mm_init(struct mm
*/
mm->context.bd_addr = MPX_INVALID_BOUNDS_DIR;
}
-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
- unsigned long start, unsigned long end);
+void mpx_notify_unmap(struct mm_struct *mm, unsigned long start,
+ unsigned long end);
unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len,
unsigned long flags);
@@ -100,7 +100,6 @@ static inline void mpx_mm_init(struct mm
{
}
static inline void mpx_notify_unmap(struct mm_struct *mm,
- struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
}
diff -puN arch/um/include/asm/mmu_context.h~mpx-rss-pass-no-vma arch/um/include/asm/mmu_context.h
--- a/arch/um/include/asm/mmu_context.h~mpx-rss-pass-no-vma 2019-04-19 09:42:05.789507768 -0700
+++ b/arch/um/include/asm/mmu_context.h 2019-04-19 09:42:57.962507638 -0700
@@ -22,7 +22,6 @@ static inline int arch_dup_mmap(struct m
}
extern void arch_exit_mmap(struct mm_struct *mm);
static inline void arch_unmap(struct mm_struct *mm,
- struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
}
diff -puN arch/unicore32/include/asm/mmu_context.h~mpx-rss-pass-no-vma arch/unicore32/include/asm/mmu_context.h
--- a/arch/unicore32/include/asm/mmu_context.h~mpx-rss-pass-no-vma 2019-04-19 09:42:06.189507767 -0700
+++ b/arch/unicore32/include/asm/mmu_context.h 2019-04-19 09:43:25.425507569 -0700
@@ -88,7 +88,6 @@ static inline int arch_dup_mmap(struct m
}
static inline void arch_unmap(struct mm_struct *mm,
- struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
}
diff -puN arch/powerpc/include/asm/mmu_context.h~mpx-rss-pass-no-vma arch/powerpc/include/asm/mmu_context.h
--- a/arch/powerpc/include/asm/mmu_context.h~mpx-rss-pass-no-vma 2019-04-19 09:42:06.388507766 -0700
+++ b/arch/powerpc/include/asm/mmu_context.h 2019-04-19 09:43:27.392507564 -0700
@@ -237,7 +237,6 @@ extern void arch_exit_mmap(struct mm_str
#endif
static inline void arch_unmap(struct mm_struct *mm,
- struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
_
The logic for freeing an imported buffer that has a virtual address is
broken: it frees the buffer instead of unmapping the dma-buf.
Fix this by reversing the if ladder and checking first whether the
buffer is imported.
Fixes: b9068cde51ee ("drm/cma-helper: Add DRM_GEM_CMA_VMAP_DRIVER_OPS")
Cc: stable(a)vger.kernel.org
Reported-by: "Li, Tingqian" <tingqian.li(a)intel.com>
Signed-off-by: Noralf Trønnes <noralf(a)tronnes.org>
---
This bug is present in 5.0 and it only affects tinydrm drivers that import
buffers, which is rare if anyone at all is doing it. I'll apply this to
drm-misc-next and let it trickle down through stable unless someone thinks
otherwise.
Noralf.
drivers/gpu/drm/drm_gem_cma_helper.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index cc26625b4b33..e01ceed09e67 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -186,13 +186,13 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
cma_obj = to_drm_gem_cma_obj(gem_obj);
- if (cma_obj->vaddr) {
- dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
- cma_obj->vaddr, cma_obj->paddr);
- } else if (gem_obj->import_attach) {
+ if (gem_obj->import_attach) {
if (cma_obj->vaddr)
dma_buf_vunmap(gem_obj->import_attach->dmabuf, cma_obj->vaddr);
drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
+ } else if (cma_obj->vaddr) {
+ dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
+ cma_obj->vaddr, cma_obj->paddr);
}
drm_gem_object_release(gem_obj);
--
2.20.1
Janusz Krzysztofik (14):
media: ov6650: Fix MODDULE_DESCRIPTION
media: ov6650: Fix control handler not freed on init error
media: ov6650: Fix unverified arguments used in .set_fmt()
media: ov6650: Fix unverified arguments accepted by .get_fmt()
media: ov6650: Fix unverified arguments accepted by .enum_mbus_code()
media: ov6650: Fix unverified pad IDs accepted by
.get/set_selectioon()
media: ov6650: Fix unverified pad IDs accepted by
.g/s_frame_interval()
media: ov6650: Fix crop rectangle alignment not passed back
media: ov6650: Fix incorrect use of JPEG colorspace
media: ov6650: Fix some format attributes not under control
media: ov6650: Fix .get_fmt() V4L2_SUBDEV_FORMAT_TRY support
media: ov6650: Fix default format not applied on device probe
media: ov6650: Fix stored frame format not in sync with hardware
media: ov6650: Fix stored crop rectangle not in sync with hardware
drivers/media/i2c/ov6650.c | 183 ++++++++++++++++++++++++++-----------
1 file changed, 131 insertions(+), 52 deletions(-)
--
2.21.0
Some devices come online in write protected state and switch to
read-write once they are ready to process I/O requests. These devices
broke with commit 20bd1d026aac ("scsi: sd: Keep disk read-only when
re-reading partition") because we had no way to distinguish between a
user decision to set a block_device read-only and the actual hardware
device being write-protected.
Because partitions are dropped and recreated on revalidate we are
unable to persist any user-provided policy in hd_struct. Introduce a
bitmap in struct gendisk to track the user configuration. This bitmap
is updated when BLKROSET is called on a given disk or partition.
A helper function, get_user_ro(), is provided to determine whether the
ioctl has forced read-only state for a given block device. This helper
is used by set_disk_ro() and add_partition() to ensure that both
existing and newly created partitions will get the correct state.
- If BLKROSET sets a whole disk device read-only, all partitions will
now end up in a read-only state.
- If BLKROSET sets a given partition read-only, that partition will
remain read-only post revalidate.
- Otherwise both the whole disk device and any partitions will
reflect the write protect state of the underlying device.
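For reference, the user-facing knob is the BLKROSET ioctl. A minimal
userspace sketch (error handling elided; /dev/sda1 is purely
illustrative):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>

	int main(void)
	{
		int ro = 1;	/* 1 = set read-only, 0 = clear */
		int fd = open("/dev/sda1", O_RDONLY);

		ioctl(fd, BLKROSET, &ro);	/* sets this partition's user ro bit */
		return 0;
	}

With this patch the bit is recorded in gendisk's user_ro_bitmap
(indexed by partition number, 0 for the whole disk), so it survives
partition rescans instead of being lost with the recreated hd_struct.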
Since nobody knows what "policy" means, rename the field to
"read_only" for clarity.
Cc: Jeremy Cline <jeremy(a)jcline.org>
Cc: Oleksii Kurochko <olkuroch(a)cisco.com>
Cc: stable(a)vger.kernel.org # v4.16+
Reported-by: Oleksii Kurochko <olkuroch(a)cisco.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=201221
Fixes: 20bd1d026aac ("scsi: sd: Keep disk read-only when re-reading partition")
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
---
v3:
- Drop ?: since gcc complains about mixing int and bool (zeroday)
- Drop EXPORT_SYMBOL (hch)
- s/policy/read_only/ and make it a boolean
v2:
- Track user read-only state in a bitmap
- Work around the regression that caused us to drop user
preferences on revalidate
---
block/blk-core.c | 2 +-
block/genhd.c | 34 ++++++++++++++++++++++++----------
block/ioctl.c | 4 ++++
block/partition-generic.c | 7 +++++--
drivers/scsi/sd.c | 4 +---
include/linux/genhd.h | 11 +++++++----
6 files changed, 42 insertions(+), 20 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 4673ebe42255..932f179a9095 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -792,7 +792,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
{
const int op = bio_op(bio);
- if (part->policy && op_is_write(op)) {
+ if (part->read_only && op_is_write(op)) {
char b[BDEVNAME_SIZE];
if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
diff --git a/block/genhd.c b/block/genhd.c
index 703267865f14..9f3a93ca1851 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1539,26 +1539,40 @@ static void set_disk_ro_uevent(struct gendisk *gd, int ro)
kobject_uevent_env(&disk_to_dev(gd)->kobj, KOBJ_CHANGE, envp);
}
-void set_device_ro(struct block_device *bdev, int flag)
+void set_device_ro(struct block_device *bdev, bool state)
{
- bdev->bd_part->policy = flag;
+ bdev->bd_part->read_only = state;
}
EXPORT_SYMBOL(set_device_ro);
-void set_disk_ro(struct gendisk *disk, int flag)
+bool get_user_ro(struct gendisk *disk, unsigned int partno)
+{
+ /* Is the user read-only bit set for the whole disk device? */
+ if (test_bit(0, disk->user_ro_bitmap))
+ return true;
+
+ /* Is the user read-only bit set for this particular partition? */
+ if (test_bit(partno, disk->user_ro_bitmap))
+ return true;
+
+ return false;
+}
+
+void set_disk_ro(struct gendisk *disk, bool state)
{
struct disk_part_iter piter;
struct hd_struct *part;
- if (disk->part0.policy != flag) {
- set_disk_ro_uevent(disk, flag);
- disk->part0.policy = flag;
- }
+ if (disk->part0.read_only != state)
+ set_disk_ro_uevent(disk, state);
- disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
+ disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY_PART0);
while ((part = disk_part_iter_next(&piter)))
- part->policy = flag;
+ if (get_user_ro(disk, part->partno))
+ part->read_only = true;
+ else
+ part->read_only = state;
disk_part_iter_exit(&piter);
}
@@ -1568,7 +1582,7 @@ int bdev_read_only(struct block_device *bdev)
{
if (!bdev)
return 0;
- return bdev->bd_part->policy;
+ return bdev->bd_part->read_only;
}
EXPORT_SYMBOL(bdev_read_only);
diff --git a/block/ioctl.c b/block/ioctl.c
index 4825c78a6baa..41206df89485 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -451,6 +451,10 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
return ret;
if (get_user(n, (int __user *)arg))
return -EFAULT;
+ if (n)
+ set_bit(bdev->bd_partno, bdev->bd_disk->user_ro_bitmap);
+ else
+ clear_bit(bdev->bd_partno, bdev->bd_disk->user_ro_bitmap);
set_device_ro(bdev, n);
return 0;
}
diff --git a/block/partition-generic.c b/block/partition-generic.c
index 8e596a8dff32..2bade849cc5c 100644
--- a/block/partition-generic.c
+++ b/block/partition-generic.c
@@ -98,7 +98,7 @@ static ssize_t part_ro_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct hd_struct *p = dev_to_part(dev);
- return sprintf(buf, "%d\n", p->policy ? 1 : 0);
+ return sprintf(buf, "%u\n", p->read_only ? 1 : 0);
}
static ssize_t part_alignment_offset_show(struct device *dev,
@@ -338,7 +338,10 @@ struct hd_struct *add_partition(struct gendisk *disk, int partno,
queue_limit_discard_alignment(&disk->queue->limits, start);
p->nr_sects = len;
p->partno = partno;
- p->policy = get_disk_ro(disk);
+ if (get_user_ro(disk, partno))
+ p->read_only = true;
+ else
+ p->read_only = get_disk_ro(disk);
if (info) {
struct partition_meta_info *pinfo = alloc_part_info(disk);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 251db30d0882..9d8e15d03d2b 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2608,10 +2608,8 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
int res;
struct scsi_device *sdp = sdkp->device;
struct scsi_mode_data data;
- int disk_ro = get_disk_ro(sdkp->disk);
int old_wp = sdkp->write_prot;
- set_disk_ro(sdkp->disk, 0);
if (sdp->skip_ms_page_3f) {
sd_first_printk(KERN_NOTICE, sdkp, "Assuming Write Enabled\n");
return;
@@ -2649,7 +2647,7 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
"Test WP failed, assume Write Enabled\n");
} else {
sdkp->write_prot = ((data.device_specific & 0x80) != 0);
- set_disk_ro(sdkp->disk, sdkp->write_prot || disk_ro);
+ set_disk_ro(sdkp->disk, sdkp->write_prot);
if (sdkp->first_scan || old_wp != sdkp->write_prot) {
sd_printk(KERN_NOTICE, sdkp, "Write Protect is %s\n",
sdkp->write_prot ? "on" : "off");
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 06c0fd594097..1abde0e88ccb 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -118,7 +118,8 @@ struct hd_struct {
unsigned int discard_alignment;
struct device __dev;
struct kobject *holder_dir;
- int policy, partno;
+ bool read_only;
+ int partno;
struct partition_meta_info *info;
#ifdef CONFIG_FAIL_MAKE_REQUEST
int make_it_fail;
@@ -194,6 +195,7 @@ struct gendisk {
*/
struct disk_part_tbl __rcu *part_tbl;
struct hd_struct part0;
+ DECLARE_BITMAP(user_ro_bitmap, DISK_MAX_PARTS);
const struct block_device_operations *fops;
struct request_queue *queue;
@@ -431,12 +433,13 @@ extern void del_gendisk(struct gendisk *gp);
extern struct gendisk *get_gendisk(dev_t dev, int *partno);
extern struct block_device *bdget_disk(struct gendisk *disk, int partno);
-extern void set_device_ro(struct block_device *bdev, int flag);
-extern void set_disk_ro(struct gendisk *disk, int flag);
+extern void set_device_ro(struct block_device *bdev, bool state);
+extern void set_disk_ro(struct gendisk *disk, bool state);
+extern bool get_user_ro(struct gendisk *disk, unsigned int partno);
static inline int get_disk_ro(struct gendisk *disk)
{
- return disk->part0.policy;
+ return disk->part0.read_only;
}
extern void disk_block_events(struct gendisk *disk);
--
2.21.0
At present, the flow for calculating the AC timing of read/write cycles
in SDR mode is:
At first, calculate the high hold time, valid for both read and write
cycles, using the larger of tREH_min and tWH_min.
Secondly, calculate the WE# pulse width using tWP_min.
Thirdly, calculate the RE# pulse width using the larger of tREA_max
and tRP_min.
But the NAND spec also requires the controller to meet the read/write
cycle times: the write cycle time must be at least tWC_min, and the read
cycle time at least tRC_min. Obviously, we do not achieve that now.
This patch corrects the low level time calculation to meet the minimum
read/write cycle times required. After the high hold time is obtained,
the WE# low level time is guaranteed to meet the tWP_min and tWC_min
requirements, and the RE# low level time is guaranteed to meet the
tREA_max, tRP_min and tRC_min requirements.
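As a worked example of the corrected WE# calculation (illustrative
numbers only; they assume rate is the NFI clock in kHz as used by this
function, here 26000 kHz, with ONFI-mode-0-like timings
tWH_min = 30000 ps, tWP_min = 50000 ps, tWC_min = 100000 ps):

	twh   = DIV_ROUND_UP(30 * 26000, 1000000) - 1 = 0;   /* 1 clock cycle */
	thold = (0 + 1) * 1000000 / 26000 * 1000 = 38000;    /* ps */
	twst  = max(50000, 100000 - 38000) / 1000 = 62;      /* ns; tWC dominates */
	twst  = DIV_ROUND_UP(62 * 26000, 1000000) - 1 = 1;   /* 2 clock cycles */

The full WE# cycle then spans 3 clock cycles (~115 ns), satisfying
tWC_min; computing twst from tWP_min alone cannot guarantee that on
every clock/timing combination.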
Fixes: edfee3619c49 ("mtd: nand: mtk: add ->setup_data_interface() hook")
Cc: stable(a)vger.kernel.org
Signed-off-by: Xiaolei Li <xiaolei.li(a)mediatek.com>
---
drivers/mtd/nand/raw/mtk_nand.c | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
index b6b4602f5132..4fbb0c6ecae3 100644
--- a/drivers/mtd/nand/raw/mtk_nand.c
+++ b/drivers/mtd/nand/raw/mtk_nand.c
@@ -508,7 +508,8 @@ static int mtk_nfc_setup_data_interface(struct nand_chip *chip, int csline,
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
const struct nand_sdr_timings *timings;
- u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt;
+ u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst = 0, trlt = 0;
+ u32 thold;
timings = nand_get_sdr_timings(conf);
if (IS_ERR(timings))
@@ -544,11 +545,28 @@ static int mtk_nfc_setup_data_interface(struct nand_chip *chip, int csline,
twh = DIV_ROUND_UP(twh * rate, 1000000) - 1;
twh &= 0xf;
- twst = timings->tWP_min / 1000;
+ /* Calculate real WE#/RE# hold time in nanosecond */
+ thold = (twh + 1) * 1000000 / rate;
+ /* nanosecond to picosecond */
+ thold *= 1000;
+
+ /**
+ * WE# low level time should be expanded to meet WE# pulse time
+ * and WE# cycle time at the same time.
+ */
+ if (thold < timings->tWC_min)
+ twst = timings->tWC_min - thold;
+ twst = max(timings->tWP_min, twst) / 1000;
twst = DIV_ROUND_UP(twst * rate, 1000000) - 1;
twst &= 0xf;
- trlt = max(timings->tREA_max, timings->tRP_min) / 1000;
+ /**
+ * RE# low level time should be expanded to meet RE# pulse time,
+ * RE# access time and RE# cycle time at the same time.
+ */
+ if (thold < timings->tRC_min)
+ trlt = timings->tRC_min - thold;
+ trlt = max3(trlt, timings->tREA_max, timings->tRP_min) / 1000;
trlt = DIV_ROUND_UP(trlt * rate, 1000000) - 1;
trlt &= 0xf;
--
2.18.0