This is a note to let you know that I've just added the patch titled
drm/i915: Always call to intel_display_set_init_power() in resume_early.
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-i915-always-call-to-intel_display_set_init_power-in-resume_early.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From d13a8479f3584613b6aacbb793eae64578b8f69a Mon Sep 17 00:00:00 2001
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Date: Tue, 16 Jan 2018 16:53:24 +0100
Subject: drm/i915: Always call to intel_display_set_init_power() in resume_early.
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
commit d13a8479f3584613b6aacbb793eae64578b8f69a upstream.
intel_power_domains_init_hw() calls set_init_power, but when runtime
power management is used this call is skipped, which prevents hw readout
from taking place.
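Condensed, the fixed resume-early flow reads as follows (a sketch of the
hunk below; surrounding code omitted):

    /* i915_drm_resume_early(), condensed */
    if (IS_BROXTON(dev_priv) ||
        !(dev_priv->suspended_to_idle && dev_priv->csr.dmc_payload))
            intel_power_domains_init_hw(dev_priv, true);
    else
            /* init_hw was skipped, but hw readout still needs the
             * init power domain to be enabled */
            intel_display_set_init_power(dev_priv, true);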
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104172
Link: https://patchwork.freedesktop.org/patch/msgid/20180116155324.75120-1-maarte…
Fixes: bc87229f323e ("drm/i915/skl: enable PC9/10 power states during suspend-to-idle")
Cc: Nivedita Swaminathan <nivedita.swaminathan@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: <stable@vger.kernel.org> # v4.5+
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit ac25dfed15d470d7f23dd817e965b54aa3f94a1e)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/i915/i915_drv.c | 2 ++
1 file changed, 2 insertions(+)
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1703,6 +1703,8 @@ static int i915_drm_resume_early(struct
if (IS_BROXTON(dev_priv) ||
!(dev_priv->suspended_to_idle && dev_priv->csr.dmc_payload))
intel_power_domains_init_hw(dev_priv, true);
+ else
+ intel_display_set_init_power(dev_priv, true);
enable_rpm_wakeref_asserts(dev_priv);
Patches currently in stable-queue which might be from maarten.lankhorst@linux.intel.com are
queue-4.9/drm-i915-always-call-to-intel_display_set_init_power-in-resume_early.patch
This is a note to let you know that I've just added the patch titled
drm/amdgpu: Notify sbios device ready before send request
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-amdgpu-notify-sbios-device-ready-before-send-request.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 1bced75f4ab04bec55aecb57d99435dc6d0ae5a0 Mon Sep 17 00:00:00 2001
From: Rex Zhu <Rex.Zhu@amd.com>
Date: Tue, 27 Feb 2018 18:20:53 +0800
Subject: drm/amdgpu: Notify sbios device ready before send request
From: Rex Zhu <Rex.Zhu@amd.com>
commit 1bced75f4ab04bec55aecb57d99435dc6d0ae5a0 upstream.
This is required if a platform supports PCIe root complex
core voltage reduction. After receiving this notification,
the SBIOS can apply its default PCIe root complex power policy.
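Condensed, the fix makes the request path notify first (a sketch of the
hunk below; the rest of the function is unchanged):

    /* amdgpu_acpi_pcie_performance_request(), condensed: tell the
     * SBIOS the device is ready before sending the request; if the
     * notification fails, do not send the request at all. */
    if (amdgpu_acpi_pcie_notify_device_ready(adev))
            return -EINVAL;
    /* ... ACPI handle lookup and performance request follow ... */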
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
@@ -540,6 +540,9 @@ int amdgpu_acpi_pcie_performance_request
size_t size;
u32 retry = 3;
+ if (amdgpu_acpi_pcie_notify_device_ready(adev))
+ return -EINVAL;
+
/* Get the device handle */
handle = ACPI_HANDLE(&adev->pdev->dev);
if (!handle)
Patches currently in stable-queue which might be from Rex.Zhu@amd.com are
queue-4.9/drm-amdgpu-notify-sbios-device-ready-before-send-request.patch
This is a note to let you know that I've just added the patch titled
drm/amdgpu: Fix deadlock on runtime suspend
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-amdgpu-fix-deadlock-on-runtime-suspend.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From aa0aad57909eb321746325951d66af88a83bc956 Mon Sep 17 00:00:00 2001
From: Lukas Wunner <lukas@wunner.de>
Date: Sun, 11 Feb 2018 10:38:28 +0100
Subject: drm/amdgpu: Fix deadlock on runtime suspend
From: Lukas Wunner <lukas@wunner.de>
commit aa0aad57909eb321746325951d66af88a83bc956 upstream.
amdgpu's ->runtime_suspend hook calls drm_kms_helper_poll_disable(),
which waits for the output poll worker to finish if it's running.
The output poll worker meanwhile calls pm_runtime_get_sync() in
amdgpu's ->detect hooks, which waits for the ongoing suspend to finish,
causing a deadlock.
Fix by not acquiring a runtime PM ref if the ->detect hooks are called
in the output poll worker's context. This is safe because the poll
worker is only enabled while the device is runtime active and we know
that ->runtime_suspend waits for it to finish.
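The shape of the fix, applied to each ->detect hook in the diff below
(a condensed sketch; the detection logic itself is unchanged):

    /* Only take a runtime PM ref when not called from the output
     * poll worker; the worker only runs while the device is runtime
     * active, and ->runtime_suspend waits for it to finish. */
    if (!drm_kms_helper_is_poll_worker()) {
            r = pm_runtime_get_sync(connector->dev->dev);
            if (r < 0)
                    return connector_status_disconnected;
    }

    /* ... actual detection ... */

    if (!drm_kms_helper_is_poll_worker()) {
            pm_runtime_mark_last_busy(connector->dev->dev);
            pm_runtime_put_autosuspend(connector->dev->dev);
    }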
Fixes: d38ceaf99ed0 ("drm/amdgpu: add core driver (v4)")
Cc: stable@vger.kernel.org # v4.2+: 27d4ee03078a: workqueue: Allow retrieval of current task's work struct
Cc: stable@vger.kernel.org # v4.2+: 25c058ccaf2e: drm: Allow determining if current task is output poll worker
Cc: Alex Deucher <alexander.deucher@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Link: https://patchwork.freedesktop.org/patch/msgid/4c9bf72aacae1eef062bd134cd112…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 ++++++++++++++++---------
1 file changed, 38 insertions(+), 20 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -739,9 +739,11 @@ amdgpu_connector_lvds_detect(struct drm_
enum drm_connector_status ret = connector_status_disconnected;
int r;
- r = pm_runtime_get_sync(connector->dev->dev);
- if (r < 0)
- return connector_status_disconnected;
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+ if (r < 0)
+ return connector_status_disconnected;
+ }
if (encoder) {
struct amdgpu_encoder *amdgpu_encoder = to_amdgpu_encoder(encoder);
@@ -760,8 +762,12 @@ amdgpu_connector_lvds_detect(struct drm_
/* check acpi lid status ??? */
amdgpu_connector_update_scratch_regs(connector, ret);
- pm_runtime_mark_last_busy(connector->dev->dev);
- pm_runtime_put_autosuspend(connector->dev->dev);
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ pm_runtime_mark_last_busy(connector->dev->dev);
+ pm_runtime_put_autosuspend(connector->dev->dev);
+ }
+
return ret;
}
@@ -871,9 +877,11 @@ amdgpu_connector_vga_detect(struct drm_c
enum drm_connector_status ret = connector_status_disconnected;
int r;
- r = pm_runtime_get_sync(connector->dev->dev);
- if (r < 0)
- return connector_status_disconnected;
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+ if (r < 0)
+ return connector_status_disconnected;
+ }
encoder = amdgpu_connector_best_single_encoder(connector);
if (!encoder)
@@ -927,8 +935,10 @@ amdgpu_connector_vga_detect(struct drm_c
amdgpu_connector_update_scratch_regs(connector, ret);
out:
- pm_runtime_mark_last_busy(connector->dev->dev);
- pm_runtime_put_autosuspend(connector->dev->dev);
+ if (!drm_kms_helper_is_poll_worker()) {
+ pm_runtime_mark_last_busy(connector->dev->dev);
+ pm_runtime_put_autosuspend(connector->dev->dev);
+ }
return ret;
}
@@ -991,9 +1001,11 @@ amdgpu_connector_dvi_detect(struct drm_c
enum drm_connector_status ret = connector_status_disconnected;
bool dret = false, broken_edid = false;
- r = pm_runtime_get_sync(connector->dev->dev);
- if (r < 0)
- return connector_status_disconnected;
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+ if (r < 0)
+ return connector_status_disconnected;
+ }
if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
ret = connector->status;
@@ -1118,8 +1130,10 @@ out:
amdgpu_connector_update_scratch_regs(connector, ret);
exit:
- pm_runtime_mark_last_busy(connector->dev->dev);
- pm_runtime_put_autosuspend(connector->dev->dev);
+ if (!drm_kms_helper_is_poll_worker()) {
+ pm_runtime_mark_last_busy(connector->dev->dev);
+ pm_runtime_put_autosuspend(connector->dev->dev);
+ }
return ret;
}
@@ -1362,9 +1376,11 @@ amdgpu_connector_dp_detect(struct drm_co
struct drm_encoder *encoder = amdgpu_connector_best_single_encoder(connector);
int r;
- r = pm_runtime_get_sync(connector->dev->dev);
- if (r < 0)
- return connector_status_disconnected;
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+ if (r < 0)
+ return connector_status_disconnected;
+ }
if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
ret = connector->status;
@@ -1432,8 +1448,10 @@ amdgpu_connector_dp_detect(struct drm_co
amdgpu_connector_update_scratch_regs(connector, ret);
out:
- pm_runtime_mark_last_busy(connector->dev->dev);
- pm_runtime_put_autosuspend(connector->dev->dev);
+ if (!drm_kms_helper_is_poll_worker()) {
+ pm_runtime_mark_last_busy(connector->dev->dev);
+ pm_runtime_put_autosuspend(connector->dev->dev);
+ }
return ret;
}
Patches currently in stable-queue which might be from lukas@wunner.de are
queue-4.9/workqueue-allow-retrieval-of-current-task-s-work-struct.patch
queue-4.9/drm-radeon-fix-deadlock-on-runtime-suspend.patch
queue-4.9/drm-amdgpu-fix-deadlock-on-runtime-suspend.patch
queue-4.9/drm-allow-determining-if-current-task-is-output-poll-worker.patch
queue-4.9/drm-nouveau-fix-deadlock-on-runtime-suspend.patch
This is a note to let you know that I've just added the patch titled
drm/amdgpu:Correct max uvd handles
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-amdgpu-correct-max-uvd-handles.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 0e5ee33d2a54e4c55fe92857f23e1cbb0440d6de Mon Sep 17 00:00:00 2001
From: James Zhu <James.Zhu@amd.com>
Date: Tue, 6 Mar 2018 14:43:50 -0500
Subject: drm/amdgpu:Correct max uvd handles
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: James Zhu <James.Zhu@amd.com>
commit 0e5ee33d2a54e4c55fe92857f23e1cbb0440d6de upstream.
Max uvd handles should use adev->uvd.max_handles instead of
AMDGPU_MAX_UVD_HANDLES here.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -277,7 +277,7 @@ int amdgpu_uvd_suspend(struct amdgpu_dev
if (atomic_read(&adev->uvd.handles[i]))
break;
- if (i == AMDGPU_MAX_UVD_HANDLES)
+ if (i == adev->uvd.max_handles)
return 0;
cancel_delayed_work_sync(&adev->uvd.idle_work);
Patches currently in stable-queue which might be from James.Zhu@amd.com are
queue-4.9/drm-amdgpu-correct-max-uvd-handles.patch
queue-4.9/drm-amdgpu-always-save-uvd-vcpu_bo-in-vm-mode.patch
This is a note to let you know that I've just added the patch titled
drm/amdgpu:Always save uvd vcpu_bo in VM Mode
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-amdgpu-always-save-uvd-vcpu_bo-in-vm-mode.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From f8bee6135e167f5b35b7789c74c2956dad14d0d5 Mon Sep 17 00:00:00 2001
From: James Zhu <James.Zhu@amd.com>
Date: Tue, 6 Mar 2018 14:52:35 -0500
Subject: drm/amdgpu:Always save uvd vcpu_bo in VM Mode
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: James Zhu <James.Zhu@amd.com>
commit f8bee6135e167f5b35b7789c74c2956dad14d0d5 upstream.
When UVD is in VM mode, no uvd handles are exchanged and
uvd.handles[] stays 0, so the vcpu_bo must always be saved;
otherwise the amdgpu driver will fail during suspend/resume.
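Condensed, the suspend path becomes (a sketch of the hunk below):

    /* amdgpu_uvd_suspend(), condensed: the "no active handles, skip
     * the save" shortcut is only valid for physical mode; in VM mode
     * (Polaris10 and later here) uvd.handles[] stays 0, so always
     * fall through and save the vcpu_bo. */
    if (adev->asic_type < CHIP_POLARIS10) {
            for (i = 0; i < adev->uvd.max_handles; ++i)
                    if (atomic_read(&adev->uvd.handles[i]))
                            break;
            if (i == adev->uvd.max_handles)
                    return 0;       /* nothing active, skip save */
    }
    /* ... save vcpu_bo ... */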
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105021
Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -273,12 +273,15 @@ int amdgpu_uvd_suspend(struct amdgpu_dev
if (adev->uvd.vcpu_bo == NULL)
return 0;
- for (i = 0; i < adev->uvd.max_handles; ++i)
- if (atomic_read(&adev->uvd.handles[i]))
- break;
+ /* only valid for physical mode */
+ if (adev->asic_type < CHIP_POLARIS10) {
+ for (i = 0; i < adev->uvd.max_handles; ++i)
+ if (atomic_read(&adev->uvd.handles[i]))
+ break;
- if (i == adev->uvd.max_handles)
- return 0;
+ if (i == adev->uvd.max_handles)
+ return 0;
+ }
cancel_delayed_work_sync(&adev->uvd.idle_work);
Patches currently in stable-queue which might be from James.Zhu@amd.com are
queue-4.9/drm-amdgpu-correct-max-uvd-handles.patch
queue-4.9/drm-amdgpu-always-save-uvd-vcpu_bo-in-vm-mode.patch
This is a note to let you know that I've just added the patch titled
drm: Allow determining if current task is output poll worker
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-allow-determining-if-current-task-is-output-poll-worker.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 25c058ccaf2ebbc3e250ec1e199e161f91fe27d4 Mon Sep 17 00:00:00 2001
From: Lukas Wunner <lukas@wunner.de>
Date: Wed, 14 Feb 2018 06:41:25 +0100
Subject: drm: Allow determining if current task is output poll worker
From: Lukas Wunner <lukas@wunner.de>
commit 25c058ccaf2ebbc3e250ec1e199e161f91fe27d4 upstream.
Introduce a helper to determine if the current task is an output poll
worker.
This allows us to fix a long-standing deadlock in several DRM drivers
wherein the ->runtime_suspend callback waits for the output poll worker
to finish and the worker in turn calls a ->detect callback which waits
for runtime suspend to finish. The ->detect callback is invoked from
multiple call sites and waiting for runtime suspend to finish is the
correct thing to do except if it's executing in the context of the
worker.
v2: Expand kerneldoc to specifically mention deadlock between
output poll worker and autosuspend worker as use case. (Lyude)
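A driver-side usage sketch (this mirrors the amdgpu/radeon/nouveau
deadlock fixes elsewhere in this queue; the hunk below adds only the
helper itself):

    /* In a connector ->detect hook: skip runtime PM ref-counting
     * when running as the output poll worker, to avoid deadlocking
     * against ->runtime_suspend. */
    if (!drm_kms_helper_is_poll_worker()) {
            r = pm_runtime_get_sync(connector->dev->dev);
            if (r < 0)
                    return connector_status_disconnected;
    }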
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Link: https://patchwork.freedesktop.org/patch/msgid/3549ce32e7f1467102e70d3e9cbf7…
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/drm_probe_helper.c | 20 ++++++++++++++++++++
include/drm/drm_crtc_helper.h | 1 +
2 files changed, 21 insertions(+)
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -461,6 +461,26 @@ out:
}
/**
+ * drm_kms_helper_is_poll_worker - is %current task an output poll worker?
+ *
+ * Determine if %current task is an output poll worker. This can be used
+ * to select distinct code paths for output polling versus other contexts.
+ *
+ * One use case is to avoid a deadlock between the output poll worker and
+ * the autosuspend worker wherein the latter waits for polling to finish
+ * upon calling drm_kms_helper_poll_disable(), while the former waits for
+ * runtime suspend to finish upon calling pm_runtime_get_sync() in a
+ * connector ->detect hook.
+ */
+bool drm_kms_helper_is_poll_worker(void)
+{
+ struct work_struct *work = current_work();
+
+ return work && work->func == output_poll_execute;
+}
+EXPORT_SYMBOL(drm_kms_helper_is_poll_worker);
+
+/**
* drm_kms_helper_poll_disable - disable output polling
* @dev: drm_device
*
--- a/include/drm/drm_crtc_helper.h
+++ b/include/drm/drm_crtc_helper.h
@@ -74,5 +74,6 @@ extern void drm_kms_helper_hotplug_event
extern void drm_kms_helper_poll_disable(struct drm_device *dev);
extern void drm_kms_helper_poll_enable(struct drm_device *dev);
extern void drm_kms_helper_poll_enable_locked(struct drm_device *dev);
+extern bool drm_kms_helper_is_poll_worker(void);
#endif
Patches currently in stable-queue which might be from lukas@wunner.de are
queue-4.9/workqueue-allow-retrieval-of-current-task-s-work-struct.patch
queue-4.9/drm-radeon-fix-deadlock-on-runtime-suspend.patch
queue-4.9/drm-amdgpu-fix-deadlock-on-runtime-suspend.patch
queue-4.9/drm-allow-determining-if-current-task-is-output-poll-worker.patch
queue-4.9/drm-nouveau-fix-deadlock-on-runtime-suspend.patch
This is a note to let you know that I've just added the patch titled
bcache: don't attach backing with duplicate UUID
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
bcache-don-t-attach-backing-with-duplicate-uuid.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 86755b7a96faed57f910f9e6b8061e019ac1ec08 Mon Sep 17 00:00:00 2001
From: Michael Lyle <mlyle@lyle.org>
Date: Mon, 5 Mar 2018 13:41:55 -0800
Subject: bcache: don't attach backing with duplicate UUID
From: Michael Lyle <mlyle@lyle.org>
commit 86755b7a96faed57f910f9e6b8061e019ac1ec08 upstream.
This can happen e.g. during disk cloning.
This is an incomplete fix: it does not catch duplicate UUIDs earlier
when things are still unattached. It does not unregister the device.
Further changes to cope better with this are planned but conflict with
Coly's ongoing improvements to handling device errors. In the meantime,
one can manually stop the device after this has happened.
Attempts to attach a duplicate device result in:
[ 136.372404] loop: module loaded
[ 136.424461] bcache: register_bdev() registered backing device loop0
[ 136.424464] bcache: bch_cached_dev_attach() Tried to attach loop0 but duplicate UUID already attached
My test procedure is:
  dd if=/dev/sdb1 of=imgfile bs=1024 count=262144
  losetup -f imgfile
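The guard itself, condensed from the hunk below, walks the cache set's
already-attached backing devices and rejects a superblock UUID match:

    /* bch_cached_dev_attach(), condensed */
    list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) {
            if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) {
                    pr_err("Tried to attach %s but duplicate UUID already attached",
                           buf);
                    return -EINVAL;
            }
    }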
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/md/bcache/super.c | 11 +++++++++++
1 file changed, 11 insertions(+)
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -937,6 +937,7 @@ int bch_cached_dev_attach(struct cached_
uint32_t rtime = cpu_to_le32(get_seconds());
struct uuid_entry *u;
char buf[BDEVNAME_SIZE];
+ struct cached_dev *exist_dc, *t;
bdevname(dc->bdev, buf);
@@ -960,6 +961,16 @@ int bch_cached_dev_attach(struct cached_
return -EINVAL;
}
+ /* Check whether already attached */
+ list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) {
+ if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) {
+ pr_err("Tried to attach %s but duplicate UUID already attached",
+ buf);
+
+ return -EINVAL;
+ }
+ }
+
u = uuid_find(c, dc->sb.uuid);
if (u &&
Patches currently in stable-queue which might be from mlyle@lyle.org are
queue-4.9/bcache-fix-crashes-in-duplicate-cache-device-register.patch
queue-4.9/bcache-don-t-attach-backing-with-duplicate-uuid.patch