This is a note to let you know that I've just added the patch titled
KVM: x86: pvclock: Handle first-time write to pvclock-page contains random junk
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-x86-pvclock-handle-first-time-write-to-pvclock-page-contains-random-junk.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 51c4b8bba674cfd2260d173602c4dac08e4c3a99 Mon Sep 17 00:00:00 2001
From: Liran Alon <liran.alon(a)oracle.com>
Date: Sun, 5 Nov 2017 16:11:30 +0200
Subject: KVM: x86: pvclock: Handle first-time write to pvclock-page contains random junk
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Liran Alon <liran.alon(a)oracle.com>
commit 51c4b8bba674cfd2260d173602c4dac08e4c3a99 upstream.
When the guest passes KVM its pvclock-page GPA via a WRMSR to
MSR_KVM_SYSTEM_TIME / MSR_KVM_SYSTEM_TIME_NEW, KVM doesn't initialize
the pvclock-page to any starting values. It just requests a clock-update
which will happen before entering the guest.
The clock-update logic will call kvm_setup_pvclock_page() to update the
pvclock-page with info. However, kvm_setup_pvclock_page() *wrongly*
assumes that the version-field is initialized to an even number. This is
wrong because on the first-time write the field could hold any value.
The fix simply makes sure that if the first-time version-field is odd,
it is incremented once more to make it even, and only then does the
standard logic start. This follows the same logic as the other pvclock
shared-pages (see kvm_write_wall_clock() and record_steal_time()).
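For reference, the even/odd handshake that the fix restores can be
modelled in a few lines of plain C. This is a minimal userspace sketch,
not kernel code; fake_pvclock, host_update and guest_read are
illustrative names, not KVM's API:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for struct pvclock_vcpu_time_info. */
struct fake_pvclock { uint32_t version; uint64_t time; };

/* Host-side update, mirroring the fixed kvm_setup_pvclock_page() logic. */
static void host_update(struct fake_pvclock *pv, uint64_t now)
{
	if (pv->version & 1)	/* first-time write: page contains junk */
		++pv->version;	/* force it even before the handshake */
	pv->version += 1;	/* odd: update in progress */
	pv->time = now;
	pv->version += 1;	/* even: contents are stable again */
}

/* Guest-side seqlock-style read loop: spins while version is odd. */
static uint64_t guest_read(const struct fake_pvclock *pv)
{
	uint32_t v;
	uint64_t t;
	do {
		v = pv->version;
		t = pv->time;
	} while ((v & 1) || v != pv->version);
	return t;
}

int main(void)
{
	struct fake_pvclock pv = { .version = 0xdeadbeef /* odd junk */ };

	host_update(&pv, 42);
	printf("version=%u time=%llu\n", pv.version,
	       (unsigned long long)guest_read(&pv));
	return 0;
}

Without the junk check the two increments would leave the version odd
forever, and guest_read() would never terminate.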
Signed-off-by: Liran Alon <liran.alon(a)oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko(a)oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Reviewed-by: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/x86.c | 3 +++
1 file changed, 3 insertions(+)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1812,6 +1812,9 @@ static int kvm_guest_time_update(struct
*/
BUILD_BUG_ON(offsetof(struct pvclock_vcpu_time_info, version) != 0);
+ if (guest_hv_clock.version & 1)
+ ++guest_hv_clock.version; /* first time write, random junk */
+
vcpu->hv_clock.version = guest_hv_clock.version + 1;
kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
&vcpu->hv_clock,
Patches currently in stable-queue which might be from liran.alon(a)oracle.com are
queue-4.4/kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
queue-4.4/kvm-x86-pvclock-handle-first-time-write-to-pvclock-page-contains-random-junk.patch
This is a note to let you know that I've just added the patch titled
KVM: x86: inject exceptions produced by x86_decode_insn
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-x86-inject-exceptions-produced-by-x86_decode_insn.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6ea6e84309ca7e0e850b3083e6b09344ee15c290 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini(a)redhat.com>
Date: Fri, 10 Nov 2017 10:49:38 +0100
Subject: KVM: x86: inject exceptions produced by x86_decode_insn
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Paolo Bonzini <pbonzini(a)redhat.com>
commit 6ea6e84309ca7e0e850b3083e6b09344ee15c290 upstream.
Sometimes, a processor might execute an instruction while another
processor is updating the page tables for that instruction's code page,
but before the TLB shootdown completes. The interesting case happens
if the page is in the TLB.
In general, the processor will succeed in executing the instruction and
nothing bad happens. However, what if the instruction is an MMIO access?
If *that* happens, KVM invokes the emulator, and the emulator gets the
updated page tables. If the update side had marked the code page as
non-present, the page table walk will then fail, and so will x86_decode_insn.
Unfortunately, even though kvm_fetch_guest_virt is correctly returning
X86EMUL_PROPAGATE_FAULT, x86_decode_insn's caller treats the failure as
a fatal error if the instruction cannot simply be reexecuted (as is the
case for MMIO). And this in fact happened sometimes when rebooting
Windows 2012r2 guests. Just checking ctxt->have_exception and injecting
the exception if true is enough to fix the case.
Thanks to Eduardo Habkost for helping in the debugging of this issue.
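The shape of the fix is easy to model outside the kernel. Below is a
hedged userspace sketch of the failure path; decode_insn(),
inject_emulated_exception() and struct ctxt are stand-ins for the real
KVM emulator API, not the actual kernel functions:

#include <stdbool.h>
#include <stdio.h>

enum emul_result { EMULATE_DONE, EMULATE_FAIL };

struct ctxt { bool have_exception; };

/* Stand-in for x86_decode_insn(): the instruction fetch faulted, so
 * decode fails and records a pending guest exception in the context. */
static int decode_insn(struct ctxt *c)
{
	c->have_exception = true;
	return -1;
}

static bool inject_emulated_exception(struct ctxt *c)
{
	printf("injecting pending guest exception\n");
	return c->have_exception;
}

/* Mirrors the fixed failure path: prefer injecting the fault the
 * decoder produced over declaring fatal emulation failure. */
static enum emul_result emulate(struct ctxt *c)
{
	if (decode_insn(c) != 0) {
		if (c->have_exception && inject_emulated_exception(c))
			return EMULATE_DONE;
		return EMULATE_FAIL;
	}
	return EMULATE_DONE;
}

int main(void)
{
	struct ctxt c = { 0 };

	printf("result=%s\n", emulate(&c) == EMULATE_DONE ? "DONE" : "FAIL");
	return 0;
}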
Reported-by: Yanan Fu <yfu(a)redhat.com>
Cc: Eduardo Habkost <ehabkost(a)redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/x86.c | 2 ++
1 file changed, 2 insertions(+)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5429,6 +5429,8 @@ int x86_emulate_instruction(struct kvm_v
if (reexecute_instruction(vcpu, cr2, write_fault_to_spt,
emulation_type))
return EMULATE_DONE;
+ if (ctxt->have_exception && inject_emulated_exception(vcpu))
+ return EMULATE_DONE;
if (emulation_type & EMULTYPE_SKIP)
return EMULATE_FAIL;
return handle_emulation_failure(vcpu);
Patches currently in stable-queue which might be from pbonzini(a)redhat.com are
queue-4.4/kvm-x86-inject-exceptions-produced-by-x86_decode_insn.patch
queue-4.4/kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
queue-4.4/kvm-x86-pvclock-handle-first-time-write-to-pvclock-page-contains-random-junk.patch
This is a note to let you know that I've just added the patch titled
KVM: x86: Exit to user-mode on #UD intercept when emulator requires
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 61cb57c9ed631c95b54f8e9090c89d18b3695b3c Mon Sep 17 00:00:00 2001
From: Liran Alon <liran.alon(a)oracle.com>
Date: Sun, 5 Nov 2017 16:56:32 +0200
Subject: KVM: x86: Exit to user-mode on #UD intercept when emulator requires
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Liran Alon <liran.alon(a)oracle.com>
commit 61cb57c9ed631c95b54f8e9090c89d18b3695b3c upstream.
Instruction emulation after trapping a #UD exception can result in an
MMIO access, for example when emulating a MOVBE on a processor that
doesn't support the instruction. In this case, the #UD vmexit handler
must exit to user mode, but there wasn't any code to do so. Add it for
both VMX and SVM.
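To make the MOVBE example concrete: the instruction performs a
byte-swapping load or store, which the emulator must carry out itself on
CPUs lacking it, and the operand may live in MMIO that only userspace
can complete. A minimal sketch of the load semantics (illustrative
only, not the emulator's code):

#include <stdint.h>
#include <stdio.h>

/* MOVBE r32, m32 loads a 32-bit value and reverses its byte order.
 * On CPUs without the instruction, KVM's emulator performs this swap
 * itself after trapping the resulting #UD. */
static uint32_t emulate_movbe_load(const uint8_t *mem)
{
	return ((uint32_t)mem[0] << 24) | ((uint32_t)mem[1] << 16) |
	       ((uint32_t)mem[2] << 8)  |  (uint32_t)mem[3];
}

int main(void)
{
	uint8_t mmio_like[4] = { 0x12, 0x34, 0x56, 0x78 };

	printf("movbe load: 0x%08x\n", emulate_movbe_load(mmio_like));
	return 0;
}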
Signed-off-by: Liran Alon <liran.alon(a)oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko(a)oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Reviewed-by: Wanpeng Li <wanpeng.li(a)hotmail.com>
Reviewed-by: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/svm.c | 2 ++
arch/x86/kvm/vmx.c | 2 ++
2 files changed, 4 insertions(+)
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1696,6 +1696,8 @@ static int ud_interception(struct vcpu_s
int er;
er = emulate_instruction(&svm->vcpu, EMULTYPE_TRAP_UD);
+ if (er == EMULATE_USER_EXIT)
+ return 0;
if (er != EMULATE_DONE)
kvm_queue_exception(&svm->vcpu, UD_VECTOR);
return 1;
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5267,6 +5267,8 @@ static int handle_exception(struct kvm_v
return 1;
}
er = emulate_instruction(vcpu, EMULTYPE_TRAP_UD);
+ if (er == EMULATE_USER_EXIT)
+ return 0;
if (er != EMULATE_DONE)
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
Patches currently in stable-queue which might be from liran.alon(a)oracle.com are
queue-4.4/kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
queue-4.4/kvm-x86-pvclock-handle-first-time-write-to-pvclock-page-contains-random-junk.patch
This is a note to let you know that I've just added the patch titled
btrfs: clear space cache inode generation always
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
btrfs-clear-space-cache-inode-generation-always.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8e138e0d92c6c9d3d481674fb14e3439b495be37 Mon Sep 17 00:00:00 2001
From: Josef Bacik <jbacik(a)fb.com>
Date: Fri, 17 Nov 2017 14:50:46 -0500
Subject: btrfs: clear space cache inode generation always
From: Josef Bacik <jbacik(a)fb.com>
commit 8e138e0d92c6c9d3d481674fb14e3439b495be37 upstream.
We discovered a box that had double allocations, and suspected the space
cache may be to blame. While auditing the write-out path I noticed that
if we've already set up the space cache we will just carry on. This
means that if we hit any error after cache_save_setup(), but before we
actually write the cache out, we won't reset the inode generation, so
whatever was already written will be considered correct, except it'll be
stale. Fix this by _always_ resetting the generation on the block group
inode; this way we only ever have a valid or an invalid cache.
With this patch I was no longer able to reproduce cache corruption with
dm-log-writes and my bpf error injection tool.
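The essence of the reordering can be shown with a small standalone
simulation; block_group, setup_old() and setup_new() are made-up names
that only model the control flow, not btrfs internals:

#include <stdio.h>

struct block_group { unsigned long cache_generation, inode_generation; };

/* Old ordering: bail out before clearing the inode generation. */
static void setup_old(struct block_group *bg, unsigned long transid)
{
	if (bg->cache_generation == transid)
		return;			/* already set up: generation left stale */
	bg->inode_generation = 0;	/* distrust cache until fully written */
}

/* Fixed ordering: always clear the generation first. */
static void setup_new(struct block_group *bg, unsigned long transid)
{
	bg->inode_generation = 0;
	if (bg->cache_generation == transid)
		return;
}

int main(void)
{
	struct block_group a = { .cache_generation = 7, .inode_generation = 7 };
	struct block_group b = a;

	setup_old(&a, 7);	/* then imagine the writeout erroring out */
	setup_new(&b, 7);
	printf("old: gen=%lu (stale cache trusted)\n", a.inode_generation);
	printf("new: gen=%lu (cache distrusted)\n", b.inode_generation);
	return 0;
}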
Signed-off-by: Josef Bacik <jbacik(a)fb.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/btrfs/extent-tree.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3361,13 +3361,6 @@ again:
goto again;
}
- /* We've already setup this transaction, go ahead and exit */
- if (block_group->cache_generation == trans->transid &&
- i_size_read(inode)) {
- dcs = BTRFS_DC_SETUP;
- goto out_put;
- }
-
/*
* We want to set the generation to 0, that way if anything goes wrong
* from here on out we know not to trust this cache when we load up next
@@ -3391,6 +3384,13 @@ again:
}
WARN_ON(ret);
+ /* We've already setup this transaction, go ahead and exit */
+ if (block_group->cache_generation == trans->transid &&
+ i_size_read(inode)) {
+ dcs = BTRFS_DC_SETUP;
+ goto out_put;
+ }
+
if (i_size_read(inode) > 0) {
ret = btrfs_check_trunc_cache_free_space(root,
&root->fs_info->global_block_rsv);
Patches currently in stable-queue which might be from jbacik(a)fb.com are
queue-4.4/btrfs-clear-space-cache-inode-generation-always.patch
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From a64a62ce9a380213dc9e192f762266d70c9b40ec Mon Sep 17 00:00:00 2001
From: Lv Zheng <lv.zheng(a)intel.com>
Date: Tue, 26 Sep 2017 16:54:09 +0800
Subject: [PATCH] ACPI / EC: Fix regression related to PM ops support in ECDT
device
On platforms (ASUS X550ZE, and possibly all ASUS X series) with a valid
ECDT EC but an invalid DSDT EC, the EC PM ops won't be invoked, as the
ECDT EC is not an ACPI device. Thus the following commit effectively
removed the post-resume acpi_ec_enable_event() invocation for such
platforms and triggered a regression on them: after resume, the EC
(actually ECDT) driver stops handling EC events:
Commit: c2b46d679b30c5c0d7eb47a21085943242bdd8dc
Subject: ACPI / EC: Add PM operations to improve event handling for resume process
Notice that the root cause actually is "the ECDT EC is not an ACPI
device" rather than "the timing of the acpi_ec_enable_event()
invocation", so this patch fixes the issue by enumerating the ECDT EC as
an ACPI device. Due to the existence of the noirq stage, the ability to
tune the timing of the acpi_ec_enable_event() invocation is still
meaningful.
This patch is a little bit different from the posted fix by moving
acpi_config_boot_ec() from acpi_ec_ecdt_start() to acpi_ec_add() to make
sure that EC event handling won't be stopped as long as the ACPI EC driver
is bound. Thus the following sequence shouldn't disable EC event handling:
unbind, suspend, resume, bind.
Fixes: c2b46d679b30 (ACPI / EC: Add PM operations to improve event handling for resume process)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=196847
Reported-by: Luya Tshimbalanga <luya(a)fedoraproject.org>
Tested-by: Luya Tshimbalanga <luya(a)fedoraproject.org>
Cc: 4.9+ <stable(a)vger.kernel.org> # 4.9+
Signed-off-by: Lv Zheng <lv.zheng(a)intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index 82b3ce5e937e..df842465634a 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -1597,32 +1597,41 @@ static int acpi_ec_add(struct acpi_device *device)
{
struct acpi_ec *ec = NULL;
int ret;
+ bool is_ecdt = false;
+ acpi_status status;
strcpy(acpi_device_name(device), ACPI_EC_DEVICE_NAME);
strcpy(acpi_device_class(device), ACPI_EC_CLASS);
- ec = acpi_ec_alloc();
- if (!ec)
- return -ENOMEM;
- if (ec_parse_device(device->handle, 0, ec, NULL) !=
- AE_CTRL_TERMINATE) {
+ if (!strcmp(acpi_device_hid(device), ACPI_ECDT_HID)) {
+ is_ecdt = true;
+ ec = boot_ec;
+ } else {
+ ec = acpi_ec_alloc();
+ if (!ec)
+ return -ENOMEM;
+ status = ec_parse_device(device->handle, 0, ec, NULL);
+ if (status != AE_CTRL_TERMINATE) {
ret = -EINVAL;
goto err_alloc;
+ }
}
if (acpi_is_boot_ec(ec)) {
- boot_ec_is_ecdt = false;
- /*
- * Trust PNP0C09 namespace location rather than ECDT ID.
- *
- * But trust ECDT GPE rather than _GPE because of ASUS quirks,
- * so do not change boot_ec->gpe to ec->gpe.
- */
- boot_ec->handle = ec->handle;
- acpi_handle_debug(ec->handle, "duplicated.\n");
- acpi_ec_free(ec);
- ec = boot_ec;
- ret = acpi_config_boot_ec(ec, ec->handle, true, false);
+ boot_ec_is_ecdt = is_ecdt;
+ if (!is_ecdt) {
+ /*
+ * Trust PNP0C09 namespace location rather than
+ * ECDT ID. But trust ECDT GPE rather than _GPE
+ * because of ASUS quirks, so do not change
+ * boot_ec->gpe to ec->gpe.
+ */
+ boot_ec->handle = ec->handle;
+ acpi_handle_debug(ec->handle, "duplicated.\n");
+ acpi_ec_free(ec);
+ ec = boot_ec;
+ }
+ ret = acpi_config_boot_ec(ec, ec->handle, true, is_ecdt);
} else
ret = acpi_ec_setup(ec, true);
if (ret)
@@ -1635,8 +1644,10 @@ static int acpi_ec_add(struct acpi_device *device)
ret = !!request_region(ec->command_addr, 1, "EC cmd");
WARN(!ret, "Could not request EC cmd io port 0x%lx", ec->command_addr);
- /* Reprobe devices depending on the EC */
- acpi_walk_dep_device_list(ec->handle);
+ if (!is_ecdt) {
+ /* Reprobe devices depending on the EC */
+ acpi_walk_dep_device_list(ec->handle);
+ }
acpi_handle_debug(ec->handle, "enumerated.\n");
return 0;
@@ -1692,6 +1703,7 @@ ec_parse_io_ports(struct acpi_resource *resource, void *context)
static const struct acpi_device_id ec_device_ids[] = {
{"PNP0C09", 0},
+ {ACPI_ECDT_HID, 0},
{"", 0},
};
@@ -1764,11 +1776,14 @@ static int __init acpi_ec_ecdt_start(void)
* Note: ec->handle can be valid if this function is called after
* acpi_ec_add(), hence the fast path.
*/
- if (boot_ec->handle != ACPI_ROOT_OBJECT)
- handle = boot_ec->handle;
- else if (!acpi_ec_ecdt_get_handle(&handle))
- return -ENODEV;
- return acpi_config_boot_ec(boot_ec, handle, true, true);
+ if (boot_ec->handle == ACPI_ROOT_OBJECT) {
+ if (!acpi_ec_ecdt_get_handle(&handle))
+ return -ENODEV;
+ boot_ec->handle = handle;
+ }
+
+ /* Register to ACPI bus with PM ops attached */
+ return acpi_bus_register_early_device(ACPI_BUS_TYPE_ECDT_EC);
}
#if 0
@@ -2020,6 +2035,12 @@ int __init acpi_ec_init(void)
/* Drivers must be started after acpi_ec_query_init() */
dsdt_fail = acpi_bus_register_driver(&acpi_ec_driver);
+ /*
+ * Register ECDT to ACPI bus only when PNP0C09 probe fails. This is
+ * useful for platforms (confirmed on ASUS X550ZE) with valid ECDT
+ * settings but invalid DSDT settings.
+ * https://bugzilla.kernel.org/show_bug.cgi?id=196847
+ */
ecdt_fail = acpi_ec_ecdt_start();
return ecdt_fail && dsdt_fail ? -ENODEV : 0;
}
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index 4361c4415b4f..ede83d38beed 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -115,6 +115,7 @@ bool acpi_device_is_present(const struct acpi_device *adev);
bool acpi_device_is_battery(struct acpi_device *adev);
bool acpi_device_is_first_physical_node(struct acpi_device *adev,
const struct device *dev);
+int acpi_bus_register_early_device(int type);
/* --------------------------------------------------------------------------
Device Matching and Notification
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 602f8ff212f2..2f2f50322ffb 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1024,6 +1024,9 @@ static void acpi_device_get_busid(struct acpi_device *device)
case ACPI_BUS_TYPE_SLEEP_BUTTON:
strcpy(device->pnp.bus_id, "SLPF");
break;
+ case ACPI_BUS_TYPE_ECDT_EC:
+ strcpy(device->pnp.bus_id, "ECDT");
+ break;
default:
acpi_get_name(device->handle, ACPI_SINGLE_NAME, &buffer);
/* Clean up trailing underscores (if any) */
@@ -1304,6 +1307,9 @@ static void acpi_set_pnp_ids(acpi_handle handle, struct acpi_device_pnp *pnp,
case ACPI_BUS_TYPE_SLEEP_BUTTON:
acpi_add_id(pnp, ACPI_BUTTON_HID_SLEEPF);
break;
+ case ACPI_BUS_TYPE_ECDT_EC:
+ acpi_add_id(pnp, ACPI_ECDT_HID);
+ break;
}
}
@@ -2049,6 +2055,21 @@ void acpi_bus_trim(struct acpi_device *adev)
}
EXPORT_SYMBOL_GPL(acpi_bus_trim);
+int acpi_bus_register_early_device(int type)
+{
+ struct acpi_device *device = NULL;
+ int result;
+
+ result = acpi_add_single_object(&device, NULL,
+ type, ACPI_STA_DEFAULT);
+ if (result)
+ return result;
+
+ device->flags.match_driver = true;
+ return device_attach(&device->dev);
+}
+EXPORT_SYMBOL_GPL(acpi_bus_register_early_device);
+
static int acpi_bus_scan_fixed(void)
{
int result = 0;
diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
index fa1505292f6c..324a04df3785 100644
--- a/include/acpi/acpi_bus.h
+++ b/include/acpi/acpi_bus.h
@@ -105,6 +105,7 @@ enum acpi_bus_device_type {
ACPI_BUS_TYPE_THERMAL,
ACPI_BUS_TYPE_POWER_BUTTON,
ACPI_BUS_TYPE_SLEEP_BUTTON,
+ ACPI_BUS_TYPE_ECDT_EC,
ACPI_BUS_DEVICE_TYPE_COUNT
};
diff --git a/include/acpi/acpi_drivers.h b/include/acpi/acpi_drivers.h
index 29c691265b49..14499757338f 100644
--- a/include/acpi/acpi_drivers.h
+++ b/include/acpi/acpi_drivers.h
@@ -58,6 +58,7 @@
#define ACPI_VIDEO_HID "LNXVIDEO"
#define ACPI_BAY_HID "LNXIOBAY"
#define ACPI_DOCK_HID "LNXDOCK"
+#define ACPI_ECDT_HID "LNXEC"
/* Quirk for broken IBM BIOSes */
#define ACPI_SMBUS_IBM_HID "SMBUSIBM"
This is a note to let you know that I've just added the patch titled
KVM: x86: inject exceptions produced by x86_decode_insn
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-x86-inject-exceptions-produced-by-x86_decode_insn.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6ea6e84309ca7e0e850b3083e6b09344ee15c290 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini(a)redhat.com>
Date: Fri, 10 Nov 2017 10:49:38 +0100
Subject: KVM: x86: inject exceptions produced by x86_decode_insn
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Paolo Bonzini <pbonzini(a)redhat.com>
commit 6ea6e84309ca7e0e850b3083e6b09344ee15c290 upstream.
Sometimes, a processor might execute an instruction while another
processor is updating the page tables for that instruction's code page,
but before the TLB shootdown completes. The interesting case happens
if the page is in the TLB.
In general, the processor will succeed in executing the instruction and
nothing bad happens. However, what if the instruction is an MMIO access?
If *that* happens, KVM invokes the emulator, and the emulator gets the
updated page tables. If the update side had marked the code page as
non-present, the page table walk will then fail, and so will x86_decode_insn.
Unfortunately, even though kvm_fetch_guest_virt is correctly returning
X86EMUL_PROPAGATE_FAULT, x86_decode_insn's caller treats the failure as
a fatal error if the instruction cannot simply be reexecuted (as is the
case for MMIO). And this in fact happened sometimes when rebooting
Windows 2012r2 guests. Just checking ctxt->have_exception and injecting
the exception if true is enough to fix the case.
Thanks to Eduardo Habkost for helping in the debugging of this issue.
Reported-by: Yanan Fu <yfu(a)redhat.com>
Cc: Eduardo Habkost <ehabkost(a)redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/x86.c | 2 ++
1 file changed, 2 insertions(+)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5372,6 +5372,8 @@ int x86_emulate_instruction(struct kvm_v
if (reexecute_instruction(vcpu, cr2, write_fault_to_spt,
emulation_type))
return EMULATE_DONE;
+ if (ctxt->have_exception && inject_emulated_exception(vcpu))
+ return EMULATE_DONE;
if (emulation_type & EMULTYPE_SKIP)
return EMULATE_FAIL;
return handle_emulation_failure(vcpu);
Patches currently in stable-queue which might be from pbonzini(a)redhat.com are
queue-3.18/kvm-x86-inject-exceptions-produced-by-x86_decode_insn.patch
queue-3.18/kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
This is a note to let you know that I've just added the patch titled
btrfs: clear space cache inode generation always
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
btrfs-clear-space-cache-inode-generation-always.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8e138e0d92c6c9d3d481674fb14e3439b495be37 Mon Sep 17 00:00:00 2001
From: Josef Bacik <jbacik(a)fb.com>
Date: Fri, 17 Nov 2017 14:50:46 -0500
Subject: btrfs: clear space cache inode generation always
From: Josef Bacik <jbacik(a)fb.com>
commit 8e138e0d92c6c9d3d481674fb14e3439b495be37 upstream.
We discovered a box that had double allocations, and suspected the space
cache may be to blame. While auditing the write-out path I noticed that
if we've already set up the space cache we will just carry on. This
means that if we hit any error after cache_save_setup(), but before we
actually write the cache out, we won't reset the inode generation, so
whatever was already written will be considered correct, except it'll be
stale. Fix this by _always_ resetting the generation on the block group
inode; this way we only ever have a valid or an invalid cache.
With this patch I was no longer able to reproduce cache corruption with
dm-log-writes and my bpf error injection tool.
Signed-off-by: Josef Bacik <jbacik(a)fb.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/btrfs/extent-tree.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3219,13 +3219,6 @@ again:
goto again;
}
- /* We've already setup this transaction, go ahead and exit */
- if (block_group->cache_generation == trans->transid &&
- i_size_read(inode)) {
- dcs = BTRFS_DC_SETUP;
- goto out_put;
- }
-
/*
* We want to set the generation to 0, that way if anything goes wrong
* from here on out we know not to trust this cache when we load up next
@@ -3235,6 +3228,13 @@ again:
ret = btrfs_update_inode(trans, root, inode);
WARN_ON(ret);
+ /* We've already setup this transaction, go ahead and exit */
+ if (block_group->cache_generation == trans->transid &&
+ i_size_read(inode)) {
+ dcs = BTRFS_DC_SETUP;
+ goto out_put;
+ }
+
if (i_size_read(inode) > 0) {
ret = btrfs_check_trunc_cache_free_space(root,
&root->fs_info->global_block_rsv);
Patches currently in stable-queue which might be from jbacik(a)fb.com are
queue-3.18/btrfs-clear-space-cache-inode-generation-always.patch
This is a note to let you know that I've just added the patch titled
KVM: x86: Exit to user-mode on #UD intercept when emulator requires
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 61cb57c9ed631c95b54f8e9090c89d18b3695b3c Mon Sep 17 00:00:00 2001
From: Liran Alon <liran.alon(a)oracle.com>
Date: Sun, 5 Nov 2017 16:56:32 +0200
Subject: KVM: x86: Exit to user-mode on #UD intercept when emulator requires
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Liran Alon <liran.alon(a)oracle.com>
commit 61cb57c9ed631c95b54f8e9090c89d18b3695b3c upstream.
Instruction emulation after trapping a #UD exception can result in an
MMIO access, for example when emulating a MOVBE on a processor that
doesn't support the instruction. In this case, the #UD vmexit handler
must exit to user mode, but there wasn't any code to do so. Add it for
both VMX and SVM.
Signed-off-by: Liran Alon <liran.alon(a)oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko(a)oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Reviewed-by: Wanpeng Li <wanpeng.li(a)hotmail.com>
Reviewed-by: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/svm.c | 2 ++
arch/x86/kvm/vmx.c | 2 ++
2 files changed, 4 insertions(+)
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1785,6 +1785,8 @@ static int ud_interception(struct vcpu_s
int er;
er = emulate_instruction(&svm->vcpu, EMULTYPE_TRAP_UD);
+ if (er == EMULATE_USER_EXIT)
+ return 0;
if (er != EMULATE_DONE)
kvm_queue_exception(&svm->vcpu, UD_VECTOR);
return 1;
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4899,6 +4899,8 @@ static int handle_exception(struct kvm_v
if (is_invalid_opcode(intr_info)) {
er = emulate_instruction(vcpu, EMULTYPE_TRAP_UD);
+ if (er == EMULATE_USER_EXIT)
+ return 0;
if (er != EMULATE_DONE)
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
Patches currently in stable-queue which might be from liran.alon(a)oracle.com are
queue-3.18/kvm-x86-exit-to-user-mode-on-ud-intercept-when-emulator-requires.patch
a3ac350793d ("ARM: OMAP2+: Fix WL1283 Bluetooth Baud Rate") corrects a
baud rate problem where the baud rate needs to be 3000000 and not
300000.
Can you please apply this patch to linux-4.9.y?
Thank you
adam
ad48ed0c5763 ("mfd: twl4030-power: Fix pmic for boards that need vmmc1
on reboot") fixes an issue where certain boards don't reboot properly.
Can this please be applied to linux-4.9.y?
thank you
adam
This is a note to let you know that I've just added the patch titled
mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From a8f97366452ed491d13cf1e44241bc0b5740b1f0 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Date: Mon, 27 Nov 2017 06:21:25 +0300
Subject: mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
From: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
commit a8f97366452ed491d13cf1e44241bc0b5740b1f0 upstream.
Currently, we unconditionally make the page table entry dirty in
touch_pmd(). This may result in a false-positive can_follow_write_pmd().
We can avoid this situation by only making the page table entry dirty if
the caller asks for write access -- FOLL_WRITE.
The patch also changes touch_pud() in the same way.
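In flag-to-bit terms the change is simple: always mark the entry young,
and mark it dirty only for FOLL_WRITE. A standalone sketch with made-up
flag values and bit positions (not the real x86 PTE layout):

#include <stdint.h>
#include <stdio.h>

#define FOLL_WRITE	0x01		/* illustrative flag value */
#define PTE_YOUNG	(1u << 5)	/* made-up bit positions */
#define PTE_DIRTY	(1u << 6)

/* Mirrors the fixed touch_pmd(): always mark the entry young, but
 * only mark it dirty when the caller asked for write access. */
static uint32_t touch_entry(uint32_t pte, int flags)
{
	pte |= PTE_YOUNG;
	if (flags & FOLL_WRITE)
		pte |= PTE_DIRTY;
	return pte;
}

int main(void)
{
	printf("read  touch: %#x\n", (unsigned)touch_entry(0, 0));
	printf("write touch: %#x\n", (unsigned)touch_entry(0, FOLL_WRITE));
	return 0;
}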
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
[Salvatore Bonaccorso: backport for 4.9:
- Adjust context
- Drop specific part for PUD-sized transparent hugepages. Support
for PUD-sized transparent hugepages was added in v4.11-rc1
]
Signed-off-by: Ben Hutchings <ben(a)decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/huge_memory.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -745,20 +745,15 @@ int vmf_insert_pfn_pmd(struct vm_area_st
EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd)
+ pmd_t *pmd, int flags)
{
pmd_t _pmd;
- /*
- * We should set the dirty bit only for FOLL_WRITE but for now
- * the dirty bit in the pmd is meaningless. And if the dirty
- * bit will become meaningful and we'll only set it with
- * FOLL_WRITE, an atomic set_bit will be required on the pmd to
- * set the young bit, instead of the current set_pmd_at.
- */
- _pmd = pmd_mkyoung(pmd_mkdirty(*pmd));
+ _pmd = pmd_mkyoung(*pmd);
+ if (flags & FOLL_WRITE)
+ _pmd = pmd_mkdirty(_pmd);
if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
- pmd, _pmd, 1))
+ pmd, _pmd, flags & FOLL_WRITE))
update_mmu_cache_pmd(vma, addr, pmd);
}
@@ -787,7 +782,7 @@ struct page *follow_devmap_pmd(struct vm
return NULL;
if (flags & FOLL_TOUCH)
- touch_pmd(vma, addr, pmd);
+ touch_pmd(vma, addr, pmd, flags);
/*
* device mapped pages can only be returned if the
@@ -1158,7 +1153,7 @@ struct page *follow_trans_huge_pmd(struc
page = pmd_page(*pmd);
VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
if (flags & FOLL_TOUCH)
- touch_pmd(vma, addr, pmd);
+ touch_pmd(vma, addr, pmd, flags);
if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
/*
* We don't mlock() pte-mapped THPs. This way we can avoid
Patches currently in stable-queue which might be from kirill.shutemov(a)linux.intel.com are
queue-4.9/mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
queue-4.9/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm/madvise.c: fix madvise() infinite loop under special circumstances
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 Mon Sep 17 00:00:00 2001
From: chenjie <chenjie6(a)huawei.com>
Date: Wed, 29 Nov 2017 16:10:54 -0800
Subject: mm/madvise.c: fix madvise() infinite loop under special circumstances
From: chenjie <chenjie6(a)huawei.com>
commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.
MADV_WILLNEED has always been a no-op for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate this information
properly to the generic madvise syscall implementation. The calling
convention is quite subtle there. madvise_vma() is supposed to either
return an error or update &prev; otherwise the main loop will never
advance to the next vma and it will keep looping forever without a way
to get out of the kernel.
It seems this has been broken since its introduction. Nobody has noticed
because nobody seems to be using MADV_WILLNEED on these DAX mappings.
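Why a stale *prev loops forever is easiest to see in a toy model of the
sys_madvise() walk. The sketch below is a userspace simulation with
hypothetical names (visit_vma stands in for madvise_vma):

#include <stdio.h>
#include <stddef.h>

struct vma { unsigned long start, end; struct vma *next; };

/* Stand-in for madvise_vma(): on success it must advance *prev,
 * otherwise the caller hands back the same vma and never moves on.
 * Dropping the *prev update here reproduces the infinite loop. */
static long visit_vma(struct vma *v, struct vma **prev)
{
	*prev = v;	/* the update missing in the DAX WILLNEED path */
	return 0;
}

int main(void)
{
	struct vma b = { 0x2000, 0x3000, NULL };
	struct vma a = { 0x1000, 0x2000, &b };
	struct vma *prev = NULL;
	unsigned long pos = a.start;

	/* Simplified main loop from sys_madvise(). */
	for (struct vma *v = &a; v && pos < b.end; ) {
		if (visit_vma(v, &prev))
			break;
		pos = v->end;
		/* continue from the vma after *prev; a stale *prev
		 * would yield the same vma forever */
		v = prev ? prev->next : v;
	}
	printf("walk finished at %#lx\n", pos);
	return 0;
}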
[mhocko(a)suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie <chenjie6(a)huawei.com>
Signed-off-by: guoxuenan <guoxuenan(a)huawei.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: zhangyi (F) <yi.zhang(a)huawei.com>
Cc: Miao Xie <miaoxie(a)huawei.com>
Cc: Mike Rapoport <rppt(a)linux.vnet.ibm.com>
Cc: Shaohua Li <shli(a)fb.com>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Anshuman Khandual <khandual(a)linux.vnet.ibm.com>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Carsten Otte <cotte(a)de.ibm.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/madvise.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -228,15 +228,14 @@ static long madvise_willneed(struct vm_a
{
struct file *file = vma->vm_file;
+ *prev = vma;
#ifdef CONFIG_SWAP
if (!file) {
- *prev = vma;
force_swapin_readahead(vma, start, end);
return 0;
}
if (shmem_mapping(file->f_mapping)) {
- *prev = vma;
force_shm_swapin_readahead(vma, start, end,
file->f_mapping);
return 0;
@@ -251,7 +250,6 @@ static long madvise_willneed(struct vm_a
return 0;
}
- *prev = vma;
start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
if (end > vma->vm_end)
end = vma->vm_end;
Patches currently in stable-queue which might be from chenjie6(a)huawei.com are
queue-4.9/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm, hugetlbfs: introduce ->split() to vm_operations_struct
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-hugetlbfs-introduce-split-to-vm_operations_struct.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 31383c6865a578834dd953d9dbc88e6b19fe3997 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Wed, 29 Nov 2017 16:10:28 -0800
Subject: mm, hugetlbfs: introduce ->split() to vm_operations_struct
From: Dan Williams <dan.j.williams(a)intel.com>
commit 31383c6865a578834dd953d9dbc88e6b19fe3997 upstream.
Patch series "device-dax: fix unaligned munmap handling"
When device-dax is operating in huge-page mode we want it to behave like
hugetlbfs and fail attempts to split vmas into unaligned ranges. It
would be messy to teach the munmap path about device-dax alignment
constraints in the same (hstate) way that hugetlbfs communicates this
constraint. Instead, these patches introduce a new ->split() vm
operation.
This patch (of 2):
The device-dax interface has constraints similar to hugetlbfs in that it
requires the munmap path to unmap in huge-page-aligned units. Rather
than add more custom vma handling code in __split_vma(), introduce a new
vm operation to perform this vma-specific check.
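The check such a ->split() handler performs is just an alignment test.
A minimal sketch, assuming 2MB huge pages (the constants here are
illustrative, not the kernel's):

#include <stdio.h>

#define HPAGE_SIZE	(2UL << 20)		/* assume 2MB huge pages */
#define HPAGE_MASK	(~(HPAGE_SIZE - 1))

/* Mirrors hugetlb_vm_op_split(): a vma may only be split on a
 * huge-page-aligned boundary. */
static int split_check(unsigned long addr)
{
	if (addr & ~HPAGE_MASK)
		return -22;	/* -EINVAL */
	return 0;
}

int main(void)
{
	printf("split at 2MB boundary: %d\n", split_check(2UL << 20));
	printf("split mid-page:        %d\n", split_check((2UL << 20) + 4096));
	return 0;
}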
Link: http://lkml.kernel.org/r/151130418135.4029.6783191281930729710.stgit@dwilli…
Fixes: dee410792419 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
Cc: Jeff Moyer <jmoyer(a)redhat.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/mm.h | 1 +
mm/hugetlb.c | 8 ++++++++
mm/mmap.c | 8 +++++---
3 files changed, 14 insertions(+), 3 deletions(-)
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -347,6 +347,7 @@ struct fault_env {
struct vm_operations_struct {
void (*open)(struct vm_area_struct * area);
void (*close)(struct vm_area_struct * area);
+ int (*split)(struct vm_area_struct * area, unsigned long addr);
int (*mremap)(struct vm_area_struct * area);
int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
int (*pmd_fault)(struct vm_area_struct *, unsigned long address,
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3135,6 +3135,13 @@ static void hugetlb_vm_op_close(struct v
}
}
+static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
+{
+ if (addr & ~(huge_page_mask(hstate_vma(vma))))
+ return -EINVAL;
+ return 0;
+}
+
/*
* We cannot handle pagefaults against hugetlb pages at all. They cause
* handle_mm_fault() to try to instantiate regular-sized pages in the
@@ -3151,6 +3158,7 @@ const struct vm_operations_struct hugetl
.fault = hugetlb_vm_op_fault,
.open = hugetlb_vm_op_open,
.close = hugetlb_vm_op_close,
+ .split = hugetlb_vm_op_split,
};
static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2538,9 +2538,11 @@ static int __split_vma(struct mm_struct
struct vm_area_struct *new;
int err;
- if (is_vm_hugetlb_page(vma) && (addr &
- ~(huge_page_mask(hstate_vma(vma)))))
- return -EINVAL;
+ if (vma->vm_ops && vma->vm_ops->split) {
+ err = vma->vm_ops->split(vma, addr);
+ if (err)
+ return err;
+ }
new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
if (!new)
Patches currently in stable-queue which might be from dan.j.williams(a)intel.com are
queue-4.9/mm-hugetlbfs-introduce-split-to-vm_operations_struct.patch
queue-4.9/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm/cma: fix alloc_contig_range ret code/potential leak
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-cma-fix-alloc_contig_range-ret-code-potential-leak.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 63cd448908b5eb51d84c52f02b31b9b4ccd1cb5a Mon Sep 17 00:00:00 2001
From: Mike Kravetz <mike.kravetz(a)oracle.com>
Date: Wed, 29 Nov 2017 16:10:01 -0800
Subject: mm/cma: fix alloc_contig_range ret code/potential leak
From: Mike Kravetz <mike.kravetz(a)oracle.com>
commit 63cd448908b5eb51d84c52f02b31b9b4ccd1cb5a upstream.
If the call to __alloc_contig_migrate_range() in alloc_contig_range() returns
-EBUSY, processing continues so that test_pages_isolated() is called
where there is a tracepoint to identify the busy pages. However, it is
possible for busy pages to become available between the calls to these
two routines. In this case, the range of pages may be allocated.
Unfortunately, the original return code (ret == -EBUSY) is still set and
returned to the caller. Therefore, the caller believes the pages were
not allocated and they are leaked.
Update the comment to indicate that allocation is still possible even if
__alloc_contig_migrate_range() returns -EBUSY. Also, clear the return
code in this case so that it is not accidentally used or returned to the
caller.
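The ret-clearing pattern in isolation, as a runnable sketch; the helper
names only mimic the control flow of alloc_contig_range(), they are not
the real functions:

#include <stdio.h>

#define EBUSY 16

static int migrate_range(int attempt) { return attempt ? 0 : -EBUSY; }
static int test_pages_isolated(void) { return 0; /* pages became free */ }

/* Mirrors the fixed alloc_contig_range() flow: a transient -EBUSY from
 * migration must not leak to the caller if isolation later succeeds,
 * or the allocated range would be considered failed and leaked. */
static int alloc_range(void)
{
	int ret = migrate_range(0);

	if (ret && ret != -EBUSY)
		return ret;
	ret = 0;			/* clear the transient -EBUSY */
	if (test_pages_isolated())
		return -EBUSY;
	return ret;			/* range really was allocated */
}

int main(void)
{
	printf("alloc_range() = %d\n", alloc_range());
	return 0;
}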
Link: http://lkml.kernel.org/r/20171122185214.25285-1-mike.kravetz@oracle.com
Fixes: 8ef5849fa8a2 ("mm/cma: always check which page caused allocation failure")
Signed-off-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Acked-by: Vlastimil Babka <vbabka(a)suse.cz>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Acked-by: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Michal Nazarewicz <mina86(a)mina86.com>
Cc: Laura Abbott <labbott(a)redhat.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/page_alloc.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7309,11 +7309,18 @@ int alloc_contig_range(unsigned long sta
/*
* In case of -EBUSY, we'd like to know which page causes problem.
- * So, just fall through. We will check it in test_pages_isolated().
+ * So, just fall through. test_pages_isolated() has a tracepoint
+ * which will report the busy page.
+ *
+ * It is possible that busy pages could become available before
+ * the call to test_pages_isolated, and the range will actually be
+ * allocated. So, if we fall through be sure to clear ret so that
+ * -EBUSY is not accidentally used or returned to caller.
*/
ret = __alloc_contig_migrate_range(&cc, start, end);
if (ret && ret != -EBUSY)
goto done;
+ ret = 0;
/*
* Pages from [start, end) are within a MAX_ORDER_NR_PAGES
Patches currently in stable-queue which might be from mike.kravetz(a)oracle.com are
queue-4.9/mm-cma-fix-alloc_contig_range-ret-code-potential-leak.patch
This is a note to let you know that I've just added the patch titled
mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From a8f97366452ed491d13cf1e44241bc0b5740b1f0 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Date: Mon, 27 Nov 2017 06:21:25 +0300
Subject: mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
From: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
commit a8f97366452ed491d13cf1e44241bc0b5740b1f0 upstream.
Currently, we unconditionally make the page table entry dirty in
touch_pmd(). This may result in a false-positive can_follow_write_pmd().
We can avoid this situation by only making the page table entry dirty if
the caller asks for write access -- FOLL_WRITE.
The patch also changes touch_pud() in the same way.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
[Salvatore Bonaccorso: backport for 3.16:
- Adjust context
- Drop specific part for PUD-sized transparent hugepages. Support
for PUD-sized transparent hugepages was added in v4.11-rc1
]
Signed-off-by: Ben Hutchings <ben(a)decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/huge_memory.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1304,17 +1304,11 @@ struct page *follow_trans_huge_pmd(struc
VM_BUG_ON_PAGE(!PageHead(page), page);
if (flags & FOLL_TOUCH) {
pmd_t _pmd;
- /*
- * We should set the dirty bit only for FOLL_WRITE but
- * for now the dirty bit in the pmd is meaningless.
- * And if the dirty bit will become meaningful and
- * we'll only set it with FOLL_WRITE, an atomic
- * set_bit will be required on the pmd to set the
- * young bit, instead of the current set_pmd_at.
- */
- _pmd = pmd_mkyoung(pmd_mkdirty(*pmd));
+ _pmd = pmd_mkyoung(*pmd);
+ if (flags & FOLL_WRITE)
+ _pmd = pmd_mkdirty(_pmd);
if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
- pmd, _pmd, 1))
+ pmd, _pmd, flags & FOLL_WRITE))
update_mmu_cache_pmd(vma, addr, pmd);
}
if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
Patches currently in stable-queue which might be from kirill.shutemov(a)linux.intel.com are
queue-4.4/mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
queue-4.4/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm/madvise.c: fix madvise() infinite loop under special circumstances
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 Mon Sep 17 00:00:00 2001
From: chenjie <chenjie6(a)huawei.com>
Date: Wed, 29 Nov 2017 16:10:54 -0800
Subject: mm/madvise.c: fix madvise() infinite loop under special circumstances
From: chenjie <chenjie6(a)huawei.com>
commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.
MADV_WILLNEED has always been a no-op for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate this information
properly to the generic madvise syscall implementation. The calling
convention is quite subtle there. madvise_vma() is supposed to either
return an error or update &prev; otherwise the main loop will never
advance to the next vma and it will keep looping forever without a way
to get out of the kernel.
It seems this has been broken since its introduction. Nobody has noticed
because nobody seems to be using MADV_WILLNEED on these DAX mappings.
[mhocko(a)suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie <chenjie6(a)huawei.com>
Signed-off-by: guoxuenan <guoxuenan(a)huawei.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: zhangyi (F) <yi.zhang(a)huawei.com>
Cc: Miao Xie <miaoxie(a)huawei.com>
Cc: Mike Rapoport <rppt(a)linux.vnet.ibm.com>
Cc: Shaohua Li <shli(a)fb.com>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Anshuman Khandual <khandual(a)linux.vnet.ibm.com>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Carsten Otte <cotte(a)de.ibm.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/madvise.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -223,15 +223,14 @@ static long madvise_willneed(struct vm_a
{
struct file *file = vma->vm_file;
+ *prev = vma;
#ifdef CONFIG_SWAP
if (!file) {
- *prev = vma;
force_swapin_readahead(vma, start, end);
return 0;
}
if (shmem_mapping(file->f_mapping)) {
- *prev = vma;
force_shm_swapin_readahead(vma, start, end,
file->f_mapping);
return 0;
@@ -246,7 +245,6 @@ static long madvise_willneed(struct vm_a
return 0;
}
- *prev = vma;
start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
if (end > vma->vm_end)
end = vma->vm_end;
Patches currently in stable-queue which might be from chenjie6(a)huawei.com are
queue-4.4/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
v4l2: disable filesystem-dax mapping support
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
v4l2-disable-filesystem-dax-mapping-support.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From b70131de648c2b997d22f4653934438013f407a1 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Wed, 29 Nov 2017 16:10:43 -0800
Subject: v4l2: disable filesystem-dax mapping support
From: Dan Williams <dan.j.williams(a)intel.com>
commit b70131de648c2b997d22f4653934438013f407a1 upstream.
V4L2 memory registrations are incompatible with filesystem-dax, which
needs the ability to revoke DMA access to a mapping at will, or
otherwise to allow the kernel to wait for the completion of DMA. The
filesystem-dax implementation breaks the traditional solution of
truncate of active file backed mappings since there is no page-cache
page we can orphan to sustain ongoing DMA.
If v4l2 wants to support long lived DMA mappings it needs to arrange to
hold a file lease or use some other mechanism so that the kernel can
coordinate revoking DMA access when the filesystem needs to truncate
mappings.
Link: http://lkml.kernel.org/r/151068940499.7446.12846708245365671207.stgit@dwill…
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
Reported-by: Jan Kara <jack(a)suse.cz>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Cc: Mauro Carvalho Chehab <mchehab(a)kernel.org>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Doug Ledford <dledford(a)redhat.com>
Cc: Hal Rosenstock <hal.rosenstock(a)gmail.com>
Cc: Inki Dae <inki.dae(a)samsung.com>
Cc: Jason Gunthorpe <jgg(a)mellanox.com>
Cc: Jeff Moyer <jmoyer(a)redhat.com>
Cc: Joonyoung Shim <jy0922.shim(a)samsung.com>
Cc: Kyungmin Park <kyungmin.park(a)samsung.com>
Cc: Mel Gorman <mgorman(a)suse.de>
Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Cc: Sean Hefty <sean.hefty(a)intel.com>
Cc: Seung-Woo Kim <sw0312.kim(a)samsung.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/media/v4l2-core/videobuf-dma-sg.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -185,12 +185,13 @@ static int videobuf_dma_init_user_locked
dprintk(1, "init user [0x%lx+0x%lx => %d pages]\n",
data, size, dma->nr_pages);
- err = get_user_pages(data & PAGE_MASK, dma->nr_pages,
+ err = get_user_pages_longterm(data & PAGE_MASK, dma->nr_pages,
flags, dma->pages, NULL);
if (err != dma->nr_pages) {
dma->nr_pages = (err >= 0) ? err : 0;
- dprintk(1, "get_user_pages: err=%d [%d]\n", err, dma->nr_pages);
+ dprintk(1, "get_user_pages_longterm: err=%d [%d]\n", err,
+ dma->nr_pages);
return err < 0 ? err : -EINVAL;
}
return 0;
Patches currently in stable-queue which might be from dan.j.williams(a)intel.com are
queue-4.14/mm-hugetlbfs-introduce-split-to-vm_operations_struct.patch
queue-4.14/ib-core-disable-memory-registration-of-filesystem-dax-vmas.patch
queue-4.14/mm-introduce-get_user_pages_longterm.patch
queue-4.14/mm-fail-get_vaddr_frames-for-filesystem-dax-mappings.patch
queue-4.14/device-dax-implement-split-to-catch-invalid-munmap-attempts.patch
queue-4.14/v4l2-disable-filesystem-dax-mapping-support.patch
queue-4.14/mm-fix-device-dax-pud-write-faults-triggered-by-get_user_pages.patch
queue-4.14/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From a8f97366452ed491d13cf1e44241bc0b5740b1f0 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Date: Mon, 27 Nov 2017 06:21:25 +0300
Subject: mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
From: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
commit a8f97366452ed491d13cf1e44241bc0b5740b1f0 upstream.
Currently, we unconditionally make the page table entry dirty in
touch_pmd(). This may result in a false-positive can_follow_write_pmd().
We can avoid this situation by only making the page table entry dirty if
the caller asks for write access -- FOLL_WRITE.
The patch also changes touch_pud() in the same way.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/huge_memory.c | 36 +++++++++++++-----------------------
1 file changed, 13 insertions(+), 23 deletions(-)
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -842,20 +842,15 @@ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd)
+ pmd_t *pmd, int flags)
{
pmd_t _pmd;
- /*
- * We should set the dirty bit only for FOLL_WRITE but for now
- * the dirty bit in the pmd is meaningless. And if the dirty
- * bit will become meaningful and we'll only set it with
- * FOLL_WRITE, an atomic set_bit will be required on the pmd to
- * set the young bit, instead of the current set_pmd_at.
- */
- _pmd = pmd_mkyoung(pmd_mkdirty(*pmd));
+ _pmd = pmd_mkyoung(*pmd);
+ if (flags & FOLL_WRITE)
+ _pmd = pmd_mkdirty(_pmd);
if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
- pmd, _pmd, 1))
+ pmd, _pmd, flags & FOLL_WRITE))
update_mmu_cache_pmd(vma, addr, pmd);
}
@@ -884,7 +879,7 @@ struct page *follow_devmap_pmd(struct vm
return NULL;
if (flags & FOLL_TOUCH)
- touch_pmd(vma, addr, pmd);
+ touch_pmd(vma, addr, pmd, flags);
/*
* device mapped pages can only be returned if the
@@ -995,20 +990,15 @@ out:
#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud)
+ pud_t *pud, int flags)
{
pud_t _pud;
- /*
- * We should set the dirty bit only for FOLL_WRITE but for now
- * the dirty bit in the pud is meaningless. And if the dirty
- * bit will become meaningful and we'll only set it with
- * FOLL_WRITE, an atomic set_bit will be required on the pud to
- * set the young bit, instead of the current set_pud_at.
- */
- _pud = pud_mkyoung(pud_mkdirty(*pud));
+ _pud = pud_mkyoung(*pud);
+ if (flags & FOLL_WRITE)
+ _pud = pud_mkdirty(_pud);
if (pudp_set_access_flags(vma, addr & HPAGE_PUD_MASK,
- pud, _pud, 1))
+ pud, _pud, flags & FOLL_WRITE))
update_mmu_cache_pud(vma, addr, pud);
}
@@ -1031,7 +1021,7 @@ struct page *follow_devmap_pud(struct vm
return NULL;
if (flags & FOLL_TOUCH)
- touch_pud(vma, addr, pud);
+ touch_pud(vma, addr, pud, flags);
/*
* device mapped pages can only be returned if the
@@ -1407,7 +1397,7 @@ struct page *follow_trans_huge_pmd(struc
page = pmd_page(*pmd);
VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
if (flags & FOLL_TOUCH)
- touch_pmd(vma, addr, pmd);
+ touch_pmd(vma, addr, pmd, flags);
if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
/*
* We don't mlock() pte-mapped THPs. This way we can avoid
Patches currently in stable-queue which might be from kirill.shutemov(a)linux.intel.com are
queue-4.14/mm-fix-device-dax-pud-write-faults-triggered-by-get_user_pages.patch
queue-4.14/mm-hugetlb-fix-null-pointer-dereference-on-5-level-paging-machine.patch
queue-4.14/mm-thp-do-not-make-page-table-dirty-unconditionally-in-touch_pd.patch
queue-4.14/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm: migrate: fix an incorrect call of prep_transhuge_page()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-migrate-fix-an-incorrect-call-of-prep_transhuge_page.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 40a899ed16486455f964e46d1af31fd4fded21c1 Mon Sep 17 00:00:00 2001
From: Zi Yan <zi.yan(a)cs.rutgers.edu>
Date: Wed, 29 Nov 2017 16:11:12 -0800
Subject: mm: migrate: fix an incorrect call of prep_transhuge_page()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Zi Yan <zi.yan(a)cs.rutgers.edu>
commit 40a899ed16486455f964e46d1af31fd4fded21c1 upstream.
In https://lkml.org/lkml/2017/11/20/411, Andrea reported that during
memory hotplug/hot-remove, prep_transhuge_page() is incorrectly called
on non-THP pages selected for migration when THP is enabled but THP
migration is not. This leaves the migration target pages in a bad state.
Inspecting the code shows that, when called on a non-THP page,
prep_transhuge_page() will
1) change the ->mapping value of (page + 2), since that field is used
for the THP deferred-split list;
2) change the ->lru value of (page + 1), since that field holds the
THP destructor.
Both can corrupt these two unrelated pages, as the sketch below
illustrates.
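A minimal sketch, assuming the v4.14 struct page layout, of what
prep_transhuge_page() writes (illustrative only, not the kernel
source):
	static void prep_transhuge_page_sketch(struct page *page)
	{
		/* the deferred-split list head lives in the second
		 * tail page, overlapping (page + 2)->mapping/->index */
		INIT_LIST_HEAD((struct list_head *)&page[2].mapping);
		/* the destructor id lives in the first tail page,
		 * overlapping (page + 1)->lru */
		page[1].compound_dtor = TRANSHUGE_PAGE_DTOR;
	}
For a real THP these are its own tail pages; for an order-0 page they
are two unrelated struct pages whose fields get silently overwritten.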
Andrea said:
"Pragmatically and from the point of view of the memory_hotplug subsys,
the effect is a kernel crash when pages are being migrated during a
memory hot remove offline and migration target pages are found in a
bad state"
This patch fixes it by only calling prep_transhuge_page() when we are
certain that the target page is THP.
Link: http://lkml.kernel.org/r/20171121021855.50525-1-zi.yan@sent.com
Fixes: 8135d8926c08 ("mm: memory_hotplug: memory hotremove supports thp migration")
Signed-off-by: Zi Yan <zi.yan(a)cs.rutgers.edu>
Reported-by: Andrea Reale <ar(a)linux.vnet.ibm.com>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: "Jérôme Glisse" <jglisse(a)redhat.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 895ec0c4942e..a2246cf670ba 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -54,7 +54,7 @@ static inline struct page *new_page_nodemask(struct page *page,
new_page = __alloc_pages_nodemask(gfp_mask, order,
preferred_nid, nodemask);
- if (new_page && PageTransHuge(page))
+ if (new_page && PageTransHuge(new_page))
prep_transhuge_page(new_page);
return new_page;
Patches currently in stable-queue which might be from zi.yan(a)cs.rutgers.edu are
queue-4.14/mm-migrate-fix-an-incorrect-call-of-prep_transhuge_page.patch
This is a note to let you know that I've just added the patch titled
mm, memcg: fix mem_cgroup_swapout() for THPs
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-memcg-fix-mem_cgroup_swapout-for-thps.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From d08afa149acfd00871484ada6dabc3880524cd1c Mon Sep 17 00:00:00 2001
From: Shakeel Butt <shakeelb(a)google.com>
Date: Wed, 29 Nov 2017 16:11:15 -0800
Subject: mm, memcg: fix mem_cgroup_swapout() for THPs
From: Shakeel Butt <shakeelb(a)google.com>
commit d08afa149acfd00871484ada6dabc3880524cd1c upstream.
Commit d6810d730022 ("memcg, THP, swap: make mem_cgroup_swapout()
support THP") changed mem_cgroup_swapout() to support transparent huge
page (THP).
However, the patch missed one location that also needs updating to
handle THPs correctly. The resulting bug causes memory cgroups whose
THPs were swapped out to become zombies on deletion.
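A hedged sketch of the reference imbalance, assuming a 2 MB THP on
x86-64 (names as in the hunk below):
	unsigned int nr_entries = hpage_nr_pages(page); /* 512 for a THP */

	/* The page's charge pins nr_entries css references; dropping
	 * only one with css_put() leaked the other 511 per swapped-out
	 * THP, so the css of a deleted cgroup was never freed: */
	if (!mem_cgroup_is_root(memcg))
		css_put_many(&memcg->css, nr_entries);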
Link: http://lkml.kernel.org/r/20171128161941.20931-1-shakeelb@google.com
Fixes: d6810d730022 ("memcg, THP, swap: make mem_cgroup_swapout() support THP")
Signed-off-by: Shakeel Butt <shakeelb(a)google.com>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Huang Ying <ying.huang(a)intel.com>
Cc: Vladimir Davydov <vdavydov.dev(a)gmail.com>
Cc: Greg Thelen <gthelen(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6044,7 +6044,7 @@ void mem_cgroup_swapout(struct page *pag
memcg_check_events(memcg, page);
if (!mem_cgroup_is_root(memcg))
- css_put(&memcg->css);
+ css_put_many(&memcg->css, nr_entries);
}
/**
Patches currently in stable-queue which might be from shakeelb(a)google.com are
queue-4.14/mm-memcg-fix-mem_cgroup_swapout-for-thps.patch
This is a note to let you know that I've just added the patch titled
mm/madvise.c: fix madvise() infinite loop under special circumstances
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 Mon Sep 17 00:00:00 2001
From: chenjie <chenjie6(a)huawei.com>
Date: Wed, 29 Nov 2017 16:10:54 -0800
Subject: mm/madvise.c: fix madvise() infinite loop under special circumstances
From: chenjie <chenjie6(a)huawei.com>
commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.
MADV_WILLNEED has always been a noop for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate this information
properly to the generic madvise syscall implementation. The calling
convention is quite subtle there: madvise_vma() is supposed to either
return an error or update &prev; otherwise the main loop will never
advance to the next vma and will keep looping forever with no way to
get out of the kernel.
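A condensed, hedged sketch of that calling convention, simplified from
the madvise() main loop in mm/madvise.c (error paths and range
clamping trimmed):
	for (;;) {
		unsigned long tmp = min(end, vma->vm_end);

		error = madvise_vma(vma, &prev, start, tmp, behavior);
		if (error)
			goto out;
		start = tmp;
		if (prev && start < prev->vm_end)
			start = prev->vm_end;
		if (start >= end)
			goto out;
		/* If madvise_vma() returned 0 without setting *prev,
		 * prev still points before the vma just processed, so
		 * this re-selects the same vma forever. */
		vma = prev ? prev->vm_next : find_vma(current->mm, start);
	}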
It seems this has been broken since its introduction. Nobody has
noticed because nobody seems to be using MADV_WILLNEED on these DAX
mappings.
[mhocko(a)suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie <chenjie6(a)huawei.com>
Signed-off-by: guoxuenan <guoxuenan(a)huawei.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: zhangyi (F) <yi.zhang(a)huawei.com>
Cc: Miao Xie <miaoxie(a)huawei.com>
Cc: Mike Rapoport <rppt(a)linux.vnet.ibm.com>
Cc: Shaohua Li <shli(a)fb.com>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Anshuman Khandual <khandual(a)linux.vnet.ibm.com>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Carsten Otte <cotte(a)de.ibm.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/madvise.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -276,15 +276,14 @@ static long madvise_willneed(struct vm_a
{
struct file *file = vma->vm_file;
+ *prev = vma;
#ifdef CONFIG_SWAP
if (!file) {
- *prev = vma;
force_swapin_readahead(vma, start, end);
return 0;
}
if (shmem_mapping(file->f_mapping)) {
- *prev = vma;
force_shm_swapin_readahead(vma, start, end,
file->f_mapping);
return 0;
@@ -299,7 +298,6 @@ static long madvise_willneed(struct vm_a
return 0;
}
- *prev = vma;
start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
if (end > vma->vm_end)
end = vma->vm_end;
Patches currently in stable-queue which might be from chenjie6(a)huawei.com are
queue-4.14/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm: introduce get_user_pages_longterm
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-introduce-get_user_pages_longterm.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 2bb6d2837083de722bfdc369cb0d76ce188dd9b4 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Wed, 29 Nov 2017 16:10:35 -0800
Subject: mm: introduce get_user_pages_longterm
From: Dan Williams <dan.j.williams(a)intel.com>
commit 2bb6d2837083de722bfdc369cb0d76ce188dd9b4 upstream.
Patch series "introduce get_user_pages_longterm()", v2.
Here is a new get_user_pages api for cases where a driver intends to
keep an elevated page count indefinitely. This is distinct from usages
like iov_iter_get_pages where the elevated page counts are transient.
The iov_iter_get_pages cases immediately turn around and submit the
pages to a device driver which will put_page when the i/o operation
completes (under kernel control).
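As a hedged usage sketch (hypothetical driver code, not part of the
patch; uaddr, npages and pages are assumed locals), a longterm user
such as an RDMA driver would switch to the new helper and treat
-EOPNOTSUPP as "this mapping cannot be pinned longterm":
	long pinned;

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages_longterm(uaddr, npages, FOLL_WRITE,
					 pages, NULL);
	up_read(&current->mm->mmap_sem);
	if (pinned == -EOPNOTSUPP) {
		/* filesystem-dax vma: longterm pinning was refused */
	}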
In the longterm case userspace is responsible for dropping the page
reference at some undefined point in the future. This is untenable for
the filesystem-dax case, where the filesystem controls the lifetime of
the block / page and needs reasonable limits on how long it can wait
for pages in a mapping to become idle.
Fixing filesystems to actually wait for dax pages to be idle before
blocks from a truncate/hole-punch operation are repurposed is saved for
a later patch series.
Also, allowing longterm registration of dax mappings is a future patch
series that introduces a "map with lease" semantic where the kernel can
revoke a lease and force userspace to drop its page references.
I have also tagged these for -stable to purposely break cases that might
assume that longterm memory registrations for filesystem-dax mappings
were supported by the kernel. The behavior regression this policy
change implies is one of the reasons we maintain the "dax enabled.
Warning: EXPERIMENTAL, use at your own risk" notification when mounting
a filesystem in dax mode.
It is worth noting the device-dax interface does not suffer the same
constraints since it does not support file space management operations
like hole-punch.
This patch (of 4):
Until there is a solution to the dma-to-dax vs truncate problem it is
not safe to allow long-standing memory registrations against
filesystem-dax vmas. Device-dax vmas do not have this problem and are
explicitly allowed.
This is temporary until a "memory registration with layout-lease"
mechanism can be implemented for the affected sub-systems (RDMA and
V4L2).
[akpm(a)linux-foundation.org: use kcalloc()]
Link: http://lkml.kernel.org/r/151068939435.7446.13560129395419350737.stgit@dwill…
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
Suggested-by: Christoph Hellwig <hch(a)lst.de>
Cc: Doug Ledford <dledford(a)redhat.com>
Cc: Hal Rosenstock <hal.rosenstock(a)gmail.com>
Cc: Inki Dae <inki.dae(a)samsung.com>
Cc: Jan Kara <jack(a)suse.cz>
Cc: Jason Gunthorpe <jgg(a)mellanox.com>
Cc: Jeff Moyer <jmoyer(a)redhat.com>
Cc: Joonyoung Shim <jy0922.shim(a)samsung.com>
Cc: Kyungmin Park <kyungmin.park(a)samsung.com>
Cc: Mauro Carvalho Chehab <mchehab(a)kernel.org>
Cc: Mel Gorman <mgorman(a)suse.de>
Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Cc: Sean Hefty <sean.hefty(a)intel.com>
Cc: Seung-Woo Kim <sw0312.kim(a)samsung.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/fs.h | 14 +++++++++++
include/linux/mm.h | 13 ++++++++++
mm/gup.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 91 insertions(+)
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3175,6 +3175,20 @@ static inline bool vma_is_dax(struct vm_
return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
}
+static inline bool vma_is_fsdax(struct vm_area_struct *vma)
+{
+ struct inode *inode;
+
+ if (!vma->vm_file)
+ return false;
+ if (!vma_is_dax(vma))
+ return false;
+ inode = file_inode(vma->vm_file);
+ if (inode->i_mode == S_IFCHR)
+ return false; /* device-dax */
+ return true;
+}
+
static inline int iocb_flags(struct file *file)
{
int res = 0;
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1368,6 +1368,19 @@ long get_user_pages_locked(unsigned long
unsigned int gup_flags, struct page **pages, int *locked);
long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
struct page **pages, unsigned int gup_flags);
+#ifdef CONFIG_FS_DAX
+long get_user_pages_longterm(unsigned long start, unsigned long nr_pages,
+ unsigned int gup_flags, struct page **pages,
+ struct vm_area_struct **vmas);
+#else
+static inline long get_user_pages_longterm(unsigned long start,
+ unsigned long nr_pages, unsigned int gup_flags,
+ struct page **pages, struct vm_area_struct **vmas)
+{
+ return get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+}
+#endif /* CONFIG_FS_DAX */
+
int get_user_pages_fast(unsigned long start, int nr_pages, int write,
struct page **pages);
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1095,6 +1095,70 @@ long get_user_pages(unsigned long start,
}
EXPORT_SYMBOL(get_user_pages);
+#ifdef CONFIG_FS_DAX
+/*
+ * This is the same as get_user_pages() in that it assumes we are
+ * operating on the current task's mm, but it goes further to validate
+ * that the vmas associated with the address range are suitable for
+ * longterm elevated page reference counts. For example, filesystem-dax
+ * mappings are subject to the lifetime enforced by the filesystem and
+ * we need guarantees that longterm users like RDMA and V4L2 only
+ * establish mappings that have a kernel enforced revocation mechanism.
+ *
+ * "longterm" == userspace controlled elevated page count lifetime.
+ * Contrast this to iov_iter_get_pages() usages which are transient.
+ */
+long get_user_pages_longterm(unsigned long start, unsigned long nr_pages,
+ unsigned int gup_flags, struct page **pages,
+ struct vm_area_struct **vmas_arg)
+{
+ struct vm_area_struct **vmas = vmas_arg;
+ struct vm_area_struct *vma_prev = NULL;
+ long rc, i;
+
+ if (!pages)
+ return -EINVAL;
+
+ if (!vmas) {
+ vmas = kcalloc(nr_pages, sizeof(struct vm_area_struct *),
+ GFP_KERNEL);
+ if (!vmas)
+ return -ENOMEM;
+ }
+
+ rc = get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+
+ for (i = 0; i < rc; i++) {
+ struct vm_area_struct *vma = vmas[i];
+
+ if (vma == vma_prev)
+ continue;
+
+ vma_prev = vma;
+
+ if (vma_is_fsdax(vma))
+ break;
+ }
+
+ /*
+ * Either get_user_pages() failed, or the vma validation
+ * succeeded, in either case we don't need to put_page() before
+ * returning.
+ */
+ if (i >= rc)
+ goto out;
+
+ for (i = 0; i < rc; i++)
+ put_page(pages[i]);
+ rc = -EOPNOTSUPP;
+out:
+ if (vmas != vmas_arg)
+ kfree(vmas);
+ return rc;
+}
+EXPORT_SYMBOL(get_user_pages_longterm);
+#endif /* CONFIG_FS_DAX */
+
/**
* populate_vma_page_range() - populate a range of pages in the vma.
* @vma: target vma
Patches currently in stable-queue which might be from dan.j.williams(a)intel.com are
queue-4.14/mm-hugetlbfs-introduce-split-to-vm_operations_struct.patch
queue-4.14/ib-core-disable-memory-registration-of-filesystem-dax-vmas.patch
queue-4.14/mm-introduce-get_user_pages_longterm.patch
queue-4.14/mm-fail-get_vaddr_frames-for-filesystem-dax-mappings.patch
queue-4.14/device-dax-implement-split-to-catch-invalid-munmap-attempts.patch
queue-4.14/v4l2-disable-filesystem-dax-mapping-support.patch
queue-4.14/mm-fix-device-dax-pud-write-faults-triggered-by-get_user_pages.patch
queue-4.14/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch
This is a note to let you know that I've just added the patch titled
mm, hugetlbfs: introduce ->split() to vm_operations_struct
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-hugetlbfs-introduce-split-to-vm_operations_struct.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 31383c6865a578834dd953d9dbc88e6b19fe3997 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Wed, 29 Nov 2017 16:10:28 -0800
Subject: mm, hugetlbfs: introduce ->split() to vm_operations_struct
From: Dan Williams <dan.j.williams(a)intel.com>
commit 31383c6865a578834dd953d9dbc88e6b19fe3997 upstream.
Patch series "device-dax: fix unaligned munmap handling"
When device-dax is operating in huge-page mode we want it to behave like
hugetlbfs and fail attempts to split vmas into unaligned ranges. It
would be messy to teach the munmap path about device-dax alignment
constraints in the same (hstate) way that hugetlbfs communicates this
constraint. Instead, these patches introduce a new ->split() vm
operation.
This patch (of 2):
The device-dax interface has similar constraints to hugetlbfs in that
it requires the munmap path to unmap in huge-page-aligned units.
Rather than add more custom vma handling code in __split_vma(),
introduce a new vm operation to perform this vma-specific check; a
hedged sketch of such an implementation follows.
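A sketch of how a driver would implement the hook; this mirrors what
the follow-up device-dax patch does, but the names used here (dev_dax,
dax_region) are illustrative assumptions, not quotes from that patch:
	static int dax_vm_split(struct vm_area_struct *vma,
				unsigned long addr)
	{
		struct dev_dax *dev_dax = vma->vm_file->private_data;
		struct dax_region *dax_region = dev_dax->region;

		/* refuse splits not aligned to the huge-page unit */
		if (!IS_ALIGNED(addr, dax_region->align))
			return -EINVAL;
		return 0;
	}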
Link: http://lkml.kernel.org/r/151130418135.4029.6783191281930729710.stgit@dwilli…
Fixes: dee410792419 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
Cc: Jeff Moyer <jmoyer(a)redhat.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/mm.h | 1 +
mm/hugetlb.c | 8 ++++++++
mm/mmap.c | 8 +++++---
3 files changed, 14 insertions(+), 3 deletions(-)
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -367,6 +367,7 @@ enum page_entry_size {
struct vm_operations_struct {
void (*open)(struct vm_area_struct * area);
void (*close)(struct vm_area_struct * area);
+ int (*split)(struct vm_area_struct * area, unsigned long addr);
int (*mremap)(struct vm_area_struct * area);
int (*fault)(struct vm_fault *vmf);
int (*huge_fault)(struct vm_fault *vmf, enum page_entry_size pe_size);
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3125,6 +3125,13 @@ static void hugetlb_vm_op_close(struct v
}
}
+static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
+{
+ if (addr & ~(huge_page_mask(hstate_vma(vma))))
+ return -EINVAL;
+ return 0;
+}
+
/*
* We cannot handle pagefaults against hugetlb pages at all. They cause
* handle_mm_fault() to try to instantiate regular-sized pages in the
@@ -3141,6 +3148,7 @@ const struct vm_operations_struct hugetl
.fault = hugetlb_vm_op_fault,
.open = hugetlb_vm_op_open,
.close = hugetlb_vm_op_close,
+ .split = hugetlb_vm_op_split,
};
static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2540,9 +2540,11 @@ int __split_vma(struct mm_struct *mm, st
struct vm_area_struct *new;
int err;
- if (is_vm_hugetlb_page(vma) && (addr &
- ~(huge_page_mask(hstate_vma(vma)))))
- return -EINVAL;
+ if (vma->vm_ops && vma->vm_ops->split) {
+ err = vma->vm_ops->split(vma, addr);
+ if (err)
+ return err;
+ }
new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
if (!new)
Patches currently in stable-queue which might be from dan.j.williams(a)intel.com are
queue-4.14/mm-hugetlbfs-introduce-split-to-vm_operations_struct.patch
queue-4.14/ib-core-disable-memory-registration-of-filesystem-dax-vmas.patch
queue-4.14/mm-introduce-get_user_pages_longterm.patch
queue-4.14/mm-fail-get_vaddr_frames-for-filesystem-dax-mappings.patch
queue-4.14/device-dax-implement-split-to-catch-invalid-munmap-attempts.patch
queue-4.14/v4l2-disable-filesystem-dax-mapping-support.patch
queue-4.14/mm-fix-device-dax-pud-write-faults-triggered-by-get_user_pages.patch
queue-4.14/mm-madvise.c-fix-madvise-infinite-loop-under-special-circumstances.patch