This is a note to let you know that I've just added the patch titled
crypto: n2 - cure use after free
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-n2-cure-use-after-free.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 203f45003a3d03eea8fa28d74cfc74c354416fdb Mon Sep 17 00:00:00 2001
From: Jan Engelhardt <jengelh(a)inai.de>
Date: Tue, 19 Dec 2017 19:09:07 +0100
Subject: crypto: n2 - cure use after free
From: Jan Engelhardt <jengelh(a)inai.de>
commit 203f45003a3d03eea8fa28d74cfc74c354416fdb upstream.
queue_cache_init is first called for the Control Word Queue
(n2_crypto_probe). At that time, queue_cache[0] is NULL and a new
kmem_cache will be allocated. If the subsequent n2_register_algs call
fails, the kmem_cache will be released in queue_cache_destroy, but
queue_cache[0] is not set back to NULL.
So when the Modular Arithmetic Unit gets probed next (n2_mau_probe),
queue_cache_init will not allocate a kmem_cache again, but leave it
at its bogus value, causing a BUG() to trigger when queue_cache[0] is
eventually passed to kmem_cache_zalloc:
n2_crypto: Found N2CP at /virtual-devices@100/n2cp@7
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
n2_crypto: md5 alg registration failed
n2cp f028687c: /virtual-devices@100/n2cp@7: Unable to register algorithms.
called queue_cache_destroy
n2cp: probe of f028687c failed with error -22
n2_crypto: Found NCP at /virtual-devices@100/ncp@6
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
kernel BUG at mm/slab.c:2993!
Call Trace:
[0000000000604488] kmem_cache_alloc+0x1a8/0x1e0
(inlined) kmem_cache_zalloc
(inlined) new_queue
(inlined) spu_queue_setup
(inlined) handle_exec_unit
[0000000010c61eb4] spu_mdesc_scan+0x1f4/0x460 [n2_crypto]
[0000000010c62b80] n2_mau_probe+0x100/0x220 [n2_crypto]
[000000000084b174] platform_drv_probe+0x34/0xc0
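The pattern at fault, as a minimal sketch (assuming an allocate-once guard
like the one in n2_core.c; names and sizes here are illustrative, and
kmem_cache_destroy() accepts NULL, so clearing the pointer is safe):

        #include <linux/slab.h>

        static struct kmem_cache *queue_cache[2];

        static int queue_cache_init(void)
        {
                /* Allocate-once guard: a stale pointer left over from a
                 * failed earlier probe makes this skip the allocation. */
                if (!queue_cache[0])
                        queue_cache[0] = kmem_cache_create("cwq_queue",
                                                           512, 64, 0, NULL);
                return queue_cache[0] ? 0 : -ENOMEM;
        }

        static void queue_cache_destroy(void)
        {
                kmem_cache_destroy(queue_cache[0]);
                queue_cache[0] = NULL;  /* the fix: forget the dead cache */
        }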
Signed-off-by: Jan Engelhardt <jengelh(a)inai.de>
Acked-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/n2_core.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1641,6 +1641,7 @@ static int queue_cache_init(void)
CWQ_ENTRY_SIZE, 0, NULL);
if (!queue_cache[HV_NCS_QTYPE_CWQ - 1]) {
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
return -ENOMEM;
}
return 0;
@@ -1650,6 +1651,8 @@ static void queue_cache_destroy(void)
{
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_CWQ - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
+ queue_cache[HV_NCS_QTYPE_CWQ - 1] = NULL;
}
static int spu_queue_register(struct spu_queue *p, unsigned long q_type)
Patches currently in stable-queue which might be from jengelh(a)inai.de are
queue-3.18/crypto-n2-cure-use-after-free.patch
This is a note to let you know that I've just added the patch titled
sunxi-rsb: Include OF based modalias in device uevent
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sunxi-rsb-include-of-based-modalias-in-device-uevent.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From e2bf801ecd4e62222a46d1ba9e57e710171d29c1 Mon Sep 17 00:00:00 2001
From: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
Date: Mon, 27 Nov 2017 20:05:34 +0100
Subject: sunxi-rsb: Include OF based modalias in device uevent
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
commit e2bf801ecd4e62222a46d1ba9e57e710171d29c1 upstream.
Include the OF-based modalias in the uevent sent when registering devices
on the sunxi RSB bus, so that user space has a chance to autoload the
kernel module for the device.
Fixes a regression caused by commit 3f241bfa60bd ("arm64: allwinner: a64:
pine64: Use dcdc1 regulator for mmc0"). When the axp20x-rsb module for
the AXP803 PMIC is built as a module, it is not loaded and the system
ends up with a dysfunctional MMC controller.
Fixes: d787dcdb9c8f ("bus: sunxi-rsb: Add driver for Allwinner Reduced Serial Bus")
Acked-by: Chen-Yu Tsai <wens(a)csie.org>
Signed-off-by: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
Signed-off-by: Maxime Ripard <maxime.ripard(a)free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/bus/sunxi-rsb.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/bus/sunxi-rsb.c
+++ b/drivers/bus/sunxi-rsb.c
@@ -178,6 +178,7 @@ static struct bus_type sunxi_rsb_bus = {
.match = sunxi_rsb_device_match,
.probe = sunxi_rsb_device_probe,
.remove = sunxi_rsb_device_remove,
+ .uevent = of_device_uevent_modalias,
};
static void sunxi_rsb_dev_release(struct device *dev)
Patches currently in stable-queue which might be from stefan.bruens(a)rwth-aachen.de are
queue-4.9/sunxi-rsb-include-of-based-modalias-in-device-uevent.patch
This is a note to let you know that I've just added the patch titled
nbd: fix use-after-free of rq/bio in the xmit path
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nbd-fix-use-after-free-of-rq-bio-in-the-xmit-path.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 429a787be6793554ee02aacc7e1f11ebcecc4453 Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe(a)fb.com>
Date: Thu, 17 Nov 2016 12:30:37 -0700
Subject: nbd: fix use-after-free of rq/bio in the xmit path
From: Jens Axboe <axboe(a)fb.com>
commit 429a787be6793554ee02aacc7e1f11ebcecc4453 upstream.
For writes, a completion can come in while we're still iterating
the request and bio chain. If that happens, we're reading freed
memory and we can crash.
Break out after the last segment and avoid having the iterator
read freed memory.
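The ordering constraint, as a hedged sketch (the helper name is
illustrative, not the driver's): everything needed after a send must be
read from the bio before the send, because the completion may free it.

        struct bio *bio = req->bio;
        while (bio) {
                /* Read bi_next before sending: once the last chunk is on
                 * the wire, the completion may run and free this bio. */
                struct bio *next = bio->bi_next;

                send_bio_segments(bio);         /* hypothetical send helper */

                bio = next;                     /* never dereference bio again */
        }

Inside bio_for_each_segment() the same reasoning applies: break out on the
last segment rather than letting the iterator advance through freed memory,
which is exactly what the hunk below does.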
Reviewed-by: Josef Bacik <jbacik(a)fb.com>
Signed-off-by: Jens Axboe <axboe(a)fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/block/nbd.c | 32 +++++++++++++++++++++++---------
1 file changed, 23 insertions(+), 9 deletions(-)
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -272,6 +272,7 @@ static int nbd_send_cmd(struct nbd_devic
int result, flags;
struct nbd_request request;
unsigned long size = blk_rq_bytes(req);
+ struct bio *bio;
u32 type;
if (req->cmd_type == REQ_TYPE_DRV_PRIV)
@@ -305,16 +306,20 @@ static int nbd_send_cmd(struct nbd_devic
return -EIO;
}
- if (type == NBD_CMD_WRITE) {
- struct req_iterator iter;
+ if (type != NBD_CMD_WRITE)
+ return 0;
+
+ flags = 0;
+ bio = req->bio;
+ while (bio) {
+ struct bio *next = bio->bi_next;
+ struct bvec_iter iter;
struct bio_vec bvec;
- /*
- * we are really probing at internals to determine
- * whether to set MSG_MORE or not...
- */
- rq_for_each_segment(bvec, req, iter) {
- flags = 0;
- if (!rq_iter_last(bvec, iter))
+
+ bio_for_each_segment(bvec, bio, iter) {
+ bool is_last = !next && bio_iter_last(bvec, iter);
+
+ if (is_last)
flags = MSG_MORE;
dev_dbg(nbd_to_dev(nbd), "request %p: sending %d bytes data\n",
cmd, bvec.bv_len);
@@ -325,7 +330,16 @@ static int nbd_send_cmd(struct nbd_devic
result);
return -EIO;
}
+ /*
+ * The completion might already have come in,
+ * so break for the last one instead of letting
+ * the iterator do it. This prevents use-after-free
+ * of the bio.
+ */
+ if (is_last)
+ break;
}
+ bio = next;
}
return 0;
}
Patches currently in stable-queue which might be from axboe(a)fb.com are
queue-4.9/nbd-fix-use-after-free-of-rq-bio-in-the-xmit-path.patch
This is a note to let you know that I've just added the patch titled
kernel/acct.c: fix the acct->needcheck check in check_free_space()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kernel-acct.c-fix-the-acct-needcheck-check-in-check_free_space.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 4d9570158b6260f449e317a5f9ed030c2504a615 Mon Sep 17 00:00:00 2001
From: Oleg Nesterov <oleg(a)redhat.com>
Date: Thu, 4 Jan 2018 16:17:49 -0800
Subject: kernel/acct.c: fix the acct->needcheck check in check_free_space()
From: Oleg Nesterov <oleg(a)redhat.com>
commit 4d9570158b6260f449e317a5f9ed030c2504a615 upstream.
As Tsukada explains, the time_is_before_jiffies(acct->needcheck) check
is very wrong; we need time_is_after_jiffies() to make sys_acct() work.
Ignoring the overflows, the code should "goto out" if needcheck >
jiffies, while currently it checks "needcheck < jiffies" and thus in the
likely case check_free_space() does nothing until jiffies overflows.
In particular this means that sys_acct() is simply broken, acct_on()
sets acct->needcheck = jiffies and expects that check_free_space()
should set acct->active = 1 after the free-space check, but this won't
happen if jiffies increments in between.
This was broken by commit 32dc73086015 ("get rid of timer in
kern/acct.c") in 2011, then another (correct) commit 795a2f22a8ea
("acct() should honour the limits from the very beginning") made the
problem more visible.
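For reference, the two helpers compare a stored deadline against jiffies
(definitions from include/linux/jiffies.h); a sketch of the intended flow:

        /* time_is_before_jiffies(a) == time_after(jiffies, a)  - a is in the past
         * time_is_after_jiffies(a)  == time_before(jiffies, a) - a is in the future
         */
        acct->needcheck = jiffies + ACCT_TIMEOUT * HZ;  /* next check due then */

        /* later, in check_free_space(): */
        if (time_is_after_jiffies(acct->needcheck))
                goto out;       /* deadline still in the future: skip the statfs */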
Link: http://lkml.kernel.org/r/20171213133940.GA6554@redhat.com
Fixes: 32dc73086015 ("get rid of timer in kern/acct.c")
Reported-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Suggested-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
kernel/acct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/acct.c
+++ b/kernel/acct.c
@@ -99,7 +99,7 @@ static int check_free_space(struct bsd_a
{
struct kstatfs sbuf;
- if (time_is_before_jiffies(acct->needcheck))
+ if (time_is_after_jiffies(acct->needcheck))
goto out;
/* May block */
Patches currently in stable-queue which might be from oleg(a)redhat.com are
queue-4.9/kernel-acct.c-fix-the-acct-needcheck-check-in-check_free_space.patch
This is a note to let you know that I've just added the patch titled
fscache: Fix the default for fscache_maybe_release_page()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
fscache-fix-the-default-for-fscache_maybe_release_page.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 98801506552593c9b8ac11021b0cdad12cab4f6b Mon Sep 17 00:00:00 2001
From: David Howells <dhowells(a)redhat.com>
Date: Tue, 2 Jan 2018 10:02:19 +0000
Subject: fscache: Fix the default for fscache_maybe_release_page()
From: David Howells <dhowells(a)redhat.com>
commit 98801506552593c9b8ac11021b0cdad12cab4f6b upstream.
Fix the default for fscache_maybe_release_page() for when the cookie isn't
valid or the page isn't cached. It mustn't return false as that indicates
the page cannot yet be freed.
The problem with the default is that if, say, there's no cache, but a
network filesystem's pages are using up almost all the available memory, a
system can OOM because the filesystem ->releasepage() op will not allow
them to be released as fscache_maybe_release_page() incorrectly prevents
it.
This can be tested by writing a sequence of 512MiB files to an AFS mount.
It does not affect NFS or CIFS because both of those wrap the call in a
check of PG_fscache, and it shouldn't bother Ceph, as that only has
PG_private set whilst writeback is in progress. This might be an issue for
9P, however.
Note that the pages aren't entirely stuck. Removing a file or unmounting
will clear things because that uses ->invalidatepage() instead.
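How the default is consumed, as a sketch of a network filesystem's
->releasepage() (loosely modelled on AFS; the function and helper names
here are illustrative):

        static int example_releasepage(struct page *page, gfp_t gfp)
        {
                struct fscache_cookie *cookie = example_cookie(page); /* hypothetical */

                /* false means "not freeable yet"; with the old default every
                 * page of an uncached mount was pinned this way. */
                if (!fscache_maybe_release_page(cookie, page, gfp))
                        return 0;

                /* ... filesystem-private teardown ... */
                return 1;       /* page may be released */
        }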
Fixes: 201a15428bd5 ("FS-Cache: Handle pages pending storage that get evicted under OOM conditions")
Reported-by: Marc Dionne <marc.dionne(a)auristor.com>
Signed-off-by: David Howells <dhowells(a)redhat.com>
Reviewed-by: Jeff Layton <jlayton(a)redhat.com>
Acked-by: Al Viro <viro(a)zeniv.linux.org.uk>
Tested-by: Marc Dionne <marc.dionne(a)auristor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/fscache.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -764,7 +764,7 @@ bool fscache_maybe_release_page(struct f
{
if (fscache_cookie_valid(cookie) && PageFsCache(page))
return __fscache_maybe_release_page(cookie, page, gfp);
- return false;
+ return true;
}
/**
Patches currently in stable-queue which might be from dhowells(a)redhat.com are
queue-4.9/fscache-fix-the-default-for-fscache_maybe_release_page.patch
This is a note to let you know that I've just added the patch titled
crypto: pcrypt - fix freeing pcrypt instances
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-pcrypt-fix-freeing-pcrypt-instances.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From d76c68109f37cb85b243a1cf0f40313afd2bae68 Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Wed, 20 Dec 2017 14:28:25 -0800
Subject: crypto: pcrypt - fix freeing pcrypt instances
From: Eric Biggers <ebiggers(a)google.com>
commit d76c68109f37cb85b243a1cf0f40313afd2bae68 upstream.
pcrypt is using the old way of freeing instances, where the ->free()
method specified in the 'struct crypto_template' is passed a pointer to
the 'struct crypto_instance'. But the crypto_instance is being
kfree()'d directly, which is incorrect because the memory was actually
allocated as an aead_instance, which contains the crypto_instance at a
nonzero offset. Thus, the wrong pointer was being kfree()'d.
Fix it by switching to the new way of freeing aead_instances, where the
->free() method is specified in the aead_instance itself.
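The layout problem, sketched (simplified from the real struct aead_instance
in include/crypto/internal/aead.h; the function names are illustrative):

        struct aead_instance {
                void (*free)(struct aead_instance *inst); /* the new ->free() */
                /* ... */
                struct crypto_instance base;    /* at a nonzero offset */
        };

        static void old_broken_free(struct crypto_instance *inst)
        {
                kfree(inst);    /* inst == &aead_instance->base: wrong pointer */
        }

        static void new_pcrypt_free(struct aead_instance *inst)
        {
                kfree(inst);    /* matches the original kzalloc() */
        }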
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Fixes: 0496f56065e0 ("crypto: pcrypt - Add support for new AEAD interface")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
crypto/pcrypt.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -254,6 +254,14 @@ static void pcrypt_aead_exit_tfm(struct
crypto_free_aead(ctx->child);
}
+static void pcrypt_free(struct aead_instance *inst)
+{
+ struct pcrypt_instance_ctx *ctx = aead_instance_ctx(inst);
+
+ crypto_drop_aead(&ctx->spawn);
+ kfree(inst);
+}
+
static int pcrypt_init_instance(struct crypto_instance *inst,
struct crypto_alg *alg)
{
@@ -319,6 +327,8 @@ static int pcrypt_create_aead(struct cry
inst->alg.encrypt = pcrypt_aead_encrypt;
inst->alg.decrypt = pcrypt_aead_decrypt;
+ inst->free = pcrypt_free;
+
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_aead;
@@ -349,14 +359,6 @@ static int pcrypt_create(struct crypto_t
return -EINVAL;
}
-static void pcrypt_free(struct crypto_instance *inst)
-{
- struct pcrypt_instance_ctx *ctx = crypto_instance_ctx(inst);
-
- crypto_drop_aead(&ctx->spawn);
- kfree(inst);
-}
-
static int pcrypt_cpumask_change_notify(struct notifier_block *self,
unsigned long val, void *data)
{
@@ -469,7 +471,6 @@ static void pcrypt_fini_padata(struct pa
static struct crypto_template pcrypt_tmpl = {
.name = "pcrypt",
.create = pcrypt_create,
- .free = pcrypt_free,
.module = THIS_MODULE,
};
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.9/crypto-chacha20poly1305-validate-the-digest-size.patch
queue-4.9/crypto-pcrypt-fix-freeing-pcrypt-instances.patch
This is a note to let you know that I've just added the patch titled
crypto: n2 - cure use after free
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-n2-cure-use-after-free.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 203f45003a3d03eea8fa28d74cfc74c354416fdb Mon Sep 17 00:00:00 2001
From: Jan Engelhardt <jengelh(a)inai.de>
Date: Tue, 19 Dec 2017 19:09:07 +0100
Subject: crypto: n2 - cure use after free
From: Jan Engelhardt <jengelh(a)inai.de>
commit 203f45003a3d03eea8fa28d74cfc74c354416fdb upstream.
queue_cache_init is first called for the Control Word Queue
(n2_crypto_probe). At that time, queue_cache[0] is NULL and a new
kmem_cache will be allocated. If the subsequent n2_register_algs call
fails, the kmem_cache will be released in queue_cache_destroy, but
queue_cache[0] is not set back to NULL.
So when the Modular Arithmetic Unit gets probed next (n2_mau_probe),
queue_cache_init will not allocate a kmem_cache again, but leave it
at its bogus value, causing a BUG() to trigger when queue_cache[0] is
eventually passed to kmem_cache_zalloc:
n2_crypto: Found N2CP at /virtual-devices@100/n2cp@7
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
n2_crypto: md5 alg registration failed
n2cp f028687c: /virtual-devices@100/n2cp@7: Unable to register algorithms.
called queue_cache_destroy
n2cp: probe of f028687c failed with error -22
n2_crypto: Found NCP at /virtual-devices@100/ncp@6
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
kernel BUG at mm/slab.c:2993!
Call Trace:
[0000000000604488] kmem_cache_alloc+0x1a8/0x1e0
(inlined) kmem_cache_zalloc
(inlined) new_queue
(inlined) spu_queue_setup
(inlined) handle_exec_unit
[0000000010c61eb4] spu_mdesc_scan+0x1f4/0x460 [n2_crypto]
[0000000010c62b80] n2_mau_probe+0x100/0x220 [n2_crypto]
[000000000084b174] platform_drv_probe+0x34/0xc0
Signed-off-by: Jan Engelhardt <jengelh(a)inai.de>
Acked-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/n2_core.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1620,6 +1620,7 @@ static int queue_cache_init(void)
CWQ_ENTRY_SIZE, 0, NULL);
if (!queue_cache[HV_NCS_QTYPE_CWQ - 1]) {
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
return -ENOMEM;
}
return 0;
@@ -1629,6 +1630,8 @@ static void queue_cache_destroy(void)
{
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_CWQ - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
+ queue_cache[HV_NCS_QTYPE_CWQ - 1] = NULL;
}
static int spu_queue_register(struct spu_queue *p, unsigned long q_type)
Patches currently in stable-queue which might be from jengelh(a)inai.de are
queue-4.9/crypto-n2-cure-use-after-free.patch
This is a note to let you know that I've just added the patch titled
crypto: chacha20poly1305 - validate the digest size
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-chacha20poly1305-validate-the-digest-size.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From e57121d08c38dabec15cf3e1e2ad46721af30cae Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Mon, 11 Dec 2017 12:15:17 -0800
Subject: crypto: chacha20poly1305 - validate the digest size
From: Eric Biggers <ebiggers(a)google.com>
commit e57121d08c38dabec15cf3e1e2ad46721af30cae upstream.
If the rfc7539 template was instantiated with a hash algorithm whose
digest size is larger than 16 bytes (POLY1305_DIGEST_SIZE), then the digest
overran the 'tag' buffer in 'struct chachapoly_req_ctx', corrupting the
subsequent memory, including 'cryptlen'. This caused a crash during
crypto_skcipher_decrypt().
Fix it by requiring, when instantiating the template, that the
underlying hash algorithm have the digest size expected for Poly1305.
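The overrun, sketched against a simplified request context (field order
follows the description above; only the relevant members are shown):

        struct chachapoly_req_ctx {     /* simplified sketch */
                u8 tag[POLY1305_DIGEST_SIZE];   /* 16 bytes */
                /* ... */
                unsigned int cryptlen;  /* clobbered when a 32-byte sha256
                                         * digest is written through 'tag' */
        };

        /* The fix below: refuse any hash whose digest doesn't fit 'tag'. */
        if (poly_hash->digestsize != POLY1305_DIGEST_SIZE)
                return -EINVAL;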
Reproducer:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
        int algfd, reqfd;
        struct sockaddr_alg addr = {
                .salg_type = "aead",
                .salg_name = "rfc7539(chacha20,sha256)",
        };
        unsigned char buf[32] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (void *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, sizeof(buf));
        reqfd = accept(algfd, 0, 0);
        write(reqfd, buf, 16);
        read(reqfd, buf, 16);
}
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
crypto/chacha20poly1305.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/crypto/chacha20poly1305.c
+++ b/crypto/chacha20poly1305.c
@@ -610,6 +610,11 @@ static int chachapoly_create(struct cryp
algt->mask));
if (IS_ERR(poly))
return PTR_ERR(poly);
+ poly_hash = __crypto_hash_alg_common(poly);
+
+ err = -EINVAL;
+ if (poly_hash->digestsize != POLY1305_DIGEST_SIZE)
+ goto out_put_poly;
err = -ENOMEM;
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
@@ -618,7 +623,6 @@ static int chachapoly_create(struct cryp
ctx = aead_instance_ctx(inst);
ctx->saltlen = CHACHAPOLY_IV_SIZE - ivsize;
- poly_hash = __crypto_hash_alg_common(poly);
err = crypto_init_ahash_spawn(&ctx->poly, poly_hash,
aead_crypto_instance(inst));
if (err)
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.9/crypto-chacha20poly1305-validate-the-digest-size.patch
queue-4.9/crypto-pcrypt-fix-freeing-pcrypt-instances.patch
This is a note to let you know that I've just added the patch titled
sunxi-rsb: Include OF based modalias in device uevent
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sunxi-rsb-include-of-based-modalias-in-device-uevent.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From e2bf801ecd4e62222a46d1ba9e57e710171d29c1 Mon Sep 17 00:00:00 2001
From: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
Date: Mon, 27 Nov 2017 20:05:34 +0100
Subject: sunxi-rsb: Include OF based modalias in device uevent
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
commit e2bf801ecd4e62222a46d1ba9e57e710171d29c1 upstream.
Include the OF-based modalias in the uevent sent when registering devices
on the sunxi RSB bus, so that user space has a chance to autoload the
kernel module for the device.
Fixes a regression caused by commit 3f241bfa60bd ("arm64: allwinner: a64:
pine64: Use dcdc1 regulator for mmc0"). When the axp20x-rsb module for
the AXP803 PMIC is built as a module, it is not loaded and the system
ends up with a dysfunctional MMC controller.
Fixes: d787dcdb9c8f ("bus: sunxi-rsb: Add driver for Allwinner Reduced Serial Bus")
Acked-by: Chen-Yu Tsai <wens(a)csie.org>
Signed-off-by: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
Signed-off-by: Maxime Ripard <maxime.ripard(a)free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/bus/sunxi-rsb.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/bus/sunxi-rsb.c
+++ b/drivers/bus/sunxi-rsb.c
@@ -178,6 +178,7 @@ static struct bus_type sunxi_rsb_bus = {
.match = sunxi_rsb_device_match,
.probe = sunxi_rsb_device_probe,
.remove = sunxi_rsb_device_remove,
+ .uevent = of_device_uevent_modalias,
};
static void sunxi_rsb_dev_release(struct device *dev)
Patches currently in stable-queue which might be from stefan.bruens(a)rwth-aachen.de are
queue-4.4/sunxi-rsb-include-of-based-modalias-in-device-uevent.patch
This is a note to let you know that I've just added the patch titled
kernel/acct.c: fix the acct->needcheck check in check_free_space()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kernel-acct.c-fix-the-acct-needcheck-check-in-check_free_space.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 4d9570158b6260f449e317a5f9ed030c2504a615 Mon Sep 17 00:00:00 2001
From: Oleg Nesterov <oleg(a)redhat.com>
Date: Thu, 4 Jan 2018 16:17:49 -0800
Subject: kernel/acct.c: fix the acct->needcheck check in check_free_space()
From: Oleg Nesterov <oleg(a)redhat.com>
commit 4d9570158b6260f449e317a5f9ed030c2504a615 upstream.
As Tsukada explains, the time_is_before_jiffies(acct->needcheck) check
is very wrong; we need time_is_after_jiffies() to make sys_acct() work.
Ignoring the overflows, the code should "goto out" if needcheck >
jiffies, while currently it checks "needcheck < jiffies" and thus in the
likely case check_free_space() does nothing until jiffies overflows.
In particular this means that sys_acct() is simply broken, acct_on()
sets acct->needcheck = jiffies and expects that check_free_space()
should set acct->active = 1 after the free-space check, but this won't
happen if jiffies increments in between.
This was broken by commit 32dc73086015 ("get rid of timer in
kern/acct.c") in 2011, then another (correct) commit 795a2f22a8ea
("acct() should honour the limits from the very beginning") made the
problem more visible.
Link: http://lkml.kernel.org/r/20171213133940.GA6554@redhat.com
Fixes: 32dc73086015 ("get rid of timer in kern/acct.c")
Reported-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Suggested-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
kernel/acct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/acct.c
+++ b/kernel/acct.c
@@ -99,7 +99,7 @@ static int check_free_space(struct bsd_a
{
struct kstatfs sbuf;
- if (time_is_before_jiffies(acct->needcheck))
+ if (time_is_after_jiffies(acct->needcheck))
goto out;
/* May block */
Patches currently in stable-queue which might be from oleg(a)redhat.com are
queue-4.4/x86-kasan-write-protect-kasan-zero-shadow.patch
queue-4.4/kernel-acct.c-fix-the-acct-needcheck-check-in-check_free_space.patch
This is a note to let you know that I've just added the patch titled
fscache: Fix the default for fscache_maybe_release_page()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
fscache-fix-the-default-for-fscache_maybe_release_page.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 98801506552593c9b8ac11021b0cdad12cab4f6b Mon Sep 17 00:00:00 2001
From: David Howells <dhowells(a)redhat.com>
Date: Tue, 2 Jan 2018 10:02:19 +0000
Subject: fscache: Fix the default for fscache_maybe_release_page()
From: David Howells <dhowells(a)redhat.com>
commit 98801506552593c9b8ac11021b0cdad12cab4f6b upstream.
Fix the default for fscache_maybe_release_page() for when the cookie isn't
valid or the page isn't cached. It mustn't return false as that indicates
the page cannot yet be freed.
The problem with the default is that if, say, there's no cache, but a
network filesystem's pages are using up almost all the available memory, a
system can OOM because the filesystem ->releasepage() op will not allow
them to be released as fscache_maybe_release_page() incorrectly prevents
it.
This can be tested by writing a sequence of 512MiB files to an AFS mount.
It does not affect NFS or CIFS because both of those wrap the call in a
check of PG_fscache, and it shouldn't bother Ceph, as that only has
PG_private set whilst writeback is in progress. This might be an issue for
9P, however.
Note that the pages aren't entirely stuck. Removing a file or unmounting
will clear things because that uses ->invalidatepage() instead.
Fixes: 201a15428bd5 ("FS-Cache: Handle pages pending storage that get evicted under OOM conditions")
Reported-by: Marc Dionne <marc.dionne(a)auristor.com>
Signed-off-by: David Howells <dhowells(a)redhat.com>
Reviewed-by: Jeff Layton <jlayton(a)redhat.com>
Acked-by: Al Viro <viro(a)zeniv.linux.org.uk>
Tested-by: Marc Dionne <marc.dionne(a)auristor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/fscache.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -764,7 +764,7 @@ bool fscache_maybe_release_page(struct f
{
if (fscache_cookie_valid(cookie) && PageFsCache(page))
return __fscache_maybe_release_page(cookie, page, gfp);
- return false;
+ return true;
}
/**
Patches currently in stable-queue which might be from dhowells(a)redhat.com are
queue-4.4/fscache-fix-the-default-for-fscache_maybe_release_page.patch
This is a note to let you know that I've just added the patch titled
crypto: pcrypt - fix freeing pcrypt instances
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-pcrypt-fix-freeing-pcrypt-instances.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From d76c68109f37cb85b243a1cf0f40313afd2bae68 Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Wed, 20 Dec 2017 14:28:25 -0800
Subject: crypto: pcrypt - fix freeing pcrypt instances
From: Eric Biggers <ebiggers(a)google.com>
commit d76c68109f37cb85b243a1cf0f40313afd2bae68 upstream.
pcrypt is using the old way of freeing instances, where the ->free()
method specified in the 'struct crypto_template' is passed a pointer to
the 'struct crypto_instance'. But the crypto_instance is being
kfree()'d directly, which is incorrect because the memory was actually
allocated as an aead_instance, which contains the crypto_instance at a
nonzero offset. Thus, the wrong pointer was being kfree()'d.
Fix it by switching to the new way of freeing aead_instances, where the
->free() method is specified in the aead_instance itself.
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Fixes: 0496f56065e0 ("crypto: pcrypt - Add support for new AEAD interface")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
crypto/pcrypt.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -254,6 +254,14 @@ static void pcrypt_aead_exit_tfm(struct
crypto_free_aead(ctx->child);
}
+static void pcrypt_free(struct aead_instance *inst)
+{
+ struct pcrypt_instance_ctx *ctx = aead_instance_ctx(inst);
+
+ crypto_drop_aead(&ctx->spawn);
+ kfree(inst);
+}
+
static int pcrypt_init_instance(struct crypto_instance *inst,
struct crypto_alg *alg)
{
@@ -319,6 +327,8 @@ static int pcrypt_create_aead(struct cry
inst->alg.encrypt = pcrypt_aead_encrypt;
inst->alg.decrypt = pcrypt_aead_decrypt;
+ inst->free = pcrypt_free;
+
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_aead;
@@ -349,14 +359,6 @@ static int pcrypt_create(struct crypto_t
return -EINVAL;
}
-static void pcrypt_free(struct crypto_instance *inst)
-{
- struct pcrypt_instance_ctx *ctx = crypto_instance_ctx(inst);
-
- crypto_drop_aead(&ctx->spawn);
- kfree(inst);
-}
-
static int pcrypt_cpumask_change_notify(struct notifier_block *self,
unsigned long val, void *data)
{
@@ -469,7 +471,6 @@ static void pcrypt_fini_padata(struct pa
static struct crypto_template pcrypt_tmpl = {
.name = "pcrypt",
.create = pcrypt_create,
- .free = pcrypt_free,
.module = THIS_MODULE,
};
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.4/crypto-chacha20poly1305-validate-the-digest-size.patch
queue-4.4/crypto-pcrypt-fix-freeing-pcrypt-instances.patch
This is a note to let you know that I've just added the patch titled
crypto: n2 - cure use after free
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-n2-cure-use-after-free.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 203f45003a3d03eea8fa28d74cfc74c354416fdb Mon Sep 17 00:00:00 2001
From: Jan Engelhardt <jengelh(a)inai.de>
Date: Tue, 19 Dec 2017 19:09:07 +0100
Subject: crypto: n2 - cure use after free
From: Jan Engelhardt <jengelh(a)inai.de>
commit 203f45003a3d03eea8fa28d74cfc74c354416fdb upstream.
queue_cache_init is first called for the Control Word Queue
(n2_crypto_probe). At that time, queue_cache[0] is NULL and a new
kmem_cache will be allocated. If the subsequent n2_register_algs call
fails, the kmem_cache will be released in queue_cache_destroy, but
queue_cache[0] is not set back to NULL.
So when the Modular Arithmetic Unit gets probed next (n2_mau_probe),
queue_cache_init will not allocate a kmem_cache again, but leave it
at its bogus value, causing a BUG() to trigger when queue_cache[0] is
eventually passed to kmem_cache_zalloc:
n2_crypto: Found N2CP at /virtual-devices@100/n2cp@7
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
n2_crypto: md5 alg registration failed
n2cp f028687c: /virtual-devices@100/n2cp@7: Unable to register algorithms.
called queue_cache_destroy
n2cp: probe of f028687c failed with error -22
n2_crypto: Found NCP at /virtual-devices@100/ncp@6
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
kernel BUG at mm/slab.c:2993!
Call Trace:
[0000000000604488] kmem_cache_alloc+0x1a8/0x1e0
(inlined) kmem_cache_zalloc
(inlined) new_queue
(inlined) spu_queue_setup
(inlined) handle_exec_unit
[0000000010c61eb4] spu_mdesc_scan+0x1f4/0x460 [n2_crypto]
[0000000010c62b80] n2_mau_probe+0x100/0x220 [n2_crypto]
[000000000084b174] platform_drv_probe+0x34/0xc0
Signed-off-by: Jan Engelhardt <jengelh(a)inai.de>
Acked-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/n2_core.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1641,6 +1641,7 @@ static int queue_cache_init(void)
CWQ_ENTRY_SIZE, 0, NULL);
if (!queue_cache[HV_NCS_QTYPE_CWQ - 1]) {
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
return -ENOMEM;
}
return 0;
@@ -1650,6 +1651,8 @@ static void queue_cache_destroy(void)
{
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_CWQ - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
+ queue_cache[HV_NCS_QTYPE_CWQ - 1] = NULL;
}
static int spu_queue_register(struct spu_queue *p, unsigned long q_type)
Patches currently in stable-queue which might be from jengelh(a)inai.de are
queue-4.4/crypto-n2-cure-use-after-free.patch
This is a note to let you know that I've just added the patch titled
crypto: chacha20poly1305 - validate the digest size
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-chacha20poly1305-validate-the-digest-size.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From e57121d08c38dabec15cf3e1e2ad46721af30cae Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Mon, 11 Dec 2017 12:15:17 -0800
Subject: crypto: chacha20poly1305 - validate the digest size
From: Eric Biggers <ebiggers(a)google.com>
commit e57121d08c38dabec15cf3e1e2ad46721af30cae upstream.
If the rfc7539 template was instantiated with a hash algorithm whose
digest size is larger than 16 bytes (POLY1305_DIGEST_SIZE), then the digest
overran the 'tag' buffer in 'struct chachapoly_req_ctx', corrupting the
subsequent memory, including 'cryptlen'. This caused a crash during
crypto_skcipher_decrypt().
Fix it by requiring, when instantiating the template, that the
underlying hash algorithm have the digest size expected for Poly1305.
Reproducer:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
        int algfd, reqfd;
        struct sockaddr_alg addr = {
                .salg_type = "aead",
                .salg_name = "rfc7539(chacha20,sha256)",
        };
        unsigned char buf[32] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (void *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, sizeof(buf));
        reqfd = accept(algfd, 0, 0);
        write(reqfd, buf, 16);
        read(reqfd, buf, 16);
}
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
crypto/chacha20poly1305.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/crypto/chacha20poly1305.c
+++ b/crypto/chacha20poly1305.c
@@ -600,6 +600,11 @@ static int chachapoly_create(struct cryp
CRYPTO_ALG_TYPE_AHASH_MASK);
if (IS_ERR(poly))
return PTR_ERR(poly);
+ poly_hash = __crypto_hash_alg_common(poly);
+
+ err = -EINVAL;
+ if (poly_hash->digestsize != POLY1305_DIGEST_SIZE)
+ goto out_put_poly;
err = -ENOMEM;
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
@@ -608,7 +613,6 @@ static int chachapoly_create(struct cryp
ctx = aead_instance_ctx(inst);
ctx->saltlen = CHACHAPOLY_IV_SIZE - ivsize;
- poly_hash = __crypto_hash_alg_common(poly);
err = crypto_init_ahash_spawn(&ctx->poly, poly_hash,
aead_crypto_instance(inst));
if (err)
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.4/crypto-chacha20poly1305-validate-the-digest-size.patch
queue-4.4/crypto-pcrypt-fix-freeing-pcrypt-instances.patch
On Thu, Jan 04, 2018 at 09:21:05AM +1100, NeilBrown wrote:
>
> From: Thiago Rafael Becker <thiago.becker(a)gmail.com>
>
Thanks for the two backports, now queued up.
greg k-h
Hi,
Can you include commit:
commit 429a787be6793554ee02aacc7e1f11ebcecc4453
Author: Jens Axboe <axboe(a)fb.com>
Date: Thu Nov 17 12:30:37 2016 -0700
nbd: fix use-after-free of rq/bio in the xmit path
in the 4.9 stable release? The bug doesn't exist in 4.8 or 4.10, but it
does in 4.9, and it was a mistake that the above commit wasn't marked for
4.9 stable when it was added.
--
Jens Axboe
This is a note to let you know that I've just added the patch titled
x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-pti-rename-bug_cpu_insecure-to-bug_cpu_meltdown.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From de791821c295cc61419a06fe5562288417d1bc58 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Fri, 5 Jan 2018 15:27:34 +0100
Subject: x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN
From: Thomas Gleixner <tglx(a)linutronix.de>
commit de791821c295cc61419a06fe5562288417d1bc58 upstream.
Use the name associated with the particular attack which needs page table
isolation for mitigation.
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: Alan Cox <gnomes(a)lxorguk.ukuu.org.uk>
Cc: Jiri Koshina <jikos(a)kernel.org>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Tim Chen <tim.c.chen(a)linux.intel.com>
Cc: Andi Lutomirski <luto(a)amacapital.net>
Cc: Andi Kleen <ak(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Paul Turner <pjt(a)google.com>
Cc: Tom Lendacky <thomas.lendacky(a)amd.com>
Cc: Greg KH <gregkh(a)linux-foundation.org>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Kees Cook <keescook(a)google.com>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801051525300.1724@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/kernel/cpu/common.c | 2 +-
arch/x86/mm/pti.c | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -341,6 +341,6 @@
#define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* SWAPGS without input dep on GS */
#define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */
#define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */
-#define X86_BUG_CPU_INSECURE X86_BUG(14) /* CPU is insecure and needs kernel page table isolation */
+#define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */
#endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -900,7 +900,7 @@ static void __init early_identify_cpu(st
setup_force_cpu_cap(X86_FEATURE_ALWAYS);
if (c->x86_vendor != X86_VENDOR_AMD)
- setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
+ setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
fpu__init_system(c);
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -56,13 +56,13 @@
static void __init pti_print_if_insecure(const char *reason)
{
- if (boot_cpu_has_bug(X86_BUG_CPU_INSECURE))
+ if (boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
pr_info("%s\n", reason);
}
static void __init pti_print_if_secure(const char *reason)
{
- if (!boot_cpu_has_bug(X86_BUG_CPU_INSECURE))
+ if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
pr_info("%s\n", reason);
}
@@ -96,7 +96,7 @@ void __init pti_check_boottime_disable(v
}
autosel:
- if (!boot_cpu_has_bug(X86_BUG_CPU_INSECURE))
+ if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
return;
enable:
setup_force_cpu_cap(X86_FEATURE_PTI);
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.14/x86-tlb-drop-the-_gpl-from-the-cpu_tlbstate-export.patch
queue-4.14/x86-alternatives-add-missing-n-at-end-of-alternative-inline-asm.patch
queue-4.14/x86-kaslr-fix-the-vaddr_end-mess.patch
queue-4.14/efi-capsule-loader-reinstate-virtual-capsule-mapping.patch
queue-4.14/x86-mm-set-modules_end-to-0xffffffffff000000.patch
queue-4.14/mm-sparse.c-wrong-allocation-for-mem_section.patch
queue-4.14/x86-events-intel-ds-use-the-proper-cache-flush-method-for-mapping-ds-buffers.patch
queue-4.14/x86-pti-rename-bug_cpu_insecure-to-bug_cpu_meltdown.patch
queue-4.14/x86-mm-map-cpu_entry_area-at-the-same-place-on-4-5-level.patch
This is a note to let you know that I've just added the patch titled
x86/mm: Set MODULES_END to 0xffffffffff000000
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-mm-set-modules_end-to-0xffffffffff000000.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From f5a40711fa58f1c109165a4fec6078bf2dfd2bdc Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Date: Thu, 28 Dec 2017 19:06:20 +0300
Subject: x86/mm: Set MODULES_END to 0xffffffffff000000
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
commit f5a40711fa58f1c109165a4fec6078bf2dfd2bdc upstream.
Since f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
kasan_mem_to_shadow(MODULES_END) could be not aligned to a page boundary.
So passing a page-unaligned address to kasan_populate_zero_shadow() has two
possible effects:
1) It may leave a one-page hole in an area that is supposed to be populated.
After commit 21506525fb8d ("x86/kasan/64: Teach KASAN about the
cpu_entry_area") that hole happens to be in the shadow covering the fixmap
area and leads to a crash:
BUG: unable to handle kernel paging request at fffffbffffe8ee04
RIP: 0010:check_memory_region+0x5c/0x190
Call Trace:
<NMI>
memcpy+0x1f/0x50
ghes_copy_tofrom_phys+0xab/0x180
ghes_read_estatus+0xfb/0x280
ghes_notify_nmi+0x2b2/0x410
nmi_handle+0x115/0x2c0
default_do_nmi+0x57/0x110
do_nmi+0xf8/0x150
end_repeat_nmi+0x1a/0x1e
Note: the crash likely disappeared after commit 92a0f81d8957, which
changed the kasan_populate_zero_shadow() call back to the way it was
before commit 21506525fb8d.
2) Attempt to load module near MODULES_END will fail, because
__vmalloc_node_range() called from kasan_module_alloc() will hit the
WARN_ON(!pte_none(*pte)) in the vmap_pte_range() and bail out with error.
To fix this we need to make kasan_mem_to_shadow(MODULES_END) page aligned
which means that MODULES_END should be 8*PAGE_SIZE aligned.
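Spelling out the arithmetic (KASAN_SHADOW_SCALE_SHIFT is 3 on x86-64; the
BUILD_BUG_ON line is a hypothetical assertion, not part of the patch):

        /* KASAN maps 8 bytes of address space onto 1 shadow byte:
         *
         *   shadow(addr) = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
         *
         * so shadow(MODULES_END) is page aligned only if MODULES_END is
         * aligned to PAGE_SIZE << 3 == 8 * PAGE_SIZE (32K with 4K pages).
         * 0xffffffffff000000 is 16M aligned and therefore qualifies.
         */
        BUILD_BUG_ON(MODULES_END & (8 * PAGE_SIZE - 1));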
The whole point of commit f06bdd4001c2 was to move MODULES_END down if
NR_CPUS is big, so the cpu_entry_area takes a lot of space.
But since 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
the cpu_entry_area is no longer in fixmap, so we could just set
MODULES_END to a fixed 8*PAGE_SIZE aligned address.
Fixes: f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
Reported-by: Jakub Kicinski <kubakici(a)wp.pl>
Signed-off-by: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Thomas Garnier <thgarnie(a)google.com>
Link: https://lkml.kernel.org/r/20171228160620.23818-1-aryabinin@virtuozzo.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/x86/x86_64/mm.txt | 5 +----
arch/x86/include/asm/pgtable_64_types.h | 2 +-
2 files changed, 2 insertions(+), 5 deletions(-)
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -43,7 +43,7 @@ ffffff0000000000 - ffffff7fffffffff (=39
ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
... unused hole ...
ffffffff80000000 - ffffffff9fffffff (=512 MB) kernel text mapping, from phys 0
-ffffffffa0000000 - [fixmap start] (~1526 MB) module mapping space
+ffffffffa0000000 - fffffffffeffffff (1520 MB) module mapping space
[fixmap start] - ffffffffff5fffff kernel-internal fixmap range
ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
@@ -67,9 +67,6 @@ memory window (this size is arbitrary, i
The mappings are not part of any other kernel PGD and are only available
during EFI runtime calls.
-The module mapping space size changes based on the CONFIG requirements for the
-following fixmap section.
-
Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
physical memory, vmalloc/ioremap space and virtual memory map are randomized.
Their order is preserved but their base will be offset early at boot time.
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -104,7 +104,7 @@ typedef struct { pteval_t pte; } pte_t;
#define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
/* The module sections ends with the start of the fixmap */
-#define MODULES_END __fix_to_virt(__end_of_fixed_addresses + 1)
+#define MODULES_END _AC(0xffffffffff000000, UL)
#define MODULES_LEN (MODULES_END - MODULES_VADDR)
#define ESPFIX_PGD_ENTRY _AC(-2, UL)
Patches currently in stable-queue which might be from aryabinin(a)virtuozzo.com are
queue-4.14/x86-mm-set-modules_end-to-0xffffffffff000000.patch
This is a note to let you know that I've just added the patch titled
x86/kaslr: Fix the vaddr_end mess
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaslr-fix-the-vaddr_end-mess.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 1dddd25125112ba49706518ac9077a1026a18f37 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Thu, 4 Jan 2018 12:32:03 +0100
Subject: x86/kaslr: Fix the vaddr_end mess
From: Thomas Gleixner <tglx(a)linutronix.de>
commit 1dddd25125112ba49706518ac9077a1026a18f37 upstream.
vaddr_end for KASLR is only documented in the KASLR code itself and is
adjusted depending on config options. So it's not surprising that a change
of the memory layout causes KASLR to have the wrong vaddr_end. This can map
arbitrary stuff into other areas, causing hard-to-understand problems.
Remove the whole ifdef magic and define the start of the cpu_entry_area to
be the end of the KASLR vaddr range.
Add documentation to that effect.
Fixes: 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
Reported-by: Benjamin Gilbert <benjamin.gilbert(a)coreos.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Tested-by: Benjamin Gilbert <benjamin.gilbert(a)coreos.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Garnier <thgarnie(a)google.com>,
Cc: Alexander Kuleshov <kuleshovmail(a)gmail.com>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/x86/x86_64/mm.txt | 6 ++++++
arch/x86/include/asm/pgtable_64_types.h | 8 +++++++-
arch/x86/mm/kaslr.c | 32 +++++++++-----------------------
3 files changed, 22 insertions(+), 24 deletions(-)
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,7 @@ ffffea0000000000 - ffffeaffffffffff (=40
... unused hole ...
ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
... unused hole ...
+ vaddr_end for KASLR
fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI
ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
@@ -37,6 +38,7 @@ ffd4000000000000 - ffd5ffffffffffff (=49
... unused hole ...
ffdf000000000000 - fffffc0000000000 (=53 bits) kasan shadow memory (8PB)
... unused hole ...
+ vaddr_end for KASLR
fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
... unused hole ...
ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
@@ -71,3 +73,7 @@ during EFI runtime calls.
Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
physical memory, vmalloc/ioremap space and virtual memory map are randomized.
Their order is preserved but their base will be offset early at boot time.
+
+Be very careful vs. KASLR when changing anything here. The KASLR address
+range must not overlap with anything except the KASAN shadow area, which is
+correct as KASAN disables KASLR.
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -75,7 +75,13 @@ typedef struct { pteval_t pte; } pte_t;
#define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE - 1))
-/* See Documentation/x86/x86_64/mm.txt for a description of the memory map. */
+/*
+ * See Documentation/x86/x86_64/mm.txt for a description of the memory map.
+ *
+ * Be very careful vs. KASLR when changing anything here. The KASLR address
+ * range must not overlap with anything except the KASAN shadow area, which
+ * is correct as KASAN disables KASLR.
+ */
#define MAXMEM _AC(__AC(1, UL) << MAX_PHYSMEM_BITS, UL)
#ifdef CONFIG_X86_5LEVEL
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -34,25 +34,14 @@
#define TB_SHIFT 40
/*
- * Virtual address start and end range for randomization. The end changes base
- * on configuration to have the highest amount of space for randomization.
- * It increases the possible random position for each randomized region.
+ * Virtual address start and end range for randomization.
*
- * You need to add an if/def entry if you introduce a new memory region
- * compatible with KASLR. Your entry must be in logical order with memory
- * layout. For example, ESPFIX is before EFI because its virtual address is
- * before. You also need to add a BUILD_BUG_ON() in kernel_randomize_memory() to
- * ensure that this order is correct and won't be changed.
+ * The end address could depend on more configuration options to make the
+ * highest amount of space for randomization available, but that's too hard
+ * to keep straight and caused issues already.
*/
static const unsigned long vaddr_start = __PAGE_OFFSET_BASE;
-
-#if defined(CONFIG_X86_ESPFIX64)
-static const unsigned long vaddr_end = ESPFIX_BASE_ADDR;
-#elif defined(CONFIG_EFI)
-static const unsigned long vaddr_end = EFI_VA_END;
-#else
-static const unsigned long vaddr_end = __START_KERNEL_map;
-#endif
+static const unsigned long vaddr_end = CPU_ENTRY_AREA_BASE;
/* Default values */
unsigned long page_offset_base = __PAGE_OFFSET_BASE;
@@ -101,15 +90,12 @@ void __init kernel_randomize_memory(void
unsigned long remain_entropy;
/*
- * All these BUILD_BUG_ON checks ensures the memory layout is
- * consistent with the vaddr_start/vaddr_end variables.
+ * These BUILD_BUG_ON checks ensure the memory layout is consistent
+ * with the vaddr_start/vaddr_end variables. These checks are very
+ * limited....
*/
BUILD_BUG_ON(vaddr_start >= vaddr_end);
- BUILD_BUG_ON(IS_ENABLED(CONFIG_X86_ESPFIX64) &&
- vaddr_end >= EFI_VA_END);
- BUILD_BUG_ON((IS_ENABLED(CONFIG_X86_ESPFIX64) ||
- IS_ENABLED(CONFIG_EFI)) &&
- vaddr_end >= __START_KERNEL_map);
+ BUILD_BUG_ON(vaddr_end != CPU_ENTRY_AREA_BASE);
BUILD_BUG_ON(vaddr_end > __START_KERNEL_map);
if (!kaslr_memory_enabled())
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.14/x86-tlb-drop-the-_gpl-from-the-cpu_tlbstate-export.patch
queue-4.14/x86-alternatives-add-missing-n-at-end-of-alternative-inline-asm.patch
queue-4.14/x86-kaslr-fix-the-vaddr_end-mess.patch
queue-4.14/efi-capsule-loader-reinstate-virtual-capsule-mapping.patch
queue-4.14/x86-mm-set-modules_end-to-0xffffffffff000000.patch
queue-4.14/mm-sparse.c-wrong-allocation-for-mem_section.patch
queue-4.14/x86-events-intel-ds-use-the-proper-cache-flush-method-for-mapping-ds-buffers.patch
queue-4.14/x86-pti-rename-bug_cpu_insecure-to-bug_cpu_meltdown.patch
queue-4.14/x86-mm-map-cpu_entry_area-at-the-same-place-on-4-5-level.patch
This is a note to let you know that I've just added the patch titled
x86/events/intel/ds: Use the proper cache flush method for mapping ds buffers
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-events-intel-ds-use-the-proper-cache-flush-method-for-mapping-ds-buffers.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 42f3bdc5dd962a5958bc024c1e1444248a6b8b4a Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz(a)infradead.org>
Date: Thu, 4 Jan 2018 18:07:12 +0100
Subject: x86/events/intel/ds: Use the proper cache flush method for mapping ds buffers
From: Peter Zijlstra <peterz(a)infradead.org>
commit 42f3bdc5dd962a5958bc024c1e1444248a6b8b4a upstream.
Thomas reported the following warning:
BUG: using smp_processor_id() in preemptible [00000000] code: ovsdb-server/4498
caller is native_flush_tlb_single+0x57/0xc0
native_flush_tlb_single+0x57/0xc0
__set_pte_vaddr+0x2d/0x40
set_pte_vaddr+0x2f/0x40
cea_set_pte+0x30/0x40
ds_update_cea.constprop.4+0x4d/0x70
reserve_ds_buffers+0x159/0x410
x86_reserve_hardware+0x150/0x160
x86_pmu_event_init+0x3e/0x1f0
perf_try_init_event+0x69/0x80
perf_event_alloc+0x652/0x740
SyS_perf_event_open+0x3f6/0xd60
do_syscall_64+0x5c/0x190
set_pte_vaddr is used to map the ds buffers into the cpu entry area, but
there are two problems with that:
1) The resulting flush is not supposed to be called in preemptible context
2) The cpu entry area is supposed to be per CPU, but the debug store
buffers are mapped for all CPUs so these mappings need to be flushed
globally.
Add the necessary preemption protection across the mapping code and flush
TLBs globally.
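A minimal sketch of the pattern the fix applies (the wrapper name
map_into_cea is illustrative; cea_set_pte() and flush_tlb_kernel_range()
are the real helpers used by the patch): the PTE updates and the global
shootdown must form one non-preemptible unit.

	static void map_into_cea(void *cea, phys_addr_t pa, size_t size,
				 pgprot_t prot)
	{
		unsigned long start = (unsigned long)cea;
		size_t off;

		preempt_disable();	/* flush primitives touch per-CPU state */
		for (off = 0; off < size; off += PAGE_SIZE)
			cea_set_pte(cea + off, pa + off, prot);
		/* the cpu_entry_area exists on every CPU: flush all of them */
		flush_tlb_kernel_range(start, start + size);
		preempt_enable();
	}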
Fixes: c1961a4631da ("x86/events/intel/ds: Map debug buffers in cpu_entry_area")
Reported-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml(a)ze-it.at>
Signed-off-by: Peter Zijlstra <peterz(a)infradead.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Tested-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml(a)ze-it.at>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Hugh Dickins <hughd(a)google.com>
Link: https://lkml.kernel.org/r/20180104170712.GB3040@hirez.programming.kicks-ass…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/events/intel/ds.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -5,6 +5,7 @@
#include <asm/cpu_entry_area.h>
#include <asm/perf_event.h>
+#include <asm/tlbflush.h>
#include <asm/insn.h>
#include "../perf_event.h"
@@ -283,20 +284,35 @@ static DEFINE_PER_CPU(void *, insn_buffe
static void ds_update_cea(void *cea, void *addr, size_t size, pgprot_t prot)
{
+ unsigned long start = (unsigned long)cea;
phys_addr_t pa;
size_t msz = 0;
pa = virt_to_phys(addr);
+
+ preempt_disable();
for (; msz < size; msz += PAGE_SIZE, pa += PAGE_SIZE, cea += PAGE_SIZE)
cea_set_pte(cea, pa, prot);
+
+ /*
+ * This is a cross-CPU update of the cpu_entry_area, we must shoot down
+ * all TLB entries for it.
+ */
+ flush_tlb_kernel_range(start, start + size);
+ preempt_enable();
}
static void ds_clear_cea(void *cea, size_t size)
{
+ unsigned long start = (unsigned long)cea;
size_t msz = 0;
+ preempt_disable();
for (; msz < size; msz += PAGE_SIZE, cea += PAGE_SIZE)
cea_set_pte(cea, 0, PAGE_NONE);
+
+ flush_tlb_kernel_range(start, start + size);
+ preempt_enable();
}
static void *dsalloc_pages(size_t size, gfp_t flags, int cpu)
Patches currently in stable-queue which might be from peterz(a)infradead.org are
queue-4.14/x86-tlb-drop-the-_gpl-from-the-cpu_tlbstate-export.patch
queue-4.14/x86-alternatives-add-missing-n-at-end-of-alternative-inline-asm.patch
queue-4.14/x86-kaslr-fix-the-vaddr_end-mess.patch
queue-4.14/efi-capsule-loader-reinstate-virtual-capsule-mapping.patch
queue-4.14/mm-sparse.c-wrong-allocation-for-mem_section.patch
queue-4.14/x86-events-intel-ds-use-the-proper-cache-flush-method-for-mapping-ds-buffers.patch
queue-4.14/x86-pti-rename-bug_cpu_insecure-to-bug_cpu_meltdown.patch
queue-4.14/x86-mm-map-cpu_entry_area-at-the-same-place-on-4-5-level.patch
This is a note to let you know that I've just added the patch titled
x86/alternatives: Add missing '\n' at end of ALTERNATIVE inline asm
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-alternatives-add-missing-n-at-end-of-alternative-inline-asm.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From b9e705ef7cfaf22db0daab91ad3cd33b0fa32eb9 Mon Sep 17 00:00:00 2001
From: David Woodhouse <dwmw(a)amazon.co.uk>
Date: Thu, 4 Jan 2018 14:37:05 +0000
Subject: x86/alternatives: Add missing '\n' at end of ALTERNATIVE inline asm
From: David Woodhouse <dwmw(a)amazon.co.uk>
commit b9e705ef7cfaf22db0daab91ad3cd33b0fa32eb9 upstream.
Where an ALTERNATIVE is used in the middle of an inline asm block, this
would otherwise lead to the following instruction being appended directly
to the trailing ".popsection", and a failed compile.
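For illustration only (a hypothetical call site, not from the patch): C
pastes adjacent string literals, so an instruction emitted after the macro
lands on the same assembler line as ".popsection" unless the macro ends in
a '\n'.

	/* before the fix this expands to "... .popsectionlfence" */
	asm volatile(ALTERNATIVE("nop", "mfence", X86_FEATURE_XMM2)
		     "lfence\n"
		     ::: "memory");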
Fixes: 9cebed423c84 ("x86, alternative: Use .pushsection/.popsection")
Signed-off-by: David Woodhouse <dwmw(a)amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: gnomes(a)lxorguk.ukuu.org.uk
Cc: Rik van Riel <riel(a)redhat.com>
Cc: ak(a)linux.intel.com
Cc: Tim Chen <tim.c.chen(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Paul Turner <pjt(a)google.com>
Cc: Jiri Kosina <jikos(a)kernel.org>
Cc: Andy Lutomirski <luto(a)amacapital.net>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Kees Cook <keescook(a)google.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh(a)linux-foundation.org>
Link: https://lkml.kernel.org/r/20180104143710.8961-8-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/alternative.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -140,7 +140,7 @@ static inline int alternatives_text_rese
".popsection\n" \
".pushsection .altinstr_replacement, \"ax\"\n" \
ALTINSTR_REPLACEMENT(newinstr, feature, 1) \
- ".popsection"
+ ".popsection\n"
#define ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2)\
OLDINSTR_2(oldinstr, 1, 2) \
@@ -151,7 +151,7 @@ static inline int alternatives_text_rese
".pushsection .altinstr_replacement, \"ax\"\n" \
ALTINSTR_REPLACEMENT(newinstr1, feature1, 1) \
ALTINSTR_REPLACEMENT(newinstr2, feature2, 2) \
- ".popsection"
+ ".popsection\n"
/*
* Alternative instructions for different CPU types or capabilities.
Patches currently in stable-queue which might be from dwmw(a)amazon.co.uk are
queue-4.14/x86-alternatives-add-missing-n-at-end-of-alternative-inline-asm.patch
queue-4.14/x86-pti-rename-bug_cpu_insecure-to-bug_cpu_meltdown.patch
This is a note to let you know that I've just added the patch titled
userfaultfd: clear the vma->vm_userfaultfd_ctx if UFFD_EVENT_FORK fails
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
userfaultfd-clear-the-vma-vm_userfaultfd_ctx-if-uffd_event_fork-fails.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 0cbb4b4f4c44f54af268969b18d8deda63aded59 Mon Sep 17 00:00:00 2001
From: Andrea Arcangeli <aarcange(a)redhat.com>
Date: Thu, 4 Jan 2018 16:18:09 -0800
Subject: userfaultfd: clear the vma->vm_userfaultfd_ctx if UFFD_EVENT_FORK fails
From: Andrea Arcangeli <aarcange(a)redhat.com>
commit 0cbb4b4f4c44f54af268969b18d8deda63aded59 upstream.
The previous fix in commit 384632e67e08 ("userfaultfd: non-cooperative:
fix fork use after free") corrected the refcounting in case of
UFFD_EVENT_FORK failure for the fork userfault paths.
That still didn't clear the vma->vm_userfaultfd_ctx of the vmas that
were set to point to the aborted new uffd ctx earlier in
dup_userfaultfd.
Link: http://lkml.kernel.org/r/20171223002505.593-2-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange(a)redhat.com>
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Reviewed-by: Mike Rapoport <rppt(a)linux.vnet.ibm.com>
Cc: Eric Biggers <ebiggers3(a)gmail.com>
Cc: Dmitry Vyukov <dvyukov(a)google.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/userfaultfd.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -570,11 +570,14 @@ out:
static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
struct userfaultfd_wait_queue *ewq)
{
+ struct userfaultfd_ctx *release_new_ctx;
+
if (WARN_ON_ONCE(current->flags & PF_EXITING))
goto out;
ewq->ctx = ctx;
init_waitqueue_entry(&ewq->wq, current);
+ release_new_ctx = NULL;
spin_lock(&ctx->event_wqh.lock);
/*
@@ -601,8 +604,7 @@ static void userfaultfd_event_wait_compl
new = (struct userfaultfd_ctx *)
(unsigned long)
ewq->msg.arg.reserved.reserved1;
-
- userfaultfd_ctx_put(new);
+ release_new_ctx = new;
}
break;
}
@@ -617,6 +619,20 @@ static void userfaultfd_event_wait_compl
__set_current_state(TASK_RUNNING);
spin_unlock(&ctx->event_wqh.lock);
+ if (release_new_ctx) {
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = release_new_ctx->mm;
+
+ /* the various vma->vm_userfaultfd_ctx still points to it */
+ down_write(&mm->mmap_sem);
+ for (vma = mm->mmap; vma; vma = vma->vm_next)
+ if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx)
+ vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ up_write(&mm->mmap_sem);
+
+ userfaultfd_ctx_put(release_new_ctx);
+ }
+
/*
* ctx may go away after this if the userfault pseudo fd is
* already released.
Patches currently in stable-queue which might be from aarcange(a)redhat.com are
queue-4.14/userfaultfd-clear-the-vma-vm_userfaultfd_ctx-if-uffd_event_fork-fails.patch
This is a note to let you know that I've just added the patch titled
sunxi-rsb: Include OF based modalias in device uevent
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sunxi-rsb-include-of-based-modalias-in-device-uevent.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e2bf801ecd4e62222a46d1ba9e57e710171d29c1 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Stefan=20Br=C3=BCns?= <stefan.bruens(a)rwth-aachen.de>
Date: Mon, 27 Nov 2017 20:05:34 +0100
Subject: sunxi-rsb: Include OF based modalias in device uevent
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
commit e2bf801ecd4e62222a46d1ba9e57e710171d29c1 upstream.
Include the OF-based modalias in the uevent sent when registering devices
on the sunxi RSB bus, so that user space has a chance to autoload the
kernel module for the device.
Fixes a regression caused by commit 3f241bfa60bd ("arm64: allwinner: a64:
pine64: Use dcdc1 regulator for mmc0"). When the axp20x-rsb module for
the AXP803 PMIC is built as a module, it is not loaded and the system
ends up with a dysfunctional MMC controller.
Fixes: d787dcdb9c8f ("bus: sunxi-rsb: Add driver for Allwinner Reduced Serial Bus")
Acked-by: Chen-Yu Tsai <wens(a)csie.org>
Signed-off-by: Stefan Brüns <stefan.bruens(a)rwth-aachen.de>
Signed-off-by: Maxime Ripard <maxime.ripard(a)free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/bus/sunxi-rsb.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/bus/sunxi-rsb.c
+++ b/drivers/bus/sunxi-rsb.c
@@ -178,6 +178,7 @@ static struct bus_type sunxi_rsb_bus = {
.match = sunxi_rsb_device_match,
.probe = sunxi_rsb_device_probe,
.remove = sunxi_rsb_device_remove,
+ .uevent = of_device_uevent_modalias,
};
static void sunxi_rsb_dev_release(struct device *dev)
Patches currently in stable-queue which might be from stefan.bruens(a)rwth-aachen.de are
queue-4.14/sunxi-rsb-include-of-based-modalias-in-device-uevent.patch
This is a note to let you know that I've just added the patch titled
mm/sparse.c: wrong allocation for mem_section
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-sparse.c-wrong-allocation-for-mem_section.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From d09cfbbfa0f761a97687828b5afb27b56cbf2e19 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe(a)redhat.com>
Date: Thu, 4 Jan 2018 16:18:06 -0800
Subject: mm/sparse.c: wrong allocation for mem_section
From: Baoquan He <bhe(a)redhat.com>
commit d09cfbbfa0f761a97687828b5afb27b56cbf2e19 upstream.
In commit 83e3c48729d9 ("mm/sparsemem: Allocate mem_section at runtime
for CONFIG_SPARSEMEM_EXTREME=y") mem_section is allocated at runtime to
save memory.
It allocates the first dimension of array with sizeof(struct mem_section).
It costs extra memory, should be sizeof(struct mem_section *).
Fix it.
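Spelled out (illustrative snippet): with CONFIG_SPARSEMEM_EXTREME the first
dimension is an array of pointers to root blocks, so each slot only needs
pointer size.

	size_t wrong = sizeof(struct mem_section)   * NR_SECTION_ROOTS; /* whole structs */
	size_t fixed = sizeof(struct mem_section *) * NR_SECTION_ROOTS; /* pointers only */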
Link: http://lkml.kernel.org/r/1513932498-20350-1-git-send-email-bhe@redhat.com
Fixes: 83e3c48729 ("mm/sparsemem: Allocate mem_section at runtime for CONFIG_SPARSEMEM_EXTREME=y")
Signed-off-by: Baoquan He <bhe(a)redhat.com>
Tested-by: Dave Young <dyoung(a)redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Ingo Molnar <mingo(a)kernel.org>
Cc: Andy Lutomirski <luto(a)amacapital.net>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Atsushi Kumagai <ats-kumagai(a)wm.jp.nec.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/sparse.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -211,7 +211,7 @@ void __init memory_present(int nid, unsi
if (unlikely(!mem_section)) {
unsigned long size, align;
- size = sizeof(struct mem_section) * NR_SECTION_ROOTS;
+ size = sizeof(struct mem_section*) * NR_SECTION_ROOTS;
align = 1 << (INTERNODE_CACHE_SHIFT);
mem_section = memblock_virt_alloc(size, align);
}
Patches currently in stable-queue which might be from bhe(a)redhat.com are
queue-4.14/mm-sparse.c-wrong-allocation-for-mem_section.patch
This is a note to let you know that I've just added the patch titled
mm/mprotect: add a cond_resched() inside change_pmd_range()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-mprotect-add-a-cond_resched-inside-change_pmd_range.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 4991c09c7c812dba13ea9be79a68b4565bb1fa4e Mon Sep 17 00:00:00 2001
From: Anshuman Khandual <khandual(a)linux.vnet.ibm.com>
Date: Thu, 4 Jan 2018 16:17:52 -0800
Subject: mm/mprotect: add a cond_resched() inside change_pmd_range()
From: Anshuman Khandual <khandual(a)linux.vnet.ibm.com>
commit 4991c09c7c812dba13ea9be79a68b4565bb1fa4e upstream.
While testing on a large CPU system, we detected the following RCU stall
many times over the span of the workload. This problem is solved by
adding a cond_resched() in the change_pmd_range() function.
INFO: rcu_sched detected stalls on CPUs/tasks:
154-....: (670 ticks this GP) idle=022/140000000000000/0 softirq=2825/2825 fqs=612
(detected by 955, t=6002 jiffies, g=4486, c=4485, q=90864)
Sending NMI from CPU 955 to CPUs 154:
NMI backtrace for cpu 154
CPU: 154 PID: 147071 Comm: workload Not tainted 4.15.0-rc3+ #3
NIP: c0000000000b3f64 LR: c0000000000b33d4 CTR: 000000000000aa18
REGS: 00000000a4b0fb44 TRAP: 0501 Not tainted (4.15.0-rc3+)
MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 22422082 XER: 00000000
CFAR: 00000000006cf8f0 SOFTE: 1
GPR00: 0010000000000000 c00003ef9b1cb8c0 c0000000010cc600 0000000000000000
GPR04: 8e0000018c32b200 40017b3858fd6e00 8e0000018c32b208 40017b3858fd6e00
GPR08: 8e0000018c32b210 40017b3858fd6e00 8e0000018c32b218 40017b3858fd6e00
GPR12: ffffffffffffffff c00000000fb25100
NIP [c0000000000b3f64] plpar_hcall9+0x44/0x7c
LR [c0000000000b33d4] pSeries_lpar_flush_hash_range+0x384/0x420
Call Trace:
flush_hash_range+0x48/0x100
__flush_tlb_pending+0x44/0xd0
hpte_need_flush+0x408/0x470
change_protection_range+0xaac/0xf10
change_prot_numa+0x30/0xb0
task_numa_work+0x2d0/0x3e0
task_work_run+0x130/0x190
do_notify_resume+0x118/0x120
ret_from_except_lite+0x70/0x74
Instruction dump:
60000000 f8810028 7ca42b78 7cc53378 7ce63b78 7d074378 7d284b78 7d495378
e9410060 e9610068 e9810070 44000022 <7d806378> e9810028 f88c0000 f8ac0008
Link: http://lkml.kernel.org/r/20171214140551.5794-1-khandual@linux.vnet.ibm.com
Signed-off-by: Anshuman Khandual <khandual(a)linux.vnet.ibm.com>
Suggested-by: Nicholas Piggin <npiggin(a)gmail.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/mprotect.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -166,7 +166,7 @@ static inline unsigned long change_pmd_r
next = pmd_addr_end(addr, end);
if (!is_swap_pmd(*pmd) && !pmd_trans_huge(*pmd) && !pmd_devmap(*pmd)
&& pmd_none_or_clear_bad(pmd))
- continue;
+ goto next;
/* invoke the mmu notifier if the pmd is populated */
if (!mni_start) {
@@ -188,7 +188,7 @@ static inline unsigned long change_pmd_r
}
/* huge pmd was handled */
- continue;
+ goto next;
}
}
/* fall through, the trans huge pmd just split */
@@ -196,6 +196,8 @@ static inline unsigned long change_pmd_r
this_pages = change_pte_range(vma, pmd, addr, next, newprot,
dirty_accountable, prot_numa);
pages += this_pages;
+next:
+ cond_resched();
} while (pmd++, addr = next, addr != end);
if (mni_start)
Patches currently in stable-queue which might be from khandual(a)linux.vnet.ibm.com are
queue-4.14/mm-mprotect-add-a-cond_resched-inside-change_pmd_range.patch
This is a note to let you know that I've just added the patch titled
kernel/acct.c: fix the acct->needcheck check in check_free_space()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kernel-acct.c-fix-the-acct-needcheck-check-in-check_free_space.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 4d9570158b6260f449e317a5f9ed030c2504a615 Mon Sep 17 00:00:00 2001
From: Oleg Nesterov <oleg(a)redhat.com>
Date: Thu, 4 Jan 2018 16:17:49 -0800
Subject: kernel/acct.c: fix the acct->needcheck check in check_free_space()
From: Oleg Nesterov <oleg(a)redhat.com>
commit 4d9570158b6260f449e317a5f9ed030c2504a615 upstream.
As Tsukada explains, the time_is_before_jiffies(acct->needcheck) check
is very wrong; we need time_is_after_jiffies() to make sys_acct() work.
Ignoring the overflows, the code should "goto out" if needcheck >
jiffies, while currently it checks "needcheck < jiffies" and thus in the
likely case check_free_space() does nothing until jiffies overflow.
In particular this means that sys_acct() is simply broken, acct_on()
sets acct->needcheck = jiffies and expects that check_free_space()
should set acct->active = 1 after the free-space check, but this won't
happen if jiffies increments in between.
This was broken by commit 32dc73086015 ("get rid of timer in
kern/acct.c") in 2011, then another (correct) commit 795a2f22a8ea
("acct() should honour the limits from the very beginning") made the
problem more visible.
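As a reminder of the helper semantics (illustrative snippet, not the patch
itself): time_is_before_jiffies(a) is true once a already lies in the past,
time_is_after_jiffies(a) while a still lies in the future; both are
overflow-safe wrappers around the time_after() comparisons.

	unsigned long needcheck = jiffies + 30 * HZ;	/* arbitrary deadline */

	if (time_is_after_jiffies(needcheck))	/* deadline still ahead */
		return;				/* too early, skip the statfs */
	/* deadline reached or passed: re-check free space now */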
Link: http://lkml.kernel.org/r/20171213133940.GA6554@redhat.com
Fixes: 32dc73086015 ("get rid of timer in kern/acct.c")
Reported-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Suggested-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
kernel/acct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/acct.c
+++ b/kernel/acct.c
@@ -102,7 +102,7 @@ static int check_free_space(struct bsd_a
{
struct kstatfs sbuf;
- if (time_is_before_jiffies(acct->needcheck))
+ if (time_is_after_jiffies(acct->needcheck))
goto out;
/* May block */
Patches currently in stable-queue which might be from oleg(a)redhat.com are
queue-4.14/kernel-acct.c-fix-the-acct-needcheck-check-in-check_free_space.patch
This is a note to let you know that I've just added the patch titled
efi/capsule-loader: Reinstate virtual capsule mapping
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
efi-capsule-loader-reinstate-virtual-capsule-mapping.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From f24c4d478013d82bd1b943df566fff3561d52864 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Date: Tue, 2 Jan 2018 17:21:10 +0000
Subject: efi/capsule-loader: Reinstate virtual capsule mapping
From: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
commit f24c4d478013d82bd1b943df566fff3561d52864 upstream.
Commit:
82c3768b8d68 ("efi/capsule-loader: Use a cached copy of the capsule header")
... refactored the capsule loading code that maps the capsule header,
to avoid having to map it several times.
However, as it turns out, the vmap() call we ended up removing did not
just map the header, but the entire capsule image, and dropping this
virtual mapping breaks capsules that are processed by the firmware
immediately (i.e., without a reboot).
Unfortunately, that change was part of a larger refactor that allowed
a quirk to be implemented for Quark, which has a non-standard memory
layout for capsules, and we have slightly painted ourselves into a
corner by allowing quirk code to mangle the capsule header and memory
layout.
So we need to fix this without breaking Quark. Fortunately, Quark does
not appear to care about the virtual mapping, and so we can simply
do a partial revert of commit:
2a457fb31df6 ("efi/capsule-loader: Use page addresses rather than struct page pointers")
... and create a vmap() mapping of the entire capsule (including header)
based on the reinstated struct page array, unless running on Quark, in
which case we pass the capsule header copy as before.
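The resulting design, sketched with the names the patch introduces: the
loader keeps two parallel arrays, because vmap() wants struct page pointers
while the firmware update call wants physical addresses.

	struct page **pages;	/* feeds vmap() -> cap_info->capsule */
	phys_addr_t *phys;	/* feeds efi_capsule_update() */

	cap_info->capsule = vmap(cap_info->pages, cap_info->index,
				 VM_MAP, PAGE_KERNEL);
	if (!cap_info->capsule)
		return -ENOMEM;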
Reported-by: Ge Song <ge.song(a)hxt-semitech.com>
Tested-by: Bryan O'Donoghue <pure.logic(a)nexus-software.ie>
Tested-by: Ge Song <ge.song(a)hxt-semitech.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Cc: Dave Young <dyoung(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Matt Fleming <matt(a)codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: linux-efi(a)vger.kernel.org
Fixes: 82c3768b8d68 ("efi/capsule-loader: Use a cached copy of the capsule header")
Link: http://lkml.kernel.org/r/20180102172110.17018-3-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/platform/efi/quirks.c | 13 +++++++++
drivers/firmware/efi/capsule-loader.c | 45 +++++++++++++++++++++++++++-------
include/linux/efi.h | 4 ++-
3 files changed, 52 insertions(+), 10 deletions(-)
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -592,7 +592,18 @@ static int qrk_capsule_setup_info(struct
/*
* Update the first page pointer to skip over the CSH header.
*/
- cap_info->pages[0] += csh->headersize;
+ cap_info->phys[0] += csh->headersize;
+
+ /*
+ * cap_info->capsule should point at a virtual mapping of the entire
+ * capsule, starting at the capsule header. Our image has the Quark
+ * security header prepended, so we cannot rely on the default vmap()
+ * mapping created by the generic capsule code.
+ * Given that the Quark firmware does not appear to care about the
+ * virtual mapping, let's just point cap_info->capsule at our copy
+ * of the capsule header.
+ */
+ cap_info->capsule = &cap_info->header;
return 1;
}
--- a/drivers/firmware/efi/capsule-loader.c
+++ b/drivers/firmware/efi/capsule-loader.c
@@ -20,10 +20,6 @@
#define NO_FURTHER_WRITE_ACTION -1
-#ifndef phys_to_page
-#define phys_to_page(x) pfn_to_page((x) >> PAGE_SHIFT)
-#endif
-
/**
* efi_free_all_buff_pages - free all previous allocated buffer pages
* @cap_info: pointer to current instance of capsule_info structure
@@ -35,7 +31,7 @@
static void efi_free_all_buff_pages(struct capsule_info *cap_info)
{
while (cap_info->index > 0)
- __free_page(phys_to_page(cap_info->pages[--cap_info->index]));
+ __free_page(cap_info->pages[--cap_info->index]);
cap_info->index = NO_FURTHER_WRITE_ACTION;
}
@@ -71,6 +67,14 @@ int __efi_capsule_setup_info(struct caps
cap_info->pages = temp_page;
+ temp_page = krealloc(cap_info->phys,
+ pages_needed * sizeof(phys_addr_t *),
+ GFP_KERNEL | __GFP_ZERO);
+ if (!temp_page)
+ return -ENOMEM;
+
+ cap_info->phys = temp_page;
+
return 0;
}
@@ -105,9 +109,24 @@ int __weak efi_capsule_setup_info(struct
**/
static ssize_t efi_capsule_submit_update(struct capsule_info *cap_info)
{
+ bool do_vunmap = false;
int ret;
- ret = efi_capsule_update(&cap_info->header, cap_info->pages);
+ /*
+ * cap_info->capsule may have been assigned already by a quirk
+ * handler, so only overwrite it if it is NULL
+ */
+ if (!cap_info->capsule) {
+ cap_info->capsule = vmap(cap_info->pages, cap_info->index,
+ VM_MAP, PAGE_KERNEL);
+ if (!cap_info->capsule)
+ return -ENOMEM;
+ do_vunmap = true;
+ }
+
+ ret = efi_capsule_update(cap_info->capsule, cap_info->phys);
+ if (do_vunmap)
+ vunmap(cap_info->capsule);
if (ret) {
pr_err("capsule update failed\n");
return ret;
@@ -165,10 +184,12 @@ static ssize_t efi_capsule_write(struct
goto failed;
}
- cap_info->pages[cap_info->index++] = page_to_phys(page);
+ cap_info->pages[cap_info->index] = page;
+ cap_info->phys[cap_info->index] = page_to_phys(page);
cap_info->page_bytes_remain = PAGE_SIZE;
+ cap_info->index++;
} else {
- page = phys_to_page(cap_info->pages[cap_info->index - 1]);
+ page = cap_info->pages[cap_info->index - 1];
}
kbuff = kmap(page);
@@ -252,6 +273,7 @@ static int efi_capsule_release(struct in
struct capsule_info *cap_info = file->private_data;
kfree(cap_info->pages);
+ kfree(cap_info->phys);
kfree(file->private_data);
file->private_data = NULL;
return 0;
@@ -280,6 +302,13 @@ static int efi_capsule_open(struct inode
kfree(cap_info);
return -ENOMEM;
}
+
+ cap_info->phys = kzalloc(sizeof(void *), GFP_KERNEL);
+ if (!cap_info->phys) {
+ kfree(cap_info->pages);
+ kfree(cap_info);
+ return -ENOMEM;
+ }
file->private_data = cap_info;
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -140,11 +140,13 @@ struct efi_boot_memmap {
struct capsule_info {
efi_capsule_header_t header;
+ efi_capsule_header_t *capsule;
int reset_type;
long index;
size_t count;
size_t total_size;
- phys_addr_t *pages;
+ struct page **pages;
+ phys_addr_t *phys;
size_t page_bytes_remain;
};
Patches currently in stable-queue which might be from ard.biesheuvel(a)linaro.org are
queue-4.14/efi-capsule-loader-reinstate-virtual-capsule-mapping.patch
This is a note to let you know that I've just added the patch titled
drm/i915: Disable DC states around GMBUS on GLK
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-i915-disable-dc-states-around-gmbus-on-glk.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 3488d0237f6364614f0c59d6d784bb79b11eeb92 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ville=20Syrj=C3=A4l=C3=A4?= <ville.syrjala(a)linux.intel.com>
Date: Fri, 8 Dec 2017 23:37:36 +0200
Subject: drm/i915: Disable DC states around GMBUS on GLK
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
commit 3488d0237f6364614f0c59d6d784bb79b11eeb92 upstream.
Prevent the DMC from destroying GMBUS transfers on GLK. GMBUS
lives in PG1 so DC off is all we need.
Signed-off-by: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171208213739.16388-1-ville.…
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan(a)intel.com>
(cherry picked from commit 156961ae7bdf6feb72778e8da83d321b273343fd)
Signed-off-by: Jani Nikula <jani.nikula(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/gpu/drm/i915/intel_runtime_pm.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -1786,6 +1786,7 @@ void intel_display_power_put(struct drm_
GLK_DISPLAY_POWERWELL_2_POWER_DOMAINS | \
BIT_ULL(POWER_DOMAIN_MODESET) | \
BIT_ULL(POWER_DOMAIN_AUX_A) | \
+ BIT_ULL(POWER_DOMAIN_GMBUS) | \
BIT_ULL(POWER_DOMAIN_INIT))
#define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \
Patches currently in stable-queue which might be from ville.syrjala(a)linux.intel.com are
queue-4.14/drm-i915-apply-display-wa-1183-on-skl-kbl-and-cfl.patch
queue-4.14/drm-i915-disable-dc-states-around-gmbus-on-glk.patch
This is a note to let you know that I've just added the patch titled
drm/i915: Apply Display WA #1183 on skl, kbl, and cfl
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-i915-apply-display-wa-1183-on-skl-kbl-and-cfl.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 30414f3010aff95ffdb6bed7b9dce62cde94fdc7 Mon Sep 17 00:00:00 2001
From: Lucas De Marchi <lucas.demarchi(a)intel.com>
Date: Tue, 2 Jan 2018 12:18:37 -0800
Subject: drm/i915: Apply Display WA #1183 on skl, kbl, and cfl
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Lucas De Marchi <lucas.demarchi(a)intel.com>
commit 30414f3010aff95ffdb6bed7b9dce62cde94fdc7 upstream.
Display WA #1183 was recently added to workaround
"Failures when enabling DPLL0 with eDP link rate 2.16
or 4.32 GHz and CD clock frequency 308.57 or 617.14 MHz
(CDCLK_CTL CD Frequency Select 10b or 11b) used in this
enabling or in previous enabling."
This workaround was designed to minimize the impact, only saving
the bad case with those link rates. But HW engineers indicated
that it should be safe to apply broadly, although they were
expecting the DPLL0 link rate to be unchanged at runtime.
We need to cover 2 cases: when we are in fact enabling DPLL0
and when we are just changing the frequency with small
differences.
This is based on previous patch by Rodrigo Vivi with suggestions
from Ville Syrjälä.
Cc: Arthur J Runyan <arthur.j.runyan(a)intel.com>
Cc: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi(a)intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171204232210.4958-1-lucas.d…
(cherry picked from commit 53421c2fe99ce16838639ad89d772d914a119a49)
[ Lucas: Backport to 4.15 adding back variable that has been removed on
commits not meant to be backported ]
Signed-off-by: Jani Nikula <jani.nikula(a)intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102201837.6812-1-lucas.d…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/gpu/drm/i915/i915_reg.h | 2 +
drivers/gpu/drm/i915/intel_cdclk.c | 35 +++++++++++++++++++++++---------
drivers/gpu/drm/i915/intel_runtime_pm.c | 10 +++++++++
3 files changed, 38 insertions(+), 9 deletions(-)
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -6944,6 +6944,7 @@ enum {
#define RESET_PCH_HANDSHAKE_ENABLE (1<<4)
#define GEN8_CHICKEN_DCPR_1 _MMIO(0x46430)
+#define SKL_SELECT_ALTERNATE_DC_EXIT (1<<30)
#define MASK_WAKEMEM (1<<13)
#define SKL_DFSM _MMIO(0x51000)
@@ -8475,6 +8476,7 @@ enum skl_power_gate {
#define BXT_CDCLK_CD2X_DIV_SEL_2 (2<<22)
#define BXT_CDCLK_CD2X_DIV_SEL_4 (3<<22)
#define BXT_CDCLK_CD2X_PIPE(pipe) ((pipe)<<20)
+#define CDCLK_DIVMUX_CD_OVERRIDE (1<<19)
#define BXT_CDCLK_CD2X_PIPE_NONE BXT_CDCLK_CD2X_PIPE(3)
#define BXT_CDCLK_SSA_PRECHARGE_ENABLE (1<<16)
#define CDCLK_FREQ_DECIMAL_MASK (0x7ff)
--- a/drivers/gpu/drm/i915/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/intel_cdclk.c
@@ -859,16 +859,10 @@ static void skl_set_preferred_cdclk_vco(
static void skl_dpll0_enable(struct drm_i915_private *dev_priv, int vco)
{
- int min_cdclk = skl_calc_cdclk(0, vco);
u32 val;
WARN_ON(vco != 8100000 && vco != 8640000);
- /* select the minimum CDCLK before enabling DPLL 0 */
- val = CDCLK_FREQ_337_308 | skl_cdclk_decimal(min_cdclk);
- I915_WRITE(CDCLK_CTL, val);
- POSTING_READ(CDCLK_CTL);
-
/*
* We always enable DPLL0 with the lowest link rate possible, but still
* taking into account the VCO required to operate the eDP panel at the
@@ -922,7 +916,7 @@ static void skl_set_cdclk(struct drm_i91
{
int cdclk = cdclk_state->cdclk;
int vco = cdclk_state->vco;
- u32 freq_select, pcu_ack;
+ u32 freq_select, pcu_ack, cdclk_ctl;
int ret;
WARN_ON((cdclk == 24000) != (vco == 0));
@@ -939,7 +933,7 @@ static void skl_set_cdclk(struct drm_i91
return;
}
- /* set CDCLK_CTL */
+ /* Choose frequency for this cdclk */
switch (cdclk) {
case 450000:
case 432000:
@@ -967,10 +961,33 @@ static void skl_set_cdclk(struct drm_i91
dev_priv->cdclk.hw.vco != vco)
skl_dpll0_disable(dev_priv);
+ cdclk_ctl = I915_READ(CDCLK_CTL);
+
+ if (dev_priv->cdclk.hw.vco != vco) {
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl &= ~(CDCLK_FREQ_SEL_MASK | CDCLK_FREQ_DECIMAL_MASK);
+ cdclk_ctl |= freq_select | skl_cdclk_decimal(cdclk);
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+ }
+
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl |= CDCLK_DIVMUX_CD_OVERRIDE;
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+ POSTING_READ(CDCLK_CTL);
+
if (dev_priv->cdclk.hw.vco != vco)
skl_dpll0_enable(dev_priv, vco);
- I915_WRITE(CDCLK_CTL, freq_select | skl_cdclk_decimal(cdclk));
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl &= ~(CDCLK_FREQ_SEL_MASK | CDCLK_FREQ_DECIMAL_MASK);
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+
+ cdclk_ctl |= freq_select | skl_cdclk_decimal(cdclk);
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl &= ~CDCLK_DIVMUX_CD_OVERRIDE;
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
POSTING_READ(CDCLK_CTL);
/* inform PCU of the change */
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -598,6 +598,11 @@ void gen9_enable_dc5(struct drm_i915_pri
DRM_DEBUG_KMS("Enabling DC5\n");
+ /* Wa Display #1183: skl,kbl,cfl */
+ if (IS_GEN9_BC(dev_priv))
+ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |
+ SKL_SELECT_ALTERNATE_DC_EXIT);
+
gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC5);
}
@@ -625,6 +630,11 @@ void skl_disable_dc6(struct drm_i915_pri
{
DRM_DEBUG_KMS("Disabling DC6\n");
+ /* Wa Display #1183: skl,kbl,cfl */
+ if (IS_GEN9_BC(dev_priv))
+ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |
+ SKL_SELECT_ALTERNATE_DC_EXIT);
+
gen9_set_dc_state(dev_priv, DC_STATE_DISABLE);
}
Patches currently in stable-queue which might be from lucas.demarchi(a)intel.com are
queue-4.14/drm-i915-apply-display-wa-1183-on-skl-kbl-and-cfl.patch
This is a note to let you know that I've just added the patch titled
crypto: pcrypt - fix freeing pcrypt instances
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-pcrypt-fix-freeing-pcrypt-instances.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From d76c68109f37cb85b243a1cf0f40313afd2bae68 Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Wed, 20 Dec 2017 14:28:25 -0800
Subject: crypto: pcrypt - fix freeing pcrypt instances
From: Eric Biggers <ebiggers(a)google.com>
commit d76c68109f37cb85b243a1cf0f40313afd2bae68 upstream.
pcrypt is using the old way of freeing instances, where the ->free()
method specified in the 'struct crypto_template' is passed a pointer to
the 'struct crypto_instance'. But the crypto_instance is being
kfree()'d directly, which is incorrect because the memory was actually
allocated as an aead_instance, which contains the crypto_instance at a
nonzero offset. Thus, the wrong pointer was being kfree()'d.
Fix it by switching to the new way to free aead_instance's where the
->free() method is specified in the aead_instance itself.
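The layout problem, sketched (the member name "base" and the layout are
simplified for illustration; the real aead_instance wraps it in a union):

	struct aead_instance {
		void (*free)(struct aead_instance *inst);
		/* ... aead-specific members ... */
		struct crypto_instance base;	/* nonzero offset */
	};

	static void free_inst(struct crypto_instance *inst)
	{
		/* kfree(inst) would be wrong: inst points into the middle
		 * of the slab object; recover the original pointer first. */
		kfree(container_of(inst, struct aead_instance, base));
	}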
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Fixes: 0496f56065e0 ("crypto: pcrypt - Add support for new AEAD interface")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
crypto/pcrypt.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -254,6 +254,14 @@ static void pcrypt_aead_exit_tfm(struct
crypto_free_aead(ctx->child);
}
+static void pcrypt_free(struct aead_instance *inst)
+{
+ struct pcrypt_instance_ctx *ctx = aead_instance_ctx(inst);
+
+ crypto_drop_aead(&ctx->spawn);
+ kfree(inst);
+}
+
static int pcrypt_init_instance(struct crypto_instance *inst,
struct crypto_alg *alg)
{
@@ -319,6 +327,8 @@ static int pcrypt_create_aead(struct cry
inst->alg.encrypt = pcrypt_aead_encrypt;
inst->alg.decrypt = pcrypt_aead_decrypt;
+ inst->free = pcrypt_free;
+
err = aead_register_instance(tmpl, inst);
if (err)
goto out_drop_aead;
@@ -349,14 +359,6 @@ static int pcrypt_create(struct crypto_t
return -EINVAL;
}
-static void pcrypt_free(struct crypto_instance *inst)
-{
- struct pcrypt_instance_ctx *ctx = crypto_instance_ctx(inst);
-
- crypto_drop_aead(&ctx->spawn);
- kfree(inst);
-}
-
static int pcrypt_cpumask_change_notify(struct notifier_block *self,
unsigned long val, void *data)
{
@@ -469,7 +471,6 @@ static void pcrypt_fini_padata(struct pa
static struct crypto_template pcrypt_tmpl = {
.name = "pcrypt",
.create = pcrypt_create,
- .free = pcrypt_free,
.module = THIS_MODULE,
};
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.14/crypto-chacha20poly1305-validate-the-digest-size.patch
queue-4.14/crypto-pcrypt-fix-freeing-pcrypt-instances.patch
This is a note to let you know that I've just added the patch titled
crypto: n2 - cure use after free
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-n2-cure-use-after-free.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 203f45003a3d03eea8fa28d74cfc74c354416fdb Mon Sep 17 00:00:00 2001
From: Jan Engelhardt <jengelh(a)inai.de>
Date: Tue, 19 Dec 2017 19:09:07 +0100
Subject: crypto: n2 - cure use after free
From: Jan Engelhardt <jengelh(a)inai.de>
commit 203f45003a3d03eea8fa28d74cfc74c354416fdb upstream.
queue_cache_init is first called for the Control Word Queue
(n2_crypto_probe). At that time, queue_cache[0] is NULL and a new
kmem_cache will be allocated. If the subsequent n2_register_algs call
fails, the kmem_cache will be released in queue_cache_destroy, but
queue_cache[0] is not set back to NULL.
So when the Module Arithmetic Unit gets probed next (n2_mau_probe),
queue_cache_init will not allocate a kmem_cache again, but leaves it
at its bogus value, causing a BUG() to trigger when queue_cache[0] is
eventually passed to kmem_cache_zalloc:
n2_crypto: Found N2CP at /virtual-devices@100/n2cp@7
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
n2_crypto: md5 alg registration failed
n2cp f028687c: /virtual-devices@100/n2cp@7: Unable to register algorithms.
called queue_cache_destroy
n2cp: probe of f028687c failed with error -22
n2_crypto: Found NCP at /virtual-devices@100/ncp@6
n2_crypto: Registered NCS HVAPI version 2.0
called queue_cache_init
kernel BUG at mm/slab.c:2993!
Call Trace:
[0000000000604488] kmem_cache_alloc+0x1a8/0x1e0
(inlined) kmem_cache_zalloc
(inlined) new_queue
(inlined) spu_queue_setup
(inlined) handle_exec_unit
[0000000010c61eb4] spu_mdesc_scan+0x1f4/0x460 [n2_crypto]
[0000000010c62b80] n2_mau_probe+0x100/0x220 [n2_crypto]
[000000000084b174] platform_drv_probe+0x34/0xc0
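The shape of the bug is generic (minimal sketch, hypothetical names):
kmem_cache_destroy() frees the cache but cannot clear the caller's pointer,
so every teardown path has to do that by hand or the next "already
allocated?" test lies.

	static struct kmem_cache *cache;

	static void cache_teardown(void)
	{
		kmem_cache_destroy(cache);
		cache = NULL;	/* the fix: drop the stale pointer */
	}

	static int cache_setup(void)
	{
		if (!cache)	/* a stale pointer makes this false */
			cache = kmem_cache_create("queue", 64, 0, 0, NULL);
		return cache ? 0 : -ENOMEM;
	}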
Signed-off-by: Jan Engelhardt <jengelh(a)inai.de>
Acked-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/n2_core.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1625,6 +1625,7 @@ static int queue_cache_init(void)
CWQ_ENTRY_SIZE, 0, NULL);
if (!queue_cache[HV_NCS_QTYPE_CWQ - 1]) {
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
return -ENOMEM;
}
return 0;
@@ -1634,6 +1635,8 @@ static void queue_cache_destroy(void)
{
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_MAU - 1]);
kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_CWQ - 1]);
+ queue_cache[HV_NCS_QTYPE_MAU - 1] = NULL;
+ queue_cache[HV_NCS_QTYPE_CWQ - 1] = NULL;
}
static long spu_queue_register_workfn(void *arg)
Patches currently in stable-queue which might be from jengelh(a)inai.de are
queue-4.14/crypto-n2-cure-use-after-free.patch
This is a note to let you know that I've just added the patch titled
crypto: chelsio - select CRYPTO_GF128MUL
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-chelsio-select-crypto_gf128mul.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From d042566d8c704e1ecec370300545d4a409222e39 Mon Sep 17 00:00:00 2001
From: Arnd Bergmann <arnd(a)arndb.de>
Date: Tue, 5 Dec 2017 11:10:26 +0100
Subject: crypto: chelsio - select CRYPTO_GF128MUL
From: Arnd Bergmann <arnd(a)arndb.de>
commit d042566d8c704e1ecec370300545d4a409222e39 upstream.
Without the gf128mul library support, we can run into a link
error:
drivers/crypto/chelsio/chcr_algo.o: In function `chcr_update_tweak':
chcr_algo.c:(.text+0x7e0): undefined reference to `gf128mul_x8_ble'
This adds a Kconfig select statement for it, next to the ones we
already have.
Fixes: b8fd1f4170e7 ("crypto: chcr - Add ctr mode and process large sg entries for cipher")
Signed-off-by: Arnd Bergmann <arnd(a)arndb.de>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/chelsio/Kconfig | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/crypto/chelsio/Kconfig
+++ b/drivers/crypto/chelsio/Kconfig
@@ -5,6 +5,7 @@ config CRYPTO_DEV_CHELSIO
select CRYPTO_SHA256
select CRYPTO_SHA512
select CRYPTO_AUTHENC
+ select CRYPTO_GF128MUL
---help---
The Chelsio Crypto Co-processor driver for T6 adapters.
Patches currently in stable-queue which might be from arnd(a)arndb.de are
queue-4.14/crypto-chelsio-select-crypto_gf128mul.patch
This is a note to let you know that I've just added the patch titled
crypto: chacha20poly1305 - validate the digest size
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-chacha20poly1305-validate-the-digest-size.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e57121d08c38dabec15cf3e1e2ad46721af30cae Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Mon, 11 Dec 2017 12:15:17 -0800
Subject: crypto: chacha20poly1305 - validate the digest size
From: Eric Biggers <ebiggers(a)google.com>
commit e57121d08c38dabec15cf3e1e2ad46721af30cae upstream.
If the rfc7539 template was instantiated with a hash algorithm with
digest size larger than 16 bytes (POLY1305_DIGEST_SIZE), then the digest
overran the 'tag' buffer in 'struct chachapoly_req_ctx', corrupting the
subsequent memory, including 'cryptlen'. This caused a crash during
crypto_skcipher_decrypt().
Fix it by, when instantiating the template, requiring that the
underlying hash algorithm has the digest size expected for Poly1305.
Reproducer:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>
int main()
{
	int algfd, reqfd;
	struct sockaddr_alg addr = {
		.salg_type = "aead",
		.salg_name = "rfc7539(chacha20,sha256)",
	};
	unsigned char buf[32] = { 0 };

	algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(algfd, (void *)&addr, sizeof(addr));
	setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, sizeof(buf));
	reqfd = accept(algfd, 0, 0);
	write(reqfd, buf, 16);
	read(reqfd, buf, 16);
}
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
crypto/chacha20poly1305.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/crypto/chacha20poly1305.c
+++ b/crypto/chacha20poly1305.c
@@ -610,6 +610,11 @@ static int chachapoly_create(struct cryp
algt->mask));
if (IS_ERR(poly))
return PTR_ERR(poly);
+ poly_hash = __crypto_hash_alg_common(poly);
+
+ err = -EINVAL;
+ if (poly_hash->digestsize != POLY1305_DIGEST_SIZE)
+ goto out_put_poly;
err = -ENOMEM;
inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
@@ -618,7 +623,6 @@ static int chachapoly_create(struct cryp
ctx = aead_instance_ctx(inst);
ctx->saltlen = CHACHAPOLY_IV_SIZE - ivsize;
- poly_hash = __crypto_hash_alg_common(poly);
err = crypto_init_ahash_spawn(&ctx->poly, poly_hash,
aead_crypto_instance(inst));
if (err)
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.14/crypto-chacha20poly1305-validate-the-digest-size.patch
queue-4.14/crypto-pcrypt-fix-freeing-pcrypt-instances.patch
This is a note to let you know that I've just added the patch titled
btrfs: fix refcount_t usage when deleting btrfs_delayed_nodes
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
btrfs-fix-refcount_t-usage-when-deleting-btrfs_delayed_nodes.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From ec35e48b286959991cdbb886f1bdeda4575c80b4 Mon Sep 17 00:00:00 2001
From: Chris Mason <clm(a)fb.com>
Date: Fri, 15 Dec 2017 11:58:27 -0800
Subject: btrfs: fix refcount_t usage when deleting btrfs_delayed_nodes
From: Chris Mason <clm(a)fb.com>
commit ec35e48b286959991cdbb886f1bdeda4575c80b4 upstream.
refcounts have a generic implementation and an asm optimized one. The
generic version has extra debugging to make sure that once a refcount
goes to zero, refcount_inc won't increase it.
The btrfs delayed inode code wasn't expecting this, and we're tripping
over the warnings when the generic refcounts are used. We ended up with
this race:
Process A                                  Process B
 btrfs_get_delayed_node()
   spin_lock(root->inode_lock)
   radix_tree_lookup()
                                            __btrfs_release_delayed_node()
                                              refcount_dec_and_test(&delayed_node->refs)
                                              our refcount is now zero
   refcount_add(2) <---
                        warning here, refcount
                        unchanged
                                            spin_lock(root->inode_lock)
                                            radix_tree_delete()
With the generic refcounts, we actually warn again when process B above
tries to release its refcount because refcount_add() turned into a
no-op.
We saw this in production on older kernels without the asm optimized
refcounts.
The fix used here is to use refcount_inc_not_zero() to detect when the
object is in the middle of being freed and return NULL. This is almost
always the right answer anyway, since we usually end up pitching the
delayed_node if it didn't have fresh data in it.
This also changes __btrfs_release_delayed_node() to remove the extra
check for zero refcounts before radix tree deletion.
btrfs_get_delayed_node() was the only path that was allowing refcounts
to go from zero to one.
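The idiom the fix relies on, in isolation (sketch): refcount_inc_not_zero()
both takes the reference and reports whether the object was already on its
way out, so the lookup can bail instead of resurrecting it.

	/* refcount_inc_not_zero() returns false once the count has hit
	 * zero and leaves it untouched, so a dying object stays dead: */
	if (!refcount_inc_not_zero(&node->refs))
		return NULL;	/* release path owns it; act as if unseen */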
Fixes: 6de5f18e7b0da ("btrfs: fix refcount_t usage when deleting btrfs_delayed_node")
Signed-off-by: Chris Mason <clm(a)fb.com>
Reviewed-by: Liu Bo <bo.li.liu(a)oracle.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/btrfs/delayed-inode.c | 45 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 34 insertions(+), 11 deletions(-)
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -87,6 +87,7 @@ static struct btrfs_delayed_node *btrfs_
spin_lock(&root->inode_lock);
node = radix_tree_lookup(&root->delayed_nodes_tree, ino);
+
if (node) {
if (btrfs_inode->delayed_node) {
refcount_inc(&node->refs); /* can be accessed */
@@ -94,9 +95,30 @@ static struct btrfs_delayed_node *btrfs_
spin_unlock(&root->inode_lock);
return node;
}
- btrfs_inode->delayed_node = node;
- /* can be accessed and cached in the inode */
- refcount_add(2, &node->refs);
+
+ /*
+ * It's possible that we're racing into the middle of removing
+ * this node from the radix tree. In this case, the refcount
+ * was zero and it should never go back to one. Just return
+ * NULL like it was never in the radix at all; our release
+ * function is in the process of removing it.
+ *
+ * Some implementations of refcount_inc refuse to bump the
+ * refcount once it has hit zero. If we don't do this dance
+ * here, refcount_inc() may decide to just WARN_ONCE() instead
+ * of actually bumping the refcount.
+ *
+ * If this node is properly in the radix, we want to bump the
+ * refcount twice, once for the inode and once for this get
+ * operation.
+ */
+ if (refcount_inc_not_zero(&node->refs)) {
+ refcount_inc(&node->refs);
+ btrfs_inode->delayed_node = node;
+ } else {
+ node = NULL;
+ }
+
spin_unlock(&root->inode_lock);
return node;
}
@@ -254,17 +276,18 @@ static void __btrfs_release_delayed_node
mutex_unlock(&delayed_node->mutex);
if (refcount_dec_and_test(&delayed_node->refs)) {
- bool free = false;
struct btrfs_root *root = delayed_node->root;
+
spin_lock(&root->inode_lock);
- if (refcount_read(&delayed_node->refs) == 0) {
- radix_tree_delete(&root->delayed_nodes_tree,
- delayed_node->inode_id);
- free = true;
- }
+ /*
+ * Once our refcount goes to zero, nobody is allowed to bump it
+ * back up. We can delete it now.
+ */
+ ASSERT(refcount_read(&delayed_node->refs) == 0);
+ radix_tree_delete(&root->delayed_nodes_tree,
+ delayed_node->inode_id);
spin_unlock(&root->inode_lock);
- if (free)
- kmem_cache_free(delayed_node_cache, delayed_node);
+ kmem_cache_free(delayed_node_cache, delayed_node);
}
}
Patches currently in stable-queue which might be from clm(a)fb.com are
queue-4.14/btrfs-fix-refcount_t-usage-when-deleting-btrfs_delayed_nodes.patch
This is a note to let you know that I've just added the patch titled
x86/kasan: Write protect kasan zero shadow
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kasan-write-protect-kasan-zero-shadow.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 063fb3e56f6dd29b2633b678b837e1d904200e6f Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Date: Mon, 11 Jan 2016 15:51:19 +0300
Subject: x86/kasan: Write protect kasan zero shadow
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
commit 063fb3e56f6dd29b2633b678b837e1d904200e6f upstream.
After kasan_init() executed, no one is allowed to write to kasan_zero_page,
so write protect it.
Signed-off-by: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Reviewed-by: Borislav Petkov <bp(a)suse.de>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)amacapital.net>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof(a)suse.com>
Cc: Oleg Nesterov <oleg(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Toshi Kani <toshi.kani(a)hp.com>
Cc: linux-mm(a)kvack.org
Link: http://lkml.kernel.org/r/1452516679-32040-3-git-send-email-aryabinin@virtuo…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Cc: Guenter Roeck <linux(a)roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kasan_init_64.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -126,10 +126,16 @@ void __init kasan_init(void)
/*
* kasan_zero_page has been used as early shadow memory, thus it may
- * contain some garbage. Now we can clear it, since after the TLB flush
- * no one should write to it.
+ * contain some garbage. Now we can clear and write protect it, since
+ * after the TLB flush no one should write to it.
*/
memset(kasan_zero_page, 0, PAGE_SIZE);
+ for (i = 0; i < PTRS_PER_PTE; i++) {
+ pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
+ set_pte(&kasan_zero_pte[i], pte);
+ }
+ /* Flush TLBs again to be sure that write protection applied. */
+ __flush_tlb_all();
init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n");
Patches currently in stable-queue which might be from aryabinin(a)virtuozzo.com are
queue-4.4/x86-kasan-write-protect-kasan-zero-shadow.patch
Geminilake requires the 3D driver to select whether barriers are
intended for compute shaders, or tessellation control shaders, by
whacking a "Barrier Mode" bit in SLICE_COMMON_ECO_CHICKEN1 when
switching pipelines. Failure to do this properly can result in GPU
hangs.
Unfortunately, this means it needs to switch mid-batch, so only
userspace can properly set it. To facilitate this, the kernel needs
to whitelist the register.
The workarounds page currently tags this as applying to Broxton only,
but that doesn't make sense. The documentation for the register it
references says the bit userspace is supposed to toggle only exists on
Geminilake. Empirically, the Mesa patch to toggle this bit appears to
fix intermittent GPU hangs in tessellation control shader barrier tests
on Geminilake; we haven't seen those hangs on Broxton.
v2: Mention WA #0862 in the comment (it doesn't have a name).
Signed-off-by: Kenneth Graunke <kenneth(a)whitecape.org>
Acked-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Cc: stable(a)vger.kernel.org
---
drivers/gpu/drm/i915/i915_reg.h | 2 ++
drivers/gpu/drm/i915/intel_engine_cs.c | 5 +++++
2 files changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 966e4df9700e..505c605eff98 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -7079,6 +7079,8 @@ enum {
#define GEN9_SLICE_COMMON_ECO_CHICKEN0 _MMIO(0x7308)
#define DISABLE_PIXEL_MASK_CAMMING (1<<14)
+#define GEN9_SLICE_COMMON_ECO_CHICKEN1 _MMIO(0x731c)
+
#define GEN7_L3SQCREG1 _MMIO(0xB010)
#define VLV_B0_WA_L3SQCREG1_VALUE 0x00D30000
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index ebdcbcbacb3c..6bb51a502b8b 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1338,6 +1338,11 @@ static int glk_init_workarounds(struct intel_engine_cs *engine)
if (ret)
return ret;
+ /* WA #0862: Userspace has to set "Barrier Mode" to avoid hangs. */
+ ret = wa_ring_whitelist_reg(engine, GEN9_SLICE_COMMON_ECO_CHICKEN1);
+ if (ret)
+ return ret;
+
/* WaToEnableHwFixForPushConstHWBug:glk */
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
--
2.15.1
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 152ec4e87b57..5d2676d043de 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2763,6 +2763,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
nojitter [IA-64] Disables jitter checking for ITC timers.
+ nopti [X86-64] Disable KAISER isolation of kernel from user.
+
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
no-kvmapf [X86,KVM] Disable paravirtualized asynchronous page
@@ -3325,6 +3327,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
pt. [PARIDE]
See Documentation/blockdev/paride.txt.
+ pti= [X86_64]
+ Control KAISER user/kernel address space isolation:
+ on - enable
+ off - disable
+ auto - default setting
+
pty.legacy_count=
[KNL] Number of legacy pty's. Overwrites compiled-in
default number.
diff --git a/Makefile b/Makefile
index 075e429732e7..acbc1b032db2 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
VERSION = 4
PATCHLEVEL = 9
-SUBLEVEL = 74
+SUBLEVEL = 75
EXTRAVERSION =
NAME = Roaring Lionus
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 766a5211f827..2728e1b7e4a6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -9,6 +9,7 @@
*/
#undef CONFIG_PARAVIRT
#undef CONFIG_PARAVIRT_SPINLOCKS
+#undef CONFIG_PAGE_TABLE_ISOLATION
#undef CONFIG_KASAN
#include <linux/linkage.h>
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e7b0e7ff4c58..af4e58132d91 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -36,6 +36,7 @@
#include <asm/smap.h>
#include <asm/pgtable_types.h>
#include <asm/export.h>
+#include <asm/kaiser.h>
#include <linux/err.h>
/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */
@@ -146,6 +147,7 @@ ENTRY(entry_SYSCALL_64)
* it is too small to ever cause noticeable irq latency.
*/
SWAPGS_UNSAFE_STACK
+ SWITCH_KERNEL_CR3_NO_STACK
/*
* A hypervisor implementation might want to use a label
* after the swapgs, so that it can do the swapgs
@@ -228,6 +230,14 @@ entry_SYSCALL_64_fastpath:
movq RIP(%rsp), %rcx
movq EFLAGS(%rsp), %r11
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
+ SWITCH_USER_CR3
movq RSP(%rsp), %rsp
USERGS_SYSRET64
@@ -323,10 +333,26 @@ return_from_SYSCALL_64:
syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
+ SWITCH_USER_CR3
movq RSP(%rsp), %rsp
USERGS_SYSRET64
opportunistic_sysret_failed:
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
+ SWITCH_USER_CR3
SWAPGS
jmp restore_c_regs_and_iret
END(entry_SYSCALL_64)
@@ -424,6 +450,7 @@ ENTRY(ret_from_fork)
movq %rsp, %rdi
call syscall_return_slowpath /* returns with IRQs disabled */
TRACE_IRQS_ON /* user mode is traced as IRQS on */
+ SWITCH_USER_CR3
SWAPGS
jmp restore_regs_and_iret
@@ -478,6 +505,7 @@ END(irq_entries_start)
* tracking that we're in kernel mode.
*/
SWAPGS
+ SWITCH_KERNEL_CR3
/*
* We need to tell lockdep that IRQs are off. We can't do this until
@@ -535,6 +563,7 @@ GLOBAL(retint_user)
mov %rsp,%rdi
call prepare_exit_to_usermode
TRACE_IRQS_IRETQ
+ SWITCH_USER_CR3
SWAPGS
jmp restore_regs_and_iret
@@ -612,6 +641,7 @@ native_irq_return_ldt:
pushq %rdi /* Stash user RDI */
SWAPGS
+ SWITCH_KERNEL_CR3
movq PER_CPU_VAR(espfix_waddr), %rdi
movq %rax, (0*8)(%rdi) /* user RAX */
movq (1*8)(%rsp), %rax /* user RIP */
@@ -638,6 +668,7 @@ native_irq_return_ldt:
* still points to an RO alias of the ESPFIX stack.
*/
orq PER_CPU_VAR(espfix_stack), %rax
+ SWITCH_USER_CR3
SWAPGS
movq %rax, %rsp
@@ -1022,7 +1053,11 @@ idtentry machine_check has_error_code=0 paranoid=1 do_sym=*machine_check_vec
/*
* Save all registers in pt_regs, and switch gs if needed.
* Use slow, but surefire "are we in kernel?" check.
- * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ *
+ * Return: ebx=0: needs swapgs but not SWITCH_USER_CR3 in paranoid_exit
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3 in paranoid_exit
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3 in paranoid_exit
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs in paranoid_exit
*/
ENTRY(paranoid_entry)
cld
@@ -1035,7 +1070,26 @@ ENTRY(paranoid_entry)
js 1f /* negative -> in kernel */
SWAPGS
xorl %ebx, %ebx
-1: ret
+1:
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /*
+ * We might have come in between a swapgs and a SWITCH_KERNEL_CR3
+ * on entry, or between a SWITCH_USER_CR3 and a swapgs on exit.
+ * Do a conditional SWITCH_KERNEL_CR3: this could safely be done
+ * unconditionally, but we need to find out whether the reverse
+ * should be done on return (conveyed to paranoid_exit in %ebx).
+ */
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
+ testl $KAISER_SHADOW_PGD_OFFSET, %eax
+ jz 2f
+ orl $2, %ebx
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ /* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
+ movq %rax, %cr3
+2:
+#endif
+ ret
END(paranoid_entry)
/*
@@ -1048,19 +1102,26 @@ END(paranoid_entry)
* be complicated. Fortunately, there's no good reason
* to try to handle preemption here.
*
- * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
+ * On entry: ebx=0: needs swapgs but not SWITCH_USER_CR3
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs
*/
ENTRY(paranoid_exit)
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF_DEBUG
- testl %ebx, %ebx /* swapgs needed? */
+ TRACE_IRQS_IRETQ_DEBUG
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /* No ALTERNATIVE for X86_FEATURE_KAISER: paranoid_entry sets %ebx */
+ testl $2, %ebx /* SWITCH_USER_CR3 needed? */
+ jz paranoid_exit_no_switch
+ SWITCH_USER_CR3
+paranoid_exit_no_switch:
+#endif
+ testl $1, %ebx /* swapgs needed? */
jnz paranoid_exit_no_swapgs
- TRACE_IRQS_IRETQ
SWAPGS_UNSAFE_STACK
- jmp paranoid_exit_restore
paranoid_exit_no_swapgs:
- TRACE_IRQS_IRETQ_DEBUG
-paranoid_exit_restore:
RESTORE_EXTRA_REGS
RESTORE_C_REGS
REMOVE_PT_GPREGS_FROM_STACK 8
@@ -1075,6 +1136,13 @@ ENTRY(error_entry)
cld
SAVE_C_REGS 8
SAVE_EXTRA_REGS 8
+ /*
+ * error_entry() always returns with a kernel gsbase and
+ * CR3. We must also have a kernel CR3/gsbase before
+ * calling TRACE_IRQS_*. Just unconditionally switch to
+ * the kernel CR3 here.
+ */
+ SWITCH_KERNEL_CR3
xorl %ebx, %ebx
testb $3, CS+8(%rsp)
jz .Lerror_kernelspace
@@ -1235,6 +1303,10 @@ ENTRY(nmi)
*/
SWAPGS_UNSAFE_STACK
+ /*
+ * percpu variables are mapped with user CR3, so no need
+ * to switch CR3 here.
+ */
cld
movq %rsp, %rdx
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
@@ -1268,12 +1340,34 @@ ENTRY(nmi)
movq %rsp, %rdi
movq $-1, %rsi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
+ pushq %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ movq %rax, %cr3
+2:
+#endif
call do_nmi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /*
+ * Unconditionally restore CR3. I know we return to
+ * kernel code that needs user CR3, but do we ever return
+ * to "user mode" where we need the kernel CR3?
+ */
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
+#endif
+
/*
* Return back to user mode. We must *not* do the normal exit
- * work, because we don't want to enable interrupts. Fortunately,
- * do_nmi doesn't modify pt_regs.
+ * work, because we don't want to enable interrupts. Do not
+ * switch to user CR3: we might be going back to kernel code
+ * that had a user CR3 set.
*/
SWAPGS
jmp restore_c_regs_and_iret
@@ -1470,22 +1564,55 @@ end_repeat_nmi:
ALLOC_PT_GPREGS_ON_STACK
/*
- * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
- * as we should not be calling schedule in NMI context.
- * Even with normal interrupts enabled. An NMI should not be
- * setting NEED_RESCHED or anything that normal interrupts and
- * exceptions might do.
+ * Use the same approach as paranoid_entry to handle SWAPGS, but
+ * without CR3 handling since we do that differently in NMIs. No
+ * need to use paranoid_exit as we should not be calling schedule
+ * in NMI context. Even with normal interrupts enabled. An NMI
+ * should not be setting NEED_RESCHED or anything that normal
+ * interrupts and exceptions might do.
*/
- call paranoid_entry
-
- /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
+ cld
+ SAVE_C_REGS
+ SAVE_EXTRA_REGS
+ movl $1, %ebx
+ movl $MSR_GS_BASE, %ecx
+ rdmsr
+ testl %edx, %edx
+ js 1f /* negative -> in kernel */
+ SWAPGS
+ xorl %ebx, %ebx
+1:
movq %rsp, %rdi
movq $-1, %rsi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
+ pushq %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ movq %rax, %cr3
+2:
+#endif
+
+ /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
call do_nmi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /*
+ * Unconditionally restore CR3. We might be returning to
+ * kernel code that needs user CR3, like just before
+ * a sysret.
+ */
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
+#endif
+
testl %ebx, %ebx /* swapgs needed? */
jnz nmi_restore
nmi_swapgs:
+ /* We fixed up CR3 above, so no need to switch it here */
SWAPGS_UNSAFE_STACK
nmi_restore:
RESTORE_EXTRA_REGS
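The four %ebx values that paranoid_entry hands to paranoid_exit above are
really two independent flag bits. Restated as a C sketch purely for
readability (the macro names are invented here; the real protocol is the asm
above):

#define PARANOID_EBX_NO_SWAPGS   (1 << 0) /* gs was already the kernel's */
#define PARANOID_EBX_SWITCH_CR3  (1 << 1) /* entry switched CR3; undo on exit */

/*
 * ebx=0: swapgs only          ebx=1: neither
 * ebx=2: swapgs and CR3       ebx=3: CR3 only
 *
 * paranoid_exit checks bit 1 first ("testl $2, %ebx") to decide on
 * SWITCH_USER_CR3, then bit 0 ("testl $1, %ebx") to decide on swapgs.
 */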
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index e1721dafbcb1..d76a97653980 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -13,6 +13,8 @@
#include <asm/irqflags.h>
#include <asm/asm.h>
#include <asm/smap.h>
+#include <asm/pgtable_types.h>
+#include <asm/kaiser.h>
#include <linux/linkage.h>
#include <linux/err.h>
@@ -48,6 +50,7 @@
ENTRY(entry_SYSENTER_compat)
/* Interrupts are off on entry. */
SWAPGS_UNSAFE_STACK
+ SWITCH_KERNEL_CR3_NO_STACK
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/*
@@ -184,6 +187,7 @@ ENDPROC(entry_SYSENTER_compat)
ENTRY(entry_SYSCALL_compat)
/* Interrupts are off on entry. */
SWAPGS_UNSAFE_STACK
+ SWITCH_KERNEL_CR3_NO_STACK
/* Stash user ESP and switch to the kernel stack. */
movl %esp, %r8d
@@ -259,6 +263,7 @@ sysret32_from_system_call:
xorq %r8, %r8
xorq %r9, %r9
xorq %r10, %r10
+ SWITCH_USER_CR3
movq RSP-ORIG_RAX(%rsp), %rsp
swapgs
sysretl
@@ -297,7 +302,7 @@ ENTRY(entry_INT80_compat)
PARAVIRT_ADJUST_EXCEPTION_FRAME
ASM_CLAC /* Do this early to minimize exposure */
SWAPGS
-
+ SWITCH_KERNEL_CR3_NO_STACK
/*
* User tracing code (ptrace or signal handlers) might assume that
* the saved RAX contains a 32-bit number when we're invoking a 32-bit
@@ -338,6 +343,7 @@ ENTRY(entry_INT80_compat)
/* Go back to user mode. */
TRACE_IRQS_ON
+ SWITCH_USER_CR3
SWAPGS
jmp restore_regs_and_iret
END(entry_INT80_compat)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 9dfeeeca0ea8..8e7a3f1df3a5 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2,11 +2,15 @@
#include <linux/types.h>
#include <linux/slab.h>
+#include <asm/kaiser.h>
#include <asm/perf_event.h>
#include <asm/insn.h>
#include "../perf_event.h"
+static
+DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct debug_store, cpu_debug_store);
+
/* The size of a BTS record in bytes: */
#define BTS_RECORD_SIZE 24
@@ -268,6 +272,39 @@ void fini_debug_store_on_cpu(int cpu)
static DEFINE_PER_CPU(void *, insn_buffer);
+static void *dsalloc(size_t size, gfp_t flags, int node)
+{
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ unsigned int order = get_order(size);
+ struct page *page;
+ unsigned long addr;
+
+ page = __alloc_pages_node(node, flags | __GFP_ZERO, order);
+ if (!page)
+ return NULL;
+ addr = (unsigned long)page_address(page);
+ if (kaiser_add_mapping(addr, size, __PAGE_KERNEL) < 0) {
+ __free_pages(page, order);
+ addr = 0;
+ }
+ return (void *)addr;
+#else
+ return kmalloc_node(size, flags | __GFP_ZERO, node);
+#endif
+}
+
+static void dsfree(const void *buffer, size_t size)
+{
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ if (!buffer)
+ return;
+ kaiser_remove_mapping((unsigned long)buffer, size);
+ free_pages((unsigned long)buffer, get_order(size));
+#else
+ kfree(buffer);
+#endif
+}
+
static int alloc_pebs_buffer(int cpu)
{
struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
@@ -278,7 +315,7 @@ static int alloc_pebs_buffer(int cpu)
if (!x86_pmu.pebs)
return 0;
- buffer = kzalloc_node(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
+ buffer = dsalloc(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
if (unlikely(!buffer))
return -ENOMEM;
@@ -289,7 +326,7 @@ static int alloc_pebs_buffer(int cpu)
if (x86_pmu.intel_cap.pebs_format < 2) {
ibuffer = kzalloc_node(PEBS_FIXUP_SIZE, GFP_KERNEL, node);
if (!ibuffer) {
- kfree(buffer);
+ dsfree(buffer, x86_pmu.pebs_buffer_size);
return -ENOMEM;
}
per_cpu(insn_buffer, cpu) = ibuffer;
@@ -315,7 +352,8 @@ static void release_pebs_buffer(int cpu)
kfree(per_cpu(insn_buffer, cpu));
per_cpu(insn_buffer, cpu) = NULL;
- kfree((void *)(unsigned long)ds->pebs_buffer_base);
+ dsfree((void *)(unsigned long)ds->pebs_buffer_base,
+ x86_pmu.pebs_buffer_size);
ds->pebs_buffer_base = 0;
}
@@ -329,7 +367,7 @@ static int alloc_bts_buffer(int cpu)
if (!x86_pmu.bts)
return 0;
- buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
+ buffer = dsalloc(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
if (unlikely(!buffer)) {
WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__);
return -ENOMEM;
@@ -355,19 +393,15 @@ static void release_bts_buffer(int cpu)
if (!ds || !x86_pmu.bts)
return;
- kfree((void *)(unsigned long)ds->bts_buffer_base);
+ dsfree((void *)(unsigned long)ds->bts_buffer_base, BTS_BUFFER_SIZE);
ds->bts_buffer_base = 0;
}
static int alloc_ds_buffer(int cpu)
{
- int node = cpu_to_node(cpu);
- struct debug_store *ds;
-
- ds = kzalloc_node(sizeof(*ds), GFP_KERNEL, node);
- if (unlikely(!ds))
- return -ENOMEM;
+ struct debug_store *ds = per_cpu_ptr(&cpu_debug_store, cpu);
+ memset(ds, 0, sizeof(*ds));
per_cpu(cpu_hw_events, cpu).ds = ds;
return 0;
@@ -381,7 +415,6 @@ static void release_ds_buffer(int cpu)
return;
per_cpu(cpu_hw_events, cpu).ds = NULL;
- kfree(ds);
}
void release_ds_buffers(void)
diff --git a/arch/x86/include/asm/cmdline.h b/arch/x86/include/asm/cmdline.h
index e01f7f7ccb0c..84ae170bc3d0 100644
--- a/arch/x86/include/asm/cmdline.h
+++ b/arch/x86/include/asm/cmdline.h
@@ -2,5 +2,7 @@
#define _ASM_X86_CMDLINE_H
int cmdline_find_option_bool(const char *cmdline_ptr, const char *option);
+int cmdline_find_option(const char *cmdline_ptr, const char *option,
+ char *buffer, int bufsize);
#endif /* _ASM_X86_CMDLINE_H */
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ed10b5bf9b93..454a37adb823 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -189,6 +189,7 @@
#define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */
#define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
+#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 4) /* Effectively INVPCID && CR4.PCIDE=1 */
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
@@ -197,6 +198,9 @@
#define X86_FEATURE_AVX512_4VNNIW (7*32+16) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (7*32+17) /* AVX-512 Multiply Accumulation Single precision */
+/* Because the ALTERNATIVE scheme is for members of the X86_FEATURE club... */
+#define X86_FEATURE_KAISER ( 7*32+31) /* CONFIG_PAGE_TABLE_ISOLATION w/o nokaiser */
+
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index 12080d87da3b..2ed5a2b3f8f7 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -43,7 +43,7 @@ struct gdt_page {
struct desc_struct gdt[GDT_ENTRIES];
} __attribute__((aligned(PAGE_SIZE)));
-DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page);
static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
{
diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
index b90e1053049b..0817d63bce41 100644
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -178,7 +178,7 @@ extern char irq_entries_start[];
#define VECTOR_RETRIGGERED ((void *)~0UL)
typedef struct irq_desc* vector_irq_t[NR_VECTORS];
-DECLARE_PER_CPU(vector_irq_t, vector_irq);
+DECLARE_PER_CPU_USER_MAPPED(vector_irq_t, vector_irq);
#endif /* !ASSEMBLY_ */
diff --git a/arch/x86/include/asm/kaiser.h b/arch/x86/include/asm/kaiser.h
new file mode 100644
index 000000000000..802bbbdfe143
--- /dev/null
+++ b/arch/x86/include/asm/kaiser.h
@@ -0,0 +1,141 @@
+#ifndef _ASM_X86_KAISER_H
+#define _ASM_X86_KAISER_H
+
+#include <uapi/asm/processor-flags.h> /* For PCID constants */
+
+/*
+ * This file includes the definitions for the KAISER feature.
+ * KAISER is a counter measure against x86_64 side channel attacks on
+ * the kernel virtual memory. It has a shadow pgd for every process: the
+ * shadow pgd has a minimalistic kernel-set mapped, but includes the whole
+ * user memory. Within a kernel context switch, or when an interrupt is handled,
+ * the pgd is switched to the normal one. When the system switches to user mode,
+ * the shadow pgd is enabled. By this, the virtual memory caches are freed,
+ * and the user may not attack the whole kernel memory.
+ *
+ * A minimalistic kernel mapping holds the parts needed to be mapped in user
+ * mode, such as the entry/exit functions of the user space, or the stacks.
+ */
+
+#define KAISER_SHADOW_PGD_OFFSET 0x1000
+
+#ifdef __ASSEMBLY__
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+
+.macro _SWITCH_TO_KERNEL_CR3 reg
+movq %cr3, \reg
+andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
+/* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ALTERNATIVE "", "bts $63, \reg", X86_FEATURE_PCID
+movq \reg, %cr3
+.endm
+
+.macro _SWITCH_TO_USER_CR3 reg regb
+/*
+ * regb must be the low byte portion of reg: because we have arranged
+ * for the low byte of the user PCID to serve as the high byte of NOFLUSH
+ * (0x80 for each when PCID is enabled, or 0x00 when PCID and NOFLUSH are
+ * not enabled): so that the one register can update both memory and cr3.
+ */
+movq %cr3, \reg
+orq PER_CPU_VAR(x86_cr3_pcid_user), \reg
+js 9f
+/* If PCID enabled, FLUSH this time, reset to NOFLUSH for next time */
+movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)
+9:
+movq \reg, %cr3
+.endm
+
+.macro SWITCH_KERNEL_CR3
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
+_SWITCH_TO_KERNEL_CR3 %rax
+popq %rax
+8:
+.endm
+
+.macro SWITCH_USER_CR3
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
+_SWITCH_TO_USER_CR3 %rax %al
+popq %rax
+8:
+.endm
+
+.macro SWITCH_KERNEL_CR3_NO_STACK
+ALTERNATIVE "jmp 8f", \
+ __stringify(movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)), \
+ X86_FEATURE_KAISER
+_SWITCH_TO_KERNEL_CR3 %rax
+movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
+8:
+.endm
+
+#else /* CONFIG_PAGE_TABLE_ISOLATION */
+
+.macro SWITCH_KERNEL_CR3
+.endm
+.macro SWITCH_USER_CR3
+.endm
+.macro SWITCH_KERNEL_CR3_NO_STACK
+.endm
+
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+/*
+ * Upon kernel/user mode switch, it may happen that the address
+ * space has to be switched before the registers have been
+ * stored. To change the address space, another register is
+ * needed. A register therefore has to be stored/restored.
+*/
+DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+
+DECLARE_PER_CPU(unsigned long, x86_cr3_pcid_user);
+
+extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+
+extern int kaiser_enabled;
+extern void __init kaiser_check_boottime_disable(void);
+#else
+#define kaiser_enabled 0
+static inline void __init kaiser_check_boottime_disable(void) {}
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
+/*
+ * Kaiser function prototypes are needed even when CONFIG_PAGE_TABLE_ISOLATION is not set,
+ * so as to build with tests on kaiser_enabled instead of #ifdefs.
+ */
+
+/**
+ * kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
+ * @addr: the start address of the range
+ * @size: the size of the range
+ * @flags: The mapping flags of the pages
+ *
+ * The mapping is done on a global scope, so no further
+ * synchronization has to be done. The pages have to be
+ * manually unmapped again when they are not needed any longer.
+ */
+extern int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
+
+/**
+ * kaiser_remove_mapping - unmap a virtual memory part of the shadow mapping
+ * @addr: the start address of the range
+ * @size: the size of the range
+ */
+extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
+
+/**
+ * kaiser_init - Initialize the shadow mapping
+ *
+ * Most parts of the shadow mapping can be mapped upon boot
+ * time. Only per-process things like the thread stacks
+ * or a new LDT have to be mapped at runtime. These boot-
+ * time mappings are permanent and never unmapped.
+ */
+extern void kaiser_init(void);
+
+#endif /* __ASSEMBLY */
+
+#endif /* _ASM_X86_KAISER_H */
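One subtlety in _SWITCH_TO_USER_CR3 above deserves spelling out: the user ASID
(0x80) was chosen so that its low byte doubles as the high byte of the NOFLUSH
bit. A C restatement of the macro, as a sketch only (read_cr3()/write_cr3()
stand in for the movq %cr3 instructions, and the per-cpu variable is treated
as plain memory):

unsigned long reg = read_cr3() | x86_cr3_pcid_user;

if ((long)reg >= 0) {	/* bit 63 clear, so the "js 9f" fell through: flush */
	/*
	 * The low byte of reg is the user ASID: 0x80 with PCID, 0x00
	 * without.  Stored at byte offset 7 of x86_cr3_pcid_user it
	 * lands in bits 56-63, so a flushing exit re-arms NOFLUSH for
	 * every subsequent exit; one register updates both memory
	 * and CR3.
	 */
	((unsigned char *)&x86_cr3_pcid_user)[7] = (unsigned char)reg;
}
write_cr3(reg);		/* with bit 63 clear, this flushes the user ASID */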
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 437feb436efa..2536f90cd30c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -18,6 +18,12 @@
#ifndef __ASSEMBLY__
#include <asm/x86_init.h>
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern int kaiser_enabled;
+#else
+#define kaiser_enabled 0
+#endif
+
void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd);
void ptdump_walk_pgd_level_checkwx(void);
@@ -690,7 +696,17 @@ static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
static inline int pgd_bad(pgd_t pgd)
{
- return (pgd_flags(pgd) & ~_PAGE_USER) != _KERNPG_TABLE;
+ pgdval_t ignore_flags = _PAGE_USER;
+ /*
+ * We set NX on KAISER pgds that map userspace memory so
+ * that userspace can not meaningfully use the kernel
+ * page table by accident; it will fault on the first
+ * instruction it tries to run. See native_set_pgd().
+ */
+ if (kaiser_enabled)
+ ignore_flags |= _PAGE_NX;
+
+ return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
}
static inline int pgd_none(pgd_t pgd)
@@ -903,7 +919,15 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
*/
static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
{
- memcpy(dst, src, count * sizeof(pgd_t));
+ memcpy(dst, src, count * sizeof(pgd_t));
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ if (kaiser_enabled) {
+ /* Clone the shadow pgd part as well */
+ memcpy(native_get_shadow_pgd(dst),
+ native_get_shadow_pgd(src),
+ count * sizeof(pgd_t));
+ }
+#endif
}
#define PTE_SHIFT ilog2(PTRS_PER_PTE)
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 1cc82ece9ac1..ce97c8c6a310 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -106,9 +106,32 @@ static inline void native_pud_clear(pud_t *pud)
native_set_pud(pud, native_make_pud(0));
}
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd);
+
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
+{
+#ifdef CONFIG_DEBUG_VM
+ /* linux/mmdebug.h may not have been included at this point */
+ BUG_ON(!kaiser_enabled);
+#endif
+ return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
+}
+#else
+static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ return pgd;
+}
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
+{
+ BUILD_BUG_ON(1);
+ return NULL;
+}
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
{
- *pgdp = pgd;
+ *pgdp = kaiser_set_shadow_pgd(pgdp, pgd);
}
static inline void native_pgd_clear(pgd_t *pgd)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 8b4de22d6429..f1c8ac468292 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -119,7 +119,7 @@
#define _PAGE_DEVMAP (_AT(pteval_t, 0))
#endif
-#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
+#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
_PAGE_ACCESSED | _PAGE_DIRTY)
@@ -137,6 +137,33 @@
_PAGE_SOFT_DIRTY)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+/* The ASID is the lower 12 bits of CR3 */
+#define X86_CR3_PCID_ASID_MASK (_AC((1<<12)-1,UL))
+
+/* Mask for all the PCID-related bits in CR3: */
+#define X86_CR3_PCID_MASK (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
+
+#if defined(CONFIG_PAGE_TABLE_ISOLATION) && defined(CONFIG_X86_64)
+/* Let X86_CR3_PCID_ASID_USER be usable for the X86_CR3_PCID_NOFLUSH bit */
+#define X86_CR3_PCID_ASID_USER (_AC(0x80,UL))
+
+#define X86_CR3_PCID_KERN_FLUSH (X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_FLUSH (X86_CR3_PCID_ASID_USER)
+#define X86_CR3_PCID_KERN_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_USER)
+#else
+#define X86_CR3_PCID_ASID_USER (_AC(0x0,UL))
+/*
+ * PCIDs are unsupported on 32-bit and none of these bits can be
+ * set in CR3:
+ */
+#define X86_CR3_PCID_KERN_FLUSH (0)
+#define X86_CR3_PCID_USER_FLUSH (0)
+#define X86_CR3_PCID_KERN_NOFLUSH (0)
+#define X86_CR3_PCID_USER_NOFLUSH (0)
+#endif
+
/*
* The cache modes defined here are used to translate between pure SW usage
* and the HW defined cache mode bits and/or PAT entries.
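For orientation, here is how these constants compose the two CR3 values the
entry code toggles between; kaiser_setup_pcid() in arch/x86/mm/kaiser.c below
builds the user half at boot. Illustration only, with cpu_has_pcid standing
in for this_cpu_has(X86_FEATURE_PCID):

/*
 * The pgd pair is 8k-aligned (see NEXT_PGD_PAGE in head_64.S below),
 * kernel half at physical address P, shadow half at P + 0x1000:
 *
 *   kernel CR3: P | X86_CR3_PCID_ASID_KERN              (ASID 0)
 *   user CR3:   P | KAISER_SHADOW_PGD_OFFSET            (0x1000)
 *                 | X86_CR3_PCID_ASID_USER              (0x80)
 *                 | X86_CR3_PCID_NOFLUSH, PCID only     (1UL << 63)
 */
unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
if (cpu_has_pcid)
	user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;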
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 83db0eae9979..8cb52ee3ade6 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -308,7 +308,7 @@ struct tss_struct {
} ____cacheline_aligned;
-DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss);
+DECLARE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss);
#ifdef CONFIG_X86_32
DECLARE_PER_CPU(unsigned long, cpu_current_top_of_stack);
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 7d2ea6b1f7d9..94146f665a3c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -132,6 +132,24 @@ static inline void cr4_set_bits_and_update_boot(unsigned long mask)
cr4_set_bits(mask);
}
+/*
+ * Declare a couple of kaiser interfaces here for convenience,
+ * to avoid the need for asm/kaiser.h in unexpected places.
+ */
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern int kaiser_enabled;
+extern void kaiser_setup_pcid(void);
+extern void kaiser_flush_tlb_on_return_to_user(void);
+#else
+#define kaiser_enabled 0
+static inline void kaiser_setup_pcid(void)
+{
+}
+static inline void kaiser_flush_tlb_on_return_to_user(void)
+{
+}
+#endif
+
static inline void __native_flush_tlb(void)
{
/*
@@ -140,6 +158,8 @@ static inline void __native_flush_tlb(void)
* back:
*/
preempt_disable();
+ if (kaiser_enabled)
+ kaiser_flush_tlb_on_return_to_user();
native_write_cr3(native_read_cr3());
preempt_enable();
}
@@ -149,20 +169,27 @@ static inline void __native_flush_tlb_global_irq_disabled(void)
unsigned long cr4;
cr4 = this_cpu_read(cpu_tlbstate.cr4);
- /* clear PGE */
- native_write_cr4(cr4 & ~X86_CR4_PGE);
- /* write old PGE again and flush TLBs */
- native_write_cr4(cr4);
+ if (cr4 & X86_CR4_PGE) {
+ /* clear PGE and flush TLB of all entries */
+ native_write_cr4(cr4 & ~X86_CR4_PGE);
+ /* restore PGE as it was before */
+ native_write_cr4(cr4);
+ } else {
+ /* do it with cr3, letting kaiser flush user PCID */
+ __native_flush_tlb();
+ }
}
static inline void __native_flush_tlb_global(void)
{
unsigned long flags;
- if (static_cpu_has(X86_FEATURE_INVPCID)) {
+ if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
+ *
+ * Note, this works with CR4.PCIDE=0 or 1.
*/
invpcid_flush_all();
return;
@@ -174,24 +201,45 @@ static inline void __native_flush_tlb_global(void)
* be called from deep inside debugging code.)
*/
raw_local_irq_save(flags);
-
__native_flush_tlb_global_irq_disabled();
-
raw_local_irq_restore(flags);
}
static inline void __native_flush_tlb_single(unsigned long addr)
{
- asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ /*
+ * SIMICS #GP's if you run INVPCID with type 2/3
+ * and X86_CR4_PCIDE clear. Shame!
+ *
+ * The ASIDs used below are hard-coded. But, we must not
+ * call invpcid(type=1/2) before CR4.PCIDE=1. Just call
+ * invlpg in the case we are called early.
+ */
+
+ if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+ if (kaiser_enabled)
+ kaiser_flush_tlb_on_return_to_user();
+ asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ return;
+ }
+ /* Flush the address out of both PCIDs. */
+ /*
+ * An optimization here might be to determine addresses
+ * that are only kernel-mapped and only flush the kernel
+ * ASID. But, userspace flushes are probably much more
+ * important performance-wise.
+ *
+ * Make sure to do only a single invpcid when KAISER is
+ * disabled and we have only a single ASID.
+ */
+ if (kaiser_enabled)
+ invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
+ invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
}
static inline void __flush_tlb_all(void)
{
- if (boot_cpu_has(X86_FEATURE_PGE))
- __flush_tlb_global();
- else
- __flush_tlb();
-
+ __flush_tlb_global();
/*
* Note: if we somehow had PCID but not PGE, then this wouldn't work --
* we'd end up flushing kernel translations for the current ASID but
diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h
index 567de50a4c2a..6768d1321016 100644
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -77,7 +77,8 @@
#define X86_CR3_PWT _BITUL(X86_CR3_PWT_BIT)
#define X86_CR3_PCD_BIT 4 /* Page Cache Disable */
#define X86_CR3_PCD _BITUL(X86_CR3_PCD_BIT)
-#define X86_CR3_PCID_MASK _AC(0x00000fff,UL) /* PCID Mask */
+#define X86_CR3_PCID_NOFLUSH_BIT 63 /* Preserve old PCID */
+#define X86_CR3_PCID_NOFLUSH _BITULL(X86_CR3_PCID_NOFLUSH_BIT)
/*
* Intel CPU features in CR4
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 91588be529b9..918e44772b04 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -93,7 +93,7 @@ static const struct cpu_dev default_cpu = {
static const struct cpu_dev *this_cpu = &default_cpu;
-DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page) = { .gdt = {
#ifdef CONFIG_X86_64
/*
* We need valid kernel segments for data and code in long mode too
@@ -327,8 +327,21 @@ static __always_inline void setup_smap(struct cpuinfo_x86 *c)
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
- if (cpu_has(c, X86_FEATURE_PGE)) {
+ if (cpu_has(c, X86_FEATURE_PGE) || kaiser_enabled) {
cr4_set_bits(X86_CR4_PCIDE);
+ /*
+ * INVPCID has two "groups" of types:
+ * 1/2: Invalidate an individual address
+ * 3/4: Invalidate all contexts
+ *
+ * 1/2 take a PCID, but 3/4 do not. So, 3/4
+ * ignore the PCID argument in the descriptor.
+ * But, we have to be careful not to call 1/2
+ * with an actual non-zero PCID in them before
+ * we do the above cr4_set_bits().
+ */
+ if (cpu_has(c, X86_FEATURE_INVPCID))
+ set_cpu_cap(c, X86_FEATURE_INVPCID_SINGLE);
} else {
/*
* flush_tlb_all(), as currently implemented, won't
@@ -341,6 +354,7 @@ static void setup_pcid(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_PCID);
}
}
+ kaiser_setup_pcid();
}
/*
@@ -1365,7 +1379,7 @@ static const unsigned int exception_stack_sizes[N_EXCEPTION_STACKS] = {
[DEBUG_STACK - 1] = DEBUG_STKSZ
};
-static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(char, exception_stacks
[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
/* May not be marked __init: used by software suspend */
@@ -1523,6 +1537,14 @@ void cpu_init(void)
* try to read it.
*/
cr4_init_shadow();
+ if (!kaiser_enabled) {
+ /*
+ * secondary_startup_64() deferred setting PGE in cr4:
+ * probe_page_size_mask() sets it on the boot cpu,
+ * but it needs to be set on each secondary cpu.
+ */
+ cr4_set_bits(X86_CR4_PGE);
+ }
/*
* Load microcode on this cpu if a valid microcode is available.
diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 04f89caef9c4..e33b38541be3 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -41,6 +41,7 @@
#include <asm/pgalloc.h>
#include <asm/setup.h>
#include <asm/espfix.h>
+#include <asm/kaiser.h>
/*
* Note: we only need 6*8 = 48 bytes for the espfix stack, but round
@@ -126,6 +127,15 @@ void __init init_espfix_bsp(void)
/* Install the espfix pud into the kernel page directory */
pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page);
+ /*
+ * Just copy the top-level PGD that is mapping the espfix
+ * area to ensure it is mapped into the shadow user page
+ * tables.
+ */
+ if (kaiser_enabled) {
+ set_pgd(native_get_shadow_pgd(pgd_p),
+ __pgd(_KERNPG_TABLE | __pa((pud_t *)espfix_pud_page)));
+ }
/* Randomize the locations */
init_espfix_random();
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index b4421cc191b0..67cd7c1b99da 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -190,8 +190,8 @@ ENTRY(secondary_startup_64)
movq $(init_level4_pgt - __START_KERNEL_map), %rax
1:
- /* Enable PAE mode and PGE */
- movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx
+ /* Enable PAE and PSE, but defer PGE until kaiser_enabled is decided */
+ movl $(X86_CR4_PAE | X86_CR4_PSE), %ecx
movq %rcx, %cr4
/* Setup early boot stage 4 level pagetables. */
@@ -405,6 +405,27 @@ GLOBAL(early_recursion_flag)
.balign PAGE_SIZE; \
GLOBAL(name)
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+/*
+ * Each PGD needs to be 8k long and 8k aligned. We do not
+ * ever go out to userspace with these, so we do not
+ * strictly *need* the second page, but this allows us to
+ * have a single set_pgd() implementation that does not
+ * need to worry about whether it has 4k or 8k to work
+ * with.
+ *
+ * This ensures PGDs are 8k long:
+ */
+#define KAISER_USER_PGD_FILL 512
+/* This ensures they are 8k-aligned: */
+#define NEXT_PGD_PAGE(name) \
+ .balign 2 * PAGE_SIZE; \
+GLOBAL(name)
+#else
+#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define KAISER_USER_PGD_FILL 0
+#endif
+
/* Automate the creation of 1 to 1 mapping pmd entries */
#define PMDS(START, PERM, COUNT) \
i = 0 ; \
@@ -414,9 +435,10 @@ GLOBAL(name)
.endr
__INITDATA
-NEXT_PAGE(early_level4_pgt)
+NEXT_PGD_PAGE(early_level4_pgt)
.fill 511,8,0
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(early_dynamic_pgts)
.fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -424,16 +446,18 @@ NEXT_PAGE(early_dynamic_pgts)
.data
#ifndef CONFIG_XEN
-NEXT_PAGE(init_level4_pgt)
+NEXT_PGD_PAGE(init_level4_pgt)
.fill 512,8,0
+ .fill KAISER_USER_PGD_FILL,8,0
#else
-NEXT_PAGE(init_level4_pgt)
+NEXT_PGD_PAGE(init_level4_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
.org init_level4_pgt + L4_PAGE_OFFSET*8, 0
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
.org init_level4_pgt + L4_START_KERNEL*8, 0
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_ident_pgt)
.quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
@@ -444,6 +468,7 @@ NEXT_PAGE(level2_ident_pgt)
*/
PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
#endif
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_kernel_pgt)
.fill L3_START_KERNEL,8,0
diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
index 1423ab1b0312..f480b38a03c3 100644
--- a/arch/x86/kernel/irqinit.c
+++ b/arch/x86/kernel/irqinit.c
@@ -51,7 +51,7 @@ static struct irqaction irq2 = {
.flags = IRQF_NO_THREAD,
};
-DEFINE_PER_CPU(vector_irq_t, vector_irq) = {
+DEFINE_PER_CPU_USER_MAPPED(vector_irq_t, vector_irq) = {
[0 ... NR_VECTORS - 1] = VECTOR_UNUSED,
};
diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index 5f70014ca602..8bc68cfc0d33 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -16,6 +16,7 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
+#include <linux/kaiser.h>
#include <asm/ldt.h>
#include <asm/desc.h>
@@ -34,11 +35,21 @@ static void flush_ldt(void *current_mm)
set_ldt(pc->ldt->entries, pc->ldt->size);
}
+static void __free_ldt_struct(struct ldt_struct *ldt)
+{
+ if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
+ vfree(ldt->entries);
+ else
+ free_page((unsigned long)ldt->entries);
+ kfree(ldt);
+}
+
/* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
static struct ldt_struct *alloc_ldt_struct(int size)
{
struct ldt_struct *new_ldt;
int alloc_size;
+ int ret;
if (size > LDT_ENTRIES)
return NULL;
@@ -66,7 +77,13 @@ static struct ldt_struct *alloc_ldt_struct(int size)
return NULL;
}
+ ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+ __PAGE_KERNEL);
new_ldt->size = size;
+ if (ret) {
+ __free_ldt_struct(new_ldt);
+ return NULL;
+ }
return new_ldt;
}
@@ -92,12 +109,10 @@ static void free_ldt_struct(struct ldt_struct *ldt)
if (likely(!ldt))
return;
+ kaiser_remove_mapping((unsigned long)ldt->entries,
+ ldt->size * LDT_ENTRY_SIZE);
paravirt_free_ldt(ldt->entries, ldt->size);
- if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
- vfree(ldt->entries);
- else
- free_page((unsigned long)ldt->entries);
- kfree(ldt);
+ __free_ldt_struct(ldt);
}
/*
diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index bb3840cedb4f..ee43b36075c7 100644
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -9,7 +9,6 @@ DEF_NATIVE(pv_irq_ops, save_fl, "pushfq; popq %rax");
DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax");
DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax");
DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3");
-DEF_NATIVE(pv_mmu_ops, flush_tlb_single, "invlpg (%rdi)");
DEF_NATIVE(pv_cpu_ops, clts, "clts");
DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
@@ -59,7 +58,6 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
PATCH_SITE(pv_mmu_ops, read_cr3);
PATCH_SITE(pv_mmu_ops, write_cr3);
PATCH_SITE(pv_cpu_ops, clts);
- PATCH_SITE(pv_mmu_ops, flush_tlb_single);
PATCH_SITE(pv_cpu_ops, wbinvd);
#if defined(CONFIG_PARAVIRT_SPINLOCKS)
case PARAVIRT_PATCH(pv_lock_ops.queued_spin_unlock):
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 8e10e72bf6ee..a55b32007785 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -41,7 +41,7 @@
* section. Since TSS's are completely CPU-local, we want them
* on exact cacheline boundaries, to eliminate cacheline ping-pong.
*/
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+__visible DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss) = {
.x86_tss = {
.sp0 = TOP_OF_INIT_STACK,
#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index feaab07fa124..6b55012d02a3 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -114,6 +114,7 @@
#include <asm/microcode.h>
#include <asm/mmu_context.h>
#include <asm/kaslr.h>
+#include <asm/kaiser.h>
/*
* max_low_pfn_mapped: highest direct mapped pfn under 4GB
@@ -1019,6 +1020,12 @@ void __init setup_arch(char **cmdline_p)
*/
init_hypervisor_platform();
+ /*
+ * This needs to happen right after XENPV is set on xen and
+ * kaiser_enabled is checked below in cleanup_highmap().
+ */
+ kaiser_check_boottime_disable();
+
x86_init.resources.probe_roms();
/* after parse_early_param, so could debug it */
diff --git a/arch/x86/kernel/tracepoint.c b/arch/x86/kernel/tracepoint.c
index 1c113db9ed57..2bb5ee464df3 100644
--- a/arch/x86/kernel/tracepoint.c
+++ b/arch/x86/kernel/tracepoint.c
@@ -9,10 +9,12 @@
#include <linux/atomic.h>
atomic_t trace_idt_ctr = ATOMIC_INIT(0);
+__aligned(PAGE_SIZE)
struct desc_ptr trace_idt_descr = { NR_VECTORS * 16 - 1,
(unsigned long) trace_idt_table };
/* No need to be aligned, but done to keep all IDTs defined the same way. */
+__aligned(PAGE_SIZE)
gate_desc trace_idt_table[NR_VECTORS] __page_aligned_bss;
static int trace_irq_vector_refcount;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7e28e6c877d9..73304b1a03cc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -773,7 +773,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
return 1;
/* PCID can not be enabled when cr3[11:0]!=000H or EFER.LMA=0 */
- if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_MASK) || !is_long_mode(vcpu))
+ if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_ASID_MASK) ||
+ !is_long_mode(vcpu))
return 1;
}
diff --git a/arch/x86/lib/cmdline.c b/arch/x86/lib/cmdline.c
index 5cc78bf57232..3261abb21ef4 100644
--- a/arch/x86/lib/cmdline.c
+++ b/arch/x86/lib/cmdline.c
@@ -104,7 +104,112 @@ __cmdline_find_option_bool(const char *cmdline, int max_cmdline_size,
return 0; /* Buffer overrun */
}
+/*
+ * Find a non-boolean option (i.e. option=argument). In accordance with
+ * standard Linux practice, if this option is repeated, this returns the
+ * last instance on the command line.
+ *
+ * @cmdline: the cmdline string
+ * @max_cmdline_size: the maximum size of cmdline
+ * @option: option string to look for
+ * @buffer: memory buffer to return the option argument
+ * @bufsize: size of the supplied memory buffer
+ *
+ * Returns the length of the argument (regardless of whether it was
+ * truncated to fit in the buffer), or -1 if not found.
+ */
+static int
+__cmdline_find_option(const char *cmdline, int max_cmdline_size,
+ const char *option, char *buffer, int bufsize)
+{
+ char c;
+ int pos = 0, len = -1;
+ const char *opptr = NULL;
+ char *bufptr = buffer;
+ enum {
+ st_wordstart = 0, /* Start of word/after whitespace */
+ st_wordcmp, /* Comparing this word */
+ st_wordskip, /* Miscompare, skip */
+ st_bufcpy, /* Copying this to buffer */
+ } state = st_wordstart;
+
+ if (!cmdline)
+ return -1; /* No command line */
+
+ /*
+ * This 'pos' check ensures we do not overrun
+ * a non-NULL-terminated 'cmdline'
+ */
+ while (pos++ < max_cmdline_size) {
+ c = *(char *)cmdline++;
+ if (!c)
+ break;
+
+ switch (state) {
+ case st_wordstart:
+ if (myisspace(c))
+ break;
+
+ state = st_wordcmp;
+ opptr = option;
+ /* fall through */
+
+ case st_wordcmp:
+ if ((c == '=') && !*opptr) {
+ /*
+ * We matched all the way to the end of the
+ * option we were looking for, prepare to
+ * copy the argument.
+ */
+ len = 0;
+ bufptr = buffer;
+ state = st_bufcpy;
+ break;
+ } else if (c == *opptr++) {
+ /*
+ * We are currently matching, so continue
+ * to the next character on the cmdline.
+ */
+ break;
+ }
+ state = st_wordskip;
+ /* fall through */
+
+ case st_wordskip:
+ if (myisspace(c))
+ state = st_wordstart;
+ break;
+
+ case st_bufcpy:
+ if (myisspace(c)) {
+ state = st_wordstart;
+ } else {
+ /*
+ * Increment len, but don't overrun the
+ * supplied buffer and leave room for the
+ * NULL terminator.
+ */
+ if (++len < bufsize)
+ *bufptr++ = c;
+ }
+ break;
+ }
+ }
+
+ if (bufsize)
+ *bufptr = '\0';
+
+ return len;
+}
+
int cmdline_find_option_bool(const char *cmdline, const char *option)
{
return __cmdline_find_option_bool(cmdline, COMMAND_LINE_SIZE, option);
}
+
+int cmdline_find_option(const char *cmdline, const char *option, char *buffer,
+ int bufsize)
+{
+ return __cmdline_find_option(cmdline, COMMAND_LINE_SIZE, option,
+ buffer, bufsize);
+}
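In miniature, the intended use of the new helper looks like this; the real
caller is kaiser_check_boottime_disable() in arch/x86/mm/kaiser.c below, and
the function name here is made up:

/* Hypothetical sketch: did the user pass "pti=off"? */
static bool __init pti_disabled_on_cmdline(void)
{
	char arg[5];
	int ret = cmdline_find_option(boot_command_line, "pti",
				      arg, sizeof(arg));

	/* ret is the argument length, or -1 if "pti=" never appeared */
	return ret > 0 && !strncmp(arg, "off", 3);
}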
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 96d2b847e09e..c548b46100cb 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -37,5 +37,5 @@ obj-$(CONFIG_NUMA_EMU) += numa_emulation.o
obj-$(CONFIG_X86_INTEL_MPX) += mpx.o
obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
-obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
-
+obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_PAGE_TABLE_ISOLATION) += kaiser.o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 0381638168d1..1e779bca4f3e 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -177,7 +177,7 @@ static void __init probe_page_size_mask(void)
cr4_set_bits_and_update_boot(X86_CR4_PSE);
/* Enable PGE if available */
- if (boot_cpu_has(X86_FEATURE_PGE)) {
+ if (boot_cpu_has(X86_FEATURE_PGE) && !kaiser_enabled) {
cr4_set_bits_and_update_boot(X86_CR4_PGE);
__supported_pte_mask |= _PAGE_GLOBAL;
} else
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3e27ded6ac65..7df8e3a79dc0 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -324,6 +324,16 @@ void __init cleanup_highmap(void)
continue;
if (vaddr < (unsigned long) _text || vaddr > end)
set_pmd(pmd, __pmd(0));
+ else if (kaiser_enabled) {
+ /*
+ * level2_kernel_pgt is initialized with _PAGE_GLOBAL:
+ * clear that now. This is not important, so long as
+ * CR4.PGE remains clear, but it removes an anomaly.
+ * Physical mapping setup below avoids _PAGE_GLOBAL
+ * by use of massage_pgprot() inside pfn_pte() etc.
+ */
+ set_pmd(pmd, pmd_clear_flags(*pmd, _PAGE_GLOBAL));
+ }
}
}
diff --git a/arch/x86/mm/kaiser.c b/arch/x86/mm/kaiser.c
new file mode 100644
index 000000000000..d8376b4ad9f0
--- /dev/null
+++ b/arch/x86/mm/kaiser.c
@@ -0,0 +1,455 @@
+#include <linux/bug.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/bug.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/mm.h>
+#include <linux/uaccess.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
+
+#include <asm/kaiser.h>
+#include <asm/tlbflush.h> /* to verify its kaiser declarations */
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/desc.h>
+#include <asm/cmdline.h>
+
+int kaiser_enabled __read_mostly = 1;
+EXPORT_SYMBOL(kaiser_enabled); /* for inlined TLB flush functions */
+
+__visible
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+
+/*
+ * These can have bit 63 set, so we can not just use a plain "or"
+ * instruction to get their value or'd into CR3. It would take
+ * another register. So, we use a memory reference to these instead.
+ *
+ * This is also handy because systems that do not support PCIDs
+ * just end up or'ing a 0 into their CR3, which does no harm.
+ */
+DEFINE_PER_CPU(unsigned long, x86_cr3_pcid_user);
+
+/*
+ * At runtime, the only things we map are some things for CPU
+ * hotplug, and stacks for new processes. No two CPUs will ever
+ * be populating the same addresses, so we only need to ensure
+ * that we protect between two CPUs trying to allocate and
+ * populate the same page table page.
+ *
+ * Only take this lock when doing a set_p[4um]d(), but it is not
+ * needed for doing a set_pte(). We assume that only the *owner*
+ * of a given allocation will be doing this for _their_
+ * allocation.
+ *
+ * This ensures that once a system has been running for a while
+ * and there have been stacks all over and these page tables
+ * are fully populated, there will be no further acquisitions of
+ * this lock.
+ */
+static DEFINE_SPINLOCK(shadow_table_allocation_lock);
+
+/*
+ * Returns -1 on error.
+ */
+static inline unsigned long get_pa_from_mapping(unsigned long vaddr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pgd = pgd_offset_k(vaddr);
+ /*
+ * We made all the kernel PGDs present in kaiser_init().
+ * We expect them to stay that way.
+ */
+ BUG_ON(pgd_none(*pgd));
+ /*
+ * PGDs are either 512GB or 128TB on all x86_64
+ * configurations. We don't handle these.
+ */
+ BUG_ON(pgd_large(*pgd));
+
+ pud = pud_offset(pgd, vaddr);
+ if (pud_none(*pud)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
+
+ if (pud_large(*pud))
+ return (pud_pfn(*pud) << PAGE_SHIFT) | (vaddr & ~PUD_PAGE_MASK);
+
+ pmd = pmd_offset(pud, vaddr);
+ if (pmd_none(*pmd)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
+
+ if (pmd_large(*pmd))
+ return (pmd_pfn(*pmd) << PAGE_SHIFT) | (vaddr & ~PMD_PAGE_MASK);
+
+ pte = pte_offset_kernel(pmd, vaddr);
+ if (pte_none(*pte)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
+
+ return (pte_pfn(*pte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
+}
+
+/*
+ * This is a relatively normal page table walk, except that it
+ * also tries to allocate page tables pages along the way.
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+static pte_t *kaiser_pagetable_walk(unsigned long address)
+{
+ pmd_t *pmd;
+ pud_t *pud;
+ pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
+ gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+
+ if (pgd_none(*pgd)) {
+ WARN_ONCE(1, "All shadow pgds should have been populated");
+ return NULL;
+ }
+ BUILD_BUG_ON(pgd_large(*pgd) != 0);
+
+ pud = pud_offset(pgd, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pud_large(*pud)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pud_none(*pud)) {
+ unsigned long new_pmd_page = __get_free_page(gfp);
+ if (!new_pmd_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pud_none(*pud)) {
+ set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pmd_page), NR_KAISERTABLE);
+ } else
+ free_page(new_pmd_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
+
+ pmd = pmd_offset(pud, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pmd_large(*pmd)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pmd_none(*pmd)) {
+ unsigned long new_pte_page = __get_free_page(gfp);
+ if (!new_pte_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pmd_none(*pmd)) {
+ set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pte_page), NR_KAISERTABLE);
+ } else
+ free_page(new_pte_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
+
+ return pte_offset_kernel(pmd, address);
+}
+
+static int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+ unsigned long flags)
+{
+ int ret = 0;
+ pte_t *pte;
+ unsigned long start_addr = (unsigned long )__start_addr;
+ unsigned long address = start_addr & PAGE_MASK;
+ unsigned long end_addr = PAGE_ALIGN(start_addr + size);
+ unsigned long target_address;
+
+ /*
+ * It is convenient for callers to pass in __PAGE_KERNEL etc,
+ * and there is no actual harm from setting _PAGE_GLOBAL, so
+ * long as CR4.PGE is not set. But it is nonetheless troubling
+ * to see Kaiser itself setting _PAGE_GLOBAL (now that "nokaiser"
+ * requires that not to be #defined to 0): so mask it off here.
+ */
+ flags &= ~_PAGE_GLOBAL;
+
+ for (; address < end_addr; address += PAGE_SIZE) {
+ target_address = get_pa_from_mapping(address);
+ if (target_address == -1) {
+ ret = -EIO;
+ break;
+ }
+ pte = kaiser_pagetable_walk(address);
+ if (!pte) {
+ ret = -ENOMEM;
+ break;
+ }
+ if (pte_none(*pte)) {
+ set_pte(pte, __pte(flags | target_address));
+ } else {
+ pte_t tmp;
+ set_pte(&tmp, __pte(flags | target_address));
+ WARN_ON_ONCE(!pte_same(*pte, tmp));
+ }
+ }
+ return ret;
+}
+
+static int kaiser_add_user_map_ptrs(const void *start, const void *end, unsigned long flags)
+{
+ unsigned long size = end - start;
+
+ return kaiser_add_user_map(start, size, flags);
+}
+
+/*
+ * Ensure that the top level of the (shadow) page tables are
+ * entirely populated. This ensures that all processes that get
+ * forked have the same entries. This way, we do not have to
+ * ever go set up new entries in older processes.
+ *
+ * Note: we never free these, so there are no updates to them
+ * after this.
+ */
+static void __init kaiser_init_all_pgds(void)
+{
+ pgd_t *pgd;
+ int i = 0;
+
+ pgd = native_get_shadow_pgd(pgd_offset_k((unsigned long )0));
+ for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
+ pgd_t new_pgd;
+ pud_t *pud = pud_alloc_one(&init_mm,
+ PAGE_OFFSET + i * PGDIR_SIZE);
+ if (!pud) {
+ WARN_ON(1);
+ break;
+ }
+ inc_zone_page_state(virt_to_page(pud), NR_KAISERTABLE);
+ new_pgd = __pgd(_KERNPG_TABLE |__pa(pud));
+ /*
+ * Make sure not to stomp on some other pgd entry.
+ */
+ if (!pgd_none(pgd[i])) {
+ WARN_ON(1);
+ continue;
+ }
+ set_pgd(pgd + i, new_pgd);
+ }
+}
+
+#define kaiser_add_user_map_early(start, size, flags) do { \
+ int __ret = kaiser_add_user_map(start, size, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
+#define kaiser_add_user_map_ptrs_early(start, end, flags) do { \
+ int __ret = kaiser_add_user_map_ptrs(start, end, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
+void __init kaiser_check_boottime_disable(void)
+{
+ bool enable = true;
+ char arg[5];
+ int ret;
+
+ if (boot_cpu_has(X86_FEATURE_XENPV))
+ goto silent_disable;
+
+ ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
+ if (ret > 0) {
+ if (!strncmp(arg, "on", 2))
+ goto enable;
+
+ if (!strncmp(arg, "off", 3))
+ goto disable;
+
+ if (!strncmp(arg, "auto", 4))
+ goto skip;
+ }
+
+ if (cmdline_find_option_bool(boot_command_line, "nopti"))
+ goto disable;
+
+skip:
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+ goto disable;
+
+enable:
+ if (enable)
+ setup_force_cpu_cap(X86_FEATURE_KAISER);
+
+ return;
+
+disable:
+ pr_info("disabled\n");
+
+silent_disable:
+ kaiser_enabled = 0;
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
+}
+
+/*
+ * If anything in here fails, we will likely die on one of the
+ * first kernel->user transitions and init will die. But, we
+ * will have most of the kernel up by then and should be able to
+ * get a clean warning out of it. If we BUG_ON() here, we run
+ * the risk of being before we have good console output.
+ */
+void __init kaiser_init(void)
+{
+ int cpu;
+
+ if (!kaiser_enabled)
+ return;
+
+ kaiser_init_all_pgds();
+
+ for_each_possible_cpu(cpu) {
+ void *percpu_vaddr = __per_cpu_user_mapped_start +
+ per_cpu_offset(cpu);
+ unsigned long percpu_sz = __per_cpu_user_mapped_end -
+ __per_cpu_user_mapped_start;
+ kaiser_add_user_map_early(percpu_vaddr, percpu_sz,
+ __PAGE_KERNEL);
+ }
+
+ /*
+ * Map the entry/exit text section, which is needed at
+ * switches from user to and from kernel.
+ */
+ kaiser_add_user_map_ptrs_early(__entry_text_start, __entry_text_end,
+ __PAGE_KERNEL_RX);
+
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN)
+ kaiser_add_user_map_ptrs_early(__irqentry_text_start,
+ __irqentry_text_end,
+ __PAGE_KERNEL_RX);
+#endif
+ kaiser_add_user_map_early((void *)idt_descr.address,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL_RO);
+#ifdef CONFIG_TRACING
+ kaiser_add_user_map_early(&trace_idt_descr,
+ sizeof(trace_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&trace_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
+#endif
+ kaiser_add_user_map_early(&debug_idt_descr, sizeof(debug_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&debug_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
+
+ pr_info("enabled\n");
+}
+
+/* Add a mapping to the shadow mapping, and synchronize the mappings */
+int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+{
+ if (!kaiser_enabled)
+ return 0;
+ return kaiser_add_user_map((const void *)addr, size, flags);
+}
+
+void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+ extern void unmap_pud_range_nofree(pgd_t *pgd,
+ unsigned long start, unsigned long end);
+ unsigned long end = start + size;
+ unsigned long addr, next;
+ pgd_t *pgd;
+
+ if (!kaiser_enabled)
+ return;
+ pgd = native_get_shadow_pgd(pgd_offset_k(start));
+ for (addr = start; addr < end; pgd++, addr = next) {
+ next = pgd_addr_end(addr, end);
+ unmap_pud_range_nofree(pgd, addr, next);
+ }
+}
+
+/*
+ * Page table pages are page-aligned. The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ * This returns true for user pages that need to get copied into
+ * both the user and kernel copies of the page tables, and false
+ * for kernel pages that should only be in the kernel copy.
+ */
+static inline bool is_userspace_pgd(pgd_t *pgdp)
+{
+ return ((unsigned long)pgdp % PAGE_SIZE) < (PAGE_SIZE / 2);
+}
+
+pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ if (!kaiser_enabled)
+ return pgd;
+ /*
+ * Do we need to also populate the shadow pgd? Check _PAGE_USER to
+ * skip cases like kexec and EFI which make temporary low mappings.
+ */
+ if (pgd.pgd & _PAGE_USER) {
+ if (is_userspace_pgd(pgdp)) {
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ /*
+ * Even if the entry is *mapping* userspace, ensure
+ * that userspace can not use it. This way, if we
+ * get out to userspace running on the kernel CR3,
+ * userspace will crash instead of running.
+ */
+ if (__supported_pte_mask & _PAGE_NX)
+ pgd.pgd |= _PAGE_NX;
+ }
+ } else if (!pgd.pgd) {
+ /*
+ * pgd_clear() cannot check _PAGE_USER, and is even used to
+ * clear corrupted pgd entries: so just rely on cases like
+ * kexec and EFI never to be using pgd_clear().
+ */
+ if (!WARN_ON_ONCE((unsigned long)pgdp & PAGE_SIZE) &&
+ is_userspace_pgd(pgdp))
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ }
+ return pgd;
+}
+
+void kaiser_setup_pcid(void)
+{
+ unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
+
+ if (this_cpu_has(X86_FEATURE_PCID))
+ user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;
+ /*
+ * This variable is used by the entry/exit
+ * code to change PCID and pgd and to decide on TLB flushing.
+ */
+ this_cpu_write(x86_cr3_pcid_user, user_cr3);
+}
+
+/*
+ * Make a note that this CPU will need to flush the USER TLB on return
+ * to user. If the CPU does not have PCID, then the NOFLUSH bit will
+ * never have been set.
+ */
+void kaiser_flush_tlb_on_return_to_user(void)
+{
+ if (this_cpu_has(X86_FEATURE_PCID))
+ this_cpu_write(x86_cr3_pcid_user,
+ X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
+}
+EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
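
A side note, not part of the patch: kaiser_set_shadow_pgd() above is the subtle heart of the shadow scheme, so here is a minimal C model of its poisoning logic. The struct and helper names are made up for illustration, the pgd_none()/clear path is omitted, and the real code additionally checks __supported_pte_mask before setting NX.

	#include <stdint.h>

	#define _PAGE_USER	(1ULL << 2)
	#define _PAGE_NX	(1ULL << 63)

	/* Hypothetical stand-in for the two 4k halves of an 8k pgd pair. */
	struct pgd_pair {
		uint64_t kernel_half;	/* live while running in the kernel */
		uint64_t shadow_half;	/* live while running user code     */
	};

	static void model_set_shadow_pgd(struct pgd_pair *p, uint64_t entry)
	{
		if (entry & _PAGE_USER) {
			/* Mirror the user mapping into the shadow half... */
			p->shadow_half = entry;
			/* ...and poison the kernel half with NX, so user
			 * code faults at once if it ever runs on the
			 * kernel CR3. */
			entry |= _PAGE_NX;
		}
		p->kernel_half = entry;
	}
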
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index aed206475aa7..319183d93602 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -189,6 +189,6 @@ void __meminit init_trampoline(void)
*pud_tramp = *pud;
}
- set_pgd(&trampoline_pgd_entry,
- __pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
+ /* Avoid set_pgd(), in case it's complicated by CONFIG_PAGE_TABLE_ISOLATION */
+ trampoline_pgd_entry = __pgd(_KERNPG_TABLE | __pa(pud_page_tramp));
}
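
An aside on the "Avoid set_pgd()" comment just above (editorial; the reasoning is inferred from this patch, not stated by its author): with CONFIG_PAGE_TABLE_ISOLATION, native_set_pgd() becomes

	*pgdp = kaiser_set_shadow_pgd(pgdp, pgd);

(see the pgtable_64.h hunk later in this patch), and kaiser_set_shadow_pgd() locates the shadow entry by flipping bit 12 of the pointer, which presumes an 8k kernel/shadow pgd pair. The standalone trampoline_pgd_entry has no shadow half, so the direct assignment sidesteps that assumption entirely.
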
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e3353c97d086..73dcb0e18c1b 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -52,6 +52,7 @@ static DEFINE_SPINLOCK(cpa_lock);
#define CPA_FLUSHTLB 1
#define CPA_ARRAY 2
#define CPA_PAGES_ARRAY 4
+#define CPA_FREE_PAGETABLES 8
#ifdef CONFIG_PROC_FS
static unsigned long direct_pages_count[PG_LEVEL_NUM];
@@ -729,10 +730,13 @@ static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
return 0;
}
-static bool try_to_free_pte_page(pte_t *pte)
+static bool try_to_free_pte_page(struct cpa_data *cpa, pte_t *pte)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PTE; i++)
if (!pte_none(pte[i]))
return false;
@@ -741,10 +745,13 @@ static bool try_to_free_pte_page(pte_t *pte)
return true;
}
-static bool try_to_free_pmd_page(pmd_t *pmd)
+static bool try_to_free_pmd_page(struct cpa_data *cpa, pmd_t *pmd)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PMD; i++)
if (!pmd_none(pmd[i]))
return false;
@@ -753,7 +760,9 @@ static bool try_to_free_pmd_page(pmd_t *pmd)
return true;
}
-static bool unmap_pte_range(pmd_t *pmd, unsigned long start, unsigned long end)
+static bool unmap_pte_range(struct cpa_data *cpa, pmd_t *pmd,
+ unsigned long start,
+ unsigned long end)
{
pte_t *pte = pte_offset_kernel(pmd, start);
@@ -764,22 +773,23 @@ static bool unmap_pte_range(pmd_t *pmd, unsigned long start, unsigned long end)
pte++;
}
- if (try_to_free_pte_page((pte_t *)pmd_page_vaddr(*pmd))) {
+ if (try_to_free_pte_page(cpa, (pte_t *)pmd_page_vaddr(*pmd))) {
pmd_clear(pmd);
return true;
}
return false;
}
-static void __unmap_pmd_range(pud_t *pud, pmd_t *pmd,
+static void __unmap_pmd_range(struct cpa_data *cpa, pud_t *pud, pmd_t *pmd,
unsigned long start, unsigned long end)
{
- if (unmap_pte_range(pmd, start, end))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (unmap_pte_range(cpa, pmd, start, end))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
+static void unmap_pmd_range(struct cpa_data *cpa, pud_t *pud,
+ unsigned long start, unsigned long end)
{
pmd_t *pmd = pmd_offset(pud, start);
@@ -790,7 +800,7 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
unsigned long next_page = (start + PMD_SIZE) & PMD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- __unmap_pmd_range(pud, pmd, start, pre_end);
+ __unmap_pmd_range(cpa, pud, pmd, start, pre_end);
start = pre_end;
pmd++;
@@ -803,7 +813,8 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
if (pmd_large(*pmd))
pmd_clear(pmd);
else
- __unmap_pmd_range(pud, pmd, start, start + PMD_SIZE);
+ __unmap_pmd_range(cpa, pud, pmd,
+ start, start + PMD_SIZE);
start += PMD_SIZE;
pmd++;
@@ -813,17 +824,19 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
* 4K leftovers?
*/
if (start < end)
- return __unmap_pmd_range(pud, pmd, start, end);
+ return __unmap_pmd_range(cpa, pud, pmd, start, end);
/*
* Try again to free the PMD page if haven't succeeded above.
*/
if (!pud_none(*pud))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+static void __unmap_pud_range(struct cpa_data *cpa, pgd_t *pgd,
+ unsigned long start,
+ unsigned long end)
{
pud_t *pud = pud_offset(pgd, start);
@@ -834,7 +847,7 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
unsigned long next_page = (start + PUD_SIZE) & PUD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- unmap_pmd_range(pud, start, pre_end);
+ unmap_pmd_range(cpa, pud, start, pre_end);
start = pre_end;
pud++;
@@ -848,7 +861,7 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
if (pud_large(*pud))
pud_clear(pud);
else
- unmap_pmd_range(pud, start, start + PUD_SIZE);
+ unmap_pmd_range(cpa, pud, start, start + PUD_SIZE);
start += PUD_SIZE;
pud++;
@@ -858,7 +871,7 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
* 2M leftovers?
*/
if (start < end)
- unmap_pmd_range(pud, start, end);
+ unmap_pmd_range(cpa, pud, start, end);
/*
* No need to try to free the PUD page because we'll free it in
@@ -866,6 +879,24 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
*/
}
+static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = CPA_FREE_PAGETABLES,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
+void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = 0,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
static int alloc_pte_page(pmd_t *pmd)
{
pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
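
Not part of the patch, but worth spelling out: the split above leaves CPA with two unmap entry points over a single walker, differing only in CPA_FREE_PAGETABLES. A compressed sketch of the gate, with a hedged guess at the motivation:

	/* unmap_pud_range() keeps the old behaviour (reclaim emptied
	 * page-table pages); unmap_pud_range_nofree() is for KAISER's
	 * kaiser_remove_mapping(), which presumably must not free
	 * shadow page-table pages that can still be reached from other
	 * shadow pgd entries. */
	static bool may_free_pagetable(const struct cpa_data *cpa)
	{
		return cpa->flags & CPA_FREE_PAGETABLES;
	}
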
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 3feec5af4e67..5aaec8effc5f 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -344,14 +344,22 @@ static inline void _pgd_free(pgd_t *pgd)
kmem_cache_free(pgd_cache, pgd);
}
#else
+
+/*
+ * Instead of one pgd, Kaiser acquires two pgds. Being order-1, it is
+ * both 8k in size and 8k-aligned. That lets us just flip bit 12
+ * in a pointer to swap between the two 4k halves.
+ */
+#define PGD_ALLOCATION_ORDER kaiser_enabled
+
static inline pgd_t *_pgd_alloc(void)
{
- return (pgd_t *)__get_free_page(PGALLOC_GFP);
+ return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
}
static inline void _pgd_free(pgd_t *pgd)
{
- free_page((unsigned long)pgd);
+ free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
}
#endif /* CONFIG_X86_PAE */
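
To make the comment above concrete (editorial sketch, standalone userspace C): with kaiser_enabled == 1 the pgd allocation is order-1, i.e. 8k in size and 8k-aligned, so the shadow copy of any pgd pointer is reachable by setting bit 12, exactly what native_get_shadow_pgd() later in this patch does with "| PAGE_SIZE":

	#include <assert.h>
	#include <stdint.h>

	#define PAGE_SIZE 4096ULL

	int main(void)
	{
		uint64_t kpgd = 0xffff880012344000ULL;	/* 8k-aligned pgd */
		uint64_t spgd = kpgd | PAGE_SIZE;	/* flip bit 12    */

		assert(spgd == 0xffff880012345000ULL);
		/* An entry pointer keeps its slot offset in both halves: */
		assert((kpgd + 8 * 300) % PAGE_SIZE ==
		       (spgd + 8 * 300) % PAGE_SIZE);
		return 0;
	}
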
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 53b72fb4e781..41205de487e7 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -6,13 +6,14 @@
#include <linux/interrupt.h>
#include <linux/export.h>
#include <linux/cpu.h>
+#include <linux/debugfs.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/cache.h>
#include <asm/apic.h>
#include <asm/uv/uv.h>
-#include <linux/debugfs.h>
+#include <asm/kaiser.h>
/*
* TLB flushing, formerly SMP-only
@@ -34,6 +35,36 @@ struct flush_tlb_info {
unsigned long flush_end;
};
+static void load_new_mm_cr3(pgd_t *pgdir)
+{
+ unsigned long new_mm_cr3 = __pa(pgdir);
+
+ if (kaiser_enabled) {
+ /*
+ * We reuse the same PCID for different tasks, so we must
+ * flush all the entries for the PCID out when we change tasks.
+ * Flush KERN below, flush USER when returning to userspace in
+ * kaiser's SWITCH_USER_CR3 (_SWITCH_TO_USER_CR3) macro.
+ *
+ * invpcid_flush_single_context(X86_CR3_PCID_ASID_USER) could
+ * do it here, but can only be used if X86_FEATURE_INVPCID is
+ * available - and many machines support pcid without invpcid.
+ *
+ * If X86_CR3_PCID_KERN_FLUSH actually added something, then it
+ * would be needed in the write_cr3() below - if PCIDs enabled.
+ */
+ BUILD_BUG_ON(X86_CR3_PCID_KERN_FLUSH);
+ kaiser_flush_tlb_on_return_to_user();
+ }
+
+ /*
+ * Caution: many callers of this function expect
+ * that load_cr3() is serializing and orders TLB
+ * fills with respect to the mm_cpumask writes.
+ */
+ write_cr3(new_mm_cr3);
+}
+
/*
* We cannot call mmdrop() because we are in interrupt context,
* instead update mm->cpu_vm_mask.
@@ -45,7 +76,7 @@ void leave_mm(int cpu)
BUG();
if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
- load_cr3(swapper_pg_dir);
+ load_new_mm_cr3(swapper_pg_dir);
/*
* This gets called in the idle path where RCU
* functions differently. Tracing normally
@@ -120,7 +151,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
* ordering guarantee we need.
*
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
@@ -167,7 +198,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
* As above, load_cr3() is serializing and orders TLB
* fills with respect to the mm_cpumask write.
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
load_mm_cr4(next);
load_mm_ldt(next);
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index dc81e5287ebf..2e6000a4eb2c 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -778,7 +778,14 @@
*/
#define PERCPU_INPUT(cacheline) \
VMLINUX_SYMBOL(__per_cpu_start) = .; \
+ VMLINUX_SYMBOL(__per_cpu_user_mapped_start) = .; \
*(.data..percpu..first) \
+ . = ALIGN(cacheline); \
+ *(.data..percpu..user_mapped) \
+ *(.data..percpu..user_mapped..shared_aligned) \
+ . = ALIGN(PAGE_SIZE); \
+ *(.data..percpu..user_mapped..page_aligned) \
+ VMLINUX_SYMBOL(__per_cpu_user_mapped_end) = .; \
. = ALIGN(PAGE_SIZE); \
*(.data..percpu..page_aligned) \
. = ALIGN(cacheline); \
diff --git a/include/linux/kaiser.h b/include/linux/kaiser.h
new file mode 100644
index 000000000000..58c55b1589d0
--- /dev/null
+++ b/include/linux/kaiser.h
@@ -0,0 +1,52 @@
+#ifndef _LINUX_KAISER_H
+#define _LINUX_KAISER_H
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#include <asm/kaiser.h>
+
+static inline int kaiser_map_thread_stack(void *stack)
+{
+ /*
+ * Map that page of kernel stack on which we enter from user context.
+ */
+ return kaiser_add_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE, __PAGE_KERNEL);
+}
+
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+ /*
+ * Note: may be called even when kaiser_map_thread_stack() failed.
+ */
+ kaiser_remove_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE);
+}
+#else
+
+/*
+ * These stubs are used whenever CONFIG_PAGE_TABLE_ISOLATION is off, which
+ * includes architectures that support KAISER, but have it disabled.
+ */
+
+static inline void kaiser_init(void)
+{
+}
+static inline int kaiser_add_mapping(unsigned long addr,
+ unsigned long size, unsigned long flags)
+{
+ return 0;
+}
+static inline void kaiser_remove_mapping(unsigned long start,
+ unsigned long size)
+{
+}
+static inline int kaiser_map_thread_stack(void *stack)
+{
+ return 0;
+}
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+}
+
+#endif /* !CONFIG_PAGE_TABLE_ISOLATION */
+#endif /* _LINUX_KAISER_H */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fff21a82780c..490f5a83f947 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -124,8 +124,9 @@ enum zone_stat_item {
NR_SLAB_UNRECLAIMABLE,
NR_PAGETABLE, /* used for pagetables */
NR_KERNEL_STACK_KB, /* measured in KiB */
- /* Second 128 byte cacheline */
+ NR_KAISERTABLE,
NR_BOUNCE,
+ /* Second 128 byte cacheline */
#if IS_ENABLED(CONFIG_ZSMALLOC)
NR_ZSPAGES, /* allocated in zsmalloc */
#endif
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 8f16299ca068..8902f23bb770 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -35,6 +35,12 @@
#endif
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#define USER_MAPPED_SECTION "..user_mapped"
+#else
+#define USER_MAPPED_SECTION ""
+#endif
+
/*
* Base implementations of per-CPU variable declarations and definitions, where
* the section in which the variable is to be placed is provided by the
@@ -115,6 +121,12 @@
#define DEFINE_PER_CPU(type, name) \
DEFINE_PER_CPU_SECTION(type, name, "")
+#define DECLARE_PER_CPU_USER_MAPPED(type, name) \
+ DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION)
+
+#define DEFINE_PER_CPU_USER_MAPPED(type, name) \
+ DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION)
+
/*
* Declaration/definition used for per-CPU variables that must come first in
* the set of variables.
@@ -144,6 +156,14 @@
DEFINE_PER_CPU_SECTION(type, name, PER_CPU_SHARED_ALIGNED_SECTION) \
____cacheline_aligned_in_smp
+#define DECLARE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(type, name) \
+ DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION PER_CPU_SHARED_ALIGNED_SECTION) \
+ ____cacheline_aligned_in_smp
+
+#define DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(type, name) \
+ DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION PER_CPU_SHARED_ALIGNED_SECTION) \
+ ____cacheline_aligned_in_smp
+
#define DECLARE_PER_CPU_ALIGNED(type, name) \
DECLARE_PER_CPU_SECTION(type, name, PER_CPU_ALIGNED_SECTION) \
____cacheline_aligned
@@ -162,11 +182,21 @@
#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
DEFINE_PER_CPU_SECTION(type, name, "..page_aligned") \
__aligned(PAGE_SIZE)
+/*
+ * Declaration/definition used for per-CPU variables that must be page
+ * aligned and need to be mapped in user mode.
+ */
+#define DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name) \
+ DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION"..page_aligned") \
+ __aligned(PAGE_SIZE)
+
+#define DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name) \
+ DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION"..page_aligned") \
+ __aligned(PAGE_SIZE)
/*
* Declaration/definition used for per-CPU variables that must be read mostly.
*/
-#define DECLARE_PER_CPU_READ_MOSTLY(type, name) \
+#define DECLARE_PER_CPU_READ_MOSTLY(type, name) \
DECLARE_PER_CPU_SECTION(type, name, "..read_mostly")
#define DEFINE_PER_CPU_READ_MOSTLY(type, name) \
diff --git a/init/main.c b/init/main.c
index 25bac88bc66e..99f026565608 100644
--- a/init/main.c
+++ b/init/main.c
@@ -80,6 +80,7 @@
#include <linux/integrity.h>
#include <linux/proc_ns.h>
#include <linux/io.h>
+#include <linux/kaiser.h>
#include <asm/io.h>
#include <asm/bugs.h>
@@ -473,6 +474,7 @@ static void __init mm_init(void)
pgtable_init();
vmalloc_init();
ioremap_huge_init();
+ kaiser_init();
}
asmlinkage __visible void __init start_kernel(void)
diff --git a/kernel/fork.c b/kernel/fork.c
index 9321b1ad3335..70e10cb49be0 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -58,6 +58,7 @@
#include <linux/tsacct_kern.h>
#include <linux/cn_proc.h>
#include <linux/freezer.h>
+#include <linux/kaiser.h>
#include <linux/delayacct.h>
#include <linux/taskstats_kern.h>
#include <linux/random.h>
@@ -213,6 +214,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
static inline void free_thread_stack(struct task_struct *tsk)
{
+ kaiser_unmap_thread_stack(tsk->stack);
#ifdef CONFIG_VMAP_STACK
if (task_stack_vm_area(tsk)) {
unsigned long flags;
@@ -495,6 +497,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
* functions again.
*/
tsk->stack = stack;
+
+ err = kaiser_map_thread_stack(tsk->stack);
+ if (err)
+ goto free_stack;
#ifdef CONFIG_VMAP_STACK
tsk->stack_vm_area = stack_vm_area;
#endif
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 604f26a4f696..6a088df04b29 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -932,6 +932,7 @@ const char * const vmstat_text[] = {
"nr_slab_unreclaimable",
"nr_page_table_pages",
"nr_kernel_stack",
+ "nr_overhead",
"nr_bounce",
#if IS_ENABLED(CONFIG_ZSMALLOC)
"nr_zspages",
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 97f9cac98348..e86a34fd5484 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -843,6 +843,11 @@ static u32 bbr_sndbuf_expand(struct sock *sk)
*/
static u32 bbr_undo_cwnd(struct sock *sk)
{
+ struct bbr *bbr = inet_csk_ca(sk);
+
+ bbr->full_bw = 0; /* spurious slow-down; reset full pipe detection */
+ bbr->full_bw_cnt = 0;
+ bbr_reset_lt_bw_sampling(sk);
return tcp_sk(sk)->snd_cwnd;
}
diff --git a/security/Kconfig b/security/Kconfig
index 118f4549404e..32f36b40e9f0 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -31,6 +31,16 @@ config SECURITY
If you are unsure how to answer this question, answer N.
+config PAGE_TABLE_ISOLATION
+ bool "Remove the kernel mapping in user mode"
+ default y
+ depends on X86_64 && SMP
+ help
+ This enforces strict isolation between kernel and user address
+ spaces, in order to close hardware side channels on kernel
+ address information.
+
+ If you are unsure how to answer this question, answer Y.
+
config SECURITYFS
bool "Enable the securityfs filesystem"
help
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index a39629206864..f79669a38c0c 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -197,6 +197,9 @@
#define X86_FEATURE_AVX512_4VNNIW (7*32+16) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (7*32+17) /* AVX-512 Multiply Accumulation Single precision */
+/* Because the ALTERNATIVE scheme is for members of the X86_FEATURE club... */
+#define X86_FEATURE_KAISER ( 7*32+31) /* CONFIG_PAGE_TABLE_ISOLATION w/o nokaiser */
+
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index b4a83a490212..5977c4d71356 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2523,6 +2523,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
nojitter [IA-64] Disables jitter checking for ITC timers.
+ nopti [X86-64] Disable KAISER isolation of kernel from user.
+
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
no-kvmapf [X86,KVM] Disable paravirtualized asynchronous page
@@ -3054,6 +3056,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
pt. [PARIDE]
See Documentation/blockdev/paride.txt.
+ pti= [X86_64]
+ Control KAISER user/kernel address space isolation:
+ on - enable
+ off - disable
+ auto - default setting
+
pty.legacy_count=
[KNL] Number of legacy pty's. Overwrites compiled-in
default number.
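
For the record (editorial; derived from kaiser_check_boottime_disable() earlier in this patch): pti= takes precedence over nopti, and "auto" skips the nopti check before falling through to the vendor test. A userspace model of the policy, leaving out the Xen PV silent-disable case:

	#include <stdbool.h>
	#include <string.h>

	/* Hypothetical helper mirroring the decision order in the patch. */
	static bool kaiser_wanted(const char *pti_arg, bool nopti, bool amd_cpu)
	{
		if (pti_arg && !strcmp(pti_arg, "on"))
			return true;
		if (pti_arg && !strcmp(pti_arg, "off"))
			return false;
		if (!(pti_arg && !strcmp(pti_arg, "auto")) && nopti)
			return false;
		return !amd_cpu;	/* "auto": disabled on AMD CPUs */
	}
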
diff --git a/Makefile b/Makefile
index 5d67056e24dd..b028c106535b 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
VERSION = 4
PATCHLEVEL = 4
-SUBLEVEL = 109
+SUBLEVEL = 110
EXTRAVERSION =
NAME = Blurry Fish Butt
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 3783dc3e10b3..4abb284a5b9c 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -9,6 +9,7 @@
*/
#undef CONFIG_PARAVIRT
#undef CONFIG_PARAVIRT_SPINLOCKS
+#undef CONFIG_PAGE_TABLE_ISOLATION
#undef CONFIG_KASAN
#include <linux/linkage.h>
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index cc0f2f5da19b..952b23b5d4e9 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -35,6 +35,7 @@
#include <asm/asm.h>
#include <asm/smap.h>
#include <asm/pgtable_types.h>
+#include <asm/kaiser.h>
#include <linux/err.h>
/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */
@@ -135,6 +136,7 @@ ENTRY(entry_SYSCALL_64)
* it is too small to ever cause noticeable irq latency.
*/
SWAPGS_UNSAFE_STACK
+ SWITCH_KERNEL_CR3_NO_STACK
/*
* A hypervisor implementation might want to use a label
* after the swapgs, so that it can do the swapgs
@@ -207,9 +209,17 @@ entry_SYSCALL_64_fastpath:
testl $_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
jnz int_ret_from_sys_call_irqs_off /* Go to the slow path */
- RESTORE_C_REGS_EXCEPT_RCX_R11
movq RIP(%rsp), %rcx
movq EFLAGS(%rsp), %r11
+ RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
+ SWITCH_USER_CR3
movq RSP(%rsp), %rsp
/*
* 64-bit SYSRET restores rip from rcx,
@@ -347,10 +357,26 @@ GLOBAL(int_ret_from_sys_call)
syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
+ SWITCH_USER_CR3
movq RSP(%rsp), %rsp
USERGS_SYSRET64
opportunistic_sysret_failed:
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
+ SWITCH_USER_CR3
SWAPGS
jmp restore_c_regs_and_iret
END(entry_SYSCALL_64)
@@ -509,6 +535,7 @@ END(irq_entries_start)
* tracking that we're in kernel mode.
*/
SWAPGS
+ SWITCH_KERNEL_CR3
/*
* We need to tell lockdep that IRQs are off. We can't do this until
@@ -568,6 +595,7 @@ GLOBAL(retint_user)
mov %rsp,%rdi
call prepare_exit_to_usermode
TRACE_IRQS_IRETQ
+ SWITCH_USER_CR3
SWAPGS
jmp restore_regs_and_iret
@@ -625,6 +653,7 @@ native_irq_return_ldt:
pushq %rax
pushq %rdi
SWAPGS
+ SWITCH_KERNEL_CR3
movq PER_CPU_VAR(espfix_waddr), %rdi
movq %rax, (0*8)(%rdi) /* RAX */
movq (2*8)(%rsp), %rax /* RIP */
@@ -640,6 +669,7 @@ native_irq_return_ldt:
andl $0xffff0000, %eax
popq %rdi
orq PER_CPU_VAR(espfix_stack), %rax
+ SWITCH_USER_CR3
SWAPGS
movq %rax, %rsp
popq %rax
@@ -995,7 +1025,11 @@ idtentry machine_check has_error_code=0 paranoid=1 do_sym=*machine_check_vec
/*
* Save all registers in pt_regs, and switch gs if needed.
* Use slow, but surefire "are we in kernel?" check.
- * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ *
+ * Return: ebx=0: needs swapgs but not SWITCH_USER_CR3 in paranoid_exit
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3 in paranoid_exit
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3 in paranoid_exit
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs in paranoid_exit
*/
ENTRY(paranoid_entry)
cld
@@ -1008,7 +1042,26 @@ ENTRY(paranoid_entry)
js 1f /* negative -> in kernel */
SWAPGS
xorl %ebx, %ebx
-1: ret
+1:
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /*
+ * We might have come in between a swapgs and a SWITCH_KERNEL_CR3
+ * on entry, or between a SWITCH_USER_CR3 and a swapgs on exit.
+ * Do a conditional SWITCH_KERNEL_CR3: this could safely be done
+ * unconditionally, but we need to find out whether the reverse
+ * should be done on return (conveyed to paranoid_exit in %ebx).
+ */
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
+ testl $KAISER_SHADOW_PGD_OFFSET, %eax
+ jz 2f
+ orl $2, %ebx
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ /* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
+ movq %rax, %cr3
+2:
+#endif
+ ret
END(paranoid_entry)
/*
@@ -1021,19 +1074,26 @@ END(paranoid_entry)
 * be complicated. Fortunately, there's no good reason
* to try to handle preemption here.
*
- * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
+ * On entry: ebx=0: needs swapgs but not SWITCH_USER_CR3
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs
*/
ENTRY(paranoid_exit)
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF_DEBUG
- testl %ebx, %ebx /* swapgs needed? */
+ TRACE_IRQS_IRETQ_DEBUG
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /* No ALTERNATIVE for X86_FEATURE_KAISER: paranoid_entry sets %ebx */
+ testl $2, %ebx /* SWITCH_USER_CR3 needed? */
+ jz paranoid_exit_no_switch
+ SWITCH_USER_CR3
+paranoid_exit_no_switch:
+#endif
+ testl $1, %ebx /* swapgs needed? */
jnz paranoid_exit_no_swapgs
- TRACE_IRQS_IRETQ
SWAPGS_UNSAFE_STACK
- jmp paranoid_exit_restore
paranoid_exit_no_swapgs:
- TRACE_IRQS_IRETQ_DEBUG
-paranoid_exit_restore:
RESTORE_EXTRA_REGS
RESTORE_C_REGS
REMOVE_PT_GPREGS_FROM_STACK 8
@@ -1048,6 +1108,13 @@ ENTRY(error_entry)
cld
SAVE_C_REGS 8
SAVE_EXTRA_REGS 8
+ /*
+ * error_entry() always returns with a kernel gsbase and
+ * CR3. We must also have a kernel CR3/gsbase before
+ * calling TRACE_IRQS_*. Just unconditionally switch to
+ * the kernel CR3 here.
+ */
+ SWITCH_KERNEL_CR3
xorl %ebx, %ebx
testb $3, CS+8(%rsp)
jz .Lerror_kernelspace
@@ -1210,6 +1277,10 @@ ENTRY(nmi)
*/
SWAPGS_UNSAFE_STACK
+ /*
+ * percpu variables are mapped with user CR3, so no need
+ * to switch CR3 here.
+ */
cld
movq %rsp, %rdx
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
@@ -1243,12 +1314,34 @@ ENTRY(nmi)
movq %rsp, %rdi
movq $-1, %rsi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
+ pushq %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ movq %rax, %cr3
+2:
+#endif
call do_nmi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /*
+ * Unconditionally restore CR3. I know we return to
+ * kernel code that needs user CR3, but do we ever return
+ * to "user mode" where we need the kernel CR3?
+ */
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
+#endif
+
/*
* Return back to user mode. We must *not* do the normal exit
- * work, because we don't want to enable interrupts. Fortunately,
- * do_nmi doesn't modify pt_regs.
+ * work, because we don't want to enable interrupts. Do not
+ * switch to user CR3: we might be going back to kernel code
+ * that had a user CR3 set.
*/
SWAPGS
jmp restore_c_regs_and_iret
@@ -1445,22 +1538,55 @@ end_repeat_nmi:
ALLOC_PT_GPREGS_ON_STACK
/*
- * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
- * as we should not be calling schedule in NMI context.
- * Even with normal interrupts enabled. An NMI should not be
- * setting NEED_RESCHED or anything that normal interrupts and
- * exceptions might do.
+ * Use the same approach as paranoid_entry to handle SWAPGS, but
+ * without CR3 handling since we do that differently in NMIs. No
+ * need to use paranoid_exit as we should not be calling schedule
+ * in NMI context. Even with normal interrupts enabled. An NMI
+ * should not be setting NEED_RESCHED or anything that normal
+ * interrupts and exceptions might do.
*/
- call paranoid_entry
-
- /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
+ cld
+ SAVE_C_REGS
+ SAVE_EXTRA_REGS
+ movl $1, %ebx
+ movl $MSR_GS_BASE, %ecx
+ rdmsr
+ testl %edx, %edx
+ js 1f /* negative -> in kernel */
+ SWAPGS
+ xorl %ebx, %ebx
+1:
movq %rsp, %rdi
movq $-1, %rsi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
+ pushq %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ movq %rax, %cr3
+2:
+#endif
+
+ /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
call do_nmi
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ /*
+ * Unconditionally restore CR3. We might be returning to
+ * kernel code that needs user CR3, like just before
+ * a sysret.
+ */
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
+#endif
+
testl %ebx, %ebx /* swapgs needed? */
jnz nmi_restore
nmi_swapgs:
+ /* We fixed up CR3 above, so no need to switch it here */
SWAPGS_UNSAFE_STACK
nmi_restore:
RESTORE_EXTRA_REGS
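
The four-way %ebx contract above is easy to lose track of, so here is an editorial C model (the function pointers stand in for the SWAPGS and SWITCH_USER_CR3 asm; the bit meanings come from the paranoid_entry/paranoid_exit comments in this patch):

	/* bit 0 set: gsbase was already the kernel's, skip swapgs;
	 * bit 1 set: entry switched CR3, so exit must switch back. */
	#define PARANOID_GS_IS_KERNEL	0x1
	#define PARANOID_CR3_SWITCHED	0x2

	static void paranoid_exit_model(unsigned int ebx,
					void (*switch_user_cr3)(void),
					void (*swapgs)(void))
	{
		if (ebx & PARANOID_CR3_SWITCHED)	/* testl $2, %ebx */
			switch_user_cr3();
		if (!(ebx & PARANOID_GS_IS_KERNEL))	/* testl $1, %ebx */
			swapgs();
	}
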
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 15cfebaa7688..d03bf0e28b8b 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -13,6 +13,8 @@
#include <asm/irqflags.h>
#include <asm/asm.h>
#include <asm/smap.h>
+#include <asm/pgtable_types.h>
+#include <asm/kaiser.h>
#include <linux/linkage.h>
#include <linux/err.h>
@@ -50,6 +52,7 @@ ENDPROC(native_usergs_sysret32)
ENTRY(entry_SYSENTER_compat)
/* Interrupts are off on entry. */
SWAPGS_UNSAFE_STACK
+ SWITCH_KERNEL_CR3_NO_STACK
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/*
@@ -161,6 +164,7 @@ ENDPROC(entry_SYSENTER_compat)
ENTRY(entry_SYSCALL_compat)
/* Interrupts are off on entry. */
SWAPGS_UNSAFE_STACK
+ SWITCH_KERNEL_CR3_NO_STACK
/* Stash user ESP and switch to the kernel stack. */
movl %esp, %r8d
@@ -208,6 +212,7 @@ ENTRY(entry_SYSCALL_compat)
/* Opportunistic SYSRET */
sysret32_from_system_call:
TRACE_IRQS_ON /* User mode traces as IRQs on. */
+ SWITCH_USER_CR3
movq RBX(%rsp), %rbx /* pt_regs->rbx */
movq RBP(%rsp), %rbp /* pt_regs->rbp */
movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */
@@ -269,6 +274,7 @@ ENTRY(entry_INT80_compat)
PARAVIRT_ADJUST_EXCEPTION_FRAME
ASM_CLAC /* Do this early to minimize exposure */
SWAPGS
+ SWITCH_KERNEL_CR3_NO_STACK
/*
* User tracing code (ptrace or signal handlers) might assume that
@@ -311,6 +317,7 @@ ENTRY(entry_INT80_compat)
/* Go back to user mode. */
TRACE_IRQS_ON
+ SWITCH_USER_CR3
SWAPGS
jmp restore_regs_and_iret
END(entry_INT80_compat)
diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
index ca94fa649251..5dd363d54348 100644
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -36,6 +36,11 @@ static notrace cycle_t vread_hpet(void)
}
#endif
+#ifdef CONFIG_PARAVIRT_CLOCK
+extern u8 pvclock_page
+ __attribute__((visibility("hidden")));
+#endif
+
#ifndef BUILD_VDSO32
#include <linux/kernel.h>
@@ -62,63 +67,65 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
#ifdef CONFIG_PARAVIRT_CLOCK
-static notrace const struct pvclock_vsyscall_time_info *get_pvti(int cpu)
+static notrace const struct pvclock_vsyscall_time_info *get_pvti0(void)
{
- const struct pvclock_vsyscall_time_info *pvti_base;
- int idx = cpu / (PAGE_SIZE/PVTI_SIZE);
- int offset = cpu % (PAGE_SIZE/PVTI_SIZE);
-
- BUG_ON(PVCLOCK_FIXMAP_BEGIN + idx > PVCLOCK_FIXMAP_END);
-
- pvti_base = (struct pvclock_vsyscall_time_info *)
- __fix_to_virt(PVCLOCK_FIXMAP_BEGIN+idx);
-
- return &pvti_base[offset];
+ return (const struct pvclock_vsyscall_time_info *)&pvclock_page;
}
static notrace cycle_t vread_pvclock(int *mode)
{
- const struct pvclock_vsyscall_time_info *pvti;
+ const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
cycle_t ret;
- u64 last;
- u32 version;
- u8 flags;
- unsigned cpu, cpu1;
-
+ u64 tsc, pvti_tsc;
+ u64 last, delta, pvti_system_time;
+ u32 version, pvti_tsc_to_system_mul, pvti_tsc_shift;
/*
- * Note: hypervisor must guarantee that:
- * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
- * 2. that per-CPU pvclock time info is updated if the
- * underlying CPU changes.
- * 3. that version is increased whenever underlying CPU
- * changes.
+ * Note: The kernel and hypervisor must guarantee that cpu ID
+ * number maps 1:1 to per-CPU pvclock time info.
+ *
+ * Because the hypervisor is entirely unaware of guest userspace
+ * preemption, it cannot guarantee that per-CPU pvclock time
+ * info is updated if the underlying CPU changes or that that
+ * version is increased whenever the underlying CPU changes.
*
+ * On KVM, we are guaranteed that pvti updates for any vCPU are
+ * atomic as seen by *all* vCPUs. This is an even stronger
+ * guarantee than we get with a normal seqlock.
+ *
+ * On Xen, we don't appear to have that guarantee, but Xen still
+ * supplies a valid seqlock using the version field.
+ *
+ * We only do pvclock vdso timing at all if
+ * PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
+ * mean that all vCPUs have matching pvti and that the TSC is
+ * synced, so we can just look at vCPU 0's pvti.
*/
- do {
- cpu = __getcpu() & VGETCPU_CPU_MASK;
- /* TODO: We can put vcpu id into higher bits of pvti.version.
- * This will save a couple of cycles by getting rid of
- * __getcpu() calls (Gleb).
- */
-
- pvti = get_pvti(cpu);
-
- version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
-
- /*
- * Test we're still on the cpu as well as the version.
- * We could have been migrated just after the first
- * vgetcpu but before fetching the version, so we
- * wouldn't notice a version change.
- */
- cpu1 = __getcpu() & VGETCPU_CPU_MASK;
- } while (unlikely(cpu != cpu1 ||
- (pvti->pvti.version & 1) ||
- pvti->pvti.version != version));
-
- if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
+
+ if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
*mode = VCLOCK_NONE;
+ return 0;
+ }
+
+ do {
+ version = pvti->version;
+
+ /* This is also a read barrier, so we'll read version first. */
+ tsc = rdtsc_ordered();
+
+ pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
+ pvti_tsc_shift = pvti->tsc_shift;
+ pvti_system_time = pvti->system_time;
+ pvti_tsc = pvti->tsc_timestamp;
+
+ /* Make sure that the version double-check is last. */
+ smp_rmb();
+ } while (unlikely((version & 1) || version != pvti->version));
+
+ delta = tsc - pvti_tsc;
+ ret = pvti_system_time +
+ pvclock_scale_delta(delta, pvti_tsc_to_system_mul,
+ pvti_tsc_shift);
/* refer to tsc.c read_tsc() comment for rationale */
last = gtod->cycle_last;
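
An editorial gloss on the computation above: pvclock_scale_delta() applies the hypervisor-supplied shift to the TSC delta and then a 32.32 fixed-point multiply. A standalone sketch of the math (pvclock_scale_delta is the real kernel helper; this reimplementation is only illustrative):

	#include <stdint.h>

	static uint64_t scale_delta(uint64_t delta, uint32_t mul_frac,
				    int shift)
	{
		if (shift < 0)
			delta >>= -shift;
		else
			delta <<= shift;
		/* 64x32 multiply, keeping the top 64 of 96 bits: */
		return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
	}

	/* so the loop's result amounts to:
	 * ret = pvti_system_time + scale_delta(tsc - pvti_tsc,
	 *                                      pvti_tsc_to_system_mul,
	 *                                      pvti_tsc_shift); */
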
diff --git a/arch/x86/entry/vdso/vdso-layout.lds.S b/arch/x86/entry/vdso/vdso-layout.lds.S
index de2c921025f5..4158acc17df0 100644
--- a/arch/x86/entry/vdso/vdso-layout.lds.S
+++ b/arch/x86/entry/vdso/vdso-layout.lds.S
@@ -25,7 +25,7 @@ SECTIONS
* segment.
*/
- vvar_start = . - 2 * PAGE_SIZE;
+ vvar_start = . - 3 * PAGE_SIZE;
vvar_page = vvar_start;
/* Place all vvars at the offsets in asm/vvar.h. */
@@ -36,6 +36,7 @@ SECTIONS
#undef EMIT_VVAR
hpet_page = vvar_start + PAGE_SIZE;
+ pvclock_page = vvar_start + 2 * PAGE_SIZE;
. = SIZEOF_HEADERS;
diff --git a/arch/x86/entry/vdso/vdso2c.c b/arch/x86/entry/vdso/vdso2c.c
index 785d9922b106..491020b2826d 100644
--- a/arch/x86/entry/vdso/vdso2c.c
+++ b/arch/x86/entry/vdso/vdso2c.c
@@ -73,6 +73,7 @@ enum {
sym_vvar_start,
sym_vvar_page,
sym_hpet_page,
+ sym_pvclock_page,
sym_VDSO_FAKE_SECTION_TABLE_START,
sym_VDSO_FAKE_SECTION_TABLE_END,
};
@@ -80,6 +81,7 @@ enum {
const int special_pages[] = {
sym_vvar_page,
sym_hpet_page,
+ sym_pvclock_page,
};
struct vdso_sym {
@@ -91,6 +93,7 @@ struct vdso_sym required_syms[] = {
[sym_vvar_start] = {"vvar_start", true},
[sym_vvar_page] = {"vvar_page", true},
[sym_hpet_page] = {"hpet_page", true},
+ [sym_pvclock_page] = {"pvclock_page", true},
[sym_VDSO_FAKE_SECTION_TABLE_START] = {
"VDSO_FAKE_SECTION_TABLE_START", false
},
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 64df47148160..aa828191c654 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -100,6 +100,7 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
.name = "[vvar]",
.pages = no_pages,
};
+ struct pvclock_vsyscall_time_info *pvti;
if (calculate_addr) {
addr = vdso_addr(current->mm->start_stack,
@@ -169,6 +170,18 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
}
#endif
+ pvti = pvclock_pvti_cpu0_va();
+ if (pvti && image->sym_pvclock_page) {
+ ret = remap_pfn_range(vma,
+ text_start + image->sym_pvclock_page,
+ __pa(pvti) >> PAGE_SHIFT,
+ PAGE_SIZE,
+ PAGE_READONLY);
+
+ if (ret)
+ goto up_fail;
+ }
+
up_fail:
if (ret)
current->mm->context.vdso = NULL;
diff --git a/arch/x86/include/asm/cmdline.h b/arch/x86/include/asm/cmdline.h
index e01f7f7ccb0c..84ae170bc3d0 100644
--- a/arch/x86/include/asm/cmdline.h
+++ b/arch/x86/include/asm/cmdline.h
@@ -2,5 +2,7 @@
#define _ASM_X86_CMDLINE_H
int cmdline_find_option_bool(const char *cmdline_ptr, const char *option);
+int cmdline_find_option(const char *cmdline_ptr, const char *option,
+ char *buffer, int bufsize);
#endif /* _ASM_X86_CMDLINE_H */
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index f7ba9fbf12ee..f6605712ca90 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -187,6 +187,7 @@
#define X86_FEATURE_ARAT ( 7*32+ 1) /* Always Running APIC Timer */
#define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */
#define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
+#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 4) /* Effectively INVPCID && CR4.PCIDE=1 */
#define X86_FEATURE_PLN ( 7*32+ 5) /* Intel Power Limit Notification */
#define X86_FEATURE_PTS ( 7*32+ 6) /* Intel Package Thermal Status */
#define X86_FEATURE_DTHERM ( 7*32+ 7) /* Digital Thermal Sensor */
@@ -199,6 +200,9 @@
#define X86_FEATURE_HWP_PKG_REQ ( 7*32+14) /* Intel HWP_PKG_REQ */
#define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */
+/* Because the ALTERNATIVE scheme is for members of the X86_FEATURE club... */
+#define X86_FEATURE_KAISER ( 7*32+31) /* CONFIG_PAGE_TABLE_ISOLATION w/o nokaiser */
+
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index 4e10d73cf018..880db91d9457 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -43,7 +43,7 @@ struct gdt_page {
struct desc_struct gdt[GDT_ENTRIES];
} __attribute__((aligned(PAGE_SIZE)));
-DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page);
static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
{
diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
index 59caa55fb9b5..ee52ff858699 100644
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -187,7 +187,7 @@ extern char irq_entries_start[];
#define VECTOR_RETRIGGERED ((void *)~0UL)
typedef struct irq_desc* vector_irq_t[NR_VECTORS];
-DECLARE_PER_CPU(vector_irq_t, vector_irq);
+DECLARE_PER_CPU_USER_MAPPED(vector_irq_t, vector_irq);
#endif /* !ASSEMBLY_ */
diff --git a/arch/x86/include/asm/kaiser.h b/arch/x86/include/asm/kaiser.h
new file mode 100644
index 000000000000..802bbbdfe143
--- /dev/null
+++ b/arch/x86/include/asm/kaiser.h
@@ -0,0 +1,141 @@
+#ifndef _ASM_X86_KAISER_H
+#define _ASM_X86_KAISER_H
+
+#include <uapi/asm/processor-flags.h> /* For PCID constants */
+
+/*
+ * This file includes the definitions for the KAISER feature.
+ * KAISER is a countermeasure against x86_64 side channel attacks on
+ * the kernel virtual memory. It provides a shadow pgd for every process:
+ * the shadow pgd maps only a minimal set of kernel code and data, but
+ * includes the whole of user memory. On kernel entry (context switch,
+ * interrupt, exception), the pgd is switched to the normal one; when
+ * returning to user mode, the shadow pgd is enabled. This way, hardly
+ * any kernel addresses remain cached while user code runs, and user
+ * mode cannot probe the layout of kernel memory.
+ *
+ * A minimalistic kernel mapping holds the parts needed to be mapped in user
+ * mode, such as the entry/exit functions of the user space, or the stacks.
+ */
+
+#define KAISER_SHADOW_PGD_OFFSET 0x1000
+
+#ifdef __ASSEMBLY__
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+
+.macro _SWITCH_TO_KERNEL_CR3 reg
+movq %cr3, \reg
+andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
+/* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ALTERNATIVE "", "bts $63, \reg", X86_FEATURE_PCID
+movq \reg, %cr3
+.endm
+
+.macro _SWITCH_TO_USER_CR3 reg regb
+/*
+ * regb must be the low byte portion of reg: because we have arranged
+ * for the low byte of the user PCID to serve as the high byte of NOFLUSH
+ * (0x80 for each when PCID is enabled, or 0x00 when PCID and NOFLUSH are
+ * not enabled): so that the one register can update both memory and cr3.
+ */
+movq %cr3, \reg
+orq PER_CPU_VAR(x86_cr3_pcid_user), \reg
+js 9f
+/* If PCID enabled, FLUSH this time, reset to NOFLUSH for next time */
+movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)
+9:
+movq \reg, %cr3
+.endm
+
+.macro SWITCH_KERNEL_CR3
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
+_SWITCH_TO_KERNEL_CR3 %rax
+popq %rax
+8:
+.endm
+
+.macro SWITCH_USER_CR3
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
+_SWITCH_TO_USER_CR3 %rax %al
+popq %rax
+8:
+.endm
+
+.macro SWITCH_KERNEL_CR3_NO_STACK
+ALTERNATIVE "jmp 8f", \
+ __stringify(movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)), \
+ X86_FEATURE_KAISER
+_SWITCH_TO_KERNEL_CR3 %rax
+movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
+8:
+.endm
+
+#else /* CONFIG_PAGE_TABLE_ISOLATION */
+
+.macro SWITCH_KERNEL_CR3
+.endm
+.macro SWITCH_USER_CR3
+.endm
+.macro SWITCH_KERNEL_CR3_NO_STACK
+.endm
+
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+/*
+ * On a kernel/user mode switch, the address space may have to be
+ * switched before the registers have been stored on the stack. Changing
+ * the address space itself requires a scratch register, so one register
+ * has to be stashed in (and restored from) this backup location.
+ */
+DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+
+DECLARE_PER_CPU(unsigned long, x86_cr3_pcid_user);
+
+extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+
+extern int kaiser_enabled;
+extern void __init kaiser_check_boottime_disable(void);
+#else
+#define kaiser_enabled 0
+static inline void __init kaiser_check_boottime_disable(void) {}
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
+/*
+ * Kaiser function prototypes are needed even when CONFIG_PAGE_TABLE_ISOLATION is not set,
+ * so as to build with tests on kaiser_enabled instead of #ifdefs.
+ */
+
+/**
+ * kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
+ * @addr: the start address of the range
+ * @size: the size of the range
+ * @flags: The mapping flags of the pages
+ *
+ * The mapping is global, so no further synchronization is required.
+ * The pages have to be unmapped again manually when they are no
+ * longer needed.
+ */
+extern int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
+
+/**
+ * kaiser_remove_mapping - unmap a virtual memory part of the shadow mapping
+ * @start: the start address of the range
+ * @size: the size of the range
+ */
+extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
+
+/**
+ * kaiser_init - Initialize the shadow mapping
+ *
+ * Most parts of the shadow mapping can be set up at boot
+ * time. Only per-process things like the thread stacks
+ * or a new LDT have to be mapped at runtime. These boot-
+ * time mappings are permanent and never unmapped.
+ */
+extern void kaiser_init(void);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_X86_KAISER_H */
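
The _SWITCH_TO_USER_CR3 byte trick above deserves a worked example (editorial; the constants come from this patch, and the C below only models the asm). With PCID, x86_cr3_pcid_user normally holds NOFLUSH | user-ASID | shadow-offset = 0x8000000000001080: the orq then leaves bit 63 set, the js is taken, and CR3 is reloaded without a flush. Once kaiser_flush_tlb_on_return_to_user() clears NOFLUSH (storing 0x1080), the next exit sees a positive value, writes CR3 with bit 63 clear (flushing the user PCID exactly once), and the movb of the low result byte (0x80, the user ASID) into byte 7 of the variable re-arms NOFLUSH for later exits. Without PCID the variable is just 0x1000, the stored byte is 0x00, and every exit flushes, as the comment in the macro says.

	#include <stdint.h>

	#define NOFLUSH		(1ULL << 63)	/* X86_CR3_PCID_NOFLUSH     */
	#define USER_ASID	0x80ULL		/* X86_CR3_PCID_ASID_USER   */
	#define SHADOW_OFF	0x1000ULL	/* KAISER_SHADOW_PGD_OFFSET */

	static uint64_t x86_cr3_pcid_user = NOFLUSH | USER_ASID | SHADOW_OFF;

	static void queue_user_flush(void)	/* kaiser_flush_tlb_on_return_to_user() */
	{
		x86_cr3_pcid_user = USER_ASID | SHADOW_OFF;
	}

	static uint64_t switch_to_user_cr3(uint64_t kernel_cr3)
	{
		uint64_t cr3 = kernel_cr3 | x86_cr3_pcid_user;

		if (!(cr3 & NOFLUSH))			/* the "js 9f" not taken */
			x86_cr3_pcid_user |= NOFLUSH;	/* the movb to byte 7    */
		return cr3;	/* bit 63 clear exactly once per queued flush */
	}
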
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 6ec0c8b2e9df..84c62d950023 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -18,6 +18,12 @@
#ifndef __ASSEMBLY__
#include <asm/x86_init.h>
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern int kaiser_enabled;
+#else
+#define kaiser_enabled 0
+#endif
+
void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd);
void ptdump_walk_pgd_level_checkwx(void);
@@ -653,7 +659,17 @@ static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
static inline int pgd_bad(pgd_t pgd)
{
- return (pgd_flags(pgd) & ~_PAGE_USER) != _KERNPG_TABLE;
+ pgdval_t ignore_flags = _PAGE_USER;
+ /*
+ * We set NX on KAISER pgds that map userspace memory so
+ * that userspace can not meaningfully use the kernel
+ * page table by accident; it will fault on the first
+ * instruction it tries to run. See native_set_pgd().
+ */
+ if (kaiser_enabled)
+ ignore_flags |= _PAGE_NX;
+
+ return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
}
static inline int pgd_none(pgd_t pgd)
@@ -855,7 +871,15 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
*/
static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
{
- memcpy(dst, src, count * sizeof(pgd_t));
+ memcpy(dst, src, count * sizeof(pgd_t));
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ if (kaiser_enabled) {
+ /* Clone the shadow pgd part as well */
+ memcpy(native_get_shadow_pgd(dst),
+ native_get_shadow_pgd(src),
+ count * sizeof(pgd_t));
+ }
+#endif
}
#define PTE_SHIFT ilog2(PTRS_PER_PTE)
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 2ee781114d34..c810226e741a 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -106,9 +106,32 @@ static inline void native_pud_clear(pud_t *pud)
native_set_pud(pud, native_make_pud(0));
}
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd);
+
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
+{
+#ifdef CONFIG_DEBUG_VM
+ /* linux/mmdebug.h may not have been included at this point */
+ BUG_ON(!kaiser_enabled);
+#endif
+ return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
+}
+#else
+static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ return pgd;
+}
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
+{
+ BUILD_BUG_ON(1);
+ return NULL;
+}
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
{
- *pgdp = pgd;
+ *pgdp = kaiser_set_shadow_pgd(pgdp, pgd);
}
static inline void native_pgd_clear(pgd_t *pgd)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 79c91853e50e..8dba273da25a 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -89,7 +89,7 @@
#define _PAGE_NX (_AT(pteval_t, 0))
#endif
-#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
+#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
_PAGE_ACCESSED | _PAGE_DIRTY)
@@ -102,6 +102,33 @@
_PAGE_SOFT_DIRTY)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+/* The ASID is the lower 12 bits of CR3 */
+#define X86_CR3_PCID_ASID_MASK (_AC((1<<12)-1,UL))
+
+/* Mask for all the PCID-related bits in CR3: */
+#define X86_CR3_PCID_MASK (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
+
+#if defined(CONFIG_PAGE_TABLE_ISOLATION) && defined(CONFIG_X86_64)
+/* Let X86_CR3_PCID_ASID_USER be usable for the X86_CR3_PCID_NOFLUSH bit */
+#define X86_CR3_PCID_ASID_USER (_AC(0x80,UL))
+
+#define X86_CR3_PCID_KERN_FLUSH (X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_FLUSH (X86_CR3_PCID_ASID_USER)
+#define X86_CR3_PCID_KERN_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_USER)
+#else
+#define X86_CR3_PCID_ASID_USER (_AC(0x0,UL))
+/*
+ * PCIDs are unsupported on 32-bit and none of these bits can be
+ * set in CR3:
+ */
+#define X86_CR3_PCID_KERN_FLUSH (0)
+#define X86_CR3_PCID_USER_FLUSH (0)
+#define X86_CR3_PCID_KERN_NOFLUSH (0)
+#define X86_CR3_PCID_USER_NOFLUSH (0)
+#endif
+
/*
* The cache modes defined here are used to translate between pure SW usage
* and the HW defined cache mode bits and/or PAT entries.
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2d5a50cb61a2..f3bdaed0188f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -305,7 +305,7 @@ struct tss_struct {
} ____cacheline_aligned;
-DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss);
+DECLARE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss);
#ifdef CONFIG_X86_32
DECLARE_PER_CPU(unsigned long, cpu_current_top_of_stack);
diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
index baad72e4c100..6045cef376c2 100644
--- a/arch/x86/include/asm/pvclock.h
+++ b/arch/x86/include/asm/pvclock.h
@@ -4,6 +4,15 @@
#include <linux/clocksource.h>
#include <asm/pvclock-abi.h>
+#ifdef CONFIG_PARAVIRT_CLOCK
+extern struct pvclock_vsyscall_time_info *pvclock_pvti_cpu0_va(void);
+#else
+static inline struct pvclock_vsyscall_time_info *pvclock_pvti_cpu0_va(void)
+{
+ return NULL;
+}
+#endif
+
/* some helper functions for xen and kvm pv clock sources */
cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src);
u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src);
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 9fc5968da820..a691b66cc40a 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -131,6 +131,24 @@ static inline void cr4_set_bits_and_update_boot(unsigned long mask)
cr4_set_bits(mask);
}
+/*
+ * Declare a couple of kaiser interfaces here for convenience,
+ * to avoid the need for asm/kaiser.h in unexpected places.
+ */
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern int kaiser_enabled;
+extern void kaiser_setup_pcid(void);
+extern void kaiser_flush_tlb_on_return_to_user(void);
+#else
+#define kaiser_enabled 0
+static inline void kaiser_setup_pcid(void)
+{
+}
+static inline void kaiser_flush_tlb_on_return_to_user(void)
+{
+}
+#endif
+
static inline void __native_flush_tlb(void)
{
/*
@@ -139,6 +157,8 @@ static inline void __native_flush_tlb(void)
* back:
*/
preempt_disable();
+ if (kaiser_enabled)
+ kaiser_flush_tlb_on_return_to_user();
native_write_cr3(native_read_cr3());
preempt_enable();
}
@@ -148,20 +168,27 @@ static inline void __native_flush_tlb_global_irq_disabled(void)
unsigned long cr4;
cr4 = this_cpu_read(cpu_tlbstate.cr4);
- /* clear PGE */
- native_write_cr4(cr4 & ~X86_CR4_PGE);
- /* write old PGE again and flush TLBs */
- native_write_cr4(cr4);
+ if (cr4 & X86_CR4_PGE) {
+ /* clear PGE and flush TLB of all entries */
+ native_write_cr4(cr4 & ~X86_CR4_PGE);
+ /* restore PGE as it was before */
+ native_write_cr4(cr4);
+ } else {
+ /* do it with cr3, letting kaiser flush user PCID */
+ __native_flush_tlb();
+ }
}
static inline void __native_flush_tlb_global(void)
{
unsigned long flags;
- if (static_cpu_has(X86_FEATURE_INVPCID)) {
+ if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
+ *
+ * Note, this works with CR4.PCIDE=0 or 1.
*/
invpcid_flush_all();
return;
@@ -173,24 +200,45 @@ static inline void __native_flush_tlb_global(void)
* be called from deep inside debugging code.)
*/
raw_local_irq_save(flags);
-
__native_flush_tlb_global_irq_disabled();
-
raw_local_irq_restore(flags);
}
static inline void __native_flush_tlb_single(unsigned long addr)
{
- asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ /*
+ * SIMICS #GP's if you run INVPCID with type 2/3
+ * and X86_CR4_PCIDE clear. Shame!
+ *
+ * The ASIDs used below are hard-coded. But, we must not
+ * call invpcid(type=1/2) before CR4.PCIDE=1. Just call
+ * invlpg in the case we are called early.
+ */
+
+ if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+ if (kaiser_enabled)
+ kaiser_flush_tlb_on_return_to_user();
+ asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ return;
+ }
+ /* Flush the address out of both PCIDs. */
+ /*
+ * An optimization here might be to determine addresses
+ * that are only kernel-mapped and only flush the kernel
+ * ASID. But, userspace flushes are probably much more
+ * important performance-wise.
+ *
+ * Make sure to do only a single invpcid when KAISER is
+ * disabled and we have only a single ASID.
+ */
+ if (kaiser_enabled)
+ invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
+ invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
}
static inline void __flush_tlb_all(void)
{
- if (cpu_has_pge)
- __flush_tlb_global();
- else
- __flush_tlb();
-
+ __flush_tlb_global();
/*
* Note: if we somehow had PCID but not PGE, then this wouldn't work --
* we'd end up flushing kernel translations for the current ASID but
diff --git a/arch/x86/include/asm/vdso.h b/arch/x86/include/asm/vdso.h
index 756de9190aec..deabaf9759b6 100644
--- a/arch/x86/include/asm/vdso.h
+++ b/arch/x86/include/asm/vdso.h
@@ -22,6 +22,7 @@ struct vdso_image {
long sym_vvar_page;
long sym_hpet_page;
+ long sym_pvclock_page;
long sym_VDSO32_NOTE_MASK;
long sym___kernel_sigreturn;
long sym___kernel_rt_sigreturn;
diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h
index 79887abcb5e1..1361779f44fe 100644
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -77,7 +77,8 @@
#define X86_CR3_PWT _BITUL(X86_CR3_PWT_BIT)
#define X86_CR3_PCD_BIT 4 /* Page Cache Disable */
#define X86_CR3_PCD _BITUL(X86_CR3_PCD_BIT)
-#define X86_CR3_PCID_MASK _AC(0x00000fff,UL) /* PCID Mask */
+#define X86_CR3_PCID_NOFLUSH_BIT 63 /* Preserve old PCID */
+#define X86_CR3_PCID_NOFLUSH _BITULL(X86_CR3_PCID_NOFLUSH_BIT)
/*
* Intel CPU features in CR4
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index aa1e7246b06b..cc154ac64f00 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -92,7 +92,7 @@ static const struct cpu_dev default_cpu = {
static const struct cpu_dev *this_cpu = &default_cpu;
-DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page) = { .gdt = {
#ifdef CONFIG_X86_64
/*
* We need valid kernel segments for data and code in long mode too
@@ -324,8 +324,21 @@ static __always_inline void setup_smap(struct cpuinfo_x86 *c)
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
- if (cpu_has(c, X86_FEATURE_PGE)) {
+ if (cpu_has(c, X86_FEATURE_PGE) || kaiser_enabled) {
cr4_set_bits(X86_CR4_PCIDE);
+ /*
+ * INVPCID has two "groups" of types:
+ * 1/2: Invalidate an individual address
+ * 3/4: Invalidate all contexts
+ *
+ * 1/2 take a PCID, but 3/4 do not. So, 3/4
+ * ignore the PCID argument in the descriptor.
+ * But, we have to be careful not to call 1/2
+ * with an actual non-zero PCID in them before
+ * we do the above cr4_set_bits().
+ */
+ if (cpu_has(c, X86_FEATURE_INVPCID))
+ set_cpu_cap(c, X86_FEATURE_INVPCID_SINGLE);
} else {
/*
* flush_tlb_all(), as currently implemented, won't
@@ -338,6 +351,7 @@ static void setup_pcid(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_PCID);
}
}
+ kaiser_setup_pcid();
}
/*
@@ -1229,7 +1243,7 @@ static const unsigned int exception_stack_sizes[N_EXCEPTION_STACKS] = {
[DEBUG_STACK - 1] = DEBUG_STKSZ
};
-static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(char, exception_stacks
[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
/* May not be marked __init: used by software suspend */
@@ -1392,6 +1406,14 @@ void cpu_init(void)
* try to read it.
*/
cr4_init_shadow();
+ if (!kaiser_enabled) {
+ /*
+ * secondary_startup_64() deferred setting PGE in cr4:
+ * probe_page_size_mask() sets it on the boot cpu,
+ * but it needs to be set on each secondary cpu.
+ */
+ cr4_set_bits(X86_CR4_PGE);
+ }
/*
* Load microcode on this cpu if a valid microcode is available.
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 1e7de3cefc9c..f01b3a12dce0 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -2,11 +2,15 @@
#include <linux/types.h>
#include <linux/slab.h>
+#include <asm/kaiser.h>
#include <asm/perf_event.h>
#include <asm/insn.h>
#include "perf_event.h"
+static
+DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct debug_store, cpu_debug_store);
+
/* The size of a BTS record in bytes: */
#define BTS_RECORD_SIZE 24
@@ -268,6 +272,39 @@ void fini_debug_store_on_cpu(int cpu)
static DEFINE_PER_CPU(void *, insn_buffer);
+static void *dsalloc(size_t size, gfp_t flags, int node)
+{
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ unsigned int order = get_order(size);
+ struct page *page;
+ unsigned long addr;
+
+ page = __alloc_pages_node(node, flags | __GFP_ZERO, order);
+ if (!page)
+ return NULL;
+ addr = (unsigned long)page_address(page);
+ if (kaiser_add_mapping(addr, size, __PAGE_KERNEL) < 0) {
+ __free_pages(page, order);
+ addr = 0;
+ }
+ return (void *)addr;
+#else
+ return kmalloc_node(size, flags | __GFP_ZERO, node);
+#endif
+}
+
+static void dsfree(const void *buffer, size_t size)
+{
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+ if (!buffer)
+ return;
+ kaiser_remove_mapping((unsigned long)buffer, size);
+ free_pages((unsigned long)buffer, get_order(size));
+#else
+ kfree(buffer);
+#endif
+}
+
static int alloc_pebs_buffer(int cpu)
{
struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
@@ -278,7 +315,7 @@ static int alloc_pebs_buffer(int cpu)
if (!x86_pmu.pebs)
return 0;
- buffer = kzalloc_node(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
+ buffer = dsalloc(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
if (unlikely(!buffer))
return -ENOMEM;
@@ -289,7 +326,7 @@ static int alloc_pebs_buffer(int cpu)
if (x86_pmu.intel_cap.pebs_format < 2) {
ibuffer = kzalloc_node(PEBS_FIXUP_SIZE, GFP_KERNEL, node);
if (!ibuffer) {
- kfree(buffer);
+ dsfree(buffer, x86_pmu.pebs_buffer_size);
return -ENOMEM;
}
per_cpu(insn_buffer, cpu) = ibuffer;
@@ -315,7 +352,8 @@ static void release_pebs_buffer(int cpu)
kfree(per_cpu(insn_buffer, cpu));
per_cpu(insn_buffer, cpu) = NULL;
- kfree((void *)(unsigned long)ds->pebs_buffer_base);
+ dsfree((void *)(unsigned long)ds->pebs_buffer_base,
+ x86_pmu.pebs_buffer_size);
ds->pebs_buffer_base = 0;
}
@@ -329,7 +367,7 @@ static int alloc_bts_buffer(int cpu)
if (!x86_pmu.bts)
return 0;
- buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
+ buffer = dsalloc(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
if (unlikely(!buffer)) {
WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__);
return -ENOMEM;
@@ -355,19 +393,15 @@ static void release_bts_buffer(int cpu)
if (!ds || !x86_pmu.bts)
return;
- kfree((void *)(unsigned long)ds->bts_buffer_base);
+ dsfree((void *)(unsigned long)ds->bts_buffer_base, BTS_BUFFER_SIZE);
ds->bts_buffer_base = 0;
}
static int alloc_ds_buffer(int cpu)
{
- int node = cpu_to_node(cpu);
- struct debug_store *ds;
-
- ds = kzalloc_node(sizeof(*ds), GFP_KERNEL, node);
- if (unlikely(!ds))
- return -ENOMEM;
+ struct debug_store *ds = per_cpu_ptr(&cpu_debug_store, cpu);
+ memset(ds, 0, sizeof(*ds));
per_cpu(cpu_hw_events, cpu).ds = ds;
return 0;
@@ -381,7 +415,6 @@ static void release_ds_buffer(int cpu)
return;
per_cpu(cpu_hw_events, cpu).ds = NULL;
- kfree(ds);
}
void release_ds_buffers(void)
diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 4d38416e2a7f..b02cb2ec6726 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -41,6 +41,7 @@
#include <asm/pgalloc.h>
#include <asm/setup.h>
#include <asm/espfix.h>
+#include <asm/kaiser.h>
/*
* Note: we only need 6*8 = 48 bytes for the espfix stack, but round
@@ -126,6 +127,15 @@ void __init init_espfix_bsp(void)
/* Install the espfix pud into the kernel page directory */
pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page);
+ /*
+ * Just copy the top-level PGD that is mapping the espfix
+ * area to ensure it is mapped into the shadow user page
+ * tables.
+ */
+ if (kaiser_enabled) {
+ set_pgd(native_get_shadow_pgd(pgd_p),
+ __pgd(_KERNPG_TABLE | __pa((pud_t *)espfix_pud_page)));
+ }
/* Randomize the locations */
init_espfix_random();
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index ffdc0e860390..4034e905741a 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -183,8 +183,8 @@ ENTRY(secondary_startup_64)
movq $(init_level4_pgt - __START_KERNEL_map), %rax
1:
- /* Enable PAE mode and PGE */
- movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx
+ /* Enable PAE and PSE, but defer PGE until kaiser_enabled is decided */
+ movl $(X86_CR4_PAE | X86_CR4_PSE), %ecx
movq %rcx, %cr4
/* Setup early boot stage 4 level pagetables. */
@@ -441,6 +441,27 @@ early_idt_ripmsg:
.balign PAGE_SIZE; \
GLOBAL(name)
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+/*
+ * Each PGD needs to be 8k long and 8k aligned. We do not
+ * ever go out to userspace with these, so we do not
+ * strictly *need* the second page, but this allows us to
+ * have a single set_pgd() implementation that does not
+ * need to worry about whether it has 4k or 8k to work
+ * with.
+ *
+ * This ensures PGDs are 8k long:
+ */
+#define KAISER_USER_PGD_FILL 512
+/* This ensures they are 8k-aligned: */
+#define NEXT_PGD_PAGE(name) \
+ .balign 2 * PAGE_SIZE; \
+GLOBAL(name)
+#else
+#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define KAISER_USER_PGD_FILL 0
+#endif
+
/* Automate the creation of 1 to 1 mapping pmd entries */
#define PMDS(START, PERM, COUNT) \
i = 0 ; \
@@ -450,9 +471,10 @@ GLOBAL(name)
.endr
__INITDATA
-NEXT_PAGE(early_level4_pgt)
+NEXT_PGD_PAGE(early_level4_pgt)
.fill 511,8,0
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(early_dynamic_pgts)
.fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -460,16 +482,18 @@ NEXT_PAGE(early_dynamic_pgts)
.data
#ifndef CONFIG_XEN
-NEXT_PAGE(init_level4_pgt)
+NEXT_PGD_PAGE(init_level4_pgt)
.fill 512,8,0
+ .fill KAISER_USER_PGD_FILL,8,0
#else
-NEXT_PAGE(init_level4_pgt)
+NEXT_PGD_PAGE(init_level4_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
.org init_level4_pgt + L4_PAGE_OFFSET*8, 0
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
.org init_level4_pgt + L4_START_KERNEL*8, 0
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_ident_pgt)
.quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
@@ -480,6 +504,7 @@ NEXT_PAGE(level2_ident_pgt)
*/
PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
#endif
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_kernel_pgt)
.fill L3_START_KERNEL,8,0
diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
index 1423ab1b0312..f480b38a03c3 100644
--- a/arch/x86/kernel/irqinit.c
+++ b/arch/x86/kernel/irqinit.c
@@ -51,7 +51,7 @@ static struct irqaction irq2 = {
.flags = IRQF_NO_THREAD,
};
-DEFINE_PER_CPU(vector_irq_t, vector_irq) = {
+DEFINE_PER_CPU_USER_MAPPED(vector_irq_t, vector_irq) = {
[0 ... NR_VECTORS - 1] = VECTOR_UNUSED,
};
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 2bd81e302427..ec1b06dc82d2 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -45,6 +45,11 @@ early_param("no-kvmclock", parse_no_kvmclock);
static struct pvclock_vsyscall_time_info *hv_clock;
static struct pvclock_wall_clock wall_clock;
+struct pvclock_vsyscall_time_info *pvclock_pvti_cpu0_va(void)
+{
+ return hv_clock;
+}
+
/*
* The wallclock is the time of day when we booted. Since then, some time may
* have elapsed since the hypervisor wrote the data. So we try to account for
diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index d6279593bcdd..bc429365b72a 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -16,6 +16,7 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
+#include <linux/kaiser.h>
#include <asm/ldt.h>
#include <asm/desc.h>
@@ -34,11 +35,21 @@ static void flush_ldt(void *current_mm)
set_ldt(pc->ldt->entries, pc->ldt->size);
}
+static void __free_ldt_struct(struct ldt_struct *ldt)
+{
+ if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
+ vfree(ldt->entries);
+ else
+ free_page((unsigned long)ldt->entries);
+ kfree(ldt);
+}
+
/* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
static struct ldt_struct *alloc_ldt_struct(int size)
{
struct ldt_struct *new_ldt;
int alloc_size;
+ int ret;
if (size > LDT_ENTRIES)
return NULL;
@@ -66,7 +77,13 @@ static struct ldt_struct *alloc_ldt_struct(int size)
return NULL;
}
+ ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+ __PAGE_KERNEL);
new_ldt->size = size;
+ if (ret) {
+ __free_ldt_struct(new_ldt);
+ return NULL;
+ }
return new_ldt;
}
@@ -92,12 +109,10 @@ static void free_ldt_struct(struct ldt_struct *ldt)
if (likely(!ldt))
return;
+ kaiser_remove_mapping((unsigned long)ldt->entries,
+ ldt->size * LDT_ENTRY_SIZE);
paravirt_free_ldt(ldt->entries, ldt->size);
- if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
- vfree(ldt->entries);
- else
- free_page((unsigned long)ldt->entries);
- kfree(ldt);
+ __free_ldt_struct(ldt);
}
/*
diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index 8aa05583bc42..0677bf8d3a42 100644
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -9,7 +9,6 @@ DEF_NATIVE(pv_irq_ops, save_fl, "pushfq; popq %rax");
DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax");
DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax");
DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3");
-DEF_NATIVE(pv_mmu_ops, flush_tlb_single, "invlpg (%rdi)");
DEF_NATIVE(pv_cpu_ops, clts, "clts");
DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
@@ -62,7 +61,6 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
PATCH_SITE(pv_mmu_ops, read_cr3);
PATCH_SITE(pv_mmu_ops, write_cr3);
PATCH_SITE(pv_cpu_ops, clts);
- PATCH_SITE(pv_mmu_ops, flush_tlb_single);
PATCH_SITE(pv_cpu_ops, wbinvd);
#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUED_SPINLOCKS)
case PARAVIRT_PATCH(pv_lock_ops.queued_spin_unlock):
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 9f7c21c22477..7c5c5dc90ffa 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -39,7 +39,7 @@
* section. Since TSS's are completely CPU-local, we want them
* on exact cacheline boundaries, to eliminate cacheline ping-pong.
*/
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+__visible DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss) = {
.x86_tss = {
.sp0 = TOP_OF_INIT_STACK,
#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index e67b834279b2..bbaae4cf9e8e 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -112,6 +112,7 @@
#include <asm/alternative.h>
#include <asm/prom.h>
#include <asm/microcode.h>
+#include <asm/kaiser.h>
/*
* max_low_pfn_mapped: highest direct mapped pfn under 4GB
@@ -1016,6 +1017,12 @@ void __init setup_arch(char **cmdline_p)
*/
init_hypervisor_platform();
+ /*
+ * This needs to happen right after XENPV is set on xen and
+ * kaiser_enabled is checked below in cleanup_highmap().
+ */
+ kaiser_check_boottime_disable();
+
x86_init.resources.probe_roms();
/* after parse_early_param, so could debug it */
diff --git a/arch/x86/kernel/tracepoint.c b/arch/x86/kernel/tracepoint.c
index 1c113db9ed57..2bb5ee464df3 100644
--- a/arch/x86/kernel/tracepoint.c
+++ b/arch/x86/kernel/tracepoint.c
@@ -9,10 +9,12 @@
#include <linux/atomic.h>
atomic_t trace_idt_ctr = ATOMIC_INIT(0);
+__aligned(PAGE_SIZE)
struct desc_ptr trace_idt_descr = { NR_VECTORS * 16 - 1,
(unsigned long) trace_idt_table };
/* No need to be aligned, but done to keep all IDTs defined the same way. */
+__aligned(PAGE_SIZE)
gate_desc trace_idt_table[NR_VECTORS] __page_aligned_bss;
static int trace_irq_vector_refcount;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 796f1ec67469..ccf17dbfea09 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -759,7 +759,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
return 1;
/* PCID can not be enabled when cr3[11:0]!=000H or EFER.LMA=0 */
- if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_MASK) || !is_long_mode(vcpu))
+ if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_ASID_MASK) ||
+ !is_long_mode(vcpu))
return 1;
}
diff --git a/arch/x86/lib/cmdline.c b/arch/x86/lib/cmdline.c
index 422db000d727..a744506856b1 100644
--- a/arch/x86/lib/cmdline.c
+++ b/arch/x86/lib/cmdline.c
@@ -82,3 +82,108 @@ int cmdline_find_option_bool(const char *cmdline, const char *option)
return 0; /* Buffer overrun */
}
+
+/*
+ * Find a non-boolean option (i.e. option=argument). In accordance with
+ * standard Linux practice, if this option is repeated, this returns the
+ * last instance on the command line.
+ *
+ * @cmdline: the cmdline string
+ * @max_cmdline_size: the maximum size of cmdline
+ * @option: option string to look for
+ * @buffer: memory buffer to return the option argument
+ * @bufsize: size of the supplied memory buffer
+ *
+ * Returns the length of the argument (regardless of whether it was
+ * truncated to fit in the buffer), or -1 if not found.
+ */
+static int
+__cmdline_find_option(const char *cmdline, int max_cmdline_size,
+ const char *option, char *buffer, int bufsize)
+{
+ char c;
+ int pos = 0, len = -1;
+ const char *opptr = NULL;
+ char *bufptr = buffer;
+ enum {
+ st_wordstart = 0, /* Start of word/after whitespace */
+ st_wordcmp, /* Comparing this word */
+ st_wordskip, /* Miscompare, skip */
+ st_bufcpy, /* Copying this to buffer */
+ } state = st_wordstart;
+
+ if (!cmdline)
+ return -1; /* No command line */
+
+ /*
+ * This 'pos' check ensures we do not overrun
+ * a non-NULL-terminated 'cmdline'
+ */
+ while (pos++ < max_cmdline_size) {
+ c = *(char *)cmdline++;
+ if (!c)
+ break;
+
+ switch (state) {
+ case st_wordstart:
+ if (myisspace(c))
+ break;
+
+ state = st_wordcmp;
+ opptr = option;
+ /* fall through */
+
+ case st_wordcmp:
+ if ((c == '=') && !*opptr) {
+ /*
+ * We matched all the way to the end of the
+ * option we were looking for, prepare to
+ * copy the argument.
+ */
+ len = 0;
+ bufptr = buffer;
+ state = st_bufcpy;
+ break;
+ } else if (c == *opptr++) {
+ /*
+ * We are currently matching, so continue
+ * to the next character on the cmdline.
+ */
+ break;
+ }
+ state = st_wordskip;
+ /* fall through */
+
+ case st_wordskip:
+ if (myisspace(c))
+ state = st_wordstart;
+ break;
+
+ case st_bufcpy:
+ if (myisspace(c)) {
+ state = st_wordstart;
+ } else {
+ /*
+ * Increment len, but don't overrun the
+ * supplied buffer and leave room for the
+ * NULL terminator.
+ */
+ if (++len < bufsize)
+ *bufptr++ = c;
+ }
+ break;
+ }
+ }
+
+ if (bufsize)
+ *bufptr = '\0';
+
+ return len;
+}
+
+int cmdline_find_option(const char *cmdline, const char *option, char *buffer,
+ int bufsize)
+{
+ return __cmdline_find_option(cmdline, COMMAND_LINE_SIZE, option,
+ buffer, bufsize);
+}
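A usage sketch (an illustrative caller, not part of the patch; the kaiser_check_boottime_disable() hunk later in this patch follows exactly this pattern):
	char arg[5];	/* room for "auto" plus the NULL terminator */
	int ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
	if (ret > 0 && !strncmp(arg, "off", 3)) {
		/* command line contained "pti=off..." (last instance wins) */
	} else if (ret < 0) {
		/* no "pti=..." option was present */
	}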
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 1ae7c141f778..61e6cead9c4a 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -32,3 +32,4 @@ obj-$(CONFIG_ACPI_NUMA) += srat.o
obj-$(CONFIG_NUMA_EMU) += numa_emulation.o
obj-$(CONFIG_X86_INTEL_MPX) += mpx.o
+obj-$(CONFIG_PAGE_TABLE_ISOLATION) += kaiser.o
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ed4b372860e4..2bd45ae91eb3 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -165,7 +165,7 @@ static void __init probe_page_size_mask(void)
cr4_set_bits_and_update_boot(X86_CR4_PSE);
/* Enable PGE if available */
- if (cpu_has_pge) {
+ if (cpu_has_pge && !kaiser_enabled) {
cr4_set_bits_and_update_boot(X86_CR4_PGE);
__supported_pte_mask |= _PAGE_GLOBAL;
} else
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ec081fe0ce2c..d76ec9348cff 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -395,6 +395,16 @@ void __init cleanup_highmap(void)
continue;
if (vaddr < (unsigned long) _text || vaddr > end)
set_pmd(pmd, __pmd(0));
+ else if (kaiser_enabled) {
+ /*
+ * level2_kernel_pgt is initialized with _PAGE_GLOBAL:
+ * clear that now. This is not important, so long as
+ * CR4.PGE remains clear, but it removes an anomaly.
+ * Physical mapping setup below avoids _PAGE_GLOBAL
+ * by use of massage_pgprot() inside pfn_pte() etc.
+ */
+ set_pmd(pmd, pmd_clear_flags(*pmd, _PAGE_GLOBAL));
+ }
}
}
diff --git a/arch/x86/mm/kaiser.c b/arch/x86/mm/kaiser.c
new file mode 100644
index 000000000000..b0b3a69f1c7f
--- /dev/null
+++ b/arch/x86/mm/kaiser.c
@@ -0,0 +1,456 @@
+#include <linux/bug.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/bug.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/mm.h>
+#include <linux/uaccess.h>
+#include <linux/ftrace.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
+
+#include <asm/kaiser.h>
+#include <asm/tlbflush.h> /* to verify its kaiser declarations */
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/desc.h>
+#include <asm/cmdline.h>
+
+int kaiser_enabled __read_mostly = 1;
+EXPORT_SYMBOL(kaiser_enabled); /* for inlined TLB flush functions */
+
+__visible
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+
+/*
+ * These can have bit 63 set, so we can not just use a plain "or"
+ * instruction to get their value or'd into CR3. It would take
+ * another register. So, we use a memory reference to these instead.
+ *
+ * This is also handy because systems that do not support PCIDs
+ * just end up or'ing a 0 into their CR3, which does no harm.
+ */
+DEFINE_PER_CPU(unsigned long, x86_cr3_pcid_user);
+
+/*
+ * At runtime, the only things we map are some things for CPU
+ * hotplug, and stacks for new processes. No two CPUs will ever
+ * be populating the same addresses, so we only need to ensure
+ * that we protect between two CPUs trying to allocate and
+ * populate the same page table page.
+ *
+ * Only take this lock when doing a set_p[4um]d(), but it is not
+ * needed for doing a set_pte(). We assume that only the *owner*
+ * of a given allocation will be doing this for _their_
+ * allocation.
+ *
+ * This ensures that once a system has been running for a while
+ * and there have been stacks all over and these page tables
+ * are fully populated, there will be no further acquisitions of
+ * this lock.
+ */
+static DEFINE_SPINLOCK(shadow_table_allocation_lock);
+
+/*
+ * Returns -1 on error.
+ */
+static inline unsigned long get_pa_from_mapping(unsigned long vaddr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pgd = pgd_offset_k(vaddr);
+ /*
+ * We made all the kernel PGDs present in kaiser_init().
+ * We expect them to stay that way.
+ */
+ BUG_ON(pgd_none(*pgd));
+ /*
+ * PGDs are either 512GB or 128TB on all x86_64
+ * configurations. We don't handle these.
+ */
+ BUG_ON(pgd_large(*pgd));
+
+ pud = pud_offset(pgd, vaddr);
+ if (pud_none(*pud)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
+
+ if (pud_large(*pud))
+ return (pud_pfn(*pud) << PAGE_SHIFT) | (vaddr & ~PUD_PAGE_MASK);
+
+ pmd = pmd_offset(pud, vaddr);
+ if (pmd_none(*pmd)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
+
+ if (pmd_large(*pmd))
+ return (pmd_pfn(*pmd) << PAGE_SHIFT) | (vaddr & ~PMD_PAGE_MASK);
+
+ pte = pte_offset_kernel(pmd, vaddr);
+ if (pte_none(*pte)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
+
+ return (pte_pfn(*pte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
+}
+
+/*
+ * This is a relatively normal page table walk, except that it
+ * also tries to allocate page tables pages along the way.
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+static pte_t *kaiser_pagetable_walk(unsigned long address)
+{
+ pmd_t *pmd;
+ pud_t *pud;
+ pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
+ gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+
+ if (pgd_none(*pgd)) {
+ WARN_ONCE(1, "All shadow pgds should have been populated");
+ return NULL;
+ }
+ BUILD_BUG_ON(pgd_large(*pgd) != 0);
+
+ pud = pud_offset(pgd, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pud_large(*pud)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pud_none(*pud)) {
+ unsigned long new_pmd_page = __get_free_page(gfp);
+ if (!new_pmd_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pud_none(*pud)) {
+ set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pmd_page), NR_KAISERTABLE);
+ } else
+ free_page(new_pmd_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
+
+ pmd = pmd_offset(pud, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pmd_large(*pmd)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pmd_none(*pmd)) {
+ unsigned long new_pte_page = __get_free_page(gfp);
+ if (!new_pte_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pmd_none(*pmd)) {
+ set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pte_page), NR_KAISERTABLE);
+ } else
+ free_page(new_pte_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
+
+ return pte_offset_kernel(pmd, address);
+}
+
+static int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+ unsigned long flags)
+{
+ int ret = 0;
+ pte_t *pte;
+ unsigned long start_addr = (unsigned long )__start_addr;
+ unsigned long address = start_addr & PAGE_MASK;
+ unsigned long end_addr = PAGE_ALIGN(start_addr + size);
+ unsigned long target_address;
+
+ /*
+ * It is convenient for callers to pass in __PAGE_KERNEL etc,
+ * and there is no actual harm from setting _PAGE_GLOBAL, so
+ * long as CR4.PGE is not set. But it is nonetheless troubling
+ * to see Kaiser itself setting _PAGE_GLOBAL (now that "nokaiser"
+ * requires that not to be #defined to 0): so mask it off here.
+ */
+ flags &= ~_PAGE_GLOBAL;
+
+ for (; address < end_addr; address += PAGE_SIZE) {
+ target_address = get_pa_from_mapping(address);
+ if (target_address == -1) {
+ ret = -EIO;
+ break;
+ }
+ pte = kaiser_pagetable_walk(address);
+ if (!pte) {
+ ret = -ENOMEM;
+ break;
+ }
+ if (pte_none(*pte)) {
+ set_pte(pte, __pte(flags | target_address));
+ } else {
+ pte_t tmp;
+ set_pte(&tmp, __pte(flags | target_address));
+ WARN_ON_ONCE(!pte_same(*pte, tmp));
+ }
+ }
+ return ret;
+}
+
+static int kaiser_add_user_map_ptrs(const void *start, const void *end, unsigned long flags)
+{
+ unsigned long size = end - start;
+
+ return kaiser_add_user_map(start, size, flags);
+}
+
+/*
+ * Ensure that the top level of the (shadow) page tables are
+ * entirely populated. This ensures that all processes that get
+ * forked have the same entries. This way, we do not have to
+ * ever go set up new entries in older processes.
+ *
+ * Note: we never free these, so there are no updates to them
+ * after this.
+ */
+static void __init kaiser_init_all_pgds(void)
+{
+ pgd_t *pgd;
+ int i = 0;
+
+ pgd = native_get_shadow_pgd(pgd_offset_k((unsigned long )0));
+ for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
+ pgd_t new_pgd;
+ pud_t *pud = pud_alloc_one(&init_mm,
+ PAGE_OFFSET + i * PGDIR_SIZE);
+ if (!pud) {
+ WARN_ON(1);
+ break;
+ }
+ inc_zone_page_state(virt_to_page(pud), NR_KAISERTABLE);
+ new_pgd = __pgd(_KERNPG_TABLE |__pa(pud));
+ /*
+ * Make sure not to stomp on some other pgd entry.
+ */
+ if (!pgd_none(pgd[i])) {
+ WARN_ON(1);
+ continue;
+ }
+ set_pgd(pgd + i, new_pgd);
+ }
+}
+
+#define kaiser_add_user_map_early(start, size, flags) do { \
+ int __ret = kaiser_add_user_map(start, size, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
+#define kaiser_add_user_map_ptrs_early(start, end, flags) do { \
+ int __ret = kaiser_add_user_map_ptrs(start, end, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
+void __init kaiser_check_boottime_disable(void)
+{
+ bool enable = true;
+ char arg[5];
+ int ret;
+
+ if (boot_cpu_has(X86_FEATURE_XENPV))
+ goto silent_disable;
+
+ ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
+ if (ret > 0) {
+ if (!strncmp(arg, "on", 2))
+ goto enable;
+
+ if (!strncmp(arg, "off", 3))
+ goto disable;
+
+ if (!strncmp(arg, "auto", 4))
+ goto skip;
+ }
+
+ if (cmdline_find_option_bool(boot_command_line, "nopti"))
+ goto disable;
+
+skip:
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+ goto disable;
+
+enable:
+ if (enable)
+ setup_force_cpu_cap(X86_FEATURE_KAISER);
+
+ return;
+
+disable:
+ pr_info("disabled\n");
+
+silent_disable:
+ kaiser_enabled = 0;
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
+}
+
+/*
+ * If anything in here fails, we will likely die on one of the
+ * first kernel->user transitions and init will die. But, we
+ * will have most of the kernel up by then and should be able to
+ * get a clean warning out of it. If we BUG_ON() here, we run
+ * the risk of being before we have good console output.
+ */
+void __init kaiser_init(void)
+{
+ int cpu;
+
+ if (!kaiser_enabled)
+ return;
+
+ kaiser_init_all_pgds();
+
+ for_each_possible_cpu(cpu) {
+ void *percpu_vaddr = __per_cpu_user_mapped_start +
+ per_cpu_offset(cpu);
+ unsigned long percpu_sz = __per_cpu_user_mapped_end -
+ __per_cpu_user_mapped_start;
+ kaiser_add_user_map_early(percpu_vaddr, percpu_sz,
+ __PAGE_KERNEL);
+ }
+
+ /*
+ * Map the entry/exit text section, which is needed at
+ * switches from user to and from kernel.
+ */
+ kaiser_add_user_map_ptrs_early(__entry_text_start, __entry_text_end,
+ __PAGE_KERNEL_RX);
+
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN)
+ kaiser_add_user_map_ptrs_early(__irqentry_text_start,
+ __irqentry_text_end,
+ __PAGE_KERNEL_RX);
+#endif
+ kaiser_add_user_map_early((void *)idt_descr.address,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL_RO);
+#ifdef CONFIG_TRACING
+ kaiser_add_user_map_early(&trace_idt_descr,
+ sizeof(trace_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&trace_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
+#endif
+ kaiser_add_user_map_early(&debug_idt_descr, sizeof(debug_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&debug_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
+
+ pr_info("enabled\n");
+}
+
+/* Add a mapping to the shadow mapping, and synchronize the mappings */
+int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+{
+ if (!kaiser_enabled)
+ return 0;
+ return kaiser_add_user_map((const void *)addr, size, flags);
+}
+
+void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+ extern void unmap_pud_range_nofree(pgd_t *pgd,
+ unsigned long start, unsigned long end);
+ unsigned long end = start + size;
+ unsigned long addr, next;
+ pgd_t *pgd;
+
+ if (!kaiser_enabled)
+ return;
+ pgd = native_get_shadow_pgd(pgd_offset_k(start));
+ for (addr = start; addr < end; pgd++, addr = next) {
+ next = pgd_addr_end(addr, end);
+ unmap_pud_range_nofree(pgd, addr, next);
+ }
+}
+
+/*
+ * Page table pages are page-aligned. The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ * This returns true for user pages that need to get copied into
+ * both the user and kernel copies of the page tables, and false
+ * for kernel pages that should only be in the kernel copy.
+ */
+static inline bool is_userspace_pgd(pgd_t *pgdp)
+{
+ return ((unsigned long)pgdp % PAGE_SIZE) < (PAGE_SIZE / 2);
+}
+
+pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ if (!kaiser_enabled)
+ return pgd;
+ /*
+ * Do we need to also populate the shadow pgd? Check _PAGE_USER to
+ * skip cases like kexec and EFI which make temporary low mappings.
+ */
+ if (pgd.pgd & _PAGE_USER) {
+ if (is_userspace_pgd(pgdp)) {
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ /*
+ * Even if the entry is *mapping* userspace, ensure
+ * that userspace can not use it. This way, if we
+ * get out to userspace running on the kernel CR3,
+ * userspace will crash instead of running.
+ */
+ if (__supported_pte_mask & _PAGE_NX)
+ pgd.pgd |= _PAGE_NX;
+ }
+ } else if (!pgd.pgd) {
+ /*
+ * pgd_clear() cannot check _PAGE_USER, and is even used to
+ * clear corrupted pgd entries: so just rely on cases like
+ * kexec and EFI never to be using pgd_clear().
+ */
+ if (!WARN_ON_ONCE((unsigned long)pgdp & PAGE_SIZE) &&
+ is_userspace_pgd(pgdp))
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ }
+ return pgd;
+}
+
+void kaiser_setup_pcid(void)
+{
+ unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
+
+ if (this_cpu_has(X86_FEATURE_PCID))
+ user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;
+ /*
+ * These variables are used by the entry/exit
+ * code to change PCID and pgd and TLB flushing.
+ */
+ this_cpu_write(x86_cr3_pcid_user, user_cr3);
+}
+
+/*
+ * Make a note that this cpu will need to flush USER tlb on return to user.
+ * If cpu does not have PCID, then the NOFLUSH bit will never have been set.
+ */
+void kaiser_flush_tlb_on_return_to_user(void)
+{
+ if (this_cpu_has(X86_FEATURE_PCID))
+ this_cpu_write(x86_cr3_pcid_user,
+ X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
+}
+EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
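(For context, not part of this file: the exit-to-user assembly consumes x86_cr3_pcid_user roughly as in the C sketch below. The helper name is illustrative; the real code lives in the entry macros added earlier in this series, which also handle resetting the flush/no-flush state for the next exit.)
	static inline void switch_to_user_cr3_sketch(void)
	{
		/*
		 * x86_cr3_pcid_user already holds KAISER_SHADOW_PGD_OFFSET,
		 * the user PCID (if supported), and possibly NOFLUSH (bit 63),
		 * which is why it is or'ed in from memory rather than as an
		 * immediate. The kernel ASID is 0, so a plain "or" suffices.
		 */
		native_write_cr3(native_read_cr3() |
				 this_cpu_read(x86_cr3_pcid_user));
	}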
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 4e5ac46adc9d..81ec7c02f968 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -121,11 +121,16 @@ void __init kasan_init(void)
kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
(void *)KASAN_SHADOW_END);
- memset(kasan_zero_page, 0, PAGE_SIZE);
-
load_cr3(init_level4_pgt);
__flush_tlb_all();
- init_task.kasan_depth = 0;
+ /*
+ * kasan_zero_page has been used as early shadow memory, thus it may
+ * contain some garbage. Now we can clear it, since after the TLB flush
+ * no one should write to it.
+ */
+ memset(kasan_zero_page, 0, PAGE_SIZE);
+
+ init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n");
}
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index b599a780a5a9..79377e2a7bcd 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -52,6 +52,7 @@ static DEFINE_SPINLOCK(cpa_lock);
#define CPA_FLUSHTLB 1
#define CPA_ARRAY 2
#define CPA_PAGES_ARRAY 4
+#define CPA_FREE_PAGETABLES 8
#ifdef CONFIG_PROC_FS
static unsigned long direct_pages_count[PG_LEVEL_NUM];
@@ -723,10 +724,13 @@ static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
return 0;
}
-static bool try_to_free_pte_page(pte_t *pte)
+static bool try_to_free_pte_page(struct cpa_data *cpa, pte_t *pte)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PTE; i++)
if (!pte_none(pte[i]))
return false;
@@ -735,10 +739,13 @@ static bool try_to_free_pte_page(pte_t *pte)
return true;
}
-static bool try_to_free_pmd_page(pmd_t *pmd)
+static bool try_to_free_pmd_page(struct cpa_data *cpa, pmd_t *pmd)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PMD; i++)
if (!pmd_none(pmd[i]))
return false;
@@ -759,7 +766,9 @@ static bool try_to_free_pud_page(pud_t *pud)
return true;
}
-static bool unmap_pte_range(pmd_t *pmd, unsigned long start, unsigned long end)
+static bool unmap_pte_range(struct cpa_data *cpa, pmd_t *pmd,
+ unsigned long start,
+ unsigned long end)
{
pte_t *pte = pte_offset_kernel(pmd, start);
@@ -770,22 +779,23 @@ static bool unmap_pte_range(pmd_t *pmd, unsigned long start, unsigned long end)
pte++;
}
- if (try_to_free_pte_page((pte_t *)pmd_page_vaddr(*pmd))) {
+ if (try_to_free_pte_page(cpa, (pte_t *)pmd_page_vaddr(*pmd))) {
pmd_clear(pmd);
return true;
}
return false;
}
-static void __unmap_pmd_range(pud_t *pud, pmd_t *pmd,
+static void __unmap_pmd_range(struct cpa_data *cpa, pud_t *pud, pmd_t *pmd,
unsigned long start, unsigned long end)
{
- if (unmap_pte_range(pmd, start, end))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (unmap_pte_range(cpa, pmd, start, end))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
+static void unmap_pmd_range(struct cpa_data *cpa, pud_t *pud,
+ unsigned long start, unsigned long end)
{
pmd_t *pmd = pmd_offset(pud, start);
@@ -796,7 +806,7 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
unsigned long next_page = (start + PMD_SIZE) & PMD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- __unmap_pmd_range(pud, pmd, start, pre_end);
+ __unmap_pmd_range(cpa, pud, pmd, start, pre_end);
start = pre_end;
pmd++;
@@ -809,7 +819,8 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
if (pmd_large(*pmd))
pmd_clear(pmd);
else
- __unmap_pmd_range(pud, pmd, start, start + PMD_SIZE);
+ __unmap_pmd_range(cpa, pud, pmd,
+ start, start + PMD_SIZE);
start += PMD_SIZE;
pmd++;
@@ -819,17 +830,19 @@ static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
* 4K leftovers?
*/
if (start < end)
- return __unmap_pmd_range(pud, pmd, start, end);
+ return __unmap_pmd_range(cpa, pud, pmd, start, end);
/*
* Try again to free the PMD page if haven't succeeded above.
*/
if (!pud_none(*pud))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+static void __unmap_pud_range(struct cpa_data *cpa, pgd_t *pgd,
+ unsigned long start,
+ unsigned long end)
{
pud_t *pud = pud_offset(pgd, start);
@@ -840,7 +853,7 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
unsigned long next_page = (start + PUD_SIZE) & PUD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- unmap_pmd_range(pud, start, pre_end);
+ unmap_pmd_range(cpa, pud, start, pre_end);
start = pre_end;
pud++;
@@ -854,7 +867,7 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
if (pud_large(*pud))
pud_clear(pud);
else
- unmap_pmd_range(pud, start, start + PUD_SIZE);
+ unmap_pmd_range(cpa, pud, start, start + PUD_SIZE);
start += PUD_SIZE;
pud++;
@@ -864,7 +877,7 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
* 2M leftovers?
*/
if (start < end)
- unmap_pmd_range(pud, start, end);
+ unmap_pmd_range(cpa, pud, start, end);
/*
* No need to try to free the PUD page because we'll free it in
@@ -872,6 +885,24 @@ static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
*/
}
+static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = CPA_FREE_PAGETABLES,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
+void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = 0,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
static void unmap_pgd_range(pgd_t *root, unsigned long addr, unsigned long end)
{
pgd_t *pgd_entry = root + pgd_index(addr);
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index fb0a9dd1d6e4..dbc27a2b4ad5 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -6,7 +6,7 @@
#include <asm/fixmap.h>
#include <asm/mtrr.h>
-#define PGALLOC_GFP GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO
+#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
#ifdef CONFIG_HIGHPTE
#define PGALLOC_USER_GFP __GFP_HIGHMEM
@@ -340,14 +340,24 @@ static inline void _pgd_free(pgd_t *pgd)
kmem_cache_free(pgd_cache, pgd);
}
#else
+
+/*
+ * Instead of one pgd, Kaiser acquires two pgds. Being order-1, it is
+ * both 8k in size and 8k-aligned. That lets us just flip bit 12
+ * in a pointer to swap between the two 4k halves.
+ */
+#define PGD_ALLOCATION_ORDER kaiser_enabled
+
static inline pgd_t *_pgd_alloc(void)
{
- return (pgd_t *)__get_free_page(PGALLOC_GFP);
+ /* No __GFP_REPEAT: to avoid page allocation stalls in order-1 case */
+ return (pgd_t *)__get_free_pages(PGALLOC_GFP & ~__GFP_REPEAT,
+ PGD_ALLOCATION_ORDER);
}
static inline void _pgd_free(pgd_t *pgd)
{
- free_page((unsigned long)pgd);
+ free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
}
#endif /* CONFIG_X86_PAE */
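(The 8k alignment described above is what makes the kernel/shadow lookup a single bit operation. A minimal sketch of the helper, which is defined elsewhere in this series and shown here only for context:)
	static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
	{
		/*
		 * The kernel copy occupies the lower 4k half of the order-1
		 * page and the shadow (user) copy the upper half, so setting
		 * bit 12 of a kernel pgd pointer yields its shadow entry.
		 */
		return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
	}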
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 7a4cdb632508..7cad01af6dcd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -6,13 +6,14 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/cpu.h>
+#include <linux/debugfs.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/cache.h>
#include <asm/apic.h>
#include <asm/uv/uv.h>
-#include <linux/debugfs.h>
+#include <asm/kaiser.h>
/*
* TLB flushing, formerly SMP-only
@@ -34,6 +35,36 @@ struct flush_tlb_info {
unsigned long flush_end;
};
+static void load_new_mm_cr3(pgd_t *pgdir)
+{
+ unsigned long new_mm_cr3 = __pa(pgdir);
+
+ if (kaiser_enabled) {
+ /*
+ * We reuse the same PCID for different tasks, so we must
+ * flush all the entries for the PCID out when we change tasks.
+ * Flush KERN below, flush USER when returning to userspace in
+ * kaiser's SWITCH_USER_CR3 (_SWITCH_TO_USER_CR3) macro.
+ *
+ * invpcid_flush_single_context(X86_CR3_PCID_ASID_USER) could
+ * do it here, but can only be used if X86_FEATURE_INVPCID is
+ * available - and many machines support pcid without invpcid.
+ *
+ * If X86_CR3_PCID_KERN_FLUSH actually added something, then it
+ * would be needed in the write_cr3() below - if PCIDs enabled.
+ */
+ BUILD_BUG_ON(X86_CR3_PCID_KERN_FLUSH);
+ kaiser_flush_tlb_on_return_to_user();
+ }
+
+ /*
+ * Caution: many callers of this function expect
+ * that load_cr3() is serializing and orders TLB
+ * fills with respect to the mm_cpumask writes.
+ */
+ write_cr3(new_mm_cr3);
+}
+
/*
* We cannot call mmdrop() because we are in interrupt context,
* instead update mm->cpu_vm_mask.
@@ -45,7 +76,7 @@ void leave_mm(int cpu)
BUG();
if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
- load_cr3(swapper_pg_dir);
+ load_new_mm_cr3(swapper_pg_dir);
/*
* This gets called in the idle path where RCU
* functions differently. Tracing normally
@@ -105,7 +136,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
* ordering guarantee we need.
*
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
@@ -152,7 +183,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
* As above, load_cr3() is serializing and orders TLB
* fills with respect to the mm_cpumask write.
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
load_mm_cr4(next);
load_mm_ldt(next);
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index ef2e8c97e183..a461b6604fd9 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -725,7 +725,14 @@
*/
#define PERCPU_INPUT(cacheline) \
VMLINUX_SYMBOL(__per_cpu_start) = .; \
+ VMLINUX_SYMBOL(__per_cpu_user_mapped_start) = .; \
*(.data..percpu..first) \
+ . = ALIGN(cacheline); \
+ *(.data..percpu..user_mapped) \
+ *(.data..percpu..user_mapped..shared_aligned) \
+ . = ALIGN(PAGE_SIZE); \
+ *(.data..percpu..user_mapped..page_aligned) \
+ VMLINUX_SYMBOL(__per_cpu_user_mapped_end) = .; \
. = ALIGN(PAGE_SIZE); \
*(.data..percpu..page_aligned) \
. = ALIGN(cacheline); \
diff --git a/include/linux/kaiser.h b/include/linux/kaiser.h
new file mode 100644
index 000000000000..58c55b1589d0
--- /dev/null
+++ b/include/linux/kaiser.h
@@ -0,0 +1,52 @@
+#ifndef _LINUX_KAISER_H
+#define _LINUX_KAISER_H
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#include <asm/kaiser.h>
+
+static inline int kaiser_map_thread_stack(void *stack)
+{
+ /*
+ * Map that page of kernel stack on which we enter from user context.
+ */
+ return kaiser_add_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE, __PAGE_KERNEL);
+}
+
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+ /*
+ * Note: may be called even when kaiser_map_thread_stack() failed.
+ */
+ kaiser_remove_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE);
+}
+#else
+
+/*
+ * These stubs are used whenever CONFIG_PAGE_TABLE_ISOLATION is off, which
+ * includes architectures that support KAISER, but have it disabled.
+ */
+
+static inline void kaiser_init(void)
+{
+}
+static inline int kaiser_add_mapping(unsigned long addr,
+ unsigned long size, unsigned long flags)
+{
+ return 0;
+}
+static inline void kaiser_remove_mapping(unsigned long start,
+ unsigned long size)
+{
+}
+static inline int kaiser_map_thread_stack(void *stack)
+{
+ return 0;
+}
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+}
+
+#endif /* !CONFIG_PAGE_TABLE_ISOLATION */
+#endif /* _LINUX_KAISER_H */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ff88d6189411..b93b578cfa42 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -131,8 +131,9 @@ enum zone_stat_item {
NR_SLAB_RECLAIMABLE,
NR_SLAB_UNRECLAIMABLE,
NR_PAGETABLE, /* used for pagetables */
- NR_KERNEL_STACK,
/* Second 128 byte cacheline */
+ NR_KERNEL_STACK,
+ NR_KAISERTABLE,
NR_UNSTABLE_NFS, /* NFS unstable pages */
NR_BOUNCE,
NR_VMSCAN_WRITE,
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 8f16299ca068..8902f23bb770 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -35,6 +35,12 @@
#endif
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#define USER_MAPPED_SECTION "..user_mapped"
+#else
+#define USER_MAPPED_SECTION ""
+#endif
+
/*
* Base implementations of per-CPU variable declarations and definitions, where
* the section in which the variable is to be placed is provided by the
@@ -115,6 +121,12 @@
#define DEFINE_PER_CPU(type, name) \
DEFINE_PER_CPU_SECTION(type, name, "")
+#define DECLARE_PER_CPU_USER_MAPPED(type, name) \
+ DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION)
+
+#define DEFINE_PER_CPU_USER_MAPPED(type, name) \
+ DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION)
+
/*
* Declaration/definition used for per-CPU variables that must come first in
* the set of variables.
@@ -144,6 +156,14 @@
DEFINE_PER_CPU_SECTION(type, name, PER_CPU_SHARED_ALIGNED_SECTION) \
____cacheline_aligned_in_smp
+#define DECLARE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(type, name) \
+ DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION PER_CPU_SHARED_ALIGNED_SECTION) \
+ ____cacheline_aligned_in_smp
+
+#define DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(type, name) \
+ DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION PER_CPU_SHARED_ALIGNED_SECTION) \
+ ____cacheline_aligned_in_smp
+
#define DECLARE_PER_CPU_ALIGNED(type, name) \
DECLARE_PER_CPU_SECTION(type, name, PER_CPU_ALIGNED_SECTION) \
____cacheline_aligned
@@ -162,11 +182,21 @@
#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
DEFINE_PER_CPU_SECTION(type, name, "..page_aligned") \
__aligned(PAGE_SIZE)
+/*
+ * Declaration/definition used for per-CPU variables that must be page aligned and need to be mapped in user mode.
+ */
+#define DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name) \
+ DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION"..page_aligned") \
+ __aligned(PAGE_SIZE)
+
+#define DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name) \
+ DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION"..page_aligned") \
+ __aligned(PAGE_SIZE)
/*
* Declaration/definition used for per-CPU variables that must be read mostly.
*/
-#define DECLARE_PER_CPU_READ_MOSTLY(type, name) \
+#define DECLARE_PER_CPU_READ_MOSTLY(type, name) \
DECLARE_PER_CPU_SECTION(type, name, "..read_mostly")
#define DEFINE_PER_CPU_READ_MOSTLY(type, name) \
diff --git a/init/main.c b/init/main.c
index 9e64d7097f1a..49926d95442f 100644
--- a/init/main.c
+++ b/init/main.c
@@ -81,6 +81,7 @@
#include <linux/integrity.h>
#include <linux/proc_ns.h>
#include <linux/io.h>
+#include <linux/kaiser.h>
#include <asm/io.h>
#include <asm/bugs.h>
@@ -492,6 +493,7 @@ static void __init mm_init(void)
pgtable_init();
vmalloc_init();
ioremap_huge_init();
+ kaiser_init();
}
asmlinkage __visible void __init start_kernel(void)
diff --git a/kernel/fork.c b/kernel/fork.c
index 68cfda1c1800..ac00f14208b7 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -58,6 +58,7 @@
#include <linux/tsacct_kern.h>
#include <linux/cn_proc.h>
#include <linux/freezer.h>
+#include <linux/kaiser.h>
#include <linux/delayacct.h>
#include <linux/taskstats_kern.h>
#include <linux/random.h>
@@ -169,6 +170,7 @@ static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
static inline void free_thread_info(struct thread_info *ti)
{
+ kaiser_unmap_thread_stack(ti);
free_kmem_pages((unsigned long)ti, THREAD_SIZE_ORDER);
}
# else
@@ -352,6 +354,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
goto free_ti;
tsk->stack = ti;
+
+ err = kaiser_map_thread_stack(tsk->stack);
+ if (err)
+ goto free_ti;
#ifdef CONFIG_SECCOMP
/*
* We must handle setting up seccomp filters once we're under
diff --git a/mm/vmstat.c b/mm/vmstat.c
index c344e3609c53..324b7e90b4c5 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -736,6 +736,7 @@ const char * const vmstat_text[] = {
"nr_slab_unreclaimable",
"nr_page_table_pages",
"nr_kernel_stack",
+ "nr_overhead",
"nr_unstable",
"nr_bounce",
"nr_vmscan_write",
diff --git a/security/Kconfig b/security/Kconfig
index e45237897b43..a3ebb6ee5bd5 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -31,6 +31,16 @@ config SECURITY
If you are unsure how to answer this question, answer N.
+config PAGE_TABLE_ISOLATION
+ bool "Remove the kernel mapping in user mode"
+ default y
+ depends on X86_64 && SMP
+ help
+ This enforces a strict kernel and user space isolation, in order
+ to close hardware side channels on kernel address information.
+
+ If you are unsure how to answer this question, answer Y.
+
config SECURITYFS
bool "Enable the securityfs filesystem"
help
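(For reference, the "pti=" boot-time control added by this patch in kaiser_check_boottime_disable() accepts the following command-line forms, summarizing the parsing shown earlier:
	pti=on		force page-table isolation on
	pti=off		disable it (same effect as the older "nopti")
	pti=auto	the default: disabled on AMD CPUs, enabled elsewhere
On Xen PV guests the feature is silently disabled before any of these options are parsed.)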
This is a note to let you know that I've just added the patch titled
x86/kasan: Clear kasan_zero_page after TLB flush
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kasan-clear-kasan_zero_page-after-tlb-flush.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 69e0210fd01ff157d332102219aaf5c26ca8069b Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Date: Mon, 11 Jan 2016 15:51:18 +0300
Subject: x86/kasan: Clear kasan_zero_page after TLB flush
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
commit 69e0210fd01ff157d332102219aaf5c26ca8069b upstream.
Currently we clear kasan_zero_page before __flush_tlb_all(). This
works with the current implementation of native_flush_tlb[_global]()
because it doesn't do any writes to the kasan shadow memory.
But any subtle change made in native_flush_tlb*() could break this.
Also, the current code doesn't seem to work for paravirt guests (lguest).
Only after the TLB flush can we be sure that kasan_zero_page is not
used as early shadow anymore (instrumented code will not write to it).
So it should be cleared only after the TLB flush.
Signed-off-by: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Reviewed-by: Borislav Petkov <bp(a)suse.de>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)amacapital.net>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof(a)suse.com>
Cc: Oleg Nesterov <oleg(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Toshi Kani <toshi.kani(a)hp.com>
Cc: linux-mm(a)kvack.org
Link: http://lkml.kernel.org/r/1452516679-32040-2-git-send-email-aryabinin@virtuo…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Cc: Jamie Iles <jamie.iles(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kasan_init_64.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -121,11 +121,16 @@ void __init kasan_init(void)
kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
(void *)KASAN_SHADOW_END);
- memset(kasan_zero_page, 0, PAGE_SIZE);
-
load_cr3(init_level4_pgt);
__flush_tlb_all();
- init_task.kasan_depth = 0;
+ /*
+ * kasan_zero_page has been used as early shadow memory, thus it may
+ * contain some garbage. Now we can clear it, since after the TLB flush
+ * no one should write to it.
+ */
+ memset(kasan_zero_page, 0, PAGE_SIZE);
+
+ init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n");
}
Patches currently in stable-queue which might be from aryabinin(a)virtuozzo.com are
queue-4.4/x86-kasan-clear-kasan_zero_page-after-tlb-flush.patch
queue-4.4/x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
This is a note to let you know that I've just added the patch titled
x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-vdso-pvclock-simplify-and-speed-up-the-vdso-pvclock-reader.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 6b078f5de7fc0851af4102493c7b5bb07e49c4cb Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto(a)amacapital.net>
Date: Thu, 10 Dec 2015 19:20:19 -0800
Subject: x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader
From: Andy Lutomirski <luto(a)amacapital.net>
commit 6b078f5de7fc0851af4102493c7b5bb07e49c4cb upstream.
The pvclock vdso code was too abstracted to understand easily, and it
was excessively paranoid. Simplify it for a huge speedup.
This opens the door for additional simplifications, as the vdso
no longer accesses the pvti for any vcpu other than vcpu 0.
Before, vclock_gettime using kvm-clock took about 45ns on my
machine. With this change, it takes 29ns, which is almost as
fast as the pure TSC implementation.
Signed-off-by: Andy Lutomirski <luto(a)amacapital.net>
Reviewed-by: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: linux-mm(a)kvack.org
Link: http://lkml.kernel.org/r/6b51dcc41f1b101f963945c5ec7093d72bdac429.144970253…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Cc: Jamie Iles <jamie.iles(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/vdso/vclock_gettime.c | 79 +++++++++++++++++++----------------
1 file changed, 45 insertions(+), 34 deletions(-)
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -78,47 +78,58 @@ static notrace const struct pvclock_vsys
static notrace cycle_t vread_pvclock(int *mode)
{
- const struct pvclock_vsyscall_time_info *pvti;
+ const struct pvclock_vcpu_time_info *pvti = &get_pvti(0)->pvti;
cycle_t ret;
- u64 last;
- u32 version;
- u8 flags;
- unsigned cpu, cpu1;
-
+ u64 tsc, pvti_tsc;
+ u64 last, delta, pvti_system_time;
+ u32 version, pvti_tsc_to_system_mul, pvti_tsc_shift;
/*
- * Note: hypervisor must guarantee that:
- * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
- * 2. that per-CPU pvclock time info is updated if the
- * underlying CPU changes.
- * 3. that version is increased whenever underlying CPU
- * changes.
+ * Note: The kernel and hypervisor must guarantee that cpu ID
+ * number maps 1:1 to per-CPU pvclock time info.
+ *
+ * Because the hypervisor is entirely unaware of guest userspace
+ * preemption, it cannot guarantee that per-CPU pvclock time
+ * info is updated if the underlying CPU changes or that that
+ * version is increased whenever underlying CPU changes.
+ *
+ * On KVM, we are guaranteed that pvti updates for any vCPU are
+ * atomic as seen by *all* vCPUs. This is an even stronger
+ * guarantee than we get with a normal seqlock.
*
+ * On Xen, we don't appear to have that guarantee, but Xen still
+ * supplies a valid seqlock using the version field.
+ *
+ * We only do pvclock vdso timing at all if
+ * PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
+ * mean that all vCPUs have matching pvti and that the TSC is
+ * synced, so we can just look at vCPU 0's pvti.
*/
- do {
- cpu = __getcpu() & VGETCPU_CPU_MASK;
- /* TODO: We can put vcpu id into higher bits of pvti.version.
- * This will save a couple of cycles by getting rid of
- * __getcpu() calls (Gleb).
- */
-
- pvti = get_pvti(cpu);
-
- version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
-
- /*
- * Test we're still on the cpu as well as the version.
- * We could have been migrated just after the first
- * vgetcpu but before fetching the version, so we
- * wouldn't notice a version change.
- */
- cpu1 = __getcpu() & VGETCPU_CPU_MASK;
- } while (unlikely(cpu != cpu1 ||
- (pvti->pvti.version & 1) ||
- pvti->pvti.version != version));
- if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
+ if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
*mode = VCLOCK_NONE;
+ return 0;
+ }
+
+ do {
+ version = pvti->version;
+
+ /* This is also a read barrier, so we'll read version first. */
+ tsc = rdtsc_ordered();
+
+ pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
+ pvti_tsc_shift = pvti->tsc_shift;
+ pvti_system_time = pvti->system_time;
+ pvti_tsc = pvti->tsc_timestamp;
+
+ /* Make sure that the version double-check is last. */
+ smp_rmb();
+ } while (unlikely((version & 1) || version != pvti->version));
+
+ delta = tsc - pvti_tsc;
+ ret = pvti_system_time +
+ pvclock_scale_delta(delta, pvti_tsc_to_system_mul,
+ pvti_tsc_shift);
/* refer to tsc.c read_tsc() comment for rationale */
last = gtod->cycle_last;
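(For reference, not part of the patch: pvclock_scale_delta(), from the x86 pvclock header, computes approximately
	ns = ((tsc - pvti_tsc) << pvti_tsc_shift) * pvti_tsc_to_system_mul >> 32
with the shift applied as a right shift when pvti_tsc_shift is negative, and the multiply keeping the upper 64 bits of the 96-bit product.)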
Patches currently in stable-queue which might be from luto(a)amacapital.net are
queue-4.4/x86-vdso-pvclock-simplify-and-speed-up-the-vdso-pvclock-reader.patch
queue-4.4/x86-vdso-get-pvclock-data-from-the-vvar-vma-instead-of-the-fixmap.patch
This is the start of the stable review cycle for the 4.14.12 release.
There are 14 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Sat Jan 6 12:08:52 UTC 2018.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.12-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 4.14.12-rc1
Troy Kisky <troy.kisky(a)boundarydevices.com>
rtc: m41t80: remove unneeded checks from m41t80_sqw_set_rate
Troy Kisky <troy.kisky(a)boundarydevices.com>
rtc: m41t80: avoid i2c read in m41t80_sqw_is_prepared
Troy Kisky <troy.kisky(a)boundarydevices.com>
rtc: m41t80: avoid i2c read in m41t80_sqw_recalc_rate
Troy Kisky <troy.kisky(a)boundarydevices.com>
rtc: m41t80: fix m41t80_sqw_round_rate return value
Troy Kisky <troy.kisky(a)boundarydevices.com>
rtc: m41t80: m41t80_sqw_set_rate should return 0 on success
Steffen Klassert <steffen.klassert(a)secunet.com>
Revert "xfrm: Fix stack-out-of-bounds read in xfrm_state_find."
Nick Desaulniers <ndesaulniers(a)google.com>
x86/process: Define cpu_tss_rw in same section as declaration
Thomas Gleixner <tglx(a)linutronix.de>
x86/pti: Switch to kernel CR3 at early in entry_SYSCALL_compat()
Josh Poimboeuf <jpoimboe(a)redhat.com>
x86/dumpstack: Print registers for first stack frame
Josh Poimboeuf <jpoimboe(a)redhat.com>
x86/dumpstack: Fix partial register dumps
Thomas Gleixner <tglx(a)linutronix.de>
x86/pti: Make sure the user/kernel PTEs match
Tom Lendacky <thomas.lendacky(a)amd.com>
x86/cpu, x86/pti: Do not enable PTI on AMD processors
Eric Biggers <ebiggers(a)google.com>
capabilities: fix buffer overread on very short xattr
Kees Cook <keescook(a)chromium.org>
exec: Weaken dumpability for secureexec
-------------
Diffstat:
Makefile | 4 +-
arch/x86/entry/entry_64_compat.S | 13 +++----
arch/x86/include/asm/unwind.h | 17 ++++++--
arch/x86/kernel/cpu/common.c | 4 +-
arch/x86/kernel/dumpstack.c | 31 ++++++++++-----
arch/x86/kernel/process.c | 2 +-
arch/x86/kernel/stacktrace.c | 2 +-
arch/x86/mm/pti.c | 3 +-
drivers/rtc/rtc-m41t80.c | 84 ++++++++++++++++++----------------------
fs/exec.c | 9 ++++-
net/xfrm/xfrm_policy.c | 29 ++++++++------
security/commoncap.c | 21 +++++-----
12 files changed, 120 insertions(+), 99 deletions(-)
From: Eric Biggers <ebiggers(a)google.com>
syzkaller triggered a NULL pointer dereference in crypto_remove_spawns()
via a program that repeatedly and concurrently requests AEADs
"authenc(cmac(des3_ede-asm),pcbc-aes-aesni)" and hashes "cmac(des3_ede)"
through AF_ALG, where the hashes are requested as "untested"
(CRYPTO_ALG_TESTED is set in ->salg_mask but clear in ->salg_feat; this
causes the template to be instantiated for every request).
Although AF_ALG users really shouldn't be able to request an "untested"
algorithm, the NULL pointer dereference is actually caused by a
longstanding race condition where crypto_remove_spawns() can encounter
an instance which has had spawn(s) "grabbed" but hasn't yet been
registered, resulting in ->cra_users still being NULL.
We probably should properly initialize ->cra_users earlier, but that
would require updating many templates individually. For now just fix
the bug in a simple way that can easily be backported: make
crypto_remove_spawns() treat a NULL ->cra_users list as empty.
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
crypto/algapi.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/crypto/algapi.c b/crypto/algapi.c
index 9895cafcce7e..395b082d03a9 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -166,6 +166,18 @@ void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
spawn->alg = NULL;
spawns = &inst->alg.cra_users;
+
+ /*
+ * We may encounter an unregistered instance here, since
+ * an instance's spawns are set up prior to the instance
+ * being registered. An unregistered instance will have
+ * NULL ->cra_users.next, since ->cra_users isn't
+ * properly initialized until registration. But an
+ * unregistered instance cannot have any users, so treat
+ * it the same as ->cra_users being empty.
+ */
+ if (spawns->next == NULL)
+ break;
}
} while ((spawns = crypto_more_spawns(alg, &stack, &top,
&secondary_spawns)));
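For context, the guard works because a list_head is only usable after
INIT_LIST_HEAD(). A small model of the invariant the fix relies on
(illustrative only, not kernel code):

struct list_head { struct list_head *next, *prev; };

/*
 * A kzalloc()ed instance has an all-zero ->cra_users, so ->next stays
 * NULL until INIT_LIST_HEAD() runs at registration time and turns it
 * into a valid empty list (next == prev == the head itself).
 */
static int users_list_initialized(const struct list_head *users)
{
	return users->next != NULL;
}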
--
2.15.1
As mentioned in commit 88be58be886f ("drm/i915/fbdev:
Always forward hotplug events"), we have real, valid cases
of hotplugs where fbdev is not fully set up yet.
Unfortunately this removes the check after the sync point.
So probably we can live without it, or we need a more robust
serialization.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104158
Fixes: a45b30a6c5db ("drm/i915/fbdev: Serialise early hotplug events with async fbdev config")
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Lukas Wunner <lukas(a)wunner.de>
Cc: jrg2718(a)gmail.com
Cc: stable(a)vger.kernel.org
Signed-off-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
---
drivers/gpu/drm/i915/intel_fbdev.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index da48af11eb6b..7a6069b389f2 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -801,8 +801,7 @@ void intel_fbdev_output_poll_changed(struct drm_device *dev)
return;
intel_fbdev_sync(ifbdev);
- if (ifbdev->vma)
- drm_fb_helper_hotplug_event(&ifbdev->helper);
+ drm_fb_helper_hotplug_event(&ifbdev->helper);
}
void intel_fbdev_restore_mode(struct drm_device *dev)
--
2.13.6
On Thu, Jan 04, 2018 at 11:39:23PM +0000, Kenneth Graunke wrote:
> On Thursday, January 4, 2018 1:23:06 PM PST Chris Wilson wrote:
> > Quoting Kenneth Graunke (2018-01-04 19:38:05)
> > > Geminilake requires the 3D driver to select whether barriers are
> > > intended for compute shaders, or tessellation control shaders, by
> > > whacking a "Barrier Mode" bit in SLICE_COMMON_ECO_CHICKEN1 when
> > > switching pipelines. Failure to do this properly can result in GPU
> > > hangs.
> > >
> > > Unfortunately, this means it needs to switch mid-batch, so only
> > > userspace can properly set it. To facilitate this, the kernel needs
> > > to whitelist the register.
> > >
> > > Signed-off-by: Kenneth Graunke <kenneth(a)whitecape.org>
> > > Cc: stable(a)vger.kernel.org
> > > ---
> > > drivers/gpu/drm/i915/i915_reg.h | 2 ++
> > > drivers/gpu/drm/i915/intel_engine_cs.c | 5 +++++
> > > 2 files changed, 7 insertions(+)
> > >
> > > Hello,
> > >
> > > We unfortunately need to whitelist an extra register for GPU hang fix
> > > on Geminilake. Here's the corresponding Mesa patch:
> >
> > Thankfully it appears to be context saved. Has a w/a name been assigned
> > for this?
> > -Chris
>
> There doesn't appear to be one. The workaround page lists it, but there
> is no name. The register description has a note saying that you need to
> set this, but doesn't call it out as a workaround.
It mentions only BXT:ALL, with no mention of GLK.
Should we add it to both then?
>
> That's why I put a generic comment, rather than the name.
On the display side we started using the row name for this case, to help
find it easily later.
ex: "Display WA #0390: skl,kbl"
The number for this apparently is:
WA #0862
Maybe we could use this one to start:
/* GT WA #0862: bxt,glk */
GT? GEM?
Unnamed WA #0862?
Thanks,
Rodrigo.
>
> --Ken
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx(a)lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
From: Baoquan He <bhe(a)redhat.com>
Subject: mm/sparse.c: wrong allocation for mem_section
In commit 83e3c48729 ("mm/sparsemem: Allocate mem_section at runtime for
CONFIG_SPARSEMEM_EXTREME=y"), mem_section is allocated at runtime to save
memory. However, it allocates the first dimension of the array with
sizeof(struct mem_section), which costs extra memory; it should be
sizeof(struct mem_section *).
Fix it.
Link: http://lkml.kernel.org/r/1513932498-20350-1-git-send-email-bhe@redhat.com
Fixes: 83e3c48729 ("mm/sparsemem: Allocate mem_section at runtime for CONFIG_SPARSEMEM_EXTREME=y")
Signed-off-by: Baoquan He <bhe(a)redhat.com>
Tested-by: Dave Young <dyoung(a)redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Ingo Molnar <mingo(a)kernel.org>
Cc: Andy Lutomirski <luto(a)amacapital.net>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Atsushi Kumagai <ats-kumagai(a)wm.jp.nec.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/sparse.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -puN mm/sparse.c~mm-sparsec-wrong-allocation-for-mem_section mm/sparse.c
--- a/mm/sparse.c~mm-sparsec-wrong-allocation-for-mem_section
+++ a/mm/sparse.c
@@ -211,7 +211,7 @@ void __init memory_present(int nid, unsi
if (unlikely(!mem_section)) {
unsigned long size, align;
- size = sizeof(struct mem_section) * NR_SECTION_ROOTS;
+ size = sizeof(struct mem_section*) * NR_SECTION_ROOTS;
align = 1 << (INTERNODE_CACHE_SHIFT);
mem_section = memblock_virt_alloc(size, align);
}
_
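The waste is easy to see with a toy model of the two-level layout (the
struct contents and NR_SECTION_ROOTS below are placeholders, not the
kernel's values); on a 64-bit build the buggy expression yields 16 bytes
per root, the fixed one 8:

#include <stdio.h>

struct mem_section { unsigned long section_mem_map; unsigned long pad; };
#define NR_SECTION_ROOTS 2048UL	/* hypothetical value */

int main(void)
{
	/* buggy: a full struct per entry, for what is a pointer array */
	printf("%lu\n", (unsigned long)(sizeof(struct mem_section) * NR_SECTION_ROOTS));
	/* fixed: one pointer per entry */
	printf("%lu\n", (unsigned long)(sizeof(struct mem_section *) * NR_SECTION_ROOTS));
	return 0;
}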
From: Oleg Nesterov <oleg(a)redhat.com>
Subject: kernel/acct.c: fix the acct->needcheck check in check_free_space()
As Tsukada explains, the time_is_before_jiffies(acct->needcheck) check is
very wrong; we need time_is_after_jiffies() to make sys_acct() work.
Ignoring the overflows, the code should "goto out" if needcheck > jiffies,
while currently it checks "needcheck < jiffies" and thus in the likely
case check_free_space() does nothing until jiffies overflows.
In particular this means that sys_acct() is simply broken, acct_on() sets
acct->needcheck = jiffies and expects that check_free_space() should set
acct->active = 1 after the free-space check, but this won't happen if
jiffies increments in between.
This was broken by commit 32dc73086015 ("get rid of timer in kern/acct.c")
in 2011, then another (correct) commit 795a2f22a8ea ("acct() should honour
the limits from the very beginning") made the problem more visible.
Link: http://lkml.kernel.org/r/20171213133940.GA6554@redhat.com
Fixes: 32dc73086015 ("get rid of timer in kern/acct.c")
Reported-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Suggested-by: TSUKADA Koutaro <tsukada(a)ascade.co.jp>
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
kernel/acct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -puN kernel/acct.c~acct-fix-the-acct-needcheck-check-in-check_free_space kernel/acct.c
--- a/kernel/acct.c~acct-fix-the-acct-needcheck-check-in-check_free_space
+++ a/kernel/acct.c
@@ -102,7 +102,7 @@ static int check_free_space(struct bsd_a
{
struct kstatfs sbuf;
- if (time_is_before_jiffies(acct->needcheck))
+ if (time_is_after_jiffies(acct->needcheck))
goto out;
/* May block */
_
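For reference, a rough model of the jiffies helpers involved (the real,
wrap-around-safe definitions live in include/linux/jiffies.h):

#define time_after(a, b)	((long)((b) - (a)) < 0)
#define time_before(a, b)	time_after(b, a)
/* needcheck still in the future: nothing to do yet */
#define time_is_after_jiffies(a)	time_before(jiffies, a)
/* needcheck already in the past: due for a re-check */
#define time_is_before_jiffies(a)	time_after(jiffies, a)

With these, the fix reads naturally: bail out while acct->needcheck is
still in the future.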
From: Thomas Gleixner <tglx(a)linutronix.de>
The preparation for PTI which added CR3 switching to the entry code
misplaced the CR3 switch in entry_SYSCALL_compat().
With PTI enabled the entry code tries to access a per cpu variable after
switching to kernel GS. This fails because that variable is not mapped to
user space. This results in a double fault and in the worst case a kernel
crash.
Move the switch ahead of the access and clobber RSP which has been saved
already.
Fixes: 8a09317b895f ("x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching")
Reported-by: Lars Wendler <wendler.lars(a)web.de>
Reported-by: Laura Abbott <labbott(a)redhat.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Greg KH <gregkh(a)linuxfoundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031949200.1957@nanos
---
arch/x86/entry/entry_64_compat.S | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 40f17009ec20..98d5358e4041 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -190,8 +190,13 @@ ENTRY(entry_SYSCALL_compat)
/* Interrupts are off on entry. */
swapgs
- /* Stash user ESP and switch to the kernel stack. */
+ /* Stash user ESP */
movl %esp, %r8d
+
+ /* Use %rsp as scratch reg. User ESP is stashed in r8 */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
+ /* Switch to the kernel stack */
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/* Construct struct pt_regs on stack */
@@ -219,12 +224,6 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
pushq $0 /* pt_regs->r14 = 0 */
pushq $0 /* pt_regs->r15 = 0 */
- /*
- * We just saved %rdi so it is safe to clobber. It is not
- * preserved during the C calls inside TRACE_IRQS_OFF anyway.
- */
- SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-
/*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
--
2.15.1
Geminilake requires the 3D driver to select whether barriers are
intended for compute shaders, or tessellation control shaders, by
whacking a "Barrier Mode" bit in SLICE_COMMON_ECO_CHICKEN1 when
switching pipelines. Failure to do this properly can result in GPU
hangs.
Unfortunately, this means it needs to switch mid-batch, so only
userspace can properly set it. To facilitate this, the kernel needs
to whitelist the register.
Signed-off-by: Kenneth Graunke <kenneth(a)whitecape.org>
Cc: stable(a)vger.kernel.org
---
drivers/gpu/drm/i915/i915_reg.h | 2 ++
drivers/gpu/drm/i915/intel_engine_cs.c | 5 +++++
2 files changed, 7 insertions(+)
Hello,
We unfortunately need to whitelist an extra register for GPU hang fix
on Geminilake. Here's the corresponding Mesa patch:
https://patchwork.freedesktop.org/patch/196047/
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 966e4df9700e..505c605eff98 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -7079,6 +7079,8 @@ enum {
#define GEN9_SLICE_COMMON_ECO_CHICKEN0 _MMIO(0x7308)
#define DISABLE_PIXEL_MASK_CAMMING (1<<14)
+#define GEN9_SLICE_COMMON_ECO_CHICKEN1 _MMIO(0x731c)
+
#define GEN7_L3SQCREG1 _MMIO(0xB010)
#define VLV_B0_WA_L3SQCREG1_VALUE 0x00D30000
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index ebdcbcbacb3c..d64a9f907550 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1338,6 +1338,11 @@ static int glk_init_workarounds(struct intel_engine_cs *engine)
if (ret)
return ret;
+ /* Userspace needs to toggle "Barrier Mode" to avoid GPU hangs */
+ ret = wa_ring_whitelist_reg(engine, GEN9_SLICE_COMMON_ECO_CHICKEN1);
+ if (ret)
+ return ret;
+
/* WaToEnableHwFixForPushConstHWBug:glk */
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
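For context, whitelisting here means letting unprivileged batch buffers
program the register. A rough model of the effect (the slot mechanism
below reflects how gen9 FORCE_TO_NONPRIV whitelisting generally works
and is an assumption, not a quote of the driver):

/*
 * wa_ring_whitelist_reg() records the register offset in one of the
 * engine's RING_FORCE_TO_NONPRIV slots; afterwards a userspace batch
 * may flip the bit itself, e.g. (hypothetical batch dwords):
 *
 *   MI_LOAD_REGISTER_IMM(1)
 *   GEN9_SLICE_COMMON_ECO_CHICKEN1   (0x731c)
 *   <desired barrier-mode value>
 */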
--
2.15.1
This is a note to let you know that I've just added the patch titled
usbip: vudc_tx: fix v_send_ret_submit() vulnerability to null xfer
to my usb git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git
in the usb-linus branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will hopefully also be merged in Linus's tree for the
next -rc kernel release.
If you have any questions about this process, please let me know.
>From 5fd77a3a0e408c23ab4002a57db980e46bc16e72 Mon Sep 17 00:00:00 2001
From: Shuah Khan <shuahkh(a)osg.samsung.com>
Date: Fri, 22 Dec 2017 19:23:47 -0700
Subject: usbip: vudc_tx: fix v_send_ret_submit() vulnerability to null xfer
buffer
v_send_ret_submit() handles urbs with a null transfer_buffer when it
replays a packet with potentially malicious data that could contain a
null buffer.
Add a check for the condition when actual_length > 0 and transfer_buffer
is null.
Signed-off-by: Shuah Khan <shuahkh(a)osg.samsung.com>
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/usb/usbip/vudc_tx.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/usbip/vudc_tx.c b/drivers/usb/usbip/vudc_tx.c
index 1440ae0919ec..3ccb17c3e840 100644
--- a/drivers/usb/usbip/vudc_tx.c
+++ b/drivers/usb/usbip/vudc_tx.c
@@ -85,6 +85,13 @@ static int v_send_ret_submit(struct vudc *udc, struct urbp *urb_p)
memset(&pdu_header, 0, sizeof(pdu_header));
memset(&msg, 0, sizeof(msg));
+ if (urb->actual_length > 0 && !urb->transfer_buffer) {
+ dev_err(&udc->gadget.dev,
+ "urb: actual_length %d transfer_buffer null\n",
+ urb->actual_length);
+ return -1;
+ }
+
if (urb_p->type == USB_ENDPOINT_XFER_ISOC)
iovnum = 2 + urb->number_of_packets;
else
@@ -100,8 +107,8 @@ static int v_send_ret_submit(struct vudc *udc, struct urbp *urb_p)
/* 1. setup usbip_header */
setup_ret_submit_pdu(&pdu_header, urb_p);
- usbip_dbg_stub_tx("setup txdata seqnum: %d urb: %p\n",
- pdu_header.base.seqnum, urb);
+ usbip_dbg_stub_tx("setup txdata seqnum: %d\n",
+ pdu_header.base.seqnum);
usbip_header_correct_endian(&pdu_header, 1);
iov[iovnum].iov_base = &pdu_header;
--
2.15.1
This is a note to let you know that I've just added the patch titled
usbip: remove kernel addresses from usb device and urb debug msgs
to my usb git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git
in the usb-linus branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will hopefully also be merged in Linus's tree for the
next -rc kernel release.
If you have any questions about this process, please let me know.
>From e1346fd87c71a1f61de1fe476ec8df1425ac931c Mon Sep 17 00:00:00 2001
From: Shuah Khan <shuahkh(a)osg.samsung.com>
Date: Fri, 22 Dec 2017 17:00:06 -0700
Subject: usbip: remove kernel addresses from usb device and urb debug msgs
usbip_dump_usb_device() and usbip_dump_urb() print kernel addresses.
Remove kernel addresses from usb device and urb debug msgs and improve
the message content.
Instead of printing parent device and bus addresses, print parent device
and bus names.
Signed-off-by: Shuah Khan <shuahkh(a)osg.samsung.com>
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/usb/usbip/usbip_common.c | 17 +++--------------
1 file changed, 3 insertions(+), 14 deletions(-)
diff --git a/drivers/usb/usbip/usbip_common.c b/drivers/usb/usbip/usbip_common.c
index 7b219d9109b4..ee2bbce24584 100644
--- a/drivers/usb/usbip/usbip_common.c
+++ b/drivers/usb/usbip/usbip_common.c
@@ -91,7 +91,7 @@ static void usbip_dump_usb_device(struct usb_device *udev)
dev_dbg(dev, " devnum(%d) devpath(%s) usb speed(%s)",
udev->devnum, udev->devpath, usb_speed_string(udev->speed));
- pr_debug("tt %p, ttport %d\n", udev->tt, udev->ttport);
+ pr_debug("tt hub ttport %d\n", udev->ttport);
dev_dbg(dev, " ");
for (i = 0; i < 16; i++)
@@ -124,12 +124,8 @@ static void usbip_dump_usb_device(struct usb_device *udev)
}
pr_debug("\n");
- dev_dbg(dev, "parent %p, bus %p\n", udev->parent, udev->bus);
-
- dev_dbg(dev,
- "descriptor %p, config %p, actconfig %p, rawdescriptors %p\n",
- &udev->descriptor, udev->config,
- udev->actconfig, udev->rawdescriptors);
+ dev_dbg(dev, "parent %s, bus %s\n", dev_name(&udev->parent->dev),
+ udev->bus->bus_name);
dev_dbg(dev, "have_langid %d, string_langid %d\n",
udev->have_langid, udev->string_langid);
@@ -237,9 +233,6 @@ void usbip_dump_urb(struct urb *urb)
dev = &urb->dev->dev;
- dev_dbg(dev, " urb :%p\n", urb);
- dev_dbg(dev, " dev :%p\n", urb->dev);
-
usbip_dump_usb_device(urb->dev);
dev_dbg(dev, " pipe :%08x ", urb->pipe);
@@ -248,11 +241,9 @@ void usbip_dump_urb(struct urb *urb)
dev_dbg(dev, " status :%d\n", urb->status);
dev_dbg(dev, " transfer_flags :%08X\n", urb->transfer_flags);
- dev_dbg(dev, " transfer_buffer :%p\n", urb->transfer_buffer);
dev_dbg(dev, " transfer_buffer_length:%d\n",
urb->transfer_buffer_length);
dev_dbg(dev, " actual_length :%d\n", urb->actual_length);
- dev_dbg(dev, " setup_packet :%p\n", urb->setup_packet);
if (urb->setup_packet && usb_pipetype(urb->pipe) == PIPE_CONTROL)
usbip_dump_usb_ctrlrequest(
@@ -262,8 +253,6 @@ void usbip_dump_urb(struct urb *urb)
dev_dbg(dev, " number_of_packets :%d\n", urb->number_of_packets);
dev_dbg(dev, " interval :%d\n", urb->interval);
dev_dbg(dev, " error_count :%d\n", urb->error_count);
- dev_dbg(dev, " context :%p\n", urb->context);
- dev_dbg(dev, " complete :%p\n", urb->complete);
}
EXPORT_SYMBOL_GPL(usbip_dump_urb);
--
2.15.1
This is a note to let you know that I've just added the patch titled
usbip: fix vudc_rx: harden CMD_SUBMIT path to handle malicious input
to my usb git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git
in the usb-linus branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will hopefully also be merged in Linus's tree for the
next -rc kernel release.
If you have any questions about this process, please let me know.
>From b78d830f0049ef1966dc1e0ebd1ec2a594e2cf25 Mon Sep 17 00:00:00 2001
From: Shuah Khan <shuahkh(a)osg.samsung.com>
Date: Fri, 22 Dec 2017 19:23:46 -0700
Subject: usbip: fix vudc_rx: harden CMD_SUBMIT path to handle malicious input
Harden CMD_SUBMIT path to handle malicious input that could trigger
large memory allocations. Add checks to validate transfer_buffer_length
and number_of_packets to protect against bad input requesting
unbounded memory allocations.
Signed-off-by: Shuah Khan <shuahkh(a)osg.samsung.com>
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/usb/usbip/vudc_rx.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/usb/usbip/vudc_rx.c b/drivers/usb/usbip/vudc_rx.c
index df1e30989148..1e8a23d92cb4 100644
--- a/drivers/usb/usbip/vudc_rx.c
+++ b/drivers/usb/usbip/vudc_rx.c
@@ -120,6 +120,25 @@ static int v_recv_cmd_submit(struct vudc *udc,
urb_p->new = 1;
urb_p->seqnum = pdu->base.seqnum;
+ if (urb_p->ep->type == USB_ENDPOINT_XFER_ISOC) {
+ /* validate packet size and number of packets */
+ unsigned int maxp, packets, bytes;
+
+ maxp = usb_endpoint_maxp(urb_p->ep->desc);
+ maxp *= usb_endpoint_maxp_mult(urb_p->ep->desc);
+ bytes = pdu->u.cmd_submit.transfer_buffer_length;
+ packets = DIV_ROUND_UP(bytes, maxp);
+
+ if (pdu->u.cmd_submit.number_of_packets < 0 ||
+ pdu->u.cmd_submit.number_of_packets > packets) {
+ dev_err(&udc->gadget.dev,
+ "CMD_SUBMIT: isoc invalid num packets %d\n",
+ pdu->u.cmd_submit.number_of_packets);
+ ret = -EMSGSIZE;
+ goto free_urbp;
+ }
+ }
+
ret = alloc_urb_from_cmd(&urb_p->urb, pdu, urb_p->ep->type);
if (ret) {
usbip_event_add(&udc->ud, VUDC_EVENT_ERROR_MALLOC);
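To make the bound concrete, a hypothetical worked example (the endpoint
values are assumed, not taken from the patch):

/*
 * maxp    = 1024 * 3                   (usb_endpoint_maxp * maxp_mult)
 * bytes   = 10000                      (transfer_buffer_length from the PDU)
 * packets = DIV_ROUND_UP(10000, 3072)  = 4
 *
 * A CMD_SUBMIT claiming number_of_packets > 4 (or < 0) is now rejected
 * with -EMSGSIZE instead of driving an oversized allocation.
 */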
--
2.15.1
This is a note to let you know that I've just added the patch titled
Bluetooth: Prevent stack info leak from the EFS element.
to my char-misc git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
in the char-misc-linus branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will hopefully also be merged in Linus's tree for the
next -rc kernel release.
If you have any questions about this process, please let me know.
>From 06e7e776ca4d36547e503279aeff996cbb292c16 Mon Sep 17 00:00:00 2001
From: Ben Seri <ben(a)armis.com>
Date: Fri, 8 Dec 2017 15:14:47 +0100
Subject: Bluetooth: Prevent stack info leak from the EFS element.
In the function l2cap_parse_conf_rsp and in the function
l2cap_parse_conf_req the following variable is declared without
initialization:
struct l2cap_conf_efs efs;
In addition, when parsing input configuration parameters in both of
these functions, the switch case for handling EFS elements may skip the
memcpy call that will write to the efs variable:
...
case L2CAP_CONF_EFS:
if (olen == sizeof(efs))
memcpy(&efs, (void *)val, olen);
...
The olen in the above if is attacker controlled, and regardless of that
if, in both of these functions the efs variable would eventually be
added to the outgoing configuration request that is being built:
l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs), (unsigned long) &efs);
So by sending a configuration request, or response, that contains an
L2CAP_CONF_EFS element with an element length that is not
sizeof(efs), the memcpy into the uninitialized efs variable can be
skipped, and the uninitialized stack variable (16 bytes) is returned
to the attacker.
This issue has been assigned CVE-2017-1000410
Cc: Marcel Holtmann <marcel(a)holtmann.org>
Cc: Gustavo Padovan <gustavo(a)padovan.org>
Cc: Johan Hedberg <johan.hedberg(a)gmail.com>
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Ben Seri <ben(a)armis.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/bluetooth/l2cap_core.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 43ba91c440bc..fc6615d59165 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -3363,9 +3363,10 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
break;
case L2CAP_CONF_EFS:
- remote_efs = 1;
- if (olen == sizeof(efs))
+ if (olen == sizeof(efs)) {
+ remote_efs = 1;
memcpy(&efs, (void *) val, olen);
+ }
break;
case L2CAP_CONF_EWS:
@@ -3584,16 +3585,17 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len,
break;
case L2CAP_CONF_EFS:
- if (olen == sizeof(efs))
+ if (olen == sizeof(efs)) {
memcpy(&efs, (void *)val, olen);
- if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
- efs.stype != L2CAP_SERV_NOTRAFIC &&
- efs.stype != chan->local_stype)
- return -ECONNREFUSED;
+ if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
+ efs.stype != L2CAP_SERV_NOTRAFIC &&
+ efs.stype != chan->local_stype)
+ return -ECONNREFUSED;
- l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
- (unsigned long) &efs, endptr - ptr);
+ l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
+ (unsigned long) &efs, endptr - ptr);
+ }
break;
case L2CAP_CONF_FCS:
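An alternative hardening, not what this patch does (the patch instead
gates every use of efs on a successful full-size copy), would be to zero
the stack variable at its declaration:

struct l2cap_conf_efs efs = {};	/* or: memset(&efs, 0, sizeof(efs)); */

Either way, no uninitialized stack bytes can reach the outgoing
configuration PDU.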
--
2.15.1
This is a note to let you know that I've just added the patch titled
mei: me: allow runtime pm for platform with D0i3
to my char-misc git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
in the char-misc-next branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will also be merged in the next major kernel release
during the merge window.
If you have any questions about this process, please let me know.
>From cc365dcf0e56271bedf3de95f88922abe248e951 Mon Sep 17 00:00:00 2001
From: Tomas Winkler <tomas.winkler(a)intel.com>
Date: Tue, 2 Jan 2018 12:01:41 +0200
Subject: mei: me: allow runtime pm for platform with D0i3
From the PCI power documentation:
"The driver itself should not call pm_runtime_allow(), though. Instead,
it should let user space or some platform-specific code do that (user space
can do it via sysfs as stated above)..."
However, the S0ix residency cannot be reached without the MEI device
getting into a low power state. Hence, for MEI devices that support D0i3,
it's better to make runtime power management mandatory and not rely on
system integration such as udev rules.
This policy cannot be applied globally, as some older platforms
were found to have broken power management.
Cc: <stable(a)vger.kernel.org> v4.13+
Cc: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler(a)intel.com>
Reviewed-by: Alexander Usyskin <alexander.usyskin(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/misc/mei/pci-me.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index f4f17552c9b8..4a0ccda4d04b 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -238,8 +238,11 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
*/
mei_me_set_pm_domain(dev);
- if (mei_pg_is_enabled(dev))
+ if (mei_pg_is_enabled(dev)) {
pm_runtime_put_noidle(&pdev->dev);
+ if (hw->d0i3_supported)
+ pm_runtime_allow(&pdev->dev);
+ }
dev_dbg(&pdev->dev, "initialization successful.\n");
--
2.15.1
From: Oliver Hartkopp <socketcan(a)hartkopp.net>
Picking up the patch from Serhey Popovych (commit 191cdb3822e5df6b3c8,
"veth: Be more robust on network device creation when no attributes").
When the peer name attribute is not provided, the former implementation
tries to register the given device name twice, which leads to -EEXIST.
If only one device name is given, apply an automatically generated and
valid name for the peer.
Cc: Serhey Popovych <serhe.popovych(a)gmail.com>
Signed-off-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Cc: linux-stable <stable(a)vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
---
drivers/net/can/vxcan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
index 8404e8852a0f..b4c4a2c76437 100644
--- a/drivers/net/can/vxcan.c
+++ b/drivers/net/can/vxcan.c
@@ -194,7 +194,7 @@ static int vxcan_newlink(struct net *net, struct net_device *dev,
tbp = peer_tb;
}
- if (tbp[IFLA_IFNAME]) {
+ if (ifmp && tbp[IFLA_IFNAME]) {
nla_strlcpy(ifname, tbp[IFLA_IFNAME], IFNAMSIZ);
name_assign_type = NET_NAME_USER;
} else {
--
2.15.1
The current code hides a couple of bugs.
- The global variable 'clock_event_ddata' is overwritten each time the
init function is invoked.
This is fixed with a kmemdup instead of assigning the global variable. That
prevents a memory corruption when several timers are defined in the DT.
- The clockevent's event_handler is NULL if the time framework does
not select the clockevent when registering it. This is fine, but the
init code generates an interrupt in any case, leading to a dereference
of this NULL pointer.
The stm32 timer works with shadow registers, a mechanism to cache the
registers. When a change is done in one buffered register, we need to
artificially generate an event to force the timer to copy the content
of the register to the shadowed register.
The auto-reload register (ARR) is one of the shadowed registers, as well
as the prescaler register (PSC), so in order to force the copy, we issue
an event which in turn leads to an interrupt and the NULL dereference.
This is fixed by swapping two lines so that we clear the status register
before enabling the update event interrupt.
As this kernel crash is resulting from the combination of these two bugs,
the fixes are grouped into a single patch.
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano(a)linaro.org>
---
drivers/clocksource/timer-stm32.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/clocksource/timer-stm32.c b/drivers/clocksource/timer-stm32.c
index 8f24237..4bfeb99 100644
--- a/drivers/clocksource/timer-stm32.c
+++ b/drivers/clocksource/timer-stm32.c
@@ -106,6 +106,10 @@ static int __init stm32_clockevent_init(struct device_node *np)
unsigned long rate, max_delta;
int irq, ret, bits, prescaler = 1;
+ data = kmemdup(&clock_event_ddata, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
@@ -156,8 +160,8 @@ static int __init stm32_clockevent_init(struct device_node *np)
writel_relaxed(prescaler - 1, data->base + TIM_PSC);
writel_relaxed(TIM_EGR_UG, data->base + TIM_EGR);
- writel_relaxed(TIM_DIER_UIE, data->base + TIM_DIER);
writel_relaxed(0, data->base + TIM_SR);
+ writel_relaxed(TIM_DIER_UIE, data->base + TIM_DIER);
data->periodic_top = DIV_ROUND_CLOSEST(rate, prescaler * HZ);
@@ -184,6 +188,7 @@ static int __init stm32_clockevent_init(struct device_node *np)
err_clk_enable:
clk_put(clk);
err_clk_get:
+ kfree(data);
return ret;
}
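The resulting bring-up order can be summarized as follows (a sketch of
the rule, using the driver's TIM_* names):

/*
 * 1. program TIM_PSC (a preloaded register)
 * 2. write TIM_EGR_UG to TIM_EGR     -- force the preload-to-shadow copy;
 *                                       this also latches an update event
 * 3. write 0 to TIM_SR               -- ack the event from step 2
 * 4. write TIM_DIER_UIE to TIM_DIER  -- only now unmask the update IRQ
 *
 * Unmasking UIE before clearing SR (the old order) fires an interrupt
 * whose handler may call a still-NULL event_handler.
 */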
--
2.7.4
In profile.html, if settings.ENABLE_REST_API == False, trying to render a
link to the generate_token page will raise a NoReverseMatch exception, so
we shouldn't render that. In any case, if the REST API is disabled, we
really shouldn't render the API token section of the page at all.
Only render the API token and generation link if settings.ENABLE_REST_API
is True.
Cc: stable
Reported-by: Tomas Novotny <tomas(a)novotny.cz>
Closes: #138 ("NoReverseMatch exception on user login with disabled REST API")
Fixes: 85c8f369204a ("views: Provide a way to view, (re)generate tokens")
Signed-off-by: Andrew Donnellan <andrew.donnellan(a)au1.ibm.com>
---
patchwork/templates/patchwork/profile.html | 2 ++
patchwork/views/user.py | 2 ++
2 files changed, 4 insertions(+)
diff --git a/patchwork/templates/patchwork/profile.html b/patchwork/templates/patchwork/profile.html
index 75c4f59..4ca78da 100644
--- a/patchwork/templates/patchwork/profile.html
+++ b/patchwork/templates/patchwork/profile.html
@@ -140,6 +140,7 @@ address.</p>
<th>Password:</th>
<td><a href="{% url 'password_change' %}">Change password</a>
</tr>
+{% if rest_api_enabled %}
<tr>
<th>API Token:</th>
<td>
@@ -162,6 +163,7 @@ address.</p>
</form>
</td>
</tr>
+{% endif %}
</table>
</div>
diff --git a/patchwork/views/user.py b/patchwork/views/user.py
index d99fedf..693c02d 100644
--- a/patchwork/views/user.py
+++ b/patchwork/views/user.py
@@ -128,6 +128,8 @@ def profile(request):
context['linked_emails'] = people
context['linkform'] = EmailForm()
context['api_token'] = request.user.profile.token
+ if settings.ENABLE_REST_API:
+ context['rest_api_enabled'] = True
return render(request, 'patchwork/profile.html', context)
--
2.11.0
This is a note to let you know that I've just added the patch titled
rtc: m41t80: remove unneeded checks from m41t80_sqw_set_rate
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rtc-m41t80-remove-unneeded-checks-from-m41t80_sqw_set_rate.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 05a03bf260e0480bfc0db91b1fdbc2115e3f193b Mon Sep 17 00:00:00 2001
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
Date: Thu, 2 Nov 2017 18:58:16 -0700
Subject: rtc: m41t80: remove unneeded checks from m41t80_sqw_set_rate
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
commit 05a03bf260e0480bfc0db91b1fdbc2115e3f193b upstream.
m41t80_sqw_set_rate will be called with the result from
m41t80_sqw_round_rate, so we might as well make
m41t80_sqw_set_rate(n) behave the same as
m41t80_sqw_set_rate(m41t80_sqw_round_rate(n)).
As Russell King wrote[1],
"clk_round_rate() is supposed to tell you what you end up with if you
ask clk_set_rate() to set the exact same value you passed in - but
clk_round_rate() won't modify the hardware."
[1]
http://lists.infradead.org/pipermail/linux-arm-kernel/2012-January/080175.h…
Signed-off-by: Troy Kisky <troy.kisky(a)boundarydevices.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni(a)free-electrons.com>
Cc: Christoph Fritz <chf.fritz(a)googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/rtc/rtc-m41t80.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
--- a/drivers/rtc/rtc-m41t80.c
+++ b/drivers/rtc/rtc-m41t80.c
@@ -490,17 +490,12 @@ static int m41t80_sqw_set_rate(struct cl
M41T80_REG_WDAY : M41T80_REG_SQW;
int reg, ret, val = 0;
- if (rate) {
- if (!is_power_of_2(rate))
- return -EINVAL;
- val = ilog2(rate);
- if (val == ilog2(M41T80_SQW_MAX_FREQ))
- val = 1;
- else if (val < (ilog2(M41T80_SQW_MAX_FREQ) - 1))
- val = ilog2(M41T80_SQW_MAX_FREQ) - val;
- else
- return -EINVAL;
- }
+ if (rate >= M41T80_SQW_MAX_FREQ)
+ val = 1;
+ else if (rate >= M41T80_SQW_MAX_FREQ / 4)
+ val = 2;
+ else if (rate)
+ val = 15 - ilog2(rate);
reg = i2c_smbus_read_byte_data(client, reg_sqw);
if (reg < 0)
Patches currently in stable-queue which might be from troy.kisky(a)boundarydevices.com are
queue-4.14/rtc-m41t80-m41t80_sqw_set_rate-should-return-0-on-success.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_is_prepared.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_recalc_rate.patch
queue-4.14/rtc-m41t80-remove-unneeded-checks-from-m41t80_sqw_set_rate.patch
queue-4.14/rtc-m41t80-fix-m41t80_sqw_round_rate-return-value.patch
This is a note to let you know that I've just added the patch titled
rtc: m41t80: m41t80_sqw_set_rate should return 0 on success
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rtc-m41t80-m41t80_sqw_set_rate-should-return-0-on-success.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From de6042d2fa8afe22b76e3c68fd6e9584c9415a3b Mon Sep 17 00:00:00 2001
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
Date: Thu, 2 Nov 2017 18:58:12 -0700
Subject: rtc: m41t80: m41t80_sqw_set_rate should return 0 on success
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
commit de6042d2fa8afe22b76e3c68fd6e9584c9415a3b upstream.
Previously it was returning -EINVAL upon success.
Signed-off-by: Troy Kisky <troy.kisky(a)boundarydevices.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni(a)free-electrons.com>
Cc: Christoph Fritz <chf.fritz(a)googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/rtc/rtc-m41t80.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
--- a/drivers/rtc/rtc-m41t80.c
+++ b/drivers/rtc/rtc-m41t80.c
@@ -510,10 +510,7 @@ static int m41t80_sqw_set_rate(struct cl
reg = (reg & 0x0f) | (val << 4);
ret = i2c_smbus_write_byte_data(client, reg_sqw, reg);
- if (ret < 0)
- return ret;
-
- return -EINVAL;
+ return ret;
}
static int m41t80_sqw_control(struct clk_hw *hw, bool enable)
Patches currently in stable-queue which might be from troy.kisky(a)boundarydevices.com are
queue-4.14/rtc-m41t80-m41t80_sqw_set_rate-should-return-0-on-success.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_is_prepared.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_recalc_rate.patch
queue-4.14/rtc-m41t80-remove-unneeded-checks-from-m41t80_sqw_set_rate.patch
queue-4.14/rtc-m41t80-fix-m41t80_sqw_round_rate-return-value.patch
This is a note to let you know that I've just added the patch titled
rtc: m41t80: fix m41t80_sqw_round_rate return value
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rtc-m41t80-fix-m41t80_sqw_round_rate-return-value.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From c8384bb04261b9d32fe7402a6068ddaf38913b23 Mon Sep 17 00:00:00 2001
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
Date: Thu, 2 Nov 2017 18:58:13 -0700
Subject: rtc: m41t80: fix m41t80_sqw_round_rate return value
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
commit c8384bb04261b9d32fe7402a6068ddaf38913b23 upstream.
Previously it was returning the best of:
32768, 8192, 1024, 64, 2, 0
Now, the best of:
32768, 8192, 4096, 2048, 1024, 512, 256, 128,
64, 32, 16, 8, 4, 2, 1, 0
Signed-off-by: Troy Kisky <troy.kisky(a)boundarydevices.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni(a)free-electrons.com>
Cc: Christoph Fritz <chf.fritz(a)googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/rtc/rtc-m41t80.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
--- a/drivers/rtc/rtc-m41t80.c
+++ b/drivers/rtc/rtc-m41t80.c
@@ -468,18 +468,13 @@ static unsigned long m41t80_sqw_recalc_r
static long m41t80_sqw_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *prate)
{
- int i, freq = M41T80_SQW_MAX_FREQ;
-
- if (freq <= rate)
- return freq;
-
- for (i = 2; i <= ilog2(M41T80_SQW_MAX_FREQ); i++) {
- freq /= 1 << i;
- if (freq <= rate)
- return freq;
- }
-
- return 0;
+ if (rate >= M41T80_SQW_MAX_FREQ)
+ return M41T80_SQW_MAX_FREQ;
+ if (rate >= M41T80_SQW_MAX_FREQ / 4)
+ return M41T80_SQW_MAX_FREQ / 4;
+ if (!rate)
+ return 0;
+ return 1 << ilog2(rate);
}
static int m41t80_sqw_set_rate(struct clk_hw *hw, unsigned long rate,
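A quick worked example of the new rounding (M41T80_SQW_MAX_FREQ is 32768
in this driver):

/*
 * m41t80_sqw_round_rate(hw, 3000, NULL)
 *   old code walked 32768 -> 8192 -> 1024 and returned 1024
 *   new code returns 1 << ilog2(3000) == 2048
 * Every power of two from 1 Hz up to 32768 Hz is now reachable.
 */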
Patches currently in stable-queue which might be from troy.kisky(a)boundarydevices.com are
queue-4.14/rtc-m41t80-m41t80_sqw_set_rate-should-return-0-on-success.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_is_prepared.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_recalc_rate.patch
queue-4.14/rtc-m41t80-remove-unneeded-checks-from-m41t80_sqw_set_rate.patch
queue-4.14/rtc-m41t80-fix-m41t80_sqw_round_rate-return-value.patch
This is a note to let you know that I've just added the patch titled
rtc: m41t80: avoid i2c read in m41t80_sqw_is_prepared
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_is_prepared.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 13bb1d78f2e372ec0d9b30489ac63768240140fc Mon Sep 17 00:00:00 2001
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
Date: Thu, 2 Nov 2017 18:58:15 -0700
Subject: rtc: m41t80: avoid i2c read in m41t80_sqw_is_prepared
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
commit 13bb1d78f2e372ec0d9b30489ac63768240140fc upstream.
This is a little more efficient and avoids the warning
WARNING: possible circular locking dependency detected
4.14.0-rc7-00010 #16 Not tainted
------------------------------------------------------
kworker/2:1/70 is trying to acquire lock:
(prepare_lock){+.+.}, at: [<c049300c>] clk_prepare_lock+0x80/0xf4
but task is already holding lock:
(i2c_register_adapter){+.+.}, at: [<c0690b04>]
i2c_adapter_lock_bus+0x14/0x18
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (i2c_register_adapter){+.+.}:
rt_mutex_lock+0x44/0x5c
i2c_adapter_lock_bus+0x14/0x18
i2c_transfer+0xa8/0xbc
i2c_smbus_xfer+0x20c/0x5d8
i2c_smbus_read_byte_data+0x38/0x48
m41t80_sqw_is_prepared+0x18/0x28
Signed-off-by: Troy Kisky <troy.kisky(a)boundarydevices.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni(a)free-electrons.com>
Cc: Christoph Fritz <chf.fritz(a)googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/rtc/rtc-m41t80.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/rtc/rtc-m41t80.c
+++ b/drivers/rtc/rtc-m41t80.c
@@ -155,6 +155,7 @@ struct m41t80_data {
#ifdef CONFIG_COMMON_CLK
struct clk_hw sqw;
unsigned long freq;
+ unsigned int sqwe;
#endif
};
@@ -527,7 +528,10 @@ static int m41t80_sqw_control(struct clk
else
ret &= ~M41T80_ALMON_SQWE;
- return i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, ret);
+ ret = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, ret);
+ if (!ret)
+ m41t80->sqwe = enable;
+ return ret;
}
static int m41t80_sqw_prepare(struct clk_hw *hw)
@@ -542,14 +546,7 @@ static void m41t80_sqw_unprepare(struct
static int m41t80_sqw_is_prepared(struct clk_hw *hw)
{
- struct m41t80_data *m41t80 = sqw_to_m41t80_data(hw);
- struct i2c_client *client = m41t80->client;
- int ret = i2c_smbus_read_byte_data(client, M41T80_REG_ALARM_MON);
-
- if (ret < 0)
- return ret;
-
- return !!(ret & M41T80_ALMON_SQWE);
+ return sqw_to_m41t80_data(hw)->sqwe;
}
static const struct clk_ops m41t80_sqw_ops = {
Patches currently in stable-queue which might be from troy.kisky(a)boundarydevices.com are
queue-4.14/rtc-m41t80-m41t80_sqw_set_rate-should-return-0-on-success.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_is_prepared.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_recalc_rate.patch
queue-4.14/rtc-m41t80-remove-unneeded-checks-from-m41t80_sqw_set_rate.patch
queue-4.14/rtc-m41t80-fix-m41t80_sqw_round_rate-return-value.patch
This is a note to let you know that I've just added the patch titled
rtc: m41t80: avoid i2c read in m41t80_sqw_recalc_rate
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_recalc_rate.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 2cb90ed3de1e279dbaf23df141f54eb9fb1861e6 Mon Sep 17 00:00:00 2001
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
Date: Thu, 2 Nov 2017 18:58:14 -0700
Subject: rtc: m41t80: avoid i2c read in m41t80_sqw_recalc_rate
From: Troy Kisky <troy.kisky(a)boundarydevices.com>
commit 2cb90ed3de1e279dbaf23df141f54eb9fb1861e6 upstream.
This is a little more efficient, and avoids the warning
WARNING: possible circular locking dependency detected
4.14.0-rc7-00007 #14 Not tainted
------------------------------------------------------
alsactl/330 is trying to acquire lock:
(prepare_lock){+.+.}, at: [<c049300c>] clk_prepare_lock+0x80/0xf4
but task is already holding lock:
(i2c_register_adapter){+.+.}, at: [<c0690ae0>]
i2c_adapter_lock_bus+0x14/0x18
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (i2c_register_adapter){+.+.}:
rt_mutex_lock+0x44/0x5c
i2c_adapter_lock_bus+0x14/0x18
i2c_transfer+0xa8/0xbc
i2c_smbus_xfer+0x20c/0x5d8
i2c_smbus_read_byte_data+0x38/0x48
m41t80_sqw_recalc_rate+0x24/0x58
Signed-off-by: Troy Kisky <troy.kisky(a)boundarydevices.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni(a)free-electrons.com>
Cc: Christoph Fritz <chf.fritz(a)googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/rtc/rtc-m41t80.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
--- a/drivers/rtc/rtc-m41t80.c
+++ b/drivers/rtc/rtc-m41t80.c
@@ -154,6 +154,7 @@ struct m41t80_data {
struct rtc_device *rtc;
#ifdef CONFIG_COMMON_CLK
struct clk_hw sqw;
+ unsigned long freq;
#endif
};
@@ -443,26 +444,28 @@ static SIMPLE_DEV_PM_OPS(m41t80_pm, m41t
#ifdef CONFIG_COMMON_CLK
#define sqw_to_m41t80_data(_hw) container_of(_hw, struct m41t80_data, sqw)
-static unsigned long m41t80_sqw_recalc_rate(struct clk_hw *hw,
- unsigned long parent_rate)
+static unsigned long m41t80_decode_freq(int setting)
+{
+ return (setting == 0) ? 0 : (setting == 1) ? M41T80_SQW_MAX_FREQ :
+ M41T80_SQW_MAX_FREQ >> setting;
+}
+
+static unsigned long m41t80_get_freq(struct m41t80_data *m41t80)
{
- struct m41t80_data *m41t80 = sqw_to_m41t80_data(hw);
struct i2c_client *client = m41t80->client;
int reg_sqw = (m41t80->features & M41T80_FEATURE_SQ_ALT) ?
M41T80_REG_WDAY : M41T80_REG_SQW;
int ret = i2c_smbus_read_byte_data(client, reg_sqw);
- unsigned long val = M41T80_SQW_MAX_FREQ;
if (ret < 0)
return 0;
+ return m41t80_decode_freq(ret >> 4);
+}
- ret >>= 4;
- if (ret == 0)
- val = 0;
- else if (ret > 1)
- val = val / (1 << ret);
-
- return val;
+static unsigned long m41t80_sqw_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ return sqw_to_m41t80_data(hw)->freq;
}
static long m41t80_sqw_round_rate(struct clk_hw *hw, unsigned long rate,
@@ -505,6 +508,8 @@ static int m41t80_sqw_set_rate(struct cl
reg = (reg & 0x0f) | (val << 4);
ret = i2c_smbus_write_byte_data(client, reg_sqw, reg);
+ if (!ret)
+ m41t80->freq = m41t80_decode_freq(val);
return ret;
}
@@ -579,6 +584,7 @@ static struct clk *m41t80_sqw_register_c
init.parent_names = NULL;
init.num_parents = 0;
m41t80->sqw.init = &init;
+ m41t80->freq = m41t80_get_freq(m41t80);
/* optional override of the clockname */
of_property_read_string(node, "clock-output-names", &init.name);
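The cached frequency comes from the 4-bit SQW field; the new decode
helper maps it as follows (M41T80_SQW_MAX_FREQ == 32768):

/*
 * setting:  0  1      2     3     4     ... 14  15
 * freq:     0  32768  8192  4096  2048  ... 2   1    (Hz)
 *
 * i.e. m41t80_decode_freq(n) == 32768 >> n for n >= 2,
 * 32768 for n == 1, and 0 for n == 0.
 */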
Patches currently in stable-queue which might be from troy.kisky(a)boundarydevices.com are
queue-4.14/rtc-m41t80-m41t80_sqw_set_rate-should-return-0-on-success.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_is_prepared.patch
queue-4.14/rtc-m41t80-avoid-i2c-read-in-m41t80_sqw_recalc_rate.patch
queue-4.14/rtc-m41t80-remove-unneeded-checks-from-m41t80_sqw_set_rate.patch
queue-4.14/rtc-m41t80-fix-m41t80_sqw_round_rate-return-value.patch
This is a note to let you know that I've just added the patch titled
Revert "xfrm: Fix stack-out-of-bounds read in xfrm_state_find."
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
revert-xfrm-fix-stack-out-of-bounds-read-in-xfrm_state_find.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 94802151894d482e82c324edf2c658f8e6b96508 Mon Sep 17 00:00:00 2001
From: Steffen Klassert <steffen.klassert(a)secunet.com>
Date: Wed, 15 Nov 2017 06:40:57 +0100
Subject: Revert "xfrm: Fix stack-out-of-bounds read in xfrm_state_find."
From: Steffen Klassert <steffen.klassert(a)secunet.com>
commit 94802151894d482e82c324edf2c658f8e6b96508 upstream.
This reverts commit c9f3f813d462c72dbe412cee6a5cbacf13c4ad5e.
This commit breaks transport mode when the policy template
has wildcard addresses configured, so revert it.
Signed-off-by: Steffen Klassert <steffen.klassert(a)secunet.com>
Cc: Derek Robson <robsonde(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/xfrm/xfrm_policy.c | 29 ++++++++++++++++++-----------
1 file changed, 18 insertions(+), 11 deletions(-)
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -1362,29 +1362,36 @@ xfrm_tmpl_resolve_one(struct xfrm_policy
struct net *net = xp_net(policy);
int nx;
int i, error;
+ xfrm_address_t *daddr = xfrm_flowi_daddr(fl, family);
+ xfrm_address_t *saddr = xfrm_flowi_saddr(fl, family);
xfrm_address_t tmp;
for (nx = 0, i = 0; i < policy->xfrm_nr; i++) {
struct xfrm_state *x;
- xfrm_address_t *local;
- xfrm_address_t *remote;
+ xfrm_address_t *remote = daddr;
+ xfrm_address_t *local = saddr;
struct xfrm_tmpl *tmpl = &policy->xfrm_vec[i];
- remote = &tmpl->id.daddr;
- local = &tmpl->saddr;
- if (xfrm_addr_any(local, tmpl->encap_family)) {
- error = xfrm_get_saddr(net, fl->flowi_oif,
- &tmp, remote,
- tmpl->encap_family, 0);
- if (error)
- goto fail;
- local = &tmp;
+ if (tmpl->mode == XFRM_MODE_TUNNEL ||
+ tmpl->mode == XFRM_MODE_BEET) {
+ remote = &tmpl->id.daddr;
+ local = &tmpl->saddr;
+ if (xfrm_addr_any(local, tmpl->encap_family)) {
+ error = xfrm_get_saddr(net, fl->flowi_oif,
+ &tmp, remote,
+ tmpl->encap_family, 0);
+ if (error)
+ goto fail;
+ local = &tmp;
+ }
}
x = xfrm_state_find(remote, local, fl, tmpl, policy, &error, family);
if (x && x->km.state == XFRM_STATE_VALID) {
xfrm[nx++] = x;
+ daddr = remote;
+ saddr = local;
continue;
}
if (x) {
Patches currently in stable-queue which might be from steffen.klassert(a)secunet.com are
queue-4.14/revert-xfrm-fix-stack-out-of-bounds-read-in-xfrm_state_find.patch
This is a note to let you know that I've just added the patch titled
x86/pti: Switch to kernel CR3 at early in entry_SYSCALL_compat()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-pti-switch-to-kernel-cr3-at-early-in-entry_syscall_compat.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From d7732ba55c4b6a2da339bb12589c515830cfac2c Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Wed, 3 Jan 2018 19:52:04 +0100
Subject: x86/pti: Switch to kernel CR3 at early in entry_SYSCALL_compat()
From: Thomas Gleixner <tglx(a)linutronix.de>
commit d7732ba55c4b6a2da339bb12589c515830cfac2c upstream.
The preparation for PTI which added CR3 switching to the entry code
misplaced the CR3 switch in entry_SYSCALL_compat().
With PTI enabled the entry code tries to access a per cpu variable after
switching to kernel GS. This fails because that variable is not mapped to
user space. This results in a double fault and in the worst case a kernel
crash.
Move the switch ahead of the access and clobber RSP which has been saved
already.
Fixes: 8a09317b895f ("x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching")
Reported-by: Lars Wendler <wendler.lars(a)web.de>
Reported-by: Laura Abbott <labbott(a)redhat.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Greg KH <gregkh(a)linuxfoundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Juergen Gross <jgross(a)suse.com>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031949200.1957@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64_compat.S | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -190,8 +190,13 @@ ENTRY(entry_SYSCALL_compat)
/* Interrupts are off on entry. */
swapgs
- /* Stash user ESP and switch to the kernel stack. */
+ /* Stash user ESP */
movl %esp, %r8d
+
+ /* Use %rsp as scratch reg. User ESP is stashed in r8 */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
+ /* Switch to the kernel stack */
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/* Construct struct pt_regs on stack */
@@ -220,12 +225,6 @@ GLOBAL(entry_SYSCALL_compat_after_hwfram
pushq $0 /* pt_regs->r15 = 0 */
/*
- * We just saved %rdi so it is safe to clobber. It is not
- * preserved during the C calls inside TRACE_IRQS_OFF anyway.
- */
- SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-
- /*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
*/
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.14/x86-pti-switch-to-kernel-cr3-at-early-in-entry_syscall_compat.patch
queue-4.14/x86-dumpstack-print-registers-for-first-stack-frame.patch
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
queue-4.14/x86-pti-make-sure-the-user-kernel-ptes-match.patch
queue-4.14/x86-cpu-x86-pti-do-not-enable-pti-on-amd-processors.patch
queue-4.14/x86-dumpstack-fix-partial-register-dumps.patch
This is a note to let you know that I've just added the patch titled
x86/process: Define cpu_tss_rw in same section as declaration
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb Mon Sep 17 00:00:00 2001
From: Nick Desaulniers <ndesaulniers(a)google.com>
Date: Wed, 3 Jan 2018 12:39:52 -0800
Subject: x86/process: Define cpu_tss_rw in same section as declaration
From: Nick Desaulniers <ndesaulniers(a)google.com>
commit 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb upstream.
cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED
but then defined with DEFINE_PER_CPU_SHARED_ALIGNED
leading to section mismatch warnings.
Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
it's mapped to the cpu entry area and must be page aligned.
[ tglx: Massaged changelog a bit ]
Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
Suggested-by: Thomas Gleixner <tglx(a)linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers(a)google.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: thomas.lendacky(a)amd.com
Cc: Borislav Petkov <bpetkov(a)suse.de>
Cc: tklauser(a)distanz.ch
Cc: minipli(a)googlemail.com
Cc: me(a)kylehuey.com
Cc: namit(a)vmware.com
Cc: luto(a)kernel.org
Cc: jpoimboe(a)redhat.com
Cc: tj(a)kernel.org
Cc: cl(a)linux.com
Cc: bp(a)suse.de
Cc: thgarnie(a)google.com
Cc: kirill.shutemov(a)linux.intel.com
Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -47,7 +47,7 @@
* section. Since TSS's are completely CPU-local, we want them
* on exact cacheline boundaries, to eliminate cacheline ping-pong.
*/
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
.x86_tss = {
/*
* .sp0 is only used when entering ring 0 from a lower
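The rule behind the fix is general toolchain hygiene rather than anything
x86-specific: a declaration and its definition must agree on placement and
alignment attributes. A minimal userspace sketch of the same idea, assuming
GCC/Clang attribute syntax (identifiers are illustrative, not the kernel's):

#include <stdio.h>

/* The declaration promises page alignment to every translation unit... */
extern int shared_state __attribute__((aligned(4096)));

/* ...so the definition must grant exactly the same alignment, otherwise
 * the declared and emitted objects no longer match. */
int shared_state __attribute__((aligned(4096))) = 0;

int main(void)
{
    printf("shared_state at %p\n", (void *)&shared_state);
    return 0;
}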
Patches currently in stable-queue which might be from ndesaulniers(a)google.com are
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
This is a note to let you know that I've just added the patch titled
x86/pti: Make sure the user/kernel PTEs match
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-pti-make-sure-the-user-kernel-ptes-match.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 52994c256df36fda9a715697431cba9daecb6b11 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Wed, 3 Jan 2018 15:57:59 +0100
Subject: x86/pti: Make sure the user/kernel PTEs match
From: Thomas Gleixner <tglx(a)linutronix.de>
commit 52994c256df36fda9a715697431cba9daecb6b11 upstream.
Meelis reported that his K8 Athlon64 emits MCE warnings when PTI is
enabled:
[Hardware Error]: Error Addr: 0x0000ffff81e000e0
[Hardware Error]: MC1 Error: L1 TLB multimatch.
[Hardware Error]: cache level: L1, tx: INSN
The address is in the entry area, which is mapped into kernel _AND_ user
space. That's special because we switch CR3 while we are executing
there.
User mapping:
0xffffffff81e00000-0xffffffff82000000 2M ro PSE GLB x pmd
Kernel mapping:
0xffffffff81000000-0xffffffff82000000 16M ro PSE x pmd
So the K8 is complaining that the TLB entries differ. They differ in the
GLB bit.
Drop the GLB bit when installing the user shared mapping.
Fixes: 6dc72c3cbca0 ("x86/mm/pti: Share entry text PMD")
Reported-by: Meelis Roos <mroos(a)linux.ee>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Tested-by: Meelis Roos <mroos(a)linux.ee>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Tom Lendacky <thomas.lendacky(a)amd.com>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031407180.1957@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/pti.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -367,7 +367,8 @@ static void __init pti_setup_espfix64(vo
static void __init pti_clone_entry_text(void)
{
pti_clone_pmds((unsigned long) __entry_text_start,
- (unsigned long) __irqentry_text_end, _PAGE_RW);
+ (unsigned long) __irqentry_text_end,
+ _PAGE_RW | _PAGE_GLOBAL);
}
/*
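For context, the third argument of pti_clone_pmds() is a mask of flags to
clear from the cloned entries, so adding _PAGE_GLOBAL above strips GLB from
the user copy. A rough userspace sketch of that masking, with made-up bit
positions:

#include <stdint.h>
#include <stdio.h>

#define PAGE_RW     (1ULL << 1)    /* illustrative, not the real layout */
#define PAGE_GLOBAL (1ULL << 8)

/* Build the user copy of a pmd, dropping the given flags so kernel and
 * user TLB entries no longer disagree on GLB. */
static uint64_t clone_pmd(uint64_t pmd, uint64_t clear)
{
    return pmd & ~clear;
}

int main(void)
{
    uint64_t kern = 0x200000 | PAGE_RW | PAGE_GLOBAL;

    printf("kernel %#llx -> user %#llx\n",
           (unsigned long long)kern,
           (unsigned long long)clone_pmd(kern, PAGE_RW | PAGE_GLOBAL));
    return 0;
}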
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.14/x86-pti-switch-to-kernel-cr3-at-early-in-entry_syscall_compat.patch
queue-4.14/x86-dumpstack-print-registers-for-first-stack-frame.patch
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
queue-4.14/x86-pti-make-sure-the-user-kernel-ptes-match.patch
queue-4.14/x86-cpu-x86-pti-do-not-enable-pti-on-amd-processors.patch
queue-4.14/x86-dumpstack-fix-partial-register-dumps.patch
This is a note to let you know that I've just added the patch titled
x86/dumpstack: Fix partial register dumps
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-dumpstack-fix-partial-register-dumps.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From a9cdbe72c4e8bf3b38781c317a79326e2e1a230d Mon Sep 17 00:00:00 2001
From: Josh Poimboeuf <jpoimboe(a)redhat.com>
Date: Sun, 31 Dec 2017 10:18:06 -0600
Subject: x86/dumpstack: Fix partial register dumps
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Josh Poimboeuf <jpoimboe(a)redhat.com>
commit a9cdbe72c4e8bf3b38781c317a79326e2e1a230d upstream.
The show_regs_safe() logic is wrong. When there's an iret stack frame,
it prints the entire pt_regs -- most of which is random stack data --
instead of just the five registers at the end.
show_regs_safe() is also poorly named: the on_stack() checks aren't for
safety. Rename the function to show_regs_if_on_stack() and add a
comment to explain why the checks are needed.
These issues were introduced with the "partial register dump" feature of
the following commit:
b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")
That patch had gone through a few iterations of development, and the
above issues were artifacts from a previous iteration of the patch where
'regs' pointed directly to the iret frame rather than to the (partially
empty) pt_regs.
Tested-by: Alexander Tsoy <alexander(a)tsoy.me>
Signed-off-by: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Toralf Förster <toralf.foerster(a)gmx.de>
Fixes: b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")
Link: http://lkml.kernel.org/r/5b05b8b344f59db2d3d50dbdeba92d60f2304c54.151473674…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/unwind.h | 17 +++++++++++++----
arch/x86/kernel/dumpstack.c | 28 ++++++++++++++++++++--------
arch/x86/kernel/stacktrace.c | 2 +-
3 files changed, 34 insertions(+), 13 deletions(-)
--- a/arch/x86/include/asm/unwind.h
+++ b/arch/x86/include/asm/unwind.h
@@ -56,18 +56,27 @@ void unwind_start(struct unwind_state *s
#if defined(CONFIG_UNWINDER_ORC) || defined(CONFIG_UNWINDER_FRAME_POINTER)
/*
- * WARNING: The entire pt_regs may not be safe to dereference. In some cases,
- * only the iret frame registers are accessible. Use with caution!
+ * If 'partial' returns true, only the iret frame registers are valid.
*/
-static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state)
+static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state,
+ bool *partial)
{
if (unwind_done(state))
return NULL;
+ if (partial) {
+#ifdef CONFIG_UNWINDER_ORC
+ *partial = !state->full_regs;
+#else
+ *partial = false;
+#endif
+ }
+
return state->regs;
}
#else
-static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state)
+static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state,
+ bool *partial)
{
return NULL;
}
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -76,12 +76,23 @@ void show_iret_regs(struct pt_regs *regs
regs->sp, regs->flags);
}
-static void show_regs_safe(struct stack_info *info, struct pt_regs *regs)
+static void show_regs_if_on_stack(struct stack_info *info, struct pt_regs *regs,
+ bool partial)
{
- if (on_stack(info, regs, sizeof(*regs)))
+ /*
+ * These on_stack() checks aren't strictly necessary: the unwind code
+ * has already validated the 'regs' pointer. The checks are done for
+ * ordering reasons: if the registers are on the next stack, we don't
+ * want to print them out yet. Otherwise they'll be shown as part of
+ * the wrong stack. Later, when show_trace_log_lvl() switches to the
+ * next stack, this function will be called again with the same regs so
+ * they can be printed in the right context.
+ */
+ if (!partial && on_stack(info, regs, sizeof(*regs))) {
__show_regs(regs, 0);
- else if (on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
- IRET_FRAME_SIZE)) {
+
+ } else if (partial && on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
+ IRET_FRAME_SIZE)) {
/*
* When an interrupt or exception occurs in entry code, the
* full pt_regs might not have been saved yet. In that case
@@ -98,6 +109,7 @@ void show_trace_log_lvl(struct task_stru
struct stack_info stack_info = {0};
unsigned long visit_mask = 0;
int graph_idx = 0;
+ bool partial;
printk("%sCall Trace:\n", log_lvl);
@@ -140,7 +152,7 @@ void show_trace_log_lvl(struct task_stru
printk("%s <%s>\n", log_lvl, stack_name);
if (regs)
- show_regs_safe(&stack_info, regs);
+ show_regs_if_on_stack(&stack_info, regs, partial);
/*
* Scan the stack, printing any text addresses we find. At the
@@ -164,7 +176,7 @@ void show_trace_log_lvl(struct task_stru
/*
* Don't print regs->ip again if it was already printed
- * by show_regs_safe() below.
+ * by show_regs_if_on_stack().
*/
if (regs && stack == &regs->ip)
goto next;
@@ -199,9 +211,9 @@ next:
unwind_next_frame(&state);
/* if the frame has entry regs, print them */
- regs = unwind_get_entry_regs(&state);
+ regs = unwind_get_entry_regs(&state, &partial);
if (regs)
- show_regs_safe(&stack_info, regs);
+ show_regs_if_on_stack(&stack_info, regs, partial);
}
if (stack_name)
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -98,7 +98,7 @@ static int __save_stack_trace_reliable(s
for (unwind_start(&state, task, NULL, NULL); !unwind_done(&state);
unwind_next_frame(&state)) {
- regs = unwind_get_entry_regs(&state);
+ regs = unwind_get_entry_regs(&state, NULL);
if (regs) {
/*
* Kernel mode registers on the stack indicate an
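The new interface is the familiar out-parameter pattern: the callee hands
back its main result and reports, through an optional bool pointer, whether
only part of that result is valid. A self-contained sketch with simplified
types (not the kernel's structures):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct regs { long ip, sp; };

static struct regs *get_entry_regs(struct regs *r, bool have_full,
                                   bool *partial)
{
    if (partial)          /* callers may pass NULL, as stacktrace.c does */
        *partial = !have_full;
    return r;
}

int main(void)
{
    struct regs r = { .ip = 0x1000, .sp = 0x2000 };
    bool partial;

    if (get_entry_regs(&r, false, &partial))
        printf("regs found, partial=%d\n", partial);
    return 0;
}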
Patches currently in stable-queue which might be from jpoimboe(a)redhat.com are
queue-4.14/x86-dumpstack-print-registers-for-first-stack-frame.patch
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
queue-4.14/x86-dumpstack-fix-partial-register-dumps.patch
This is a note to let you know that I've just added the patch titled
x86/dumpstack: Print registers for first stack frame
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-dumpstack-print-registers-for-first-stack-frame.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 3ffdeb1a02be3086f1411a15c5b9c481fa28e21f Mon Sep 17 00:00:00 2001
From: Josh Poimboeuf <jpoimboe(a)redhat.com>
Date: Sun, 31 Dec 2017 10:18:07 -0600
Subject: x86/dumpstack: Print registers for first stack frame
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Josh Poimboeuf <jpoimboe(a)redhat.com>
commit 3ffdeb1a02be3086f1411a15c5b9c481fa28e21f upstream.
In the stack dump code, if the frame after the starting pt_regs is also
a regs frame, the registers don't get printed. Fix that.
Reported-by: Andy Lutomirski <luto(a)amacapital.net>
Tested-by: Alexander Tsoy <alexander(a)tsoy.me>
Signed-off-by: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Toralf Förster <toralf.foerster(a)gmx.de>
Fixes: 3b3fa11bc700 ("x86/dumpstack: Print any pt_regs found on the stack")
Link: http://lkml.kernel.org/r/396f84491d2f0ef64eda4217a2165f5712f6a115.151473674…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/dumpstack.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -115,6 +115,7 @@ void show_trace_log_lvl(struct task_stru
unwind_start(&state, task, regs, stack);
stack = stack ? : get_stack_pointer(task, regs);
+ regs = unwind_get_entry_regs(&state, &partial);
/*
* Iterate through the stacks, starting with the current stack pointer.
@@ -132,7 +133,7 @@ void show_trace_log_lvl(struct task_stru
* - hardirq stack
* - entry stack
*/
- for (regs = NULL; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
+ for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
const char *stack_name;
if (get_stack_info(stack, task, &stack_info, &visit_mask)) {
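The fix is plain loop priming: fetch the value for iteration zero before
entering the loop and refresh it on each step, so the first frame receives
the same treatment as all later ones. A trivial standalone illustration:

#include <stdio.h>

static int frame_regs(int frame)
{
    return frame * 16;    /* stand-in for unwind_get_entry_regs() */
}

int main(void)
{
    int frame = 0;
    int regs = frame_regs(frame);    /* prime before the loop... */

    for (; frame < 3; regs = frame_regs(++frame))
        printf("frame %d regs %d\n", frame, regs);  /* ...frame 0 included */
    return 0;
}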
Patches currently in stable-queue which might be from jpoimboe(a)redhat.com are
queue-4.14/x86-dumpstack-print-registers-for-first-stack-frame.patch
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
queue-4.14/x86-dumpstack-fix-partial-register-dumps.patch
This is a note to let you know that I've just added the patch titled
x86/cpu, x86/pti: Do not enable PTI on AMD processors
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-cpu-x86-pti-do-not-enable-pti-on-amd-processors.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 694d99d40972f12e59a3696effee8a376b79d7c8 Mon Sep 17 00:00:00 2001
From: Tom Lendacky <thomas.lendacky(a)amd.com>
Date: Tue, 26 Dec 2017 23:43:54 -0600
Subject: x86/cpu, x86/pti: Do not enable PTI on AMD processors
From: Tom Lendacky <thomas.lendacky(a)amd.com>
commit 694d99d40972f12e59a3696effee8a376b79d7c8 upstream.
AMD processors are not subject to the types of attacks that the kernel
page table isolation feature protects against. The AMD microarchitecture
does not allow memory references, including speculative references, that
access higher privileged data when running in a lesser privileged mode, in
cases where that access would result in a page fault.
Disable page table isolation by default on AMD processors by not setting
the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI
is set.
Signed-off-by: Tom Lendacky <thomas.lendacky(a)amd.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Borislav Petkov <bp(a)suse.de>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Link: https://lkml.kernel.org/r/20171227054354.20369.94587.stgit@tlendack-t1.amdo…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/cpu/common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -899,8 +899,8 @@ static void __init early_identify_cpu(st
setup_force_cpu_cap(X86_FEATURE_ALWAYS);
- /* Assume for now that ALL x86 CPUs are insecure */
- setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
+ if (c->x86_vendor != X86_VENDOR_AMD)
+ setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
fpu__init_system(c);
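The change amounts to a vendor gate in front of the bug flag; restated in
plain C with an illustrative enum (the real check uses c->x86_vendor and
X86_VENDOR_AMD):

#include <stdbool.h>
#include <stdio.h>

enum vendor { VENDOR_INTEL, VENDOR_AMD, VENDOR_OTHER };

/* PTI is forced on by marking the CPU insecure; AMD parts opt out. */
static bool cpu_insecure(enum vendor v)
{
    return v != VENDOR_AMD;
}

int main(void)
{
    printf("AMD insecure? %d\n", cpu_insecure(VENDOR_AMD));
    printf("Intel insecure? %d\n", cpu_insecure(VENDOR_INTEL));
    return 0;
}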
Patches currently in stable-queue which might be from thomas.lendacky(a)amd.com are
queue-4.14/x86-process-define-cpu_tss_rw-in-same-section-as-declaration.patch
queue-4.14/x86-pti-make-sure-the-user-kernel-ptes-match.patch
queue-4.14/x86-cpu-x86-pti-do-not-enable-pti-on-amd-processors.patch
This is a note to let you know that I've just added the patch titled
exec: Weaken dumpability for secureexec
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
exec-weaken-dumpability-for-secureexec.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e816c201aed5232171f8eb80b5d46ae6516683b9 Mon Sep 17 00:00:00 2001
From: Kees Cook <keescook(a)chromium.org>
Date: Tue, 2 Jan 2018 15:21:33 -0800
Subject: exec: Weaken dumpability for secureexec
From: Kees Cook <keescook(a)chromium.org>
commit e816c201aed5232171f8eb80b5d46ae6516683b9 upstream.
This is a logical revert of commit e37fdb785a5f ("exec: Use secureexec
for setting dumpability")
This weakens dumpability back to checking only for uid/gid changes in
current (which is useless), but userspace depends on dumpability not
being tied to secureexec.
https://bugzilla.redhat.com/show_bug.cgi?id=1528633
Reported-by: Tom Horsley <horsley1953(a)gmail.com>
Fixes: e37fdb785a5f ("exec: Use secureexec for setting dumpability")
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/exec.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1350,9 +1350,14 @@ void setup_new_exec(struct linux_binprm
current->sas_ss_sp = current->sas_ss_size = 0;
- /* Figure out dumpability. */
+ /*
+ * Figure out dumpability. Note that this checking only of current
+ * is wrong, but userspace depends on it. This should be testing
+ * bprm->secureexec instead.
+ */
if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP ||
- bprm->secureexec)
+ !(uid_eq(current_euid(), current_uid()) &&
+ gid_eq(current_egid(), current_gid())))
set_dumpable(current->mm, suid_dumpable);
else
set_dumpable(current->mm, SUID_DUMP_USER);
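The reinstated condition is the traditional "did exec change our effective
IDs" test, which in plain POSIX terms reads roughly as follows (a sketch of
the check, not the kernel implementation):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* A process stays dumpable only if its effective IDs still match its
 * real IDs, i.e. exec did not raise privileges. */
static bool dumpable_by_ids(void)
{
    return geteuid() == getuid() && getegid() == getgid();
}

int main(void)
{
    printf("dumpable: %d\n", dumpable_by_ids());
    return 0;
}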
Patches currently in stable-queue which might be from keescook(a)chromium.org are
queue-4.14/exec-weaken-dumpability-for-secureexec.patch
This is a note to let you know that I've just added the patch titled
capabilities: fix buffer overread on very short xattr
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
capabilities-fix-buffer-overread-on-very-short-xattr.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From dc32b5c3e6e2ef29cef76d9ce1b92d394446150e Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Mon, 1 Jan 2018 09:28:31 -0600
Subject: capabilities: fix buffer overread on very short xattr
From: Eric Biggers <ebiggers(a)google.com>
commit dc32b5c3e6e2ef29cef76d9ce1b92d394446150e upstream.
If userspace attempted to set a "security.capability" xattr shorter than
4 bytes (e.g. 'setfattr -n security.capability -v x file'), then
cap_convert_nscap() read past the end of the buffer containing the xattr
value because it accessed the ->magic_etc field without verifying that
the xattr value is long enough to contain that field.
Fix it by validating the xattr value size first.
This bug was found using syzkaller with KASAN. The KASAN report was as
follows (cleaned up slightly):
BUG: KASAN: slab-out-of-bounds in cap_convert_nscap+0x514/0x630 security/commoncap.c:498
Read of size 4 at addr ffff88002d8741c0 by task syz-executor1/2852
CPU: 0 PID: 2852 Comm: syz-executor1 Not tainted 4.15.0-rc6-00200-gcc0aac99d977 #253
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0xe3/0x195 lib/dump_stack.c:53
print_address_description+0x73/0x260 mm/kasan/report.c:252
kasan_report_error mm/kasan/report.c:351 [inline]
kasan_report+0x235/0x350 mm/kasan/report.c:409
cap_convert_nscap+0x514/0x630 security/commoncap.c:498
setxattr+0x2bd/0x350 fs/xattr.c:446
path_setxattr+0x168/0x1b0 fs/xattr.c:472
SYSC_setxattr fs/xattr.c:487 [inline]
SyS_setxattr+0x36/0x50 fs/xattr.c:483
entry_SYSCALL_64_fastpath+0x18/0x85
Fixes: 8db6c34f1dbc ("Introduce v3 namespaced file capabilities")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Reviewed-by: Serge Hallyn <serge(a)hallyn.com>
Signed-off-by: James Morris <james.l.morris(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/commoncap.c | 21 +++++++++------------
1 file changed, 9 insertions(+), 12 deletions(-)
--- a/security/commoncap.c
+++ b/security/commoncap.c
@@ -348,21 +348,18 @@ static __u32 sansflags(__u32 m)
return m & ~VFS_CAP_FLAGS_EFFECTIVE;
}
-static bool is_v2header(size_t size, __le32 magic)
+static bool is_v2header(size_t size, const struct vfs_cap_data *cap)
{
- __u32 m = le32_to_cpu(magic);
if (size != XATTR_CAPS_SZ_2)
return false;
- return sansflags(m) == VFS_CAP_REVISION_2;
+ return sansflags(le32_to_cpu(cap->magic_etc)) == VFS_CAP_REVISION_2;
}
-static bool is_v3header(size_t size, __le32 magic)
+static bool is_v3header(size_t size, const struct vfs_cap_data *cap)
{
- __u32 m = le32_to_cpu(magic);
-
if (size != XATTR_CAPS_SZ_3)
return false;
- return sansflags(m) == VFS_CAP_REVISION_3;
+ return sansflags(le32_to_cpu(cap->magic_etc)) == VFS_CAP_REVISION_3;
}
/*
@@ -405,7 +402,7 @@ int cap_inode_getsecurity(struct inode *
fs_ns = inode->i_sb->s_user_ns;
cap = (struct vfs_cap_data *) tmpbuf;
- if (is_v2header((size_t) ret, cap->magic_etc)) {
+ if (is_v2header((size_t) ret, cap)) {
/* If this is sizeof(vfs_cap_data) then we're ok with the
* on-disk value, so return that. */
if (alloc)
@@ -413,7 +410,7 @@ int cap_inode_getsecurity(struct inode *
else
kfree(tmpbuf);
return ret;
- } else if (!is_v3header((size_t) ret, cap->magic_etc)) {
+ } else if (!is_v3header((size_t) ret, cap)) {
kfree(tmpbuf);
return -EINVAL;
}
@@ -470,9 +467,9 @@ static kuid_t rootid_from_xattr(const vo
return make_kuid(task_ns, rootid);
}
-static bool validheader(size_t size, __le32 magic)
+static bool validheader(size_t size, const struct vfs_cap_data *cap)
{
- return is_v2header(size, magic) || is_v3header(size, magic);
+ return is_v2header(size, cap) || is_v3header(size, cap);
}
/*
@@ -495,7 +492,7 @@ int cap_convert_nscap(struct dentry *den
if (!*ivalue)
return -EINVAL;
- if (!validheader(size, cap->magic_etc))
+ if (!validheader(size, cap))
return -EINVAL;
if (!capable_wrt_inode_uidgid(inode, CAP_SETFCAP))
return -EPERM;
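The underlying pattern is validate-before-dereference: confirm the buffer
can hold the header before reading any field out of it. A standalone sketch
with a simplified header type:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct cap_hdr { uint32_t magic_etc; };    /* simplified stand-in */

static bool header_valid(const void *value, size_t size)
{
    uint32_t magic;

    if (size < sizeof(struct cap_hdr))    /* reject short xattrs first */
        return false;
    memcpy(&magic, value, sizeof(magic)); /* only now touch the field */
    return magic != 0;
}

int main(void)
{
    char one_byte = 'x';

    printf("valid: %d\n", header_valid(&one_byte, sizeof(one_byte)));
    return 0;
}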
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.14/capabilities-fix-buffer-overread-on-very-short-xattr.patch
On Thu, Jan 04, 2018 at 01:24:55AM +0000, Ben Hutchings wrote:
> On Wed, 2018-01-03 at 21:11 +0100, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.9.75 release.
> > There are 39 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> [...]
>
> This seems to be missing a backport of:
>
> commit 85900ea51577e31b186e523c8f4e068c79ecc7d3
> Author: Andy Lutomirski <luto(a)kernel.org>
> Date: Tue Dec 12 07:56:42 2017 -0800
>
> x86/pti: Map the vsyscall page if needed
>
> without which, KPTI will break some older user-space.
Ah crap, thanks, I forgot that one in all of this mess, I'll go queue it
up now.
greg k-h
On Thu, Jan 04, 2018 at 12:16:08AM +0000, Ben Hutchings wrote:
> No such exception has been added upstream, so I don't think it's
> appropriate to add it in stable.
On its way:
https://git.kernel.org/tip/694d99d40972f12e59a3696effee8a376b79d7c8
--
Regards/Gruss,
Boris.
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
--
On Wed, Jan 3, 2018 at 5:24 PM, Ben Hutchings <ben(a)decadent.org.uk> wrote:
> On Wed, 2018-01-03 at 21:11 +0100, Greg Kroah-Hartman wrote:
>> This is the start of the stable review cycle for the 4.9.75 release.
>> There are 39 patches in this series, all will be posted as a response
>> to this one. If anyone has any issues with these being applied, please
>> let me know.
> [...]
>
> This seems to be missing a backport of:
>
> commit 85900ea51577e31b186e523c8f4e068c79ecc7d3
> Author: Andy Lutomirski <luto(a)kernel.org>
> Date: Tue Dec 12 07:56:42 2017 -0800
>
> x86/pti: Map the vsyscall page if needed
>
> without which, KPTI will break some older user-space.
You're (kind of) right: I did warn Greg of that yesterday, and we were
both vaguely hoping that someone else could fill in that blank more
quickly than I'd get there.
I doubt it's an actual backport of Andy's 85900ea51577 that's needed -
I'd expect his to depend on at least some of the improved
infrastructure in 4.15-rc, rather than the old-style kaiser-mapping
employed in these backports.
Andrea has a RHEL7 patch for this, and I have a 3.2 patch derived from
Andrea's, but neither would be right for 4.9 or 4.4 as is: I think I'm
going to have to drop today's priority, and attend to this lacuna
tomorrow.
Hugh
Make sure that the initiator port GUID is stored in ch->ini_guid.
Note: when initiating a connection sgid and dgid members in struct
sa_path_rec represent the source and destination GIDs. When accepting
a connection however sgid represents the destination GID and dgid the
source GID.
Fixes: 2bce1a6d2209 ("IB/srpt: Accept GUIDs as port names")
Signed-off-by: Bart Van Assche <bart.vanassche(a)wdc.com>
Cc: <stable(a)vger.kernel.org>
---
drivers/infiniband/ulp/srpt/ib_srpt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 7c4249038004..bfa576aa9f03 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -2077,7 +2077,7 @@ static int srpt_cm_req_recv(struct ib_cm_id *cm_id,
goto destroy_ib;
}
- guid = (__be16 *)&param->primary_path->sgid.global.interface_id;
+ guid = (__be16 *)&param->primary_path->dgid.global.interface_id;
snprintf(ch->ini_guid, sizeof(ch->ini_guid), "%04x:%04x:%04x:%04x",
be16_to_cpu(guid[0]), be16_to_cpu(guid[1]),
be16_to_cpu(guid[2]), be16_to_cpu(guid[3]));
--
2.15.1
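For reference, the snprintf() above renders the 8-byte interface ID as the
usual colon-separated GUID string; the same formatting in a standalone
sketch, with ntohs() standing in for be16_to_cpu() and an invented ID:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* interface ID in wire (big-endian) order, example value only */
    unsigned char id[8] = { 0x00, 0x02, 0xc9, 0x03, 0x00, 0xa1, 0xb2, 0xc3 };
    uint16_t guid[4];
    char buf[20];
    int i;

    memcpy(guid, id, sizeof(guid));
    for (i = 0; i < 4; i++)
        guid[i] = ntohs(guid[i]);

    snprintf(buf, sizeof(buf), "%04x:%04x:%04x:%04x",
             guid[0], guid[1], guid[2], guid[3]);
    printf("%s\n", buf);    /* prints 0002:c903:00a1:b2c3 */
    return 0;
}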
With the SRP protocol all RDMA operations are initiated by the target.
Since no RDMA operations are initiated by the initiator, do not grant
the initiator permission to submit RDMA reads or writes to the target.
Signed-off-by: Bart Van Assche <bart.vanassche(a)wdc.com>
Cc: <stable(a)vger.kernel.org>
---
drivers/infiniband/ulp/srpt/ib_srpt.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 8a1bd354b1cc..7c4249038004 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -1013,8 +1013,7 @@ static int srpt_init_ch_qp(struct srpt_rdma_ch *ch, struct ib_qp *qp)
return -ENOMEM;
attr->qp_state = IB_QPS_INIT;
- attr->qp_access_flags = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
- IB_ACCESS_REMOTE_WRITE;
+ attr->qp_access_flags = IB_ACCESS_LOCAL_WRITE;
attr->port_num = ch->sport->port;
attr->pkey_index = 0;
--
2.15.1
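This is least privilege applied to the QP: since the target drives every
RDMA operation, the initiator-facing QP keeps only local write access. A
sketch of the flag composition with illustrative constants (the real
enum ib_access_flags lives in ib_verbs.h):

#include <stdint.h>
#include <stdio.h>

#define ACCESS_LOCAL_WRITE  (1u << 0)   /* illustrative values */
#define ACCESS_REMOTE_WRITE (1u << 1)
#define ACCESS_REMOTE_READ  (1u << 2)

int main(void)
{
    /* Target-initiated RDMA: grant no remote permissions at all. */
    uint32_t qp_access_flags = ACCESS_LOCAL_WRITE;

    printf("remote read allowed? %d\n",
           !!(qp_access_flags & ACCESS_REMOTE_READ));
    return 0;
}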
Before we go into suspend mode, we enable the imx uart's interrupt for
the awake bit in the UART Status Register 1. If, for some reason, the
awake bit is already set before we enter suspend mode, we get an
interrupt immediately when we enable interrupts for awake. The uart's
clk_ipg is already disabled at this point. We end up in the interrupt
handler, which usually tries to clear the awake bit. This doesn't work
with the clock disabled. Therefore, we keep getting interrupts forever,
resulting in an endless loop.
Move the calls to serial_imx_enable_wakeup() into the _noirq functions,
where interrupts are disabled and clk_ipg is active. This way, we can
safely clear the awake bit and enable the imx interrupt for awake.
Now that we do the wakeup configuration in .suspend_noirq, we need
separate functions for .suspend_noirq and .freeze_noirq. However,
.resume_noirq and .restore_noirq can still be shared. We just disable
the wakeup source there, this does not conflict with hibernation.
Signed-off-by: Martin Kaiser <martin(a)kaiser.cx>
Cc: stable(a)vger.kernel.org
---
drivers/tty/serial/imx.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 250aa26..5df9172 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -2221,8 +2221,10 @@ static void serial_imx_enable_wakeup(struct imx_port *sport, bool on)
unsigned int val;
val = readl(sport->port.membase + UCR3);
- if (on)
+ if (on) {
+ writel(USR1_AWAKE, sport->port.membase + USR1);
val |= UCR3_AWAKEN;
+ }
else
val &= ~UCR3_AWAKEN;
writel(val, sport->port.membase + UCR3);
@@ -2245,6 +2247,9 @@ static int imx_serial_port_suspend_noirq(struct device *dev)
if (ret)
return ret;
+ /* enable wakeup from i.MX UART */
+ serial_imx_enable_wakeup(sport, true);
+
serial_imx_save_context(sport);
clk_disable(sport->clk_ipg);
@@ -2264,6 +2269,9 @@ static int imx_serial_port_resume_noirq(struct device *dev)
serial_imx_restore_context(sport);
+ /* disable wakeup from i.MX UART */
+ serial_imx_enable_wakeup(sport, false);
+
clk_disable(sport->clk_ipg);
return 0;
@@ -2291,9 +2299,6 @@ static int imx_serial_port_suspend(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct imx_port *sport = platform_get_drvdata(pdev);
- /* enable wakeup from i.MX UART */
- serial_imx_enable_wakeup(sport, true);
-
uart_suspend_port(&imx_reg, &sport->port);
disable_irq(sport->port.irq);
@@ -2306,9 +2311,6 @@ static int imx_serial_port_resume(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct imx_port *sport = platform_get_drvdata(pdev);
- /* disable wakeup from i.MX UART */
- serial_imx_enable_wakeup(sport, false);
-
uart_resume_port(&imx_reg, &sport->port);
enable_irq(sport->port.irq);
--
2.1.4
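The reordering follows the usual ack-before-unmask rule for latched status
bits: clear any stale event while the clock is still on, then enable the
interrupt. A compilable sketch with a made-up register layout (the real
registers are USR1 and UCR3):

#include <stdint.h>

#define AWAKE_STATUS (1u << 4)    /* illustrative bit positions */
#define AWAKE_ENABLE (1u << 5)

/* On real hardware the status register is write-1-to-clear; the first
 * write models the ack so no stale AWAKE fires the moment we unmask. */
static void enable_wakeup(volatile uint32_t *usr1, volatile uint32_t *ucr3)
{
    *usr1 = AWAKE_STATUS;     /* ack any stale wake event first */
    *ucr3 |= AWAKE_ENABLE;    /* only then unmask the interrupt */
}

int main(void)
{
    uint32_t usr1 = AWAKE_STATUS, ucr3 = 0;

    enable_wakeup(&usr1, &ucr3);
    return ucr3 == AWAKE_ENABLE ? 0 : 1;
}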
This is a note to let you know that I've just added the patch titled
kaiser: enhanced by kernel and user PCIDs
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-enhanced-by-kernel-and-user-pcids.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Dave Hansen <dave.hansen(a)linux.intel.com>
Date: Wed, 30 Aug 2017 16:23:00 -0700
Subject: kaiser: enhanced by kernel and user PCIDs
From: Dave Hansen <dave.hansen(a)linux.intel.com>
Merged performance improvements to Kaiser, using distinct kernel
and user Process Context Identifiers to minimize the TLB flushing.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 10 ++++-
arch/x86/entry/entry_64_compat.S | 1
arch/x86/include/asm/cpufeature.h | 1
arch/x86/include/asm/kaiser.h | 15 ++++++-
arch/x86/include/asm/pgtable_types.h | 26 +++++++++++++
arch/x86/include/asm/tlbflush.h | 54 +++++++++++++++++++++++-----
arch/x86/include/uapi/asm/processor-flags.h | 3 +
arch/x86/kernel/cpu/common.c | 34 +++++++++++++++++
arch/x86/kvm/x86.c | 3 +
arch/x86/mm/kaiser.c | 7 +++
arch/x86/mm/tlb.c | 46 ++++++++++++++++++++++-
11 files changed, 182 insertions(+), 18 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1291,7 +1291,10 @@ ENTRY(nmi)
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
- andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ /* Add back kernel PCID and "no flush" bit */
+ orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
call do_nmi
@@ -1532,7 +1535,10 @@ end_repeat_nmi:
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
- andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ /* Add back kernel PCID and "no flush" bit */
+ orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -13,6 +13,7 @@
#include <asm/irqflags.h>
#include <asm/asm.h>
#include <asm/smap.h>
+#include <asm/pgtable_types.h>
#include <asm/kaiser.h>
#include <linux/linkage.h>
#include <linux/err.h>
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -187,6 +187,7 @@
#define X86_FEATURE_ARAT ( 7*32+ 1) /* Always Running APIC Timer */
#define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */
#define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
+#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 4) /* Effectively INVPCID && CR4.PCIDE=1 */
#define X86_FEATURE_PLN ( 7*32+ 5) /* Intel Power Limit Notification */
#define X86_FEATURE_PTS ( 7*32+ 6) /* Intel Package Thermal Status */
#define X86_FEATURE_DTHERM ( 7*32+ 7) /* Digital Thermal Sensor */
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -1,5 +1,8 @@
#ifndef _ASM_X86_KAISER_H
#define _ASM_X86_KAISER_H
+
+#include <uapi/asm/processor-flags.h> /* For PCID constants */
+
/*
* This file includes the definitions for the KAISER feature.
* KAISER is a counter measure against x86_64 side channel attacks on
@@ -21,13 +24,21 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
-andq $(~KAISER_SHADOW_PGD_OFFSET), \reg
+andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
+orq X86_CR3_PCID_KERN_VAR, \reg
movq \reg, %cr3
.endm
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
-orq $(KAISER_SHADOW_PGD_OFFSET), \reg
+andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
+/*
+ * This can obviously be one instruction by putting the
+ * KAISER_SHADOW_PGD_OFFSET bit in the X86_CR3_PCID_USER_VAR.
+ * But, just leave it now for simplicity.
+ */
+orq X86_CR3_PCID_USER_VAR, \reg
+orq $(KAISER_SHADOW_PGD_OFFSET), \reg
movq \reg, %cr3
.endm
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -106,6 +106,32 @@
_PAGE_SOFT_DIRTY)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+/* The ASID is the lower 12 bits of CR3 */
+#define X86_CR3_PCID_ASID_MASK (_AC((1<<12)-1,UL))
+
+/* Mask for all the PCID-related bits in CR3: */
+#define X86_CR3_PCID_MASK (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+#if defined(CONFIG_KAISER) && defined(CONFIG_X86_64)
+#define X86_CR3_PCID_ASID_KERN (_AC(0x4,UL))
+#define X86_CR3_PCID_ASID_USER (_AC(0x6,UL))
+
+#define X86_CR3_PCID_KERN_FLUSH (X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_FLUSH (X86_CR3_PCID_ASID_USER)
+#define X86_CR3_PCID_KERN_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_USER)
+#else
+#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
+#define X86_CR3_PCID_ASID_USER (_AC(0x0,UL))
+/*
+ * PCIDs are unsupported on 32-bit and none of these bits can be
+ * set in CR3:
+ */
+#define X86_CR3_PCID_KERN_FLUSH (0)
+#define X86_CR3_PCID_USER_FLUSH (0)
+#define X86_CR3_PCID_KERN_NOFLUSH (0)
+#define X86_CR3_PCID_USER_NOFLUSH (0)
+#endif
+
/*
* The cache modes defined here are used to translate between pure SW usage
* and the HW defined cache mode bits and/or PAT entries.
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -12,7 +12,6 @@ static inline void __invpcid(unsigned lo
unsigned long type)
{
struct { u64 d[2]; } desc = { { pcid, addr } };
-
/*
* The memory clobber is because the whole point is to invalidate
* stale TLB entries and, especially if we're flushing global
@@ -133,14 +132,25 @@ static inline void cr4_set_bits_and_upda
static inline void __native_flush_tlb(void)
{
+ if (!cpu_feature_enabled(X86_FEATURE_INVPCID)) {
+ /*
+ * If current->mm == NULL then we borrow a mm which may change during a
+ * task switch and therefore we must not be preempted while we write CR3
+ * back:
+ */
+ preempt_disable();
+ native_write_cr3(native_read_cr3());
+ preempt_enable();
+ return;
+ }
/*
- * If current->mm == NULL then we borrow a mm which may change during a
- * task switch and therefore we must not be preempted while we write CR3
- * back:
- */
- preempt_disable();
- native_write_cr3(native_read_cr3());
- preempt_enable();
+ * We are no longer using globals with KAISER, so a
+ * "nonglobals" flush would work too. But, this is more
+ * conservative.
+ *
+ * Note, this works with CR4.PCIDE=0 or 1.
+ */
+ invpcid_flush_all();
}
static inline void __native_flush_tlb_global_irq_disabled(void)
@@ -162,6 +172,8 @@ static inline void __native_flush_tlb_gl
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
+ *
+ * Note, this works with CR4.PCIDE=0 or 1.
*/
invpcid_flush_all();
return;
@@ -181,7 +193,31 @@ static inline void __native_flush_tlb_gl
static inline void __native_flush_tlb_single(unsigned long addr)
{
- asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ /*
+ * SIMICS #GP's if you run INVPCID with type 2/3
+ * and X86_CR4_PCIDE clear. Shame!
+ *
+ * The ASIDs used below are hard-coded. But, we must not
+ * call invpcid(type=1/2) before CR4.PCIDE=1. Just call
+ * invpcid in the case we are called early.
+ */
+ if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+ asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ return;
+ }
+ /* Flush the address out of both PCIDs. */
+ /*
+ * An optimization here might be to determine addresses
+ * that are only kernel-mapped and only flush the kernel
+ * ASID. But, userspace flushes are probably much more
+ * important performance-wise.
+ *
+ * Make sure to do only a single invpcid when KAISER is
+ * disabled and we have only a single ASID.
+ */
+ if (X86_CR3_PCID_ASID_KERN != X86_CR3_PCID_ASID_USER)
+ invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
+ invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
}
static inline void __flush_tlb_all(void)
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -77,7 +77,8 @@
#define X86_CR3_PWT _BITUL(X86_CR3_PWT_BIT)
#define X86_CR3_PCD_BIT 4 /* Page Cache Disable */
#define X86_CR3_PCD _BITUL(X86_CR3_PCD_BIT)
-#define X86_CR3_PCID_MASK _AC(0x00000fff,UL) /* PCID Mask */
+#define X86_CR3_PCID_NOFLUSH_BIT 63 /* Preserve old PCID */
+#define X86_CR3_PCID_NOFLUSH _BITULL(X86_CR3_PCID_NOFLUSH_BIT)
/*
* Intel CPU features in CR4
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -321,11 +321,45 @@ static __always_inline void setup_smap(s
}
}
+/*
+ * These can have bit 63 set, so we can not just use a plain "or"
+ * instruction to get their value or'd into CR3. It would take
+ * another register. So, we use a memory reference to these
+ * instead.
+ *
+ * This is also handy because systems that do not support
+ * PCIDs just end up or'ing a 0 into their CR3, which does
+ * no harm.
+ */
+__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR = 0;
+__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_USER_VAR = 0;
+
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
if (cpu_has(c, X86_FEATURE_PGE)) {
cr4_set_bits(X86_CR4_PCIDE);
+ /*
+ * These variables are used by the entry/exit
+ * code to change PCIDs.
+ */
+#ifdef CONFIG_KAISER
+ X86_CR3_PCID_KERN_VAR = X86_CR3_PCID_KERN_NOFLUSH;
+ X86_CR3_PCID_USER_VAR = X86_CR3_PCID_USER_NOFLUSH;
+#endif
+ /*
+ * INVPCID has two "groups" of types:
+ * 1/2: Invalidate an individual address
+ * 3/4: Invalidate all contexts
+ *
+ * 1/2 take a PCID, but 3/4 do not. So, 3/4
+ * ignore the PCID argument in the descriptor.
+ * But, we have to be careful not to call 1/2
+ * with an actual non-zero PCID in them before
+ * we do the above cr4_set_bits().
+ */
+ if (cpu_has(c, X86_FEATURE_INVPCID))
+ set_cpu_cap(c, X86_FEATURE_INVPCID_SINGLE);
} else {
/*
* flush_tlb_all(), as currently implemented, won't
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -759,7 +759,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, u
return 1;
/* PCID can not be enabled when cr3[11:0]!=000H or EFER.LMA=0 */
- if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_MASK) || !is_long_mode(vcpu))
+ if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_ASID_MASK) ||
+ !is_long_mode(vcpu))
return 1;
}
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -240,6 +240,8 @@ static void __init kaiser_init_all_pgds(
} while (0)
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+extern unsigned long X86_CR3_PCID_KERN_VAR;
+extern unsigned long X86_CR3_PCID_USER_VAR;
/*
* If anything in here fails, we will likely die on one of the
* first kernel->user transitions and init will die. But, we
@@ -290,6 +292,11 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&debug_idt_table,
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
+
+ kaiser_add_user_map_early(&X86_CR3_PCID_KERN_VAR, PAGE_SIZE,
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&X86_CR3_PCID_USER_VAR, PAGE_SIZE,
+ __PAGE_KERNEL);
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -34,6 +34,46 @@ struct flush_tlb_info {
unsigned long flush_end;
};
+static void load_new_mm_cr3(pgd_t *pgdir)
+{
+ unsigned long new_mm_cr3 = __pa(pgdir);
+
+ /*
+ * KAISER, plus PCIDs needs some extra work here. But,
+ * if either of features is not present, we need no
+ * PCIDs here and just do a normal, full TLB flush with
+ * the write_cr3()
+ */
+ if (!IS_ENABLED(CONFIG_KAISER) ||
+ !cpu_feature_enabled(X86_FEATURE_PCID))
+ goto out_set_cr3;
+ /*
+ * We reuse the same PCID for different tasks, so we must
+ * flush all the entries for the PCID out when we change
+ * tasks.
+ */
+ new_mm_cr3 = X86_CR3_PCID_KERN_FLUSH | __pa(pgdir);
+
+ /*
+ * The flush from load_cr3() may leave old TLB entries
+ * for userspace in place. We must flush that context
+ * separately. We can theoretically delay doing this
+ * until we actually load up the userspace CR3, but
+ * that's a bit tricky. We have to have the "need to
+ * flush userspace PCID" bit per-cpu and check it in the
+ * exit-to-userspace paths.
+ */
+ invpcid_flush_single_context(X86_CR3_PCID_ASID_USER);
+
+out_set_cr3:
+ /*
+ * Caution: many callers of this function expect
+ * that load_cr3() is serializing and orders TLB
+ * fills with respect to the mm_cpumask writes.
+ */
+ write_cr3(new_mm_cr3);
+}
+
/*
* We cannot call mmdrop() because we are in interrupt context,
* instead update mm->cpu_vm_mask.
@@ -45,7 +85,7 @@ void leave_mm(int cpu)
BUG();
if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
- load_cr3(swapper_pg_dir);
+ load_new_mm_cr3(swapper_pg_dir);
/*
* This gets called in the idle path where RCU
* functions differently. Tracing normally
@@ -105,7 +145,7 @@ void switch_mm_irqs_off(struct mm_struct
* ordering guarantee we need.
*
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
@@ -152,7 +192,7 @@ void switch_mm_irqs_off(struct mm_struct
* As above, load_cr3() is serializing and orders TLB
* fills with respect to the mm_cpumask write.
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
load_mm_cr4(next);
load_mm_ldt(next);
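The arithmetic behind all the andq/orq juggling above: strip the low 12
ASID bits and the shadow-PGD offset from CR3, then OR in the wanted PCID
and, where supported, the bit-63 no-flush hint. A standalone sketch using
the patch's constants:

#include <stdint.h>
#include <stdio.h>

#define CR3_PCID_ASID_MASK  0xfffULL        /* low 12 bits of CR3 */
#define CR3_PCID_NOFLUSH    (1ULL << 63)    /* keep the old PCID's TLB */
#define PCID_ASID_KERN      0x4ULL
#define SHADOW_PGD_OFFSET   0x1000ULL       /* user pgd is 4k above kernel pgd */

static uint64_t switch_to_kernel_cr3(uint64_t cr3)
{
    cr3 &= ~(CR3_PCID_ASID_MASK | SHADOW_PGD_OFFSET);
    return cr3 | PCID_ASID_KERN | CR3_PCID_NOFLUSH;
}

int main(void)
{
    uint64_t user_cr3 = 0x12344000ULL | SHADOW_PGD_OFFSET | 0x6ULL;

    printf("user %#llx -> kernel %#llx\n",
           (unsigned long long)user_cr3,
           (unsigned long long)switch_to_kernel_cr3(user_cr3));
    return 0;
}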
Patches currently in stable-queue which might be from dave.hansen(a)linux.intel.com are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
This is a note to let you know that I've just added the patch titled
kaiser: merged update
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-merged-update.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Dave Hansen <dave.hansen(a)linux.intel.com>
Date: Wed, 30 Aug 2017 16:23:00 -0700
Subject: kaiser: merged update
From: Dave Hansen <dave.hansen(a)linux.intel.com>
Merged fixes and cleanups, rebased to 4.9.51 tree (no 5-level paging).
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 105 ++++++++++-
arch/x86/include/asm/kaiser.h | 43 ++--
arch/x86/include/asm/pgtable.h | 18 +
arch/x86/include/asm/pgtable_64.h | 48 ++++-
arch/x86/include/asm/pgtable_types.h | 6
arch/x86/kernel/espfix_64.c | 13 -
arch/x86/kernel/head_64.S | 19 +-
arch/x86/kernel/ldt.c | 27 ++
arch/x86/kernel/tracepoint.c | 2
arch/x86/mm/kaiser.c | 317 +++++++++++++++++++++++++----------
arch/x86/mm/pageattr.c | 63 +++++-
arch/x86/mm/pgtable.c | 40 +---
include/linux/kaiser.h | 26 ++
kernel/fork.c | 9
security/Kconfig | 5
15 files changed, 551 insertions(+), 190 deletions(-)
create mode 100644 include/linux/kaiser.h
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -230,6 +230,13 @@ entry_SYSCALL_64_fastpath:
movq RIP(%rsp), %rcx
movq EFLAGS(%rsp), %r11
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
SWITCH_USER_CR3
movq RSP(%rsp), %rsp
USERGS_SYSRET64
@@ -326,11 +333,25 @@ return_from_SYSCALL_64:
syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
SWITCH_USER_CR3
movq RSP(%rsp), %rsp
USERGS_SYSRET64
opportunistic_sysret_failed:
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
SWITCH_USER_CR3
SWAPGS
jmp restore_c_regs_and_iret
@@ -1087,6 +1108,13 @@ ENTRY(error_entry)
cld
SAVE_C_REGS 8
SAVE_EXTRA_REGS 8
+ /*
+ * error_entry() always returns with a kernel gsbase and
+ * CR3. We must also have a kernel CR3/gsbase before
+ * calling TRACE_IRQS_*. Just unconditionally switch to
+ * the kernel CR3 here.
+ */
+ SWITCH_KERNEL_CR3
xorl %ebx, %ebx
testb $3, CS+8(%rsp)
jz .Lerror_kernelspace
@@ -1096,7 +1124,6 @@ ENTRY(error_entry)
* from user mode due to an IRET fault.
*/
SWAPGS
- SWITCH_KERNEL_CR3
.Lerror_entry_from_usermode_after_swapgs:
/*
@@ -1148,7 +1175,6 @@ ENTRY(error_entry)
* Switch to kernel gsbase:
*/
SWAPGS
- SWITCH_KERNEL_CR3
/*
* Pretend that the exception came from user mode: set up pt_regs
@@ -1249,7 +1275,10 @@ ENTRY(nmi)
*/
SWAPGS_UNSAFE_STACK
- SWITCH_KERNEL_CR3_NO_STACK
+ /*
+ * percpu variables are mapped with user CR3, so no need
+ * to switch CR3 here.
+ */
cld
movq %rsp, %rdx
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
@@ -1283,14 +1312,33 @@ ENTRY(nmi)
movq %rsp, %rdi
movq $-1, %rsi
+#ifdef CONFIG_KAISER
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ movq %cr3, %rax
+ pushq %rax
+#ifdef CONFIG_KAISER_REAL_SWITCH
+ andq $(~0x1000), %rax
+#endif
+ movq %rax, %cr3
+#endif
call do_nmi
+ /*
+ * Unconditionally restore CR3. I know we return to
+ * kernel code that needs user CR3, but do we ever return
+ * to "user mode" where we need the kernel CR3?
+ */
+#ifdef CONFIG_KAISER
+ popq %rax
+ mov %rax, %cr3
+#endif
/*
* Return back to user mode. We must *not* do the normal exit
- * work, because we don't want to enable interrupts. Fortunately,
- * do_nmi doesn't modify pt_regs.
+ * work, because we don't want to enable interrupts. Do not
+ * switch to user CR3: we might be going back to kernel code
+ * that had a user CR3 set.
*/
- SWITCH_USER_CR3
SWAPGS
jmp restore_c_regs_and_iret
@@ -1486,23 +1534,54 @@ end_repeat_nmi:
ALLOC_PT_GPREGS_ON_STACK
/*
- * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
- * as we should not be calling schedule in NMI context.
- * Even with normal interrupts enabled. An NMI should not be
- * setting NEED_RESCHED or anything that normal interrupts and
- * exceptions might do.
+ * Use the same approach as paranoid_entry to handle SWAPGS, but
+ * without CR3 handling since we do that differently in NMIs. No
+ * need to use paranoid_exit as we should not be calling schedule
+ * in NMI context. Even with normal interrupts enabled. An NMI
+ * should not be setting NEED_RESCHED or anything that normal
+ * interrupts and exceptions might do.
*/
- call paranoid_entry
+ cld
+ SAVE_C_REGS
+ SAVE_EXTRA_REGS
+ movl $1, %ebx
+ movl $MSR_GS_BASE, %ecx
+ rdmsr
+ testl %edx, %edx
+ js 1f /* negative -> in kernel */
+ SWAPGS
+ xorl %ebx, %ebx
+1:
+#ifdef CONFIG_KAISER
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ movq %cr3, %rax
+ pushq %rax
+#ifdef CONFIG_KAISER_REAL_SWITCH
+ andq $(~0x1000), %rax
+#endif
+ movq %rax, %cr3
+#endif
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
movq %rsp, %rdi
+ addq $8, %rdi /* point %rdi at ptregs, fixed up for CR3 */
movq $-1, %rsi
call do_nmi
+ /*
+ * Unconditionally restore CR3. We might be returning to
+ * kernel code that needs user CR3, like just just before
+ * a sysret.
+ */
+#ifdef CONFIG_KAISER
+ popq %rax
+ mov %rax, %cr3
+#endif
testl %ebx, %ebx /* swapgs needed? */
jnz nmi_restore
nmi_swapgs:
- SWITCH_USER_CR3_NO_STACK
+ /* We fixed up CR3 above, so no need to switch it here */
SWAPGS_UNSAFE_STACK
nmi_restore:
RESTORE_EXTRA_REGS
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -16,13 +16,17 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
+#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~0x1000), \reg
+#endif
movq \reg, %cr3
.endm
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
+#ifdef CONFIG_KAISER_REAL_SWITCH
orq $(0x1000), \reg
+#endif
movq \reg, %cr3
.endm
@@ -65,48 +69,53 @@ movq PER_CPU_VAR(unsafe_stack_register_b
.endm
#endif /* CONFIG_KAISER */
+
#else /* __ASSEMBLY__ */
#ifdef CONFIG_KAISER
-// Upon kernel/user mode switch, it may happen that
-// the address space has to be switched before the registers have been stored.
-// To change the address space, another register is needed.
-// A register therefore has to be stored/restored.
-//
-DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+/*
+ * Upon kernel/user mode switch, it may happen that the address
+ * space has to be switched before the registers have been
+ * stored. To change the address space, another register is
+ * needed. A register therefore has to be stored/restored.
+*/
-#endif /* CONFIG_KAISER */
+DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
/**
- * shadowmem_add_mapping - map a virtual memory part to the shadow mapping
+ * kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
* @addr: the start address of the range
* @size: the size of the range
* @flags: The mapping flags of the pages
*
- * the mapping is done on a global scope, so no bigger synchronization has to be done.
- * the pages have to be manually unmapped again when they are not needed any longer.
+ * The mapping is done on a global scope, so no bigger
+ * synchronization has to be done. the pages have to be
+ * manually unmapped again when they are not needed any longer.
*/
-extern void kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
+extern int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
/**
- * shadowmem_remove_mapping - unmap a virtual memory part of the shadow mapping
+ * kaiser_remove_mapping - unmap a virtual memory part of the shadow mapping
* @addr: the start address of the range
* @size: the size of the range
*/
extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
/**
- * shadowmem_initialize_mapping - Initalize the shadow mapping
+ * kaiser_initialize_mapping - Initalize the shadow mapping
*
- * most parts of the shadow mapping can be mapped upon boot time.
- * only the thread stacks have to be mapped on runtime.
- * the mapped regions are not unmapped at all.
+ * Most parts of the shadow mapping can be mapped upon boot
+ * time. Only per-process things like the thread stacks
+ * or a new LDT have to be mapped at runtime. These boot-
+ * time mappings are permanent and never unmapped.
*/
extern void kaiser_init(void);
-#endif
+#endif /* CONFIG_KAISER */
+
+#endif /* __ASSEMBLY */
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -690,7 +690,17 @@ static inline pud_t *pud_offset(pgd_t *p
static inline int pgd_bad(pgd_t pgd)
{
- return (pgd_flags(pgd) & ~_PAGE_USER) != _KERNPG_TABLE;
+ pgdval_t ignore_flags = _PAGE_USER;
+ /*
+ * We set NX on KAISER pgds that map userspace memory so
+ * that userspace can not meaningfully use the kernel
+ * page table by accident; it will fault on the first
+ * instruction it tries to run. See native_set_pgd().
+ */
+ if (IS_ENABLED(CONFIG_KAISER))
+ ignore_flags |= _PAGE_NX;
+
+ return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
}
static inline int pgd_none(pgd_t pgd)
@@ -905,8 +915,10 @@ static inline void clone_pgd_range(pgd_t
{
memcpy(dst, src, count * sizeof(pgd_t));
#ifdef CONFIG_KAISER
- // clone the shadow pgd part as well
- memcpy(native_get_shadow_pgd(dst), native_get_shadow_pgd(src), count * sizeof(pgd_t));
+ /* Clone the shadow pgd part as well */
+ memcpy(native_get_shadow_pgd(dst),
+ native_get_shadow_pgd(src),
+ count * sizeof(pgd_t));
#endif
}
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -107,26 +107,58 @@ static inline void native_pud_clear(pud_
}
#ifdef CONFIG_KAISER
-static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp) {
+static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+{
return (pgd_t *)(void*)((unsigned long)(void*)pgdp | (unsigned long)PAGE_SIZE);
}
-static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp) {
+static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+{
return (pgd_t *)(void*)((unsigned long)(void*)pgdp & ~(unsigned long)PAGE_SIZE);
}
+#else
+static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+{
+ BUILD_BUG_ON(1);
+ return NULL;
+}
+static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+{
+ return pgdp;
+}
#endif /* CONFIG_KAISER */
+/*
+ * Page table pages are page-aligned. The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ * This returns true for user pages that need to get copied into
+ * both the user and kernel copies of the page tables, and false
+ * for kernel pages that should only be in the kernel copy.
+ */
+static inline bool is_userspace_pgd(void *__ptr)
+{
+ unsigned long ptr = (unsigned long)__ptr;
+
+ return ((ptr % PAGE_SIZE) < (PAGE_SIZE / 2));
+}
+
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
{
#ifdef CONFIG_KAISER
- // We know that a pgd is page aligned.
- // Therefore the lower indices have to be mapped to user space.
- // These pages are mapped to the shadow mapping.
- if ((((unsigned long)pgdp) % PAGE_SIZE) < (PAGE_SIZE / 2)) {
+ pteval_t extra_kern_pgd_flags = 0;
+ /* Do we need to also populate the shadow pgd? */
+ if (is_userspace_pgd(pgdp)) {
native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ /*
+ * Even if the entry is *mapping* userspace, ensure
+ * that userspace can not use it. This way, if we
+ * get out to userspace running on the kernel CR3,
+ * userspace will crash instead of running.
+ */
+ extra_kern_pgd_flags = _PAGE_NX;
}
-
- pgdp->pgd = pgd.pgd & ~_PAGE_USER;
+ pgdp->pgd = pgd.pgd;
+ pgdp->pgd |= extra_kern_pgd_flags;
#else /* CONFIG_KAISER */
*pgdp = pgd;
#endif
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -48,7 +48,7 @@
#ifdef CONFIG_KAISER
#define _PAGE_GLOBAL (_AT(pteval_t, 0))
#else
-#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
+#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
#endif
#define _PAGE_SOFTW1 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
#define _PAGE_SOFTW2 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW2)
@@ -123,11 +123,7 @@
#define _PAGE_DEVMAP (_AT(pteval_t, 0))
#endif
-#ifdef CONFIG_KAISER
-#define _PAGE_PROTNONE (_AT(pteval_t, 0))
-#else
#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
-#endif
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
_PAGE_ACCESSED | _PAGE_DIRTY)
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -127,11 +127,14 @@ void __init init_espfix_bsp(void)
/* Install the espfix pud into the kernel page directory */
pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page);
-#ifdef CONFIG_KAISER
- // add the esp stack pud to the shadow mapping here.
- // This can be done directly, because the fixup stack has its own pud
- set_pgd(native_get_shadow_pgd(pgd_p), __pgd(_PAGE_TABLE | __pa((pud_t *)espfix_pud_page)));
-#endif
+ /*
+ * Just copy the top-level PGD that is mapping the espfix
+ * area to ensure it is mapped into the shadow user page
+ * tables.
+ */
+ if (IS_ENABLED(CONFIG_KAISER))
+ set_pgd(native_get_shadow_pgd(pgd_p),
+ __pgd(_KERNPG_TABLE | __pa((pud_t *)espfix_pud_page)));
/* Randomize the locations */
init_espfix_random();
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -406,11 +406,24 @@ GLOBAL(early_recursion_flag)
GLOBAL(name)
#ifdef CONFIG_KAISER
+/*
+ * Each PGD needs to be 8k long and 8k aligned. We do not
+ * ever go out to userspace with these, so we do not
+ * strictly *need* the second page, but this allows us to
+ * have a single set_pgd() implementation that does not
+ * need to worry about whether it has 4k or 8k to work
+ * with.
+ *
+ * This ensures PGDs are 8k long:
+ */
+#define KAISER_USER_PGD_FILL 512
+/* This ensures they are 8k-aligned: */
#define NEXT_PGD_PAGE(name) \
.balign 2 * PAGE_SIZE; \
GLOBAL(name)
#else
#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define KAISER_USER_PGD_FILL 0
#endif
/* Automate the creation of 1 to 1 mapping pmd entries */
@@ -425,6 +438,7 @@ GLOBAL(name)
NEXT_PGD_PAGE(early_level4_pgt)
.fill 511,8,0
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(early_dynamic_pgts)
.fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -433,7 +447,8 @@ NEXT_PAGE(early_dynamic_pgts)
#ifndef CONFIG_XEN
NEXT_PGD_PAGE(init_level4_pgt)
- .fill 2*512,8,0
+ .fill 512,8,0
+ .fill KAISER_USER_PGD_FILL,8,0
#else
NEXT_PGD_PAGE(init_level4_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
@@ -442,6 +457,7 @@ NEXT_PGD_PAGE(init_level4_pgt)
.org init_level4_pgt + L4_START_KERNEL*8, 0
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_ident_pgt)
.quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
@@ -452,6 +468,7 @@ NEXT_PAGE(level2_ident_pgt)
*/
PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
#endif
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_kernel_pgt)
.fill L3_START_KERNEL,8,0
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -18,6 +18,7 @@
#include <linux/uaccess.h>
#include <asm/ldt.h>
+#include <asm/kaiser.h>
#include <asm/desc.h>
#include <asm/mmu_context.h>
#include <asm/syscalls.h>
@@ -34,11 +35,21 @@ static void flush_ldt(void *current_mm)
set_ldt(pc->ldt->entries, pc->ldt->size);
}
+static void __free_ldt_struct(struct ldt_struct *ldt)
+{
+ if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
+ vfree(ldt->entries);
+ else
+ free_page((unsigned long)ldt->entries);
+ kfree(ldt);
+}
+
/* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
static struct ldt_struct *alloc_ldt_struct(int size)
{
struct ldt_struct *new_ldt;
int alloc_size;
+ int ret = 0;
if (size > LDT_ENTRIES)
return NULL;
@@ -66,6 +77,14 @@ static struct ldt_struct *alloc_ldt_stru
return NULL;
}
+ // FIXME: make kaiser_add_mapping() return an error code
+ // when it fails
+ kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+ __PAGE_KERNEL);
+ if (ret) {
+ __free_ldt_struct(new_ldt);
+ return NULL;
+ }
new_ldt->size = size;
return new_ldt;
}
@@ -92,12 +111,10 @@ static void free_ldt_struct(struct ldt_s
if (likely(!ldt))
return;
+ kaiser_remove_mapping((unsigned long)ldt->entries,
+ ldt->size * LDT_ENTRY_SIZE);
paravirt_free_ldt(ldt->entries, ldt->size);
- if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
- vfree(ldt->entries);
- else
- free_page((unsigned long)ldt->entries);
- kfree(ldt);
+ __free_ldt_struct(ldt);
}
/*
--- a/arch/x86/kernel/tracepoint.c
+++ b/arch/x86/kernel/tracepoint.c
@@ -9,10 +9,12 @@
#include <linux/atomic.h>
atomic_t trace_idt_ctr = ATOMIC_INIT(0);
+__aligned(PAGE_SIZE)
struct desc_ptr trace_idt_descr = { NR_VECTORS * 16 - 1,
(unsigned long) trace_idt_table };
/* No need to be aligned, but done to keep all IDTs defined the same way. */
+__aligned(PAGE_SIZE)
gate_desc trace_idt_table[NR_VECTORS] __page_aligned_bss;
static int trace_irq_vector_refcount;
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -1,160 +1,305 @@
-
-
+#include <linux/bug.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/bug.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
-
#include <linux/uaccess.h>
+
+#include <asm/kaiser.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/desc.h>
#ifdef CONFIG_KAISER
__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+/*
+ * At runtime, the only things we map are some things for CPU
+ * hotplug, and stacks for new processes. No two CPUs will ever
+ * be populating the same addresses, so we only need to ensure
+ * that we protect between two CPUs trying to allocate and
+ * populate the same page table page.
+ *
+ * Only take this lock when doing a set_p[4um]d(), but it is not
+ * needed for doing a set_pte(). We assume that only the *owner*
+ * of a given allocation will be doing this for _their_
+ * allocation.
+ *
+ * This ensures that once a system has been running for a while
+ * and there have been stacks all over and these page tables
+ * are fully populated, there will be no further acquisitions of
+ * this lock.
+ */
+static DEFINE_SPINLOCK(shadow_table_allocation_lock);
-/**
- * Get the real ppn from a address in kernel mapping.
- * @param address The virtual adrress
- * @return the physical address
+/*
+ * Returns -1 on error.
*/
-static inline unsigned long get_pa_from_mapping (unsigned long address)
+static inline unsigned long get_pa_from_mapping(unsigned long vaddr)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
- pgd = pgd_offset_k(address);
- BUG_ON(pgd_none(*pgd) || pgd_large(*pgd));
+ pgd = pgd_offset_k(vaddr);
+ /*
+ * We made all the kernel PGDs present in kaiser_init().
+ * We expect them to stay that way.
+ */
+ BUG_ON(pgd_none(*pgd));
+ /*
+ * PGDs are either 512GB or 128TB on all x86_64
+ * configurations. We don't handle these.
+ */
+ BUG_ON(pgd_large(*pgd));
+
+ pud = pud_offset(pgd, vaddr);
+ if (pud_none(*pud)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
- pud = pud_offset(pgd, address);
- BUG_ON(pud_none(*pud));
+ if (pud_large(*pud))
+ return (pud_pfn(*pud) << PAGE_SHIFT) | (vaddr & ~PUD_PAGE_MASK);
- if (pud_large(*pud)) {
- return (pud_pfn(*pud) << PAGE_SHIFT) | (address & ~PUD_PAGE_MASK);
+ pmd = pmd_offset(pud, vaddr);
+ if (pmd_none(*pmd)) {
+ WARN_ON_ONCE(1);
+ return -1;
}
- pmd = pmd_offset(pud, address);
- BUG_ON(pmd_none(*pmd));
+ if (pmd_large(*pmd))
+ return (pmd_pfn(*pmd) << PAGE_SHIFT) | (vaddr & ~PMD_PAGE_MASK);
- if (pmd_large(*pmd)) {
- return (pmd_pfn(*pmd) << PAGE_SHIFT) | (address & ~PMD_PAGE_MASK);
+ pte = pte_offset_kernel(pmd, vaddr);
+ if (pte_none(*pte)) {
+ WARN_ON_ONCE(1);
+ return -1;
}
- pte = pte_offset_kernel(pmd, address);
- BUG_ON(pte_none(*pte));
-
- return (pte_pfn(*pte) << PAGE_SHIFT) | (address & ~PAGE_MASK);
+ return (pte_pfn(*pte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
}
-void _kaiser_copy (unsigned long start_addr, unsigned long size,
- unsigned long flags)
+/*
+ * This is a relatively normal page table walk, except that it
+ * also tries to allocate page tables pages along the way.
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+static pte_t *kaiser_pagetable_walk(unsigned long address, bool is_atomic)
{
- pgd_t *pgd;
- pud_t *pud;
pmd_t *pmd;
- pte_t *pte;
- unsigned long address;
- unsigned long end_addr = start_addr + size;
- unsigned long target_address;
+ pud_t *pud;
+ pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
+ gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
- for (address = PAGE_ALIGN(start_addr - (PAGE_SIZE - 1));
- address < PAGE_ALIGN(end_addr); address += PAGE_SIZE) {
- target_address = get_pa_from_mapping(address);
+ might_sleep();
+ if (is_atomic) {
+ gfp &= ~GFP_KERNEL;
+ gfp |= __GFP_HIGH | __GFP_ATOMIC;
+ }
- pgd = native_get_shadow_pgd(pgd_offset_k(address));
+ if (pgd_none(*pgd)) {
+ WARN_ONCE(1, "All shadow pgds should have been populated");
+ return NULL;
+ }
+ BUILD_BUG_ON(pgd_large(*pgd) != 0);
- BUG_ON(pgd_none(*pgd) && "All shadow pgds should be mapped at this time\n");
- BUG_ON(pgd_large(*pgd));
+ pud = pud_offset(pgd, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pud_large(*pud)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pud_none(*pud)) {
+ unsigned long new_pmd_page = __get_free_page(gfp);
+ if (!new_pmd_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pud_none(*pud))
+ set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+ else
+ free_page(new_pmd_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
- pud = pud_offset(pgd, address);
- if (pud_none(*pud)) {
- set_pud(pud, __pud(_PAGE_TABLE | __pa(pmd_alloc_one(0, address))));
- }
- BUG_ON(pud_large(*pud));
+ pmd = pmd_offset(pud, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pmd_large(*pmd)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pmd_none(*pmd)) {
+ unsigned long new_pte_page = __get_free_page(gfp);
+ if (!new_pte_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pmd_none(*pmd))
+ set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
+ else
+ free_page(new_pte_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
- pmd = pmd_offset(pud, address);
- if (pmd_none(*pmd)) {
- set_pmd(pmd, __pmd(_PAGE_TABLE | __pa(pte_alloc_one_kernel(0, address))));
- }
- BUG_ON(pmd_large(*pmd));
+ return pte_offset_kernel(pmd, address);
+}
- pte = pte_offset_kernel(pmd, address);
+int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+ unsigned long flags)
+{
+ int ret = 0;
+ pte_t *pte;
+ unsigned long start_addr = (unsigned long )__start_addr;
+ unsigned long address = start_addr & PAGE_MASK;
+ unsigned long end_addr = PAGE_ALIGN(start_addr + size);
+ unsigned long target_address;
+
+ for (;address < end_addr; address += PAGE_SIZE) {
+ target_address = get_pa_from_mapping(address);
+ if (target_address == -1) {
+ ret = -EIO;
+ break;
+ }
+ pte = kaiser_pagetable_walk(address, false);
if (pte_none(*pte)) {
set_pte(pte, __pte(flags | target_address));
} else {
- BUG_ON(__pa(pte_page(*pte)) != target_address);
+ pte_t tmp;
+ set_pte(&tmp, __pte(flags | target_address));
+ WARN_ON_ONCE(!pte_same(*pte, tmp));
}
}
+ return ret;
}
-// at first, add a pmd for every pgd entry in the shadowmem-kernel-part of the kernel mapping
-static inline void __init _kaiser_init(void)
+static int kaiser_add_user_map_ptrs(const void *start, const void *end, unsigned long flags)
+{
+ unsigned long size = end - start;
+
+ return kaiser_add_user_map(start, size, flags);
+}
+
+/*
+ * Ensure that the top level of the (shadow) page tables are
+ * entirely populated. This ensures that all processes that get
+ * forked have the same entries. This way, we do not have to
+ * ever go set up new entries in older processes.
+ *
+ * Note: we never free these, so there are no updates to them
+ * after this.
+ */
+static void __init kaiser_init_all_pgds(void)
{
pgd_t *pgd;
int i = 0;
pgd = native_get_shadow_pgd(pgd_offset_k((unsigned long )0));
for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
- set_pgd(pgd + i, __pgd(_PAGE_TABLE |__pa(pud_alloc_one(0, 0))));
+ pgd_t new_pgd;
+ pud_t *pud = pud_alloc_one(&init_mm, PAGE_OFFSET + i * PGDIR_SIZE);
+ if (!pud) {
+ WARN_ON(1);
+ break;
+ }
+ new_pgd = __pgd(_KERNPG_TABLE |__pa(pud));
+ /*
+ * Make sure not to stomp on some other pgd entry.
+ */
+ if (!pgd_none(pgd[i])) {
+ WARN_ON(1);
+ continue;
+ }
+ set_pgd(pgd + i, new_pgd);
}
}
+#define kaiser_add_user_map_early(start, size, flags) do { \
+ int __ret = kaiser_add_user_map(start, size, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
+#define kaiser_add_user_map_ptrs_early(start, end, flags) do { \
+ int __ret = kaiser_add_user_map_ptrs(start, end, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
-spinlock_t shadow_table_lock;
+/*
+ * If anything in here fails, we will likely die on one of the
+ * first kernel->user transitions and init will die. But, we
+ * will have most of the kernel up by then and should be able to
+ * get a clean warning out of it. If we BUG_ON() here, we run
+ * the risk of crashing before we have good console output.
+ */
void __init kaiser_init(void)
{
int cpu;
- spin_lock_init(&shadow_table_lock);
-
- spin_lock(&shadow_table_lock);
- _kaiser_init();
+ kaiser_init_all_pgds();
for_each_possible_cpu(cpu) {
- // map the per cpu user variables
- _kaiser_copy(
- (unsigned long) (__per_cpu_user_mapped_start + per_cpu_offset(cpu)),
- (unsigned long) __per_cpu_user_mapped_end - (unsigned long) __per_cpu_user_mapped_start,
- __PAGE_KERNEL);
- }
-
- // map the entry/exit text section, which is responsible to switch between user- and kernel mode
- _kaiser_copy(
- (unsigned long) __entry_text_start,
- (unsigned long) __entry_text_end - (unsigned long) __entry_text_start,
- __PAGE_KERNEL_RX);
-
- // the fixed map address of the idt_table
- _kaiser_copy(
- (unsigned long) idt_descr.address,
- sizeof(gate_desc) * NR_VECTORS,
- __PAGE_KERNEL_RO);
+ void *percpu_vaddr = __per_cpu_user_mapped_start +
+ per_cpu_offset(cpu);
+ unsigned long percpu_sz = __per_cpu_user_mapped_end -
+ __per_cpu_user_mapped_start;
+ kaiser_add_user_map_early(percpu_vaddr, percpu_sz,
+ __PAGE_KERNEL);
+ }
- spin_unlock(&shadow_table_lock);
+ /*
+ * Map the entry/exit text section, which is needed at
+ * switches from user to and from kernel.
+ */
+ kaiser_add_user_map_ptrs_early(__entry_text_start, __entry_text_end,
+ __PAGE_KERNEL_RX);
+
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN)
+ kaiser_add_user_map_ptrs_early(__irqentry_text_start,
+ __irqentry_text_end,
+ __PAGE_KERNEL_RX);
+#endif
+ kaiser_add_user_map_early((void *)idt_descr.address,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL_RO);
+#ifdef CONFIG_TRACING
+ kaiser_add_user_map_early(&trace_idt_descr,
+ sizeof(trace_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&trace_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
+#endif
+ kaiser_add_user_map_early(&debug_idt_descr, sizeof(debug_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&debug_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
}
+extern void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end);
// add a mapping to the shadow-mapping, and synchronize the mappings
-void kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
{
- spin_lock(&shadow_table_lock);
- _kaiser_copy(addr, size, flags);
- spin_unlock(&shadow_table_lock);
+ return kaiser_add_user_map((const void *)addr, size, flags);
}
-extern void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end);
void kaiser_remove_mapping(unsigned long start, unsigned long size)
{
- pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(start));
- spin_lock(&shadow_table_lock);
- do {
- unmap_pud_range(pgd, start, start + size);
- } while (pgd++ != native_get_shadow_pgd(pgd_offset_k(start + size)));
- spin_unlock(&shadow_table_lock);
+ unsigned long end = start + size;
+ unsigned long addr;
+
+ for (addr = start; addr < end; addr += PGDIR_SIZE) {
+ pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(addr));
+ /*
+ * unmap_p4d_range() handles > P4D_SIZE unmaps,
+ * so no need to trim 'end'.
+ */
+ unmap_pud_range_nofree(pgd, addr, end);
+ }
}
#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -52,6 +52,7 @@ static DEFINE_SPINLOCK(cpa_lock);
#define CPA_FLUSHTLB 1
#define CPA_ARRAY 2
#define CPA_PAGES_ARRAY 4
+#define CPA_FREE_PAGETABLES 8
#ifdef CONFIG_PROC_FS
static unsigned long direct_pages_count[PG_LEVEL_NUM];
@@ -729,10 +730,13 @@ static int split_large_page(struct cpa_d
return 0;
}
-static bool try_to_free_pte_page(pte_t *pte)
+static bool try_to_free_pte_page(struct cpa_data *cpa, pte_t *pte)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PTE; i++)
if (!pte_none(pte[i]))
return false;
@@ -741,10 +745,13 @@ static bool try_to_free_pte_page(pte_t *
return true;
}
-static bool try_to_free_pmd_page(pmd_t *pmd)
+static bool try_to_free_pmd_page(struct cpa_data *cpa, pmd_t *pmd)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PMD; i++)
if (!pmd_none(pmd[i]))
return false;
@@ -753,7 +760,9 @@ static bool try_to_free_pmd_page(pmd_t *
return true;
}
-static bool unmap_pte_range(pmd_t *pmd, unsigned long start, unsigned long end)
+static bool unmap_pte_range(struct cpa_data *cpa, pmd_t *pmd,
+ unsigned long start,
+ unsigned long end)
{
pte_t *pte = pte_offset_kernel(pmd, start);
@@ -764,22 +773,23 @@ static bool unmap_pte_range(pmd_t *pmd,
pte++;
}
- if (try_to_free_pte_page((pte_t *)pmd_page_vaddr(*pmd))) {
+ if (try_to_free_pte_page(cpa, (pte_t *)pmd_page_vaddr(*pmd))) {
pmd_clear(pmd);
return true;
}
return false;
}
-static void __unmap_pmd_range(pud_t *pud, pmd_t *pmd,
+static void __unmap_pmd_range(struct cpa_data *cpa, pud_t *pud, pmd_t *pmd,
unsigned long start, unsigned long end)
{
- if (unmap_pte_range(pmd, start, end))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (unmap_pte_range(cpa, pmd, start, end))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
+static void unmap_pmd_range(struct cpa_data *cpa, pud_t *pud,
+ unsigned long start, unsigned long end)
{
pmd_t *pmd = pmd_offset(pud, start);
@@ -790,7 +800,7 @@ static void unmap_pmd_range(pud_t *pud,
unsigned long next_page = (start + PMD_SIZE) & PMD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- __unmap_pmd_range(pud, pmd, start, pre_end);
+ __unmap_pmd_range(cpa, pud, pmd, start, pre_end);
start = pre_end;
pmd++;
@@ -803,7 +813,8 @@ static void unmap_pmd_range(pud_t *pud,
if (pmd_large(*pmd))
pmd_clear(pmd);
else
- __unmap_pmd_range(pud, pmd, start, start + PMD_SIZE);
+ __unmap_pmd_range(cpa, pud, pmd,
+ start, start + PMD_SIZE);
start += PMD_SIZE;
pmd++;
@@ -813,17 +824,19 @@ static void unmap_pmd_range(pud_t *pud,
* 4K leftovers?
*/
if (start < end)
- return __unmap_pmd_range(pud, pmd, start, end);
+ return __unmap_pmd_range(cpa, pud, pmd, start, end);
/*
* Try again to free the PMD page if haven't succeeded above.
*/
if (!pud_none(*pud))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+static void __unmap_pud_range(struct cpa_data *cpa, pgd_t *pgd,
+ unsigned long start,
+ unsigned long end)
{
pud_t *pud = pud_offset(pgd, start);
@@ -834,7 +847,7 @@ void unmap_pud_range(pgd_t *pgd, unsigne
unsigned long next_page = (start + PUD_SIZE) & PUD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- unmap_pmd_range(pud, start, pre_end);
+ unmap_pmd_range(cpa, pud, start, pre_end);
start = pre_end;
pud++;
@@ -848,7 +861,7 @@ void unmap_pud_range(pgd_t *pgd, unsigne
if (pud_large(*pud))
pud_clear(pud);
else
- unmap_pmd_range(pud, start, start + PUD_SIZE);
+ unmap_pmd_range(cpa, pud, start, start + PUD_SIZE);
start += PUD_SIZE;
pud++;
@@ -858,7 +871,7 @@ void unmap_pud_range(pgd_t *pgd, unsigne
* 2M leftovers?
*/
if (start < end)
- unmap_pmd_range(pud, start, end);
+ unmap_pmd_range(cpa, pud, start, end);
/*
* No need to try to free the PUD page because we'll free it in
@@ -866,6 +879,24 @@ void unmap_pud_range(pgd_t *pgd, unsigne
*/
}
+static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = CPA_FREE_PAGETABLES,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
+void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = 0,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
static int alloc_pte_page(pmd_t *pmd)
{
pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -344,40 +344,26 @@ static inline void _pgd_free(pgd_t *pgd)
kmem_cache_free(pgd_cache, pgd);
}
#else
-static inline pgd_t *_pgd_alloc(void)
-{
-#ifdef CONFIG_KAISER
- // Instead of one PML4, we aquire two PML4s and, thus, an 8kb-aligned memory
- // block. Therefore, we have to allocate at least 3 pages. However, the
- // __get_free_pages returns us 4 pages. Hence, we store the base pointer at
- // the beginning of the page of our 8kb-aligned memory block in order to
- // correctly free it afterwars.
- unsigned long pages = __get_free_pages(PGALLOC_GFP, get_order(4*PAGE_SIZE));
-
- if(native_get_normal_pgd((pgd_t*) pages) == (pgd_t*) pages)
- {
- *((unsigned long*)(pages + 2 * PAGE_SIZE)) = pages;
- return (pgd_t *) pages;
- }
- else
- {
- *((unsigned long*)(pages + 3 * PAGE_SIZE)) = pages;
- return (pgd_t *) (pages + PAGE_SIZE);
- }
+#ifdef CONFIG_KAISER
+/*
+ * Instead of one pgd, we acquire two pgds. Being order-1, it is
+ * both 8k in size and 8k-aligned. That lets us just flip bit 12
+ * in a pointer to swap between the two 4k halves.
+ */
+#define PGD_ALLOCATION_ORDER 1
#else
- return (pgd_t *)__get_free_page(PGALLOC_GFP);
+#define PGD_ALLOCATION_ORDER 0
#endif
+
+static inline pgd_t *_pgd_alloc(void)
+{
+ return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
}
static inline void _pgd_free(pgd_t *pgd)
{
-#ifdef CONFIG_KAISER
- unsigned long pages = *((unsigned long*) ((char*) pgd + 2 * PAGE_SIZE));
- free_pages(pages, get_order(4*PAGE_SIZE));
-#else
- free_page((unsigned long)pgd);
-#endif
+ free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
}
#endif /* CONFIG_X86_PAE */
--- /dev/null
+++ b/include/linux/kaiser.h
@@ -0,0 +1,26 @@
+#ifndef _INCLUDE_KAISER_H
+#define _INCLUDE_KAISER_H
+
+#ifdef CONFIG_KAISER
+#include <asm/kaiser.h>
+#else
+
+/*
+ * These stubs are used whenever CONFIG_KAISER is off, which
+ * includes architectures that support KAISER, but have it
+ * disabled.
+ */
+
+static inline void kaiser_init(void)
+{
+}
+static inline void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+}
+static inline int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+{
+ return 0;
+}
+
+#endif /* !CONFIG_KAISER */
+#endif /* _INCLUDE_KAISER_H */
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -58,6 +58,7 @@
#include <linux/tsacct_kern.h>
#include <linux/cn_proc.h>
#include <linux/freezer.h>
+#include <linux/kaiser.h>
#include <linux/delayacct.h>
#include <linux/taskstats_kern.h>
#include <linux/random.h>
@@ -472,7 +473,6 @@ void set_task_stack_end_magic(struct tas
*stackend = STACK_END_MAGIC; /* for overflow detection */
}
-extern void kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
{
struct task_struct *tsk;
@@ -500,9 +500,10 @@ static struct task_struct *dup_task_stru
* functions again.
*/
tsk->stack = stack;
-#ifdef CONFIG_KAISER
- kaiser_add_mapping((unsigned long)tsk->stack, THREAD_SIZE, __PAGE_KERNEL);
-#endif
+
+ err = kaiser_add_mapping((unsigned long)tsk->stack, THREAD_SIZE, __PAGE_KERNEL);
+ if (err)
+ goto free_stack;
#ifdef CONFIG_VMAP_STACK
tsk->stack_vm_area = stack_vm_area;
#endif
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -32,12 +32,17 @@ config SECURITY
If you are unsure how to answer this question, answer N.
config KAISER
bool "Remove the kernel mapping in user mode"
+ default y
depends on X86_64
depends on !PARAVIRT
help
This enforces a strict kernel and user space isolation in order to close
hardware side channels on kernel address information.
+config KAISER_REAL_SWITCH
+ bool "KAISER: actually switch page tables"
+ default y
+
config SECURITYFS
bool "Enable the securityfs filesystem"
help
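A note on the locking scheme in kaiser_pagetable_walk() above: it is classic double-checked locking for lazily populating a shared tree. The page is allocated outside the lock, the empty slot is re-checked under the lock, and the loser of any race frees its duplicate. Stripped of the page-table specifics, the shape is roughly the sketch below (pthreads stands in for the kernel spinlock; install_slot() and the table layout are illustrative, not kernel API):
#include <pthread.h>
#include <stdlib.h>
static pthread_mutex_t shadow_table_allocation_lock = PTHREAD_MUTEX_INITIALIZER;
/*
 * Lazily populate table[idx] the way kaiser_pagetable_walk() populates
 * an empty pud/pmd: allocate before taking the lock, re-check the slot
 * under the lock, and free our copy if another CPU won the race.
 */
static void *install_slot(void **table, int idx, size_t size)
{
	void *new_page;
	if (table[idx])			/* fast path: already populated */
		return table[idx];
	new_page = calloc(1, size);	/* may sleep; done outside the lock */
	if (!new_page)
		return NULL;
	pthread_mutex_lock(&shadow_table_allocation_lock);
	if (!table[idx])		/* still empty: install our page */
		table[idx] = new_page;
	else
		free(new_page);		/* lost the race: drop the duplicate */
	pthread_mutex_unlock(&shadow_table_allocation_lock);
	return table[idx];
}
This is also why the comment above notes the lock is only needed for set_p[4um]d(): once the tree is fully populated, the fast path never takes the lock again.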
Patches currently in stable-queue which might be from dave.hansen(a)linux.intel.com are
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
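A footnote on the _pgd_alloc() change in the kaiser-merged-update patch above: the whole scheme rests on the PGD pair being order-1, i.e. 8k in size and 8k-aligned, so the kernel and shadow copies sit in adjacent 4k pages and converting one pointer into the other is a single OR or AND-NOT of bit 12 (PAGE_SIZE). A minimal userspace sketch of the arithmetic, with aligned_alloc() standing in for __get_free_pages() (the names here are illustrative):
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#define PAGE_SIZE 4096UL
static void *shadow_half(void *pgd)	/* cf. native_get_shadow_pgd() */
{
	return (void *)((uintptr_t)pgd | PAGE_SIZE);		/* set bit 12 */
}
static void *normal_half(void *pgd)	/* cf. native_get_normal_pgd() */
{
	return (void *)((uintptr_t)pgd & ~(uintptr_t)PAGE_SIZE);	/* clear bit 12 */
}
int main(void)
{
	/* Models PGD_ALLOCATION_ORDER == 1: an 8k block, 8k-aligned. */
	void *pgd = aligned_alloc(2 * PAGE_SIZE, 2 * PAGE_SIZE);
	if (!pgd)
		return 1;
	printf("kernel half %p, shadow half %p\n", normal_half(pgd), shadow_half(pgd));
	printf("round trip ok: %d\n", normal_half(shadow_half(pgd)) == pgd);
	free(pgd);
	return 0;
}
This is a clear improvement over the removed code, which over-allocated four pages and stashed the base pointer in a spare page just to recover it in _pgd_free().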
This is a note to let you know that I've just added the patch titled
mfd: cros ec: spi: Don't send first message too soon
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mfd-cros-ec-spi-don-t-send-first-message-too-soon.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 15d8374874ded0bec37ef27f8301a6d54032c0e5 Mon Sep 17 00:00:00 2001
From: Jon Hunter <jonathanh(a)nvidia.com>
Date: Tue, 14 Nov 2017 14:43:27 +0000
Subject: mfd: cros ec: spi: Don't send first message too soon
From: Jon Hunter <jonathanh(a)nvidia.com>
commit 15d8374874ded0bec37ef27f8301a6d54032c0e5 upstream.
On the Tegra124 Nyan-Big chromebook the very first SPI message sent to
the EC is failing.
The Tegra SPI driver configures the SPI chip-selects to be active-high
by default (and always has for many years). The EC SPI requires an
active-low chip-select and so the Tegra chip-select is reconfigured to
be active-low when the EC SPI driver calls spi_setup(). The problem is
that if the first SPI message to the EC is sent too soon after
reconfiguring the SPI chip-select, it fails.
The EC SPI driver prevents back-to-back SPI messages being sent too
soon by keeping track of the time the last transfer was sent via the
variable 'last_transfer_ns'. To prevent the very first transfer being
sent too soon, initialise the 'last_transfer_ns' variable after calling
spi_setup() and before sending the first SPI message.
Signed-off-by: Jon Hunter <jonathanh(a)nvidia.com>
Reviewed-by: Brian Norris <briannorris(a)chromium.org>
Reviewed-by: Douglas Anderson <dianders(a)chromium.org>
Acked-by: Benson Leung <bleung(a)chromium.org>
Signed-off-by: Lee Jones <lee.jones(a)linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/mfd/cros_ec_spi.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/mfd/cros_ec_spi.c
+++ b/drivers/mfd/cros_ec_spi.c
@@ -23,6 +23,7 @@
#include <linux/slab.h>
#include <linux/spi/spi.h>
+ ec_spi->last_transfer_ns = ktime_get_ns();
/* The header byte, which follows the preamble */
#define EC_MSG_HEADER 0xec
Patches currently in stable-queue which might be from jonathanh(a)nvidia.com are
queue-3.18/mfd-cros-ec-spi-don-t-send-first-message-too-soon.patch
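For context, the pattern this fix relies on is: record a timestamp at the point the bus is (re)configured — per the commit message, right after the spi_setup() call in probe — and have the transfer path wait out a minimum quiet period measured from that timestamp. A rough userspace model (clock_gettime() stands in for ktime_get_ns(); the 200 us gap and the function names are illustrative, not the driver's actual constants):
#include <stdint.h>
#include <time.h>
#include <unistd.h>
#define MIN_GAP_NS 200000ULL	/* illustrative quiet period, not the driver's value */
static uint64_t last_transfer_ns;
static uint64_t now_ns(void)	/* stand-in for ktime_get_ns() */
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}
/* Call where spi_setup() returns, so the quiet period also covers the
 * window right after the chip-select polarity change. */
static void mark_bus_reconfigured(void)
{
	last_transfer_ns = now_ns();
}
/* Call before each transfer: sleep off whatever is left of the gap. */
static void wait_for_quiet_period(void)
{
	uint64_t elapsed = now_ns() - last_transfer_ns;
	if (last_transfer_ns && elapsed < MIN_GAP_NS)
		usleep((MIN_GAP_NS - elapsed) / 1000);
}
int main(void)
{
	mark_bus_reconfigured();
	wait_for_quiet_period();	/* the very first transfer now waits too */
	return 0;
}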
On 03/01/18 03:11, kernelci.org bot wrote:
> stable/linux-4.14.y boot: 108 boots: 0 failed, 107 passed with 1 conflict (v4.14.11)
>
> Full Boot Summary: https://kernelci.org/boot/all/job/stable/branch/linux-4.14.y/kernel/v4.14.1…
> Full Build Summary: https://kernelci.org/build/stable/branch/linux-4.14.y/kernel/v4.14.11/
>
> Tree: stable
> Branch: linux-4.14.y
> Git Describe: v4.14.11
> Git Commit: 0d59679df5b53755c00ea0292df696f97bfc950d
> Git URL: http://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
> Tested: 59 unique boards, 23 SoC families, 16 builds out of 185
>
> Boot Regressions Detected:
>
> x86:
>
> x86_64_defconfig:
> qemu:
> lab-mhart: new failure (last pass: v4.14.10)
>
> Conflicting Boot Failure Detected: (These likely are not failures as other labs are reporting PASS. Needs review.)
>
> x86:
>
> x86_64_defconfig:
> qemu:
> lab-mhart: FAIL
> lab-collabora: PASS
Well, it turns out this is not exactly a conflict as there's a
subtle difference between the qemu devices in lab-mhart and in
lab-collabora. The ones in lab-collabora are configured to use
KVM, and it looks like the ones in lab-mhart aren't.
So this job with KVM enabled passes in lab-collabora:
https://lava.collabora.co.uk/scheduler/job/1032358
but it fails if I tell LAVA (qemu) to disable KVM:
https://lava.collabora.co.uk/scheduler/job/1032359
with the same panic as in lab-mhart. It seems like it's failing
to return from an interrupt:
http://lava.streamtester.net/scheduler/job/87308
[ 2.678828] ? native_iret+0x7/0x7
[ 2.679208] WARNING: can't dereference iret registers at 00000000ffc66068 for ip page_fault+0x11/0x60
This triggered an automated bisection on kernelci.org; please see
the results below.
I may run another bisection with this config enabled earlier in
the history to track down the actual change in the code that
introduced the issue; let me know if it's worth doing.
Hope this helps!
Best wishes,
Guillaume
--8<-----------------------------------------------------------------------8<--
Bisection result for stable/linux-4.14.y (v4.14.11) on qemu
Good known revision:
b8ce823 Linux 4.14.10
Bad known revision:
0d59679 Linux 4.14.11
Extra parameters:
Tree: stable
URL: http://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Branch: linux-4.14.y
Target: qemu
Lab: lab-mhart
Defconfig: x86_64_defconfig
Plan: boot
Breaking commit found:
-------------------------------------------------------------------------------
commit 3dfd9fd8d897214b1a880c7fd8ed36b88faa1c02
Author: Dave Hansen <dave.hansen(a)linux.intel.com>
Date: Mon Dec 4 15:08:03 2017 +0100
x86/mm/pti: Add Kconfig
commit 385ce0ea4c078517fa51c261882c4e72fba53005 upstream.
Finally allow CONFIG_PAGE_TABLE_ISOLATION to be enabled.
PARAVIRT generally requires that the kernel not manage its own page tables.
It also means that the hypervisor and kernel must agree wholeheartedly
about what format the page tables are in and what they contain.
PAGE_TABLE_ISOLATION, unfortunately, changes the rules and they
can not be used together.
I've seen conflicting feedback from maintainers lately about whether they
want the Kconfig magic to go first or last in a patch series. It's going
last here because the partially-applied series leads to kernels that can
not boot in a bunch of cases. I did a run through the entire series with
CONFIG_PAGE_TABLE_ISOLATION=y to look for build errors, though.
[ tglx: Removed SMP and !PARAVIRT dependencies as they no longer exist ]
Signed-off-by: Dave Hansen <dave.hansen(a)linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: David Laight <David.Laight(a)aculab.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: Eduardo Valentin <eduval(a)amazon.com>
Cc: Greg KH <gregkh(a)linuxfoundation.org>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: aliguori(a)amazon.com
Cc: daniel.gruss(a)iaik.tugraz.at
Cc: hughd(a)google.com
Cc: keescook(a)google.com
Cc: linux-mm(a)kvack.org
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
diff --git a/security/Kconfig b/security/Kconfig
index e8e4494..6614b93 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -54,6 +54,17 @@ config SECURITY_NETWORK
implement socket and networking access controls.
If you are unsure how to answer this question, answer N.
+config PAGE_TABLE_ISOLATION
+ bool "Remove the kernel mapping in user mode"
+ depends on X86_64 && !UML
+ default y
+ help
+ This feature reduces the number of hardware side channels by
+ ensuring that the majority of kernel addresses are not mapped
+ into userspace.
+
+ See Documentation/x86/pagetable-isolation.txt for more details.
+
config SECURITY_INFINIBAND
bool "Infiniband Security Hooks"
depends on SECURITY && INFINIBAND
-------------------------------------------------------------------------------
Git bisection log:
-------------------------------------------------------------------------------
git bisect start
# good: [b8ce8232fcc37fe7a97db79ea0a5f32098c25e72] Linux 4.14.10
git bisect good b8ce8232fcc37fe7a97db79ea0a5f32098c25e72
# bad: [0d59679df5b53755c00ea0292df696f97bfc950d] Linux 4.14.11
git bisect bad 0d59679df5b53755c00ea0292df696f97bfc950d
# bad: [621b5ae0f9f4f9ef91bf441afc086ecf5e752d51] s390/qeth: apply takeover changes when mode is toggled
git bisect bad 621b5ae0f9f4f9ef91bf441afc086ecf5e752d51
# bad: [66f833dbed02d39c44440b6b35ac088655c32edb] ring-buffer: Mask out the info bits when returning buffer page length
git bisect bad 66f833dbed02d39c44440b6b35ac088655c32edb
# good: [d230c1917f57c3beee2e0204a4c8c58999758b95] x86/mm/pti: Map ESPFIX into user space
git bisect good d230c1917f57c3beee2e0204a4c8c58999758b95
# good: [36a72ab52c8d969a7a302082f52731c1be0e9ada] x86/mm: Optimize RESTORE_CR3
git bisect good 36a72ab52c8d969a7a302082f52731c1be0e9ada
# bad: [3dfd9fd8d897214b1a880c7fd8ed36b88faa1c02] x86/mm/pti: Add Kconfig
git bisect bad 3dfd9fd8d897214b1a880c7fd8ed36b88faa1c02
# good: [ef4b38472d6b1bf587554dfc7d5ab7abc835c1a5] x86/mm: Clarify the whole ASID/kernel PCID/user PCID naming
git bisect good ef4b38472d6b1bf587554dfc7d5ab7abc835c1a5
# good: [33d9d7836f0fa02777667d72bc815c12fbe61cac] x86/dumpstack: Indicate in Oops whether PTI is configured and enabled
git bisect good 33d9d7836f0fa02777667d72bc815c12fbe61cac
# first bad commit: [3dfd9fd8d897214b1a880c7fd8ed36b88faa1c02] x86/mm/pti: Add Kconfig
-------------------------------------------------------------------------------
> ---
> For more info write to <info(a)kernelci.org>
>
>
>
> _______________________________________________
> Kernel-build-reports mailing list
> Kernel-build-reports(a)lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/kernel-build-reports
This is a note to let you know that I've just added the patch titled
KPTI: Report when enabled
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kpti-report-when-enabled.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From keescook(a)chromium.org Wed Jan 3 20:49:35 2018
From: Kees Cook <keescook(a)chromium.org>
Date: Wed, 3 Jan 2018 10:43:32 -0800
Subject: KPTI: Report when enabled
To: Greg KH <gregkh(a)linuxfoundation.org>
Message-ID: <20180103184332.GA18888@beast>
Content-Disposition: inline
From: Kees Cook <keescook(a)chromium.org>
Make sure dmesg reports when KPTI is enabled.
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -11,6 +11,9 @@
#include <linux/uaccess.h>
#include <linux/ftrace.h>
+#undef pr_fmt
+#define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
+
#include <asm/kaiser.h>
#include <asm/tlbflush.h> /* to verify its kaiser declarations */
#include <asm/pgtable.h>
@@ -293,7 +296,7 @@ enable:
return;
disable:
- pr_info("Kernel/User page tables isolation: disabled\n");
+ pr_info("disabled\n");
silent_disable:
kaiser_enabled = 0;
@@ -353,6 +356,8 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&debug_idt_table,
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
+
+ pr_info("enabled\n");
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
Patches currently in stable-queue which might be from keescook(a)chromium.org are
queue-4.4/kpti-rename-to-page_table_isolation.patch
queue-4.4/kpti-report-when-enabled.patch
This is a note to let you know that I've just added the patch titled
KPTI: Report when enabled
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kpti-report-when-enabled.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From keescook(a)chromium.org Wed Jan 3 20:48:07 2018
From: Kees Cook <keescook(a)chromium.org>
Date: Wed, 3 Jan 2018 10:18:01 -0800
Subject: KPTI: Report when enabled
To: Greg KH <gregkh(a)linuxfoundation.org>
Message-ID: <20180103181801.GA33383@beast>
Content-Disposition: inline
From: Kees Cook <keescook(a)chromium.org>
Make sure dmesg reports when KPTI is enabled.
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -10,6 +10,9 @@
#include <linux/mm.h>
#include <linux/uaccess.h>
+#undef pr_fmt
+#define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
+
#include <asm/kaiser.h>
#include <asm/tlbflush.h> /* to verify its kaiser declarations */
#include <asm/pgtable.h>
@@ -292,7 +295,7 @@ enable:
return;
disable:
- pr_info("Kernel/User page tables isolation: disabled\n");
+ pr_info("disabled\n");
silent_disable:
kaiser_enabled = 0;
@@ -352,6 +355,8 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&debug_idt_table,
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
+
+ pr_info("enabled\n");
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
Patches currently in stable-queue which might be from keescook(a)chromium.org are
queue-4.9/kpti-rename-to-page_table_isolation.patch
queue-4.9/kpti-report-when-enabled.patch
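The mechanism both of these backports lean on is the kernel's pr_fmt() convention: printk.h routes every pr_info()/pr_err() format string through pr_fmt() at the point of use, so a file that redefines the macro gets all of its messages prefixed. That is why the patch can shorten the strings to a bare "enabled"/"disabled". A small userspace model of the macro trick (printf() standing in for printk()):
#include <stdio.h>
/* printk.h effectively does this: a default pr_fmt() plus pr_* macros
 * that expand their format string through it at each call site. */
#define pr_fmt(fmt) fmt
#define pr_info(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)
/* A .c file then overrides the prefix for all of its own messages,
 * which is what the hunk above does in kaiser.c: */
#undef pr_fmt
#define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
int main(void)
{
	pr_info("enabled\n");	/* prints the fully prefixed line */
	return 0;
}
Because expansion happens at the call site, only pr_* calls after the redefinition pick up the new prefix, so the #undef/#define pair must come before the first message in the file.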
This is a note to let you know that I've just added the patch titled
x86/kaiser: Rename and simplify X86_FEATURE_KAISER handling
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Tue, 2 Jan 2018 14:19:48 +0100
Subject: x86/kaiser: Rename and simplify X86_FEATURE_KAISER handling
From: Borislav Petkov <bp(a)suse.de>
Concentrate it in arch/x86/mm/kaiser.c and use the upstream string "nopti".
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/kernel-parameters.txt | 2 +-
arch/x86/kernel/cpu/common.c | 18 ------------------
arch/x86/mm/kaiser.c | 20 +++++++++++++++++++-
3 files changed, 20 insertions(+), 20 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2523,7 +2523,7 @@ bytes respectively. Such letter suffixes
nojitter [IA-64] Disables jitter checking for ITC timers.
- nokaiser [X86-64] Disable KAISER isolation of kernel from user.
+ nopti [X86-64] Disable KAISER isolation of kernel from user.
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -178,20 +178,6 @@ static int __init x86_pcid_setup(char *s
return 1;
}
__setup("nopcid", x86_pcid_setup);
-
-static int __init x86_nokaiser_setup(char *s)
-{
- /* nokaiser doesn't accept parameters */
- if (s)
- return -EINVAL;
-#ifdef CONFIG_KAISER
- kaiser_enabled = 0;
- setup_clear_cpu_cap(X86_FEATURE_KAISER);
- pr_info("nokaiser: KAISER feature disabled\n");
-#endif
- return 0;
-}
-early_param("nokaiser", x86_nokaiser_setup);
#endif
static int __init x86_noinvpcid_setup(char *s)
@@ -761,10 +747,6 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
c->x86_power = cpuid_edx(0x80000007);
init_scattered_cpuid_features(c);
-#ifdef CONFIG_KAISER
- if (kaiser_enabled)
- set_cpu_cap(c, X86_FEATURE_KAISER);
-#endif
}
static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -275,8 +275,13 @@ void __init kaiser_init(void)
{
int cpu;
- if (!kaiser_enabled)
+ if (!kaiser_enabled) {
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
return;
+ }
+
+ setup_force_cpu_cap(X86_FEATURE_KAISER);
+
kaiser_init_all_pgds();
for_each_possible_cpu(cpu) {
@@ -419,3 +424,16 @@ void kaiser_flush_tlb_on_return_to_user(
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
+
+static int __init x86_nokaiser_setup(char *s)
+{
+ /* nopti doesn't accept parameters */
+ if (s)
+ return -EINVAL;
+
+ kaiser_enabled = 0;
+ pr_info("Kernel/User page tables isolation: disabled\n");
+
+ return 0;
+}
+early_param("nopti", x86_nokaiser_setup);
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/x86-kaiser-reenable-paravirt.patch
queue-4.4/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.4/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.4/x86-kaiser-move-feature-detection-up.patch
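For readers unfamiliar with early_param(): it registers a handler that runs while the kernel scans the boot command line, well before normal __setup() processing, which is what lets "nopti" switch kaiser_enabled off before any paging code consumes it. A rough userspace model of the control flow (strstr() is a crude stand-in for the kernel's real command-line tokenizer; the error value is illustrative):
#include <stdio.h>
#include <string.h>
static int kaiser_enabled = 1;
/* Models x86_nokaiser_setup(): "nopti" is a bare flag, so any
 * argument is rejected, mirroring the -EINVAL check above. */
static int nokaiser_setup(const char *arg)
{
	if (arg && *arg)
		return -1;
	kaiser_enabled = 0;
	printf("Kernel/User page tables isolation: disabled\n");
	return 0;
}
/* Crude stand-in for the kernel's early command-line scan. */
static void parse_early_options(const char *cmdline)
{
	if (strstr(cmdline, "nopti"))
		nokaiser_setup(NULL);
}
int main(void)
{
	parse_early_options("root=/dev/sda1 nopti quiet");
	printf("kaiser_enabled = %d\n", kaiser_enabled);
	return 0;
}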
This is a note to let you know that I've just added the patch titled
x86/paravirt: Dont patch flush_tlb_single
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-paravirt-dont-patch-flush_tlb_single.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Mon, 4 Dec 2017 15:07:30 +0100
Subject: x86/paravirt: Dont patch flush_tlb_single
From: Thomas Gleixner <tglx(a)linutronix.de>
commit a035795499ca1c2bd1928808d1a156eda1420383 upstream
native_flush_tlb_single() will be changed with the upcoming
PAGE_TABLE_ISOLATION feature. This requires having more code in
there than just INVLPG.
Remove the paravirt patching for it.
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe(a)redhat.com>
Reviewed-by: Juergen Gross <jgross(a)suse.com>
Acked-by: Peter Zijlstra <peterz(a)infradead.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Borislav Petkov <bpetkov(a)suse.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Laight <David.Laight(a)aculab.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: Eduardo Valentin <eduval(a)amazon.com>
Cc: Greg KH <gregkh(a)linuxfoundation.org>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: aliguori(a)amazon.com
Cc: daniel.gruss(a)iaik.tugraz.at
Cc: hughd(a)google.com
Cc: keescook(a)google.com
Cc: linux-mm(a)kvack.org
Cc: michael.schwarz(a)iaik.tugraz.at
Cc: moritz.lipp(a)iaik.tugraz.at
Cc: richard.fellner(a)student.tugraz.at
Link: https://lkml.kernel.org/r/20171204150606.828111617@linutronix.de
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Acked-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/paravirt_patch_64.c | 2 --
1 file changed, 2 deletions(-)
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -9,7 +9,6 @@ DEF_NATIVE(pv_irq_ops, save_fl, "pushfq;
DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax");
DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax");
DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3");
-DEF_NATIVE(pv_mmu_ops, flush_tlb_single, "invlpg (%rdi)");
DEF_NATIVE(pv_cpu_ops, clts, "clts");
DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
@@ -59,7 +58,6 @@ unsigned native_patch(u8 type, u16 clobb
PATCH_SITE(pv_mmu_ops, read_cr3);
PATCH_SITE(pv_mmu_ops, write_cr3);
PATCH_SITE(pv_cpu_ops, clts);
- PATCH_SITE(pv_mmu_ops, flush_tlb_single);
PATCH_SITE(pv_cpu_ops, wbinvd);
#if defined(CONFIG_PARAVIRT_SPINLOCKS)
case PARAVIRT_PATCH(pv_lock_ops.queued_spin_unlock):
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
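For context, the DEF_NATIVE() line being removed told the paravirt patcher it could overwrite the indirect pv_mmu_ops.flush_tlb_single call site with the single INVLPG instruction. The native helper is essentially the wrapper below; once PAGE_TABLE_ISOLATION lands it must also flush the user ASID, so it no longer fits in one patchable instruction (a sketch of the pre-PTI 4.9 helper, modulo exact kernel annotations):
/*
 * What DEF_NATIVE(pv_mmu_ops, flush_tlb_single, "invlpg (%rdi)")
 * boiled down to: invalidate the TLB entry for one linear address.
 */
static inline void native_flush_tlb_single(unsigned long addr)
{
	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
	/* With PAGE_TABLE_ISOLATION this path must additionally flush
	 * the user ASID, hence the patch site has to go. */
}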
This is a note to let you know that I've just added the patch titled
x86/kaiser: Rename and simplify X86_FEATURE_KAISER handling
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Tue, 2 Jan 2018 14:19:48 +0100
Subject: x86/kaiser: Rename and simplify X86_FEATURE_KAISER handling
From: Borislav Petkov <bp(a)suse.de>
Concentrate it in arch/x86/mm/kaiser.c and use the upstream string "nopti".
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/kernel-parameters.txt | 2 +-
arch/x86/kernel/cpu/common.c | 18 ------------------
arch/x86/mm/kaiser.c | 20 +++++++++++++++++++-
3 files changed, 20 insertions(+), 20 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2763,7 +2763,7 @@ bytes respectively. Such letter suffixes
nojitter [IA-64] Disables jitter checking for ITC timers.
- nokaiser [X86-64] Disable KAISER isolation of kernel from user.
+ nopti [X86-64] Disable KAISER isolation of kernel from user.
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -179,20 +179,6 @@ static int __init x86_pcid_setup(char *s
return 1;
}
__setup("nopcid", x86_pcid_setup);
-
-static int __init x86_nokaiser_setup(char *s)
-{
- /* nokaiser doesn't accept parameters */
- if (s)
- return -EINVAL;
-#ifdef CONFIG_KAISER
- kaiser_enabled = 0;
- setup_clear_cpu_cap(X86_FEATURE_KAISER);
- pr_info("nokaiser: KAISER feature disabled\n");
-#endif
- return 0;
-}
-early_param("nokaiser", x86_nokaiser_setup);
#endif
static int __init x86_noinvpcid_setup(char *s)
@@ -813,10 +799,6 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
init_scattered_cpuid_features(c);
-#ifdef CONFIG_KAISER
- if (kaiser_enabled)
- set_cpu_cap(c, X86_FEATURE_KAISER);
-#endif
}
static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -274,8 +274,13 @@ void __init kaiser_init(void)
{
int cpu;
- if (!kaiser_enabled)
+ if (!kaiser_enabled) {
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
return;
+ }
+
+ setup_force_cpu_cap(X86_FEATURE_KAISER);
+
kaiser_init_all_pgds();
for_each_possible_cpu(cpu) {
@@ -418,3 +423,16 @@ void kaiser_flush_tlb_on_return_to_user(
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
+
+static int __init x86_nokaiser_setup(char *s)
+{
+ /* nopti doesn't accept parameters */
+ if (s)
+ return -EINVAL;
+
+ kaiser_enabled = 0;
+ pr_info("Kernel/User page tables isolation: disabled\n");
+
+ return 0;
+}
+early_param("nopti", x86_nokaiser_setup);
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/x86-kaiser-reenable-paravirt.patch
queue-4.9/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.9/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.9/x86-kaiser-move-feature-detection-up.patch
This is a note to let you know that I've just added the patch titled
x86/kaiser: Reenable PARAVIRT
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-reenable-paravirt.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Tue, 2 Jan 2018 14:19:49 +0100
Subject: x86/kaiser: Reenable PARAVIRT
From: Borislav Petkov <bp(a)suse.de>
Now that the required bits have been addressed, reenable
PARAVIRT.
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -34,7 +34,7 @@ config SECURITY
config KAISER
bool "Remove the kernel mapping in user mode"
default y
- depends on X86_64 && SMP && !PARAVIRT
+ depends on X86_64 && SMP
help
This enforces a strict kernel and user space isolation, in order
to close hardware side channels on kernel address information.
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/x86-kaiser-reenable-paravirt.patch
queue-4.9/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.9/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.9/x86-kaiser-move-feature-detection-up.patch
This is a note to let you know that I've just added the patch titled
x86/kaiser: Move feature detection up
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-move-feature-detection-up.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Mon, 25 Dec 2017 13:57:16 +0100
Subject: x86/kaiser: Move feature detection up
From: Borislav Petkov <bp(a)suse.de>
... before the first use of kaiser_enabled as otherwise funky
things happen:
about to get started...
(XEN) d0v0 Unhandled page fault fault/trap [#14, ec=0000]
(XEN) Pagetable walk from ffff88022a449090:
(XEN) L4[0x110] = 0000000229e0e067 0000000000001e0e
(XEN) L3[0x008] = 0000000000000000 ffffffffffffffff
(XEN) domain_crash_sync called from entry.S: fault at ffff82d08033fd08
entry.o#create_bounce_frame+0x135/0x14d
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.9.1_02-3.21 x86_64 debug=n Not tainted ]----
(XEN) CPU: 0
(XEN) RIP: e033:[<ffffffff81007460>]
(XEN) RFLAGS: 0000000000000286 EM: 1 CONTEXT: pv guest (d0v0)
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 2 ++
arch/x86/kernel/setup.c | 7 +++++++
arch/x86/mm/kaiser.c | 2 --
3 files changed, 9 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -96,8 +96,10 @@ DECLARE_PER_CPU(unsigned long, x86_cr3_p
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
extern int kaiser_enabled;
+extern void __init kaiser_check_boottime_disable(void);
#else
#define kaiser_enabled 0
+static inline void __init kaiser_check_boottime_disable(void) {}
#endif /* CONFIG_KAISER */
/*
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -114,6 +114,7 @@
#include <asm/microcode.h>
#include <asm/mmu_context.h>
#include <asm/kaslr.h>
+#include <asm/kaiser.h>
/*
* max_low_pfn_mapped: highest direct mapped pfn under 4GB
@@ -1019,6 +1020,12 @@ void __init setup_arch(char **cmdline_p)
*/
init_hypervisor_platform();
+ /*
+ * This needs to happen right after XENPV is set on xen and
+ * kaiser_enabled is checked below in cleanup_highmap().
+ */
+ kaiser_check_boottime_disable();
+
x86_init.resources.probe_roms();
/* after parse_early_param, so could debug it */
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -310,8 +310,6 @@ void __init kaiser_init(void)
{
int cpu;
- kaiser_check_boottime_disable();
-
if (!kaiser_enabled)
return;
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/x86-kaiser-reenable-paravirt.patch
queue-4.9/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.9/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.9/x86-kaiser-move-feature-detection-up.patch
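The bug class here is plain use-before-initialization of a tunable: per the comment added to setup_arch(), cleanup_highmap() consults kaiser_enabled, so the command-line check has to be hoisted ahead of the first consumer. A trivial model of the ordering constraint (command-line handling reduced to strstr(); all names illustrative):
#include <stdio.h>
#include <string.h>
static int kaiser_enabled = 1;
static void kaiser_check_boottime_disable(const char *cmdline)
{
	if (strstr(cmdline, "nopti"))	/* stand-in for the real parser */
		kaiser_enabled = 0;
}
static void cleanup_highmap(void)
{
	/* First consumer of the flag; must observe its final value. */
	printf("cleanup_highmap sees kaiser_enabled=%d\n", kaiser_enabled);
}
int main(void)
{
	const char *cmdline = "nopti";
	/* The patch hoists this call ahead of the first consumer: */
	kaiser_check_boottime_disable(cmdline);
	cleanup_highmap();		/* now reliably sees 0 */
	return 0;
}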
This is a note to let you know that I've just added the patch titled
kaiser: x86_cr3_pcid_noflush and x86_cr3_pcid_user
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 27 Aug 2017 16:24:27 -0700
Subject: kaiser: x86_cr3_pcid_noflush and x86_cr3_pcid_user
From: Hugh Dickins <hughd(a)google.com>
Mostly this commit is just unshouting X86_CR3_PCID_KERN_VAR and
X86_CR3_PCID_USER_VAR: we usually name variables in lower-case.
But why does x86_cr3_pcid_noflush need to be __aligned(PAGE_SIZE)?
Ah, it's a leftover from when kaiser_add_user_map() once complained
about mapping the same page twice. Make it __read_mostly instead.
(I'm a little uneasy about all the unrelated data which shares its
page getting user-mapped too, but that was so before, and not a big
deal: though we call it user-mapped, it's not mapped with _PAGE_USER.)
And there is a little change around the two calls to do_nmi().
Previously they set the NOFLUSH bit (if PCID supported) when
forcing to kernel context before do_nmi(); now they also have the
NOFLUSH bit set (if PCID supported) when restoring context after:
nothing done in do_nmi() should require a TLB to be flushed here.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 8 ++++----
arch/x86/include/asm/kaiser.h | 11 +++++------
arch/x86/mm/kaiser.c | 13 +++++++------
3 files changed, 16 insertions(+), 16 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1316,11 +1316,11 @@ ENTRY(nmi)
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
- /* Add back kernel PCID and "no flush" bit */
- orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
call do_nmi
@@ -1560,11 +1560,11 @@ end_repeat_nmi:
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
- /* Add back kernel PCID and "no flush" bit */
- orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -25,7 +25,7 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-orq X86_CR3_PCID_KERN_VAR, \reg
+orq x86_cr3_pcid_noflush, \reg
movq \reg, %cr3
.endm
@@ -37,11 +37,10 @@ movq \reg, %cr3
* not enabled): so that the one register can update both memory and cr3.
*/
movq %cr3, \reg
-andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-orq PER_CPU_VAR(X86_CR3_PCID_USER_VAR), \reg
+orq PER_CPU_VAR(x86_cr3_pcid_user), \reg
js 9f
/* FLUSH this time, reset to NOFLUSH for next time (if PCID enabled) */
-movb \regb, PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
+movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)
9:
movq \reg, %cr3
.endm
@@ -94,8 +93,8 @@ movq PER_CPU_VAR(unsafe_stack_register_b
*/
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
-extern unsigned long X86_CR3_PCID_KERN_VAR;
-DECLARE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
+extern unsigned long x86_cr3_pcid_noflush;
+DECLARE_PER_CPU(unsigned long, x86_cr3_pcid_user);
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -28,8 +28,8 @@ DEFINE_PER_CPU_USER_MAPPED(unsigned long
* This is also handy because systems that do not support PCIDs
* just end up or'ing a 0 into their CR3, which does no harm.
*/
-__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR;
-DEFINE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
+unsigned long x86_cr3_pcid_noflush __read_mostly;
+DEFINE_PER_CPU(unsigned long, x86_cr3_pcid_user);
/*
* At runtime, the only things we map are some things for CPU
@@ -303,7 +303,8 @@ void __init kaiser_init(void)
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
- kaiser_add_user_map_early(&X86_CR3_PCID_KERN_VAR, PAGE_SIZE,
+ kaiser_add_user_map_early(&x86_cr3_pcid_noflush,
+ sizeof(x86_cr3_pcid_noflush),
__PAGE_KERNEL);
}
@@ -381,8 +382,8 @@ void kaiser_setup_pcid(void)
* These variables are used by the entry/exit
* code to change PCID and pgd and TLB flushing.
*/
- X86_CR3_PCID_KERN_VAR = kern_cr3;
- this_cpu_write(X86_CR3_PCID_USER_VAR, user_cr3);
+ x86_cr3_pcid_noflush = kern_cr3;
+ this_cpu_write(x86_cr3_pcid_user, user_cr3);
}
/*
@@ -392,7 +393,7 @@ void kaiser_setup_pcid(void)
*/
void kaiser_flush_tlb_on_return_to_user(void)
{
- this_cpu_write(X86_CR3_PCID_USER_VAR,
+ this_cpu_write(x86_cr3_pcid_user,
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
x86/kaiser: Check boottime cmdline params
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-check-boottime-cmdline-params.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Tue, 2 Jan 2018 14:19:48 +0100
Subject: x86/kaiser: Check boottime cmdline params
From: Borislav Petkov <bp(a)suse.de>
AMD (and possibly other vendors) are not affected by the leak
KAISER is protecting against.
Keep the "nopti" for traditional reasons and add pti=<on|off|auto>
like upstream.
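[For reference, typical usage on the kernel command line, with the semantics
from the Documentation hunk and the probe logic below:]

	pti=on		# force page-table isolation on
	pti=off		# force it off
	pti=auto	# default: enabled, except on AMD CPUs
	nopti		# traditional spelling, same effect as pti=off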
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/kernel-parameters.txt | 6 +++
arch/x86/mm/kaiser.c | 59 +++++++++++++++++++++++++-----------
2 files changed, 47 insertions(+), 18 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3327,6 +3327,12 @@ bytes respectively. Such letter suffixes
pt. [PARIDE]
See Documentation/blockdev/paride.txt.
+ pti= [X86_64]
+ Control KAISER user/kernel address space isolation:
+ on - enable
+ off - disable
+ auto - default setting
+
pty.legacy_count=
[KNL] Number of legacy pty's. Overwrites compiled-in
default number.
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -15,6 +15,7 @@
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/desc.h>
+#include <asm/cmdline.h>
int kaiser_enabled __read_mostly = 1;
EXPORT_SYMBOL(kaiser_enabled); /* for inlined TLB flush functions */
@@ -263,6 +264,43 @@ static void __init kaiser_init_all_pgds(
WARN_ON(__ret); \
} while (0)
+void __init kaiser_check_boottime_disable(void)
+{
+ bool enable = true;
+ char arg[5];
+ int ret;
+
+ ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
+ if (ret > 0) {
+ if (!strncmp(arg, "on", 2))
+ goto enable;
+
+ if (!strncmp(arg, "off", 3))
+ goto disable;
+
+ if (!strncmp(arg, "auto", 4))
+ goto skip;
+ }
+
+ if (cmdline_find_option_bool(boot_command_line, "nopti"))
+ goto disable;
+
+skip:
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+ goto disable;
+
+enable:
+ if (enable)
+ setup_force_cpu_cap(X86_FEATURE_KAISER);
+
+ return;
+
+disable:
+ pr_info("Kernel/User page tables isolation: disabled\n");
+ kaiser_enabled = 0;
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
+}
+
/*
* If anything in here fails, we will likely die on one of the
* first kernel->user transitions and init will die. But, we
@@ -274,12 +312,10 @@ void __init kaiser_init(void)
{
int cpu;
- if (!kaiser_enabled) {
- setup_clear_cpu_cap(X86_FEATURE_KAISER);
- return;
- }
+ kaiser_check_boottime_disable();
- setup_force_cpu_cap(X86_FEATURE_KAISER);
+ if (!kaiser_enabled)
+ return;
kaiser_init_all_pgds();
@@ -423,16 +459,3 @@ void kaiser_flush_tlb_on_return_to_user(
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
-
-static int __init x86_nokaiser_setup(char *s)
-{
- /* nopti doesn't accept parameters */
- if (s)
- return -EINVAL;
-
- kaiser_enabled = 0;
- pr_info("Kernel/User page tables isolation: disabled\n");
-
- return 0;
-}
-early_param("nopti", x86_nokaiser_setup);
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/x86-kaiser-reenable-paravirt.patch
queue-4.9/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.9/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.9/x86-kaiser-move-feature-detection-up.patch
This is a note to let you know that I've just added the patch titled
kaiser: vmstat show NR_KAISERTABLE as nr_overhead
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 9 Sep 2017 21:27:32 -0700
Subject: kaiser: vmstat show NR_KAISERTABLE as nr_overhead
From: Hugh Dickins <hughd(a)google.com>
The kaiser update made an interesting choice, never to free any shadow
page tables. Contention on the global spinlock was worrying, particularly
with it held across page table scans when freeing. Something had to be
done: I was going to add refcounting; but simply never to free them is
an appealing choice, minimizing contention without complicating the code
(the more often a page table is found already present, the less the
spinlock is used).
But leaking pages in this way is also a worry: can we get away with it?
At the very least, we need a count to show how bad it actually gets:
in principle, one might end up wasting about 1/256 of memory that way:
1/512 for when direct-mapped pages have to be user-mapped, plus 1/512
for when they are user-mapped from the vmalloc area on another occasion
(but we don't have vmalloc'ed stacks, so only large ldts are vmalloc'ed).
Add per-cpu stat NR_KAISERTABLE: including 256 at startup for the
shared pgd entries, and 1 for each intermediate page table added
thereafter for user-mapping - but leave out the 1 per mm, for its
shadow pgd, because that distracts from the monotonic increase.
Shown in /proc/vmstat as nr_overhead (0 if kaiser not enabled).
In practice, it doesn't look so bad so far: more like 1/12000 after
nine hours of gtests below; and movable pageblock segregation should
tend to cluster the kaiser tables into a subset of the address space
(if not, they will be bad for compaction too). But production may
tell a different story: keep an eye on this number, and bring back
lighter freeing if it gets out of control (maybe a shrinker).
["nr_overhead" should of course say "nr_kaisertable", if it needs
to stay; but for the moment we are being coy, preferring that when
Joe Blow notices a new line in his /proc/vmstat, he does not get
too curious about what this "kaiser" stuff might be.]
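[Keeping that eye on it is a one-liner; the count shown is only illustrative:]

	$ grep nr_overhead /proc/vmstat
	nr_overhead 265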
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 16 +++++++++++-----
include/linux/mmzone.h | 3 ++-
mm/vmstat.c | 1 +
3 files changed, 14 insertions(+), 6 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -121,9 +121,11 @@ static pte_t *kaiser_pagetable_walk(unsi
if (!new_pmd_page)
return NULL;
spin_lock(&shadow_table_allocation_lock);
- if (pud_none(*pud))
+ if (pud_none(*pud)) {
set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
- else
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pmd_page), NR_KAISERTABLE);
+ } else
free_page(new_pmd_page);
spin_unlock(&shadow_table_allocation_lock);
}
@@ -139,9 +141,11 @@ static pte_t *kaiser_pagetable_walk(unsi
if (!new_pte_page)
return NULL;
spin_lock(&shadow_table_allocation_lock);
- if (pmd_none(*pmd))
+ if (pmd_none(*pmd)) {
set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
- else
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pte_page), NR_KAISERTABLE);
+ } else
free_page(new_pte_page);
spin_unlock(&shadow_table_allocation_lock);
}
@@ -205,11 +209,13 @@ static void __init kaiser_init_all_pgds(
pgd = native_get_shadow_pgd(pgd_offset_k((unsigned long )0));
for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
pgd_t new_pgd;
- pud_t *pud = pud_alloc_one(&init_mm, PAGE_OFFSET + i * PGDIR_SIZE);
+ pud_t *pud = pud_alloc_one(&init_mm,
+ PAGE_OFFSET + i * PGDIR_SIZE);
if (!pud) {
WARN_ON(1);
break;
}
+ inc_zone_page_state(virt_to_page(pud), NR_KAISERTABLE);
new_pgd = __pgd(_KERNPG_TABLE |__pa(pud));
/*
* Make sure not to stomp on some other pgd entry.
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -124,8 +124,9 @@ enum zone_stat_item {
NR_SLAB_UNRECLAIMABLE,
NR_PAGETABLE, /* used for pagetables */
NR_KERNEL_STACK_KB, /* measured in KiB */
- /* Second 128 byte cacheline */
+ NR_KAISERTABLE,
NR_BOUNCE,
+ /* Second 128 byte cacheline */
#if IS_ENABLED(CONFIG_ZSMALLOC)
NR_ZSPAGES, /* allocated in zsmalloc */
#endif
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -932,6 +932,7 @@ const char * const vmstat_text[] = {
"nr_slab_unreclaimable",
"nr_page_table_pages",
"nr_kernel_stack",
+ "nr_overhead",
"nr_bounce",
#if IS_ENABLED(CONFIG_ZSMALLOC)
"nr_zspages",
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: use ALTERNATIVE instead of x86_cr3_pcid_noflush
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Tue, 3 Oct 2017 20:49:04 -0700
Subject: kaiser: use ALTERNATIVE instead of x86_cr3_pcid_noflush
From: Hugh Dickins <hughd(a)google.com>
Now that we're playing the ALTERNATIVE game, use that more efficient
method, instead of user-mapping an extra page and reading an extra
cacheline each time for x86_cr3_pcid_noflush.
Neel has found that __stringify(bts $X86_CR3_PCID_NOFLUSH_BIT, %rax)
is a working substitute for the "bts $63, %rax" in these ALTERNATIVEs;
but the one-liner with the literal $63 in it looks clearer, so let's
stick with that.
Worried about what happens with an ALTERNATIVE between the jump and
jump label in another ALTERNATIVE? I was, but have checked the
combinations in SWITCH_KERNEL_CR3_NO_STACK at entry_SYSCALL_64,
and it does a good job.
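[In C terms, the two ALTERNATIVEs above reduce to the following; illustrative
only, and set_noflush() is a made-up name:]

/* With X86_FEATURE_PCID, the patched-in "bts $63, %rax" sets the
 * X86_CR3_PCID_NOFLUSH bit; without it, the "" alternative is just nops. */
static unsigned long set_noflush(unsigned long cr3, int have_pcid)
{
	if (have_pcid)
		cr3 |= 1UL << 63;	/* X86_CR3_PCID_NOFLUSH */
	return cr3;
}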
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 7 ++++---
arch/x86/include/asm/kaiser.h | 6 +++---
arch/x86/mm/kaiser.c | 11 +----------
3 files changed, 8 insertions(+), 16 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1084,7 +1084,8 @@ ENTRY(paranoid_entry)
jz 2f
orl $2, %ebx
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
- orq x86_cr3_pcid_noflush, %rax
+ /* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
movq %rax, %cr3
2:
#endif
@@ -1344,7 +1345,7 @@ ENTRY(nmi)
/* %rax is saved above, so OK to clobber here */
ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
- orq x86_cr3_pcid_noflush, %rax
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
@@ -1588,7 +1589,7 @@ end_repeat_nmi:
/* %rax is saved above, so OK to clobber here */
ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
- orq x86_cr3_pcid_noflush, %rax
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -25,7 +25,8 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-orq x86_cr3_pcid_noflush, \reg
+/* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ALTERNATIVE "", "bts $63, \reg", X86_FEATURE_PCID
movq \reg, %cr3
.endm
@@ -39,7 +40,7 @@ movq \reg, %cr3
movq %cr3, \reg
orq PER_CPU_VAR(x86_cr3_pcid_user), \reg
js 9f
-/* FLUSH this time, reset to NOFLUSH for next time (if PCID enabled) */
+/* If PCID enabled, FLUSH this time, reset to NOFLUSH for next time */
movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)
9:
movq \reg, %cr3
@@ -90,7 +91,6 @@ movq PER_CPU_VAR(unsafe_stack_register_b
*/
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
-extern unsigned long x86_cr3_pcid_noflush;
DECLARE_PER_CPU(unsigned long, x86_cr3_pcid_user);
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -31,7 +31,6 @@ DEFINE_PER_CPU_USER_MAPPED(unsigned long
* This is also handy because systems that do not support PCIDs
* just end up or'ing a 0 into their CR3, which does no harm.
*/
-unsigned long x86_cr3_pcid_noflush __read_mostly;
DEFINE_PER_CPU(unsigned long, x86_cr3_pcid_user);
/*
@@ -356,10 +355,6 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&debug_idt_table,
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
-
- kaiser_add_user_map_early(&x86_cr3_pcid_noflush,
- sizeof(x86_cr3_pcid_noflush),
- __PAGE_KERNEL);
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
@@ -433,18 +428,14 @@ pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp,
void kaiser_setup_pcid(void)
{
- unsigned long kern_cr3 = 0;
unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
- if (this_cpu_has(X86_FEATURE_PCID)) {
- kern_cr3 |= X86_CR3_PCID_KERN_NOFLUSH;
+ if (this_cpu_has(X86_FEATURE_PCID))
user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;
- }
/*
* These variables are used by the entry/exit
* code to change PCID and pgd and TLB flushing.
*/
- x86_cr3_pcid_noflush = kern_cr3;
this_cpu_write(x86_cr3_pcid_user, user_cr3);
}
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: tidied up kaiser_add/remove_mapping slightly
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 19:23:08 -0700
Subject: kaiser: tidied up kaiser_add/remove_mapping slightly
From: Hugh Dickins <hughd(a)google.com>
Yes, unmap_pud_range_nofree()'s declaration ought to be in a
header file really, but I'm not sure we want to use it anyway:
so for now just declare it inside kaiser_remove_mapping().
And there doesn't seem to be such a thing as unmap_p4d_range(),
even in a 5-level paging tree.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -285,8 +285,7 @@ void __init kaiser_init(void)
__PAGE_KERNEL);
}
-extern void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end);
-// add a mapping to the shadow-mapping, and synchronize the mappings
+/* Add a mapping to the shadow mapping, and synchronize the mappings */
int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
{
return kaiser_add_user_map((const void *)addr, size, flags);
@@ -294,15 +293,13 @@ int kaiser_add_mapping(unsigned long add
void kaiser_remove_mapping(unsigned long start, unsigned long size)
{
+ extern void unmap_pud_range_nofree(pgd_t *pgd,
+ unsigned long start, unsigned long end);
unsigned long end = start + size;
unsigned long addr;
for (addr = start; addr < end; addr += PGDIR_SIZE) {
pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(addr));
- /*
- * unmap_p4d_range() handles > P4D_SIZE unmaps,
- * so no need to trim 'end'.
- */
unmap_pud_range_nofree(pgd, addr, end);
}
}
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: tidied up asm/kaiser.h somewhat
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-tidied-up-asm-kaiser.h-somewhat.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 19:18:07 -0700
Subject: kaiser: tidied up asm/kaiser.h somewhat
From: Hugh Dickins <hughd(a)google.com>
Mainly deleting a surfeit of blank lines, and reflowing the header comment.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 32 +++++++++++++-------------------
1 file changed, 13 insertions(+), 19 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -1,15 +1,17 @@
#ifndef _ASM_X86_KAISER_H
#define _ASM_X86_KAISER_H
-
-/* This file includes the definitions for the KAISER feature.
- * KAISER is a counter measure against x86_64 side channel attacks on the kernel virtual memory.
- * It has a shodow-pgd for every process. the shadow-pgd has a minimalistic kernel-set mapped,
- * but includes the whole user memory. Within a kernel context switch, or when an interrupt is handled,
- * the pgd is switched to the normal one. When the system switches to user mode, the shadow pgd is enabled.
- * By this, the virtual memory chaches are freed, and the user may not attack the whole kernel memory.
+/*
+ * This file includes the definitions for the KAISER feature.
+ * KAISER is a counter measure against x86_64 side channel attacks on
+ * the kernel virtual memory. It has a shadow pgd for every process: the
+ * shadow pgd has a minimalistic kernel-set mapped, but includes the whole
+ * user memory. Within a kernel context switch, or when an interrupt is handled,
+ * the pgd is switched to the normal one. When the system switches to user mode,
+ * the shadow pgd is enabled. By this, the virtual memory caches are freed,
+ * and the user may not attack the whole kernel memory.
*
- * A minimalistic kernel mapping holds the parts needed to be mapped in user mode, as the entry/exit functions
- * of the user space, or the stacks.
+ * A minimalistic kernel mapping holds the parts needed to be mapped in user
+ * mode, such as the entry/exit functions of the user space, or the stacks.
*/
#ifdef __ASSEMBLY__
#ifdef CONFIG_KAISER
@@ -48,13 +50,10 @@ _SWITCH_TO_KERNEL_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
.endm
-
.macro SWITCH_USER_CR3_NO_STACK
-
movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
_SWITCH_TO_USER_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
-
.endm
#else /* CONFIG_KAISER */
@@ -72,7 +71,6 @@ movq PER_CPU_VAR(unsafe_stack_register_b
#else /* __ASSEMBLY__ */
-
#ifdef CONFIG_KAISER
/*
* Upon kernel/user mode switch, it may happen that the address
@@ -80,7 +78,6 @@ movq PER_CPU_VAR(unsafe_stack_register_b
* stored. To change the address space, another register is
* needed. A register therefore has to be stored/restored.
*/
-
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
/**
@@ -95,7 +92,6 @@ DECLARE_PER_CPU_USER_MAPPED(unsigned lon
*/
extern int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
-
/**
* kaiser_remove_mapping - unmap a virtual memory part of the shadow mapping
* @addr: the start address of the range
@@ -104,12 +100,12 @@ extern int kaiser_add_mapping(unsigned l
extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
/**
- * kaiser_initialize_mapping - Initalize the shadow mapping
+ * kaiser_init - Initialize the shadow mapping
*
* Most parts of the shadow mapping can be mapped upon boot
* time. Only per-process things like the thread stacks
* or a new LDT have to be mapped at runtime. These boot-
- * time mappings are permanent and nevertunmapped.
+ * time mappings are permanent and never unmapped.
*/
extern void kaiser_init(void);
@@ -117,6 +113,4 @@ extern void kaiser_init(void);
#endif /* __ASSEMBLY */
-
-
#endif /* _ASM_X86_KAISER_H */
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: stack map PAGE_SIZE at THREAD_SIZE-PAGE_SIZE
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-stack-map-page_size-at-thread_size-page_size.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 18:57:03 -0700
Subject: kaiser: stack map PAGE_SIZE at THREAD_SIZE-PAGE_SIZE
From: Hugh Dickins <hughd(a)google.com>
Kaiser only needs to map one page of the stack; and
kernel/fork.c did not build on powerpc (no __PAGE_KERNEL).
It's all cleaner if linux/kaiser.h provides kaiser_map_thread_stack()
and kaiser_unmap_thread_stack() wrappers around asm/kaiser.h's
kaiser_add_mapping() and kaiser_remove_mapping(). And use
linux/kaiser.h in init/main.c to avoid the #ifdefs there.
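[Concretely, with the usual THREAD_SIZE of 16KB and PAGE_SIZE of 4KB (values
assumed here), only the topmost page of each stack is added to the shadow
mapping; a tiny standalone demo of the arithmetic, with an example address:]

#include <stdio.h>

#define THREAD_SIZE	(16UL * 1024)	/* assumed */
#define PAGE_SIZE	(4UL * 1024)	/* assumed */

int main(void)
{
	unsigned long stack = 0xffffc90000004000UL;	/* example address */

	/* the page on which we enter from user context */
	printf("user-map %#lx..%#lx\n",
	       stack + THREAD_SIZE - PAGE_SIZE,
	       stack + THREAD_SIZE - 1);
	return 0;
}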
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/kaiser.h | 40 +++++++++++++++++++++++++++++++++-------
init/main.c | 6 +-----
kernel/fork.c | 7 ++-----
3 files changed, 36 insertions(+), 17 deletions(-)
--- a/include/linux/kaiser.h
+++ b/include/linux/kaiser.h
@@ -1,26 +1,52 @@
-#ifndef _INCLUDE_KAISER_H
-#define _INCLUDE_KAISER_H
+#ifndef _LINUX_KAISER_H
+#define _LINUX_KAISER_H
#ifdef CONFIG_KAISER
#include <asm/kaiser.h>
+
+static inline int kaiser_map_thread_stack(void *stack)
+{
+ /*
+ * Map that page of kernel stack on which we enter from user context.
+ */
+ return kaiser_add_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE, __PAGE_KERNEL);
+}
+
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+ /*
+ * Note: may be called even when kaiser_map_thread_stack() failed.
+ */
+ kaiser_remove_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE);
+}
#else
/*
* These stubs are used whenever CONFIG_KAISER is off, which
- * includes architectures that support KAISER, but have it
- * disabled.
+ * includes architectures that support KAISER, but have it disabled.
*/
static inline void kaiser_init(void)
{
}
-static inline void kaiser_remove_mapping(unsigned long start, unsigned long size)
+static inline int kaiser_add_mapping(unsigned long addr,
+ unsigned long size, unsigned long flags)
+{
+ return 0;
+}
+static inline void kaiser_remove_mapping(unsigned long start,
+ unsigned long size)
{
}
-static inline int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+static inline int kaiser_map_thread_stack(void *stack)
{
return 0;
}
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+}
#endif /* !CONFIG_KAISER */
-#endif /* _INCLUDE_KAISER_H */
+#endif /* _LINUX_KAISER_H */
--- a/init/main.c
+++ b/init/main.c
@@ -80,15 +80,13 @@
#include <linux/integrity.h>
#include <linux/proc_ns.h>
#include <linux/io.h>
+#include <linux/kaiser.h>
#include <asm/io.h>
#include <asm/bugs.h>
#include <asm/setup.h>
#include <asm/sections.h>
#include <asm/cacheflush.h>
-#ifdef CONFIG_KAISER
-#include <asm/kaiser.h>
-#endif
static int kernel_init(void *);
@@ -476,9 +474,7 @@ static void __init mm_init(void)
pgtable_init();
vmalloc_init();
ioremap_huge_init();
-#ifdef CONFIG_KAISER
kaiser_init();
-#endif
}
asmlinkage __visible void __init start_kernel(void)
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -212,12 +212,9 @@ static unsigned long *alloc_thread_stack
#endif
}
-extern void kaiser_remove_mapping(unsigned long start_addr, unsigned long size);
static inline void free_thread_stack(struct task_struct *tsk)
{
-#ifdef CONFIG_KAISER
- kaiser_remove_mapping((unsigned long)tsk->stack, THREAD_SIZE);
-#endif
+ kaiser_unmap_thread_stack(tsk->stack);
#ifdef CONFIG_VMAP_STACK
if (task_stack_vm_area(tsk)) {
unsigned long flags;
@@ -501,7 +498,7 @@ static struct task_struct *dup_task_stru
*/
tsk->stack = stack;
- err= kaiser_add_mapping((unsigned long)tsk->stack, THREAD_SIZE, __PAGE_KERNEL);
+ err= kaiser_map_thread_stack(tsk->stack);
if (err)
goto free_stack;
#ifdef CONFIG_VMAP_STACK
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: PCID 0 for kernel and 128 for user
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-pcid-0-for-kernel-and-128-for-user.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Fri, 8 Sep 2017 19:26:30 -0700
Subject: kaiser: PCID 0 for kernel and 128 for user
From: Hugh Dickins <hughd(a)google.com>
Why was 4 chosen for kernel PCID and 6 for user PCID?
No good reason in a backport where PCIDs are only used for Kaiser.
If we continue with those, then we shall need to add Andy Lutomirski's
4.13 commit 6c690ee1039b ("x86/mm: Split read_cr3() into read_cr3_pa()
and __read_cr3()"), which deals with the problem of read_cr3() callers
finding stray bits in the cr3 that they expected to be page-aligned;
and for hibernation, his 4.14 commit f34902c5c6c0 ("x86/hibernate/64:
Mask off CR3's PCID bits in the saved CR3").
But if 0 is used for kernel PCID, then there's no need to add in those
commits - whenever the kernel looks, it sees 0 in the lower bits; and
0 for kernel seems an obvious choice.
And I naughtily propose 128 for user PCID. Because there's a place
in _SWITCH_TO_USER_CR3 where it takes note of the need for TLB FLUSH,
but needs to reset that to NOFLUSH for the next occasion. Currently
it does so with a "movb $(0x80)" into the high byte of the per-cpu
quadword, but that will cause a machine without PCID support to crash.
Now, if %al just happened to have 0x80 in it at that point, on a
machine with PCID support, but 0 on a machine without PCID support...
(That will go badly wrong once the pgd can be at a physical address
above 2^56, but even with 5-level paging, physical goes up to 2^52.)
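[The byte trick can be demonstrated in plain, standalone C — illustrative
only; x86 is little-endian, so byte 7 of the quadword holds bit 63, and the
values come from the pgtable_types.h hunk below. In the kernel the movb
stores into the per-cpu variable; here one union stands in for both:]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* per-cpu user-cr3 word after a flush was requested:
	 * ASID 0x80 in byte 0, shadow pgd bit 12, NOFLUSH (bit 63) clear */
	union { uint64_t cr3; uint8_t b[8]; } u = { .cr3 = 0x80 | 0x1000 };

	/* _SWITCH_TO_USER_CR3 then stores %al into byte 7 to re-arm
	 * NOFLUSH: %al is 0x80 (the user ASID) with PCID, 0x00 without */
	u.b[7] = u.b[0];
	printf("NOFLUSH now %d\n", (int)(u.cr3 >> 63));	/* prints 1 */
	return 0;
}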
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 19 ++++++++++++-------
arch/x86/include/asm/pgtable_types.h | 7 ++++---
arch/x86/mm/tlb.c | 3 +++
3 files changed, 19 insertions(+), 10 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -29,14 +29,19 @@ orq X86_CR3_PCID_KERN_VAR, \reg
movq \reg, %cr3
.endm
-.macro _SWITCH_TO_USER_CR3 reg
+.macro _SWITCH_TO_USER_CR3 reg regb
+/*
+ * regb must be the low byte portion of reg: because we have arranged
+ * for the low byte of the user PCID to serve as the high byte of NOFLUSH
+ * (0x80 for each when PCID is enabled, or 0x00 when PCID and NOFLUSH are
+ * not enabled): so that the one register can update both memory and cr3.
+ */
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
orq PER_CPU_VAR(X86_CR3_PCID_USER_VAR), \reg
js 9f
-// FLUSH this time, reset to NOFLUSH for next time
-// But if nopcid? Consider using 0x80 for user pcid?
-movb $(0x80), PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
+/* FLUSH this time, reset to NOFLUSH for next time (if PCID enabled) */
+movb \regb, PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
9:
movq \reg, %cr3
.endm
@@ -49,7 +54,7 @@ popq %rax
.macro SWITCH_USER_CR3
pushq %rax
-_SWITCH_TO_USER_CR3 %rax
+_SWITCH_TO_USER_CR3 %rax %al
popq %rax
.endm
@@ -61,7 +66,7 @@ movq PER_CPU_VAR(unsafe_stack_register_b
.macro SWITCH_USER_CR3_NO_STACK
movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
-_SWITCH_TO_USER_CR3 %rax
+_SWITCH_TO_USER_CR3 %rax %al
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
.endm
@@ -69,7 +74,7 @@ movq PER_CPU_VAR(unsafe_stack_register_b
.macro SWITCH_KERNEL_CR3 reg
.endm
-.macro SWITCH_USER_CR3 reg
+.macro SWITCH_USER_CR3 reg regb
.endm
.macro SWITCH_USER_CR3_NO_STACK
.endm
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -146,16 +146,17 @@
/* Mask for all the PCID-related bits in CR3: */
#define X86_CR3_PCID_MASK (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
+
#if defined(CONFIG_KAISER) && defined(CONFIG_X86_64)
-#define X86_CR3_PCID_ASID_KERN (_AC(0x4,UL))
-#define X86_CR3_PCID_ASID_USER (_AC(0x6,UL))
+/* Let X86_CR3_PCID_ASID_USER be usable for the X86_CR3_PCID_NOFLUSH bit */
+#define X86_CR3_PCID_ASID_USER (_AC(0x80,UL))
#define X86_CR3_PCID_KERN_FLUSH (X86_CR3_PCID_ASID_KERN)
#define X86_CR3_PCID_USER_FLUSH (X86_CR3_PCID_ASID_USER)
#define X86_CR3_PCID_KERN_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_KERN)
#define X86_CR3_PCID_USER_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_USER)
#else
-#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
#define X86_CR3_PCID_ASID_USER (_AC(0x0,UL))
/*
* PCIDs are unsupported on 32-bit and none of these bits can be
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -50,6 +50,9 @@ static void load_new_mm_cr3(pgd_t *pgdir
* invpcid_flush_single_context(X86_CR3_PCID_ASID_USER) could
* do it here, but can only be used if X86_FEATURE_INVPCID is
* available - and many machines support pcid without invpcid.
+ *
+ * The line below is a no-op: X86_CR3_PCID_KERN_FLUSH is now 0;
+ * but keep that line in there in case something changes.
*/
new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
kaiser_flush_tlb_on_return_to_user();
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: paranoid_entry pass cr3 need to paranoid_exit
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Tue, 26 Sep 2017 18:43:07 -0700
Subject: kaiser: paranoid_entry pass cr3 need to paranoid_exit
From: Hugh Dickins <hughd(a)google.com>
Neel Natu points out that paranoid_entry() was wrong to assume that
an entry that did not need swapgs would not need SWITCH_KERNEL_CR3:
paranoid_entry (used for debug breakpoint, int3, double fault or MCE;
though I think it's only the MCE case that is cause for concern here)
can break in at an awkward time, between cr3 switch and swapgs, but
its handling always needs kernel gs and kernel cr3.
Easy to fix in itself, but paranoid_entry() also needs to convey to
paranoid_exit() (and my reading of macro idtentry says paranoid_entry
and paranoid_exit are always paired) how to restore the prior state.
The swapgs state is already conveyed by %ebx (0 or 1), so extend that
also to convey when SWITCH_USER_CR3 will be needed (2 or 3).
(Yes, I'd much prefer that 0 meant no swapgs, whereas it's the other
way round: and a convention shared with error_entry() and error_exit(),
which I don't want to touch. Perhaps I should have inverted the bit
for switch cr3 too, but did not.)
paranoid_exit() would be straightforward, except for TRACE_IRQS: it
did TRACE_IRQS_IRETQ when doing swapgs, but TRACE_IRQS_IRETQ_DEBUG
when not: which is it supposed to use when SWITCH_USER_CR3 is split
apart from that? As best I can determine, commit 5963e317b1e9
("ftrace/x86: Do not change stacks in DEBUG when calling lockdep")
missed the swapgs case, and should have used TRACE_IRQS_IRETQ_DEBUG
there too (the discrepancy has nothing to do with the liberal use
of _NO_STACK and _UNSAFE_STACK hereabouts: TRACE_IRQS_OFF_DEBUG has
just been used in all cases); discrepancy lovingly preserved across
several paranoid_exit() cleanups, but I'm now removing it.
Neel further indicates that to use SWITCH_USER_CR3_NO_STACK there in
paranoid_exit() is now not only unnecessary but unsafe: might corrupt
syscall entry's unsafe_stack_register_backup of %rax. Just use
SWITCH_USER_CR3: and delete SWITCH_USER_CR3_NO_STACK altogether,
before we make the mistake of using it again.
hughd adds: this commit fixes an issue in the Kaiser-without-PCIDs
part of the series, and ought to be moved earlier, if you decided
to make a release of Kaiser-without-PCIDs.
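[A sketch of the resulting %ebx convention in C, with made-up flag names —
the asm just tests the literals 1 and 2 — and with switch_user_cr3() and
swapgs() standing in for the real SWITCH_USER_CR3 and SWAPGS:]

#define EBX_KERNEL_GS_ON_ENTRY	0x1	/* skip swapgs on exit */
#define EBX_USER_CR3_ON_ENTRY	0x2	/* do SWITCH_USER_CR3 on exit */

void switch_user_cr3(void);	/* assumed helpers */
void swapgs(void);

static void paranoid_exit_sketch(unsigned int ebx)
{
	if (ebx & EBX_USER_CR3_ON_ENTRY)
		switch_user_cr3();
	if (!(ebx & EBX_KERNEL_GS_ON_ENTRY))
		swapgs();
}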
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 46 ++++++++++++++++++++++++++++++---------
arch/x86/entry/entry_64_compat.S | 2 -
arch/x86/include/asm/kaiser.h | 8 ------
3 files changed, 37 insertions(+), 19 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1053,7 +1053,11 @@ idtentry machine_check has_error_cod
/*
* Save all registers in pt_regs, and switch gs if needed.
* Use slow, but surefire "are we in kernel?" check.
- * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ *
+ * Return: ebx=0: needs swapgs but not SWITCH_USER_CR3 in paranoid_exit
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3 in paranoid_exit
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3 in paranoid_exit
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs in paranoid_exit
*/
ENTRY(paranoid_entry)
cld
@@ -1065,9 +1069,26 @@ ENTRY(paranoid_entry)
testl %edx, %edx
js 1f /* negative -> in kernel */
SWAPGS
- SWITCH_KERNEL_CR3
xorl %ebx, %ebx
-1: ret
+1:
+#ifdef CONFIG_KAISER
+ /*
+ * We might have come in between a swapgs and a SWITCH_KERNEL_CR3
+ * on entry, or between a SWITCH_USER_CR3 and a swapgs on exit.
+ * Do a conditional SWITCH_KERNEL_CR3: this could safely be done
+ * unconditionally, but we need to find out whether the reverse
+ * should be done on return (conveyed to paranoid_exit in %ebx).
+ */
+ movq %cr3, %rax
+ testl $KAISER_SHADOW_PGD_OFFSET, %eax
+ jz 2f
+ orl $2, %ebx
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ orq x86_cr3_pcid_noflush, %rax
+ movq %rax, %cr3
+2:
+#endif
+ ret
END(paranoid_entry)
/*
@@ -1080,20 +1101,25 @@ END(paranoid_entry)
* be complicated. Fortunately, we there's no good reason
* to try to handle preemption here.
*
- * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
+ * On entry: ebx=0: needs swapgs but not SWITCH_USER_CR3
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs
*/
ENTRY(paranoid_exit)
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF_DEBUG
- testl %ebx, %ebx /* swapgs needed? */
+ TRACE_IRQS_IRETQ_DEBUG
+#ifdef CONFIG_KAISER
+ testl $2, %ebx /* SWITCH_USER_CR3 needed? */
+ jz paranoid_exit_no_switch
+ SWITCH_USER_CR3
+paranoid_exit_no_switch:
+#endif
+ testl $1, %ebx /* swapgs needed? */
jnz paranoid_exit_no_swapgs
- TRACE_IRQS_IRETQ
- SWITCH_USER_CR3_NO_STACK
SWAPGS_UNSAFE_STACK
- jmp paranoid_exit_restore
paranoid_exit_no_swapgs:
- TRACE_IRQS_IRETQ_DEBUG
-paranoid_exit_restore:
RESTORE_EXTRA_REGS
RESTORE_C_REGS
REMOVE_PT_GPREGS_FROM_STACK 8
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -343,7 +343,7 @@ ENTRY(entry_INT80_compat)
/* Go back to user mode. */
TRACE_IRQS_ON
- SWITCH_USER_CR3_NO_STACK
+ SWITCH_USER_CR3
SWAPGS
jmp restore_regs_and_iret
END(entry_INT80_compat)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -63,20 +63,12 @@ _SWITCH_TO_KERNEL_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
.endm
-.macro SWITCH_USER_CR3_NO_STACK
-movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
-_SWITCH_TO_USER_CR3 %rax %al
-movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
-.endm
-
#else /* CONFIG_KAISER */
.macro SWITCH_KERNEL_CR3 reg
.endm
.macro SWITCH_USER_CR3 reg regb
.endm
-.macro SWITCH_USER_CR3_NO_STACK
-.endm
.macro SWITCH_KERNEL_CR3_NO_STACK
.endm
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: name that 0x1000 KAISER_SHADOW_PGD_OFFSET
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 9 Sep 2017 17:31:18 -0700
Subject: kaiser: name that 0x1000 KAISER_SHADOW_PGD_OFFSET
From: Hugh Dickins <hughd(a)google.com>
There's a 0x1000 in various places, which looks better with a name.
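[Which is to say: the shadow pgd lives exactly one page above the kernel
pgd, so the switch is a single bit; a two-line sketch using the new name,
mirroring _SWITCH_TO_USER_CR3 / _SWITCH_TO_KERNEL_CR3 in the hunks below:]

#define KAISER_SHADOW_PGD_OFFSET 0x1000

static unsigned long to_user_cr3(unsigned long cr3)
{
	return cr3 | KAISER_SHADOW_PGD_OFFSET;
}
static unsigned long to_kernel_cr3(unsigned long cr3)
{
	return cr3 & ~(unsigned long)KAISER_SHADOW_PGD_OFFSET;
}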
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 4 ++--
arch/x86/include/asm/kaiser.h | 7 +++++--
2 files changed, 7 insertions(+), 4 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1318,7 +1318,7 @@ ENTRY(nmi)
movq %cr3, %rax
pushq %rax
#ifdef CONFIG_KAISER_REAL_SWITCH
- andq $(~0x1000), %rax
+ andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
#endif
movq %rax, %cr3
#endif
@@ -1561,7 +1561,7 @@ end_repeat_nmi:
movq %cr3, %rax
pushq %rax
#ifdef CONFIG_KAISER_REAL_SWITCH
- andq $(~0x1000), %rax
+ andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
#endif
movq %rax, %cr3
#endif
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -13,13 +13,16 @@
* A minimalistic kernel mapping holds the parts needed to be mapped in user
* mode, such as the entry/exit functions of the user space, or the stacks.
*/
+
+#define KAISER_SHADOW_PGD_OFFSET 0x1000
+
#ifdef __ASSEMBLY__
#ifdef CONFIG_KAISER
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
#ifdef CONFIG_KAISER_REAL_SWITCH
-andq $(~0x1000), \reg
+andq $(~KAISER_SHADOW_PGD_OFFSET), \reg
#endif
movq \reg, %cr3
.endm
@@ -27,7 +30,7 @@ movq \reg, %cr3
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
#ifdef CONFIG_KAISER_REAL_SWITCH
-orq $(0x1000), \reg
+orq $(KAISER_SHADOW_PGD_OFFSET), \reg
#endif
movq \reg, %cr3
.endm
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.9/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.9/kaiser-merged-update.patch
queue-4.9/kaiser-delete-kaiser_real_switch-option.patch
queue-4.9/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.9/kaiser-fix-perf-crashes.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.9/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.9/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.9/kaiser-align-addition-to-x86-mm-makefile.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.9/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.9/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.9/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.9/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.9/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.9/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.9/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.9/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.9/kaiser-kaiser-depends-on-smp.patch
queue-4.9/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: load_new_mm_cr3() let SWITCH_USER_CR3 flush user
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Thu, 17 Aug 2017 15:00:37 -0700
Subject: kaiser: load_new_mm_cr3() let SWITCH_USER_CR3 flush user
From: Hugh Dickins <hughd(a)google.com>
We have many machines (Westmere, Sandybridge, Ivybridge) supporting
PCID but not INVPCID: on these load_new_mm_cr3() simply crashed.
Flushing user context inside load_new_mm_cr3() without the use of
invpcid is difficult: momentarily switch from kernel to user context
and back to do so? I'm not sure whether that can be safely done at
all, and would risk polluting user context with kernel internals,
and kernel context with stale user externals.
Instead, follow the hint in the comment that was there: change
X86_CR3_PCID_USER_VAR to be a per-cpu variable, then load_new_mm_cr3()
can leave a note in it, for SWITCH_USER_CR3 on return to userspace to
flush user context TLB, instead of default X86_CR3_PCID_USER_NOFLUSH.
Which works well enough that there's no need to do it this way only
when invpcid is unsupported: it's a good alternative to invpcid here.
But there are a couple of inlines in asm/tlbflush.h that need to do the
same trick, so it's best to localize all this per-cpu business in
mm/kaiser.c: moving that part of the initialization from setup_pcid()
to kaiser_setup_pcid(); with kaiser_flush_tlb_on_return_to_user() the
function for noting an X86_CR3_PCID_USER_FLUSH. And let's keep a
KAISER_SHADOW_PGD_OFFSET in there, to avoid the extra OR on exit.
I did try to make the feature tests in asm/tlbflush.h more consistent
with each other: there seem to be far too many ways of performing such
tests, and I don't have a good grasp of their differences. At first
I converted them all to be static_cpu_has(): but that proved to be a
mistake, as the comment in __native_flush_tlb_single() hints; so then
I reversed and made them all this_cpu_has(). Probably all gratuitous
change, but that's the way it's working at present.
I am slightly bothered by the way non-per-cpu X86_CR3_PCID_KERN_VAR
gets re-initialized by each cpu (before and after these changes):
no problem when (as usual) all cpus on a machine have the same
features, but in principle incorrect. However, my experiment
to per-cpu-ify that one did not end well...
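In outline, the mechanism looks like this (a simplified sketch, not the
patch text itself):

	/* context switch (load_new_mm_cr3): flush kernel PCID now,
	 * and leave a per-cpu note asking SWITCH_USER_CR3 to flush
	 * the user PCID on the way back to userspace */
	this_cpu_write(X86_CR3_PCID_USER_VAR,
		       X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);

On exit, _SWITCH_TO_USER_CR3 or's that note into %cr3. If bit 63
(NOFLUSH) is set, the js in the macro below skips ahead and nothing is
flushed; if it is clear, the CR3 write flushes the user PCID and the
movb writes 0x80 into the variable's top byte, restoring NOFLUSH for
subsequent exits.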
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 18 +++++++-----
arch/x86/include/asm/tlbflush.h | 56 +++++++++++++++++++++++++++-------------
arch/x86/kernel/cpu/common.c | 22 ---------------
arch/x86/mm/kaiser.c | 50 +++++++++++++++++++++++++++++++----
arch/x86/mm/tlb.c | 46 ++++++++++++--------------------
5 files changed, 113 insertions(+), 79 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -32,13 +32,12 @@ movq \reg, %cr3
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-/*
- * This can obviously be one instruction by putting the
- * KAISER_SHADOW_PGD_OFFSET bit in the X86_CR3_PCID_USER_VAR.
- * But, just leave it now for simplicity.
- */
-orq X86_CR3_PCID_USER_VAR, \reg
-orq $(KAISER_SHADOW_PGD_OFFSET), \reg
+orq PER_CPU_VAR(X86_CR3_PCID_USER_VAR), \reg
+js 9f
+// FLUSH this time, reset to NOFLUSH for next time
+// But if nopcid? Consider using 0x80 for user pcid?
+movb $(0x80), PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
+9:
movq \reg, %cr3
.endm
@@ -90,6 +89,11 @@ movq PER_CPU_VAR(unsafe_stack_register_b
*/
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+extern unsigned long X86_CR3_PCID_KERN_VAR;
+DECLARE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
+
+extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+
/**
* kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
* @addr: the start address of the range
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -13,6 +13,7 @@ static inline void __invpcid(unsigned lo
unsigned long type)
{
struct { u64 d[2]; } desc = { { pcid, addr } };
+
/*
* The memory clobber is because the whole point is to invalidate
* stale TLB entries and, especially if we're flushing global
@@ -131,27 +132,42 @@ static inline void cr4_set_bits_and_upda
cr4_set_bits(mask);
}
+/*
+ * Declare a couple of kaiser interfaces here for convenience,
+ * to avoid the need for asm/kaiser.h in unexpected places.
+ */
+#ifdef CONFIG_KAISER
+extern void kaiser_setup_pcid(void);
+extern void kaiser_flush_tlb_on_return_to_user(void);
+#else
+static inline void kaiser_setup_pcid(void)
+{
+}
+static inline void kaiser_flush_tlb_on_return_to_user(void)
+{
+}
+#endif
+
static inline void __native_flush_tlb(void)
{
- if (!cpu_feature_enabled(X86_FEATURE_INVPCID)) {
+ if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
- * If current->mm == NULL then we borrow a mm which may change during a
- * task switch and therefore we must not be preempted while we write CR3
- * back:
+ * Note, this works with CR4.PCIDE=0 or 1.
*/
- preempt_disable();
- native_write_cr3(native_read_cr3());
- preempt_enable();
+ invpcid_flush_all_nonglobals();
return;
}
+
/*
- * We are no longer using globals with KAISER, so a
- * "nonglobals" flush would work too. But, this is more
- * conservative.
- *
- * Note, this works with CR4.PCIDE=0 or 1.
+ * If current->mm == NULL then we borrow a mm which may change during a
+ * task switch and therefore we must not be preempted while we write CR3
+ * back:
*/
- invpcid_flush_all();
+ preempt_disable();
+ if (this_cpu_has(X86_FEATURE_PCID))
+ kaiser_flush_tlb_on_return_to_user();
+ native_write_cr3(native_read_cr3());
+ preempt_enable();
}
static inline void __native_flush_tlb_global_irq_disabled(void)
@@ -167,9 +183,13 @@ static inline void __native_flush_tlb_gl
static inline void __native_flush_tlb_global(void)
{
+#ifdef CONFIG_KAISER
+ /* Globals are not used at all */
+ __native_flush_tlb();
+#else
unsigned long flags;
- if (static_cpu_has(X86_FEATURE_INVPCID)) {
+ if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
@@ -186,10 +206,9 @@ static inline void __native_flush_tlb_gl
* be called from deep inside debugging code.)
*/
raw_local_irq_save(flags);
-
__native_flush_tlb_global_irq_disabled();
-
raw_local_irq_restore(flags);
+#endif
}
static inline void __native_flush_tlb_single(unsigned long addr)
@@ -200,9 +219,12 @@ static inline void __native_flush_tlb_si
*
* The ASIDs used below are hard-coded. But, we must not
* call invpcid(type=1/2) before CR4.PCIDE=1. Just call
- * invpcid in the case we are called early.
+ * invlpg in the case we are called early.
*/
+
if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+ if (this_cpu_has(X86_FEATURE_PCID))
+ kaiser_flush_tlb_on_return_to_user();
asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
return;
}
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -324,33 +324,12 @@ static __always_inline void setup_smap(s
}
}
-/*
- * These can have bit 63 set, so we can not just use a plain "or"
- * instruction to get their value or'd into CR3. It would take
- * another register. So, we use a memory reference to these
- * instead.
- *
- * This is also handy because systems that do not support
- * PCIDs just end up or'ing a 0 into their CR3, which does
- * no harm.
- */
-__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR = 0;
-__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_USER_VAR = 0;
-
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
if (cpu_has(c, X86_FEATURE_PGE)) {
cr4_set_bits(X86_CR4_PCIDE);
/*
- * These variables are used by the entry/exit
- * code to change PCIDs.
- */
-#ifdef CONFIG_KAISER
- X86_CR3_PCID_KERN_VAR = X86_CR3_PCID_KERN_NOFLUSH;
- X86_CR3_PCID_USER_VAR = X86_CR3_PCID_USER_NOFLUSH;
-#endif
- /*
* INVPCID has two "groups" of types:
* 1/2: Invalidate an individual address
* 3/4: Invalidate all contexts
@@ -375,6 +354,7 @@ static void setup_pcid(struct cpuinfo_x8
clear_cpu_cap(c, X86_FEATURE_PCID);
}
}
+ kaiser_setup_pcid();
}
/*
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -11,12 +11,26 @@
#include <linux/uaccess.h>
#include <asm/kaiser.h>
+#include <asm/tlbflush.h> /* to verify its kaiser declarations */
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/desc.h>
+
#ifdef CONFIG_KAISER
+__visible
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+
+/*
+ * These can have bit 63 set, so we can not just use a plain "or"
+ * instruction to get their value or'd into CR3. It would take
+ * another register. So, we use a memory reference to these instead.
+ *
+ * This is also handy because systems that do not support PCIDs
+ * just end up or'ing a 0 into their CR3, which does no harm.
+ */
+__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR;
+DEFINE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
-__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
/*
* At runtime, the only things we map are some things for CPU
* hotplug, and stacks for new processes. No two CPUs will ever
@@ -238,9 +252,6 @@ static void __init kaiser_init_all_pgds(
WARN_ON(__ret); \
} while (0)
-extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
-extern unsigned long X86_CR3_PCID_KERN_VAR;
-extern unsigned long X86_CR3_PCID_USER_VAR;
/*
* If anything in here fails, we will likely die on one of the
* first kernel->user transitions and init will die. But, we
@@ -294,8 +305,6 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&X86_CR3_PCID_KERN_VAR, PAGE_SIZE,
__PAGE_KERNEL);
- kaiser_add_user_map_early(&X86_CR3_PCID_USER_VAR, PAGE_SIZE,
- __PAGE_KERNEL);
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
@@ -358,4 +367,33 @@ pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp,
}
return pgd;
}
+
+void kaiser_setup_pcid(void)
+{
+ unsigned long kern_cr3 = 0;
+ unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
+
+ if (this_cpu_has(X86_FEATURE_PCID)) {
+ kern_cr3 |= X86_CR3_PCID_KERN_NOFLUSH;
+ user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;
+ }
+ /*
+ * These variables are used by the entry/exit
+ * code to change PCID and pgd and TLB flushing.
+ */
+ X86_CR3_PCID_KERN_VAR = kern_cr3;
+ this_cpu_write(X86_CR3_PCID_USER_VAR, user_cr3);
+}
+
+/*
+ * Make a note that this cpu will need to flush USER tlb on return to user.
+ * Caller checks whether this_cpu_has(X86_FEATURE_PCID) before calling:
+ * if cpu does not, then the NOFLUSH bit will never have been set.
+ */
+void kaiser_flush_tlb_on_return_to_user(void)
+{
+ this_cpu_write(X86_CR3_PCID_USER_VAR,
+ X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
+}
+EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -6,13 +6,14 @@
#include <linux/interrupt.h>
#include <linux/export.h>
#include <linux/cpu.h>
+#include <linux/debugfs.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/cache.h>
#include <asm/apic.h>
#include <asm/uv/uv.h>
-#include <linux/debugfs.h>
+#include <asm/kaiser.h>
/*
* TLB flushing, formerly SMP-only
@@ -38,34 +39,23 @@ static void load_new_mm_cr3(pgd_t *pgdir
{
unsigned long new_mm_cr3 = __pa(pgdir);
- /*
- * KAISER, plus PCIDs needs some extra work here. But,
- * if either of features is not present, we need no
- * PCIDs here and just do a normal, full TLB flush with
- * the write_cr3()
- */
- if (!IS_ENABLED(CONFIG_KAISER) ||
- !cpu_feature_enabled(X86_FEATURE_PCID))
- goto out_set_cr3;
- /*
- * We reuse the same PCID for different tasks, so we must
- * flush all the entires for the PCID out when we change
- * tasks.
- */
- new_mm_cr3 = X86_CR3_PCID_KERN_FLUSH | __pa(pgdir);
-
- /*
- * The flush from load_cr3() may leave old TLB entries
- * for userspace in place. We must flush that context
- * separately. We can theoretically delay doing this
- * until we actually load up the userspace CR3, but
- * that's a bit tricky. We have to have the "need to
- * flush userspace PCID" bit per-cpu and check it in the
- * exit-to-userspace paths.
- */
- invpcid_flush_single_context(X86_CR3_PCID_ASID_USER);
+#ifdef CONFIG_KAISER
+ if (this_cpu_has(X86_FEATURE_PCID)) {
+ /*
+ * We reuse the same PCID for different tasks, so we must
+ * flush all the entries for the PCID out when we change tasks.
+ * Flush KERN below, flush USER when returning to userspace in
+ * kaiser's SWITCH_USER_CR3 (_SWITCH_TO_USER_CR3) macro.
+ *
+ * invpcid_flush_single_context(X86_CR3_PCID_ASID_USER) could
+ * do it here, but can only be used if X86_FEATURE_INVPCID is
+ * available - and many machines support pcid without invpcid.
+ */
+ new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
+ kaiser_flush_tlb_on_return_to_user();
+ }
+#endif /* CONFIG_KAISER */
-out_set_cr3:
/*
* Caution: many callers of this function expect
* that load_cr3() is serializing and orders TLB
This is a note to let you know that I've just added the patch titled
kaiser: kaiser_flush_tlb_on_return_to_user() check PCID
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 4 Nov 2017 18:43:06 -0700
Subject: kaiser: kaiser_flush_tlb_on_return_to_user() check PCID
From: Hugh Dickins <hughd(a)google.com>
Let kaiser_flush_tlb_on_return_to_user() do the X86_FEATURE_PCID
check, instead of each caller doing it inline first: nobody needs
to optimize for the noPCID case, it's clearer this way, and better
suits later changes. Replace those no-op X86_CR3_PCID_KERN_FLUSH lines
by a BUILD_BUG_ON() in load_new_mm_cr3(), in case something changes.
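For reference, BUILD_BUG_ON() is a compile-time assertion; one classic
definition (the kernel's actual macro is more elaborate) is:

	#define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))

so BUILD_BUG_ON(X86_CR3_PCID_KERN_FLUSH) breaks the build, rather than
misbehaving at runtime, if that constant ever becomes non-zero again.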
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/tlbflush.h | 4 ++--
arch/x86/mm/kaiser.c | 6 +++---
arch/x86/mm/tlb.c | 8 ++++----
3 files changed, 9 insertions(+), 9 deletions(-)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -158,7 +158,7 @@ static inline void __native_flush_tlb(vo
* back:
*/
preempt_disable();
- if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled)
kaiser_flush_tlb_on_return_to_user();
native_write_cr3(native_read_cr3());
preempt_enable();
@@ -217,7 +217,7 @@ static inline void __native_flush_tlb_si
*/
if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
- if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled)
kaiser_flush_tlb_on_return_to_user();
asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
return;
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -435,12 +435,12 @@ void kaiser_setup_pcid(void)
/*
* Make a note that this cpu will need to flush USER tlb on return to user.
- * Caller checks whether this_cpu_has(X86_FEATURE_PCID) before calling:
- * if cpu does not, then the NOFLUSH bit will never have been set.
+ * If cpu does not have PCID, then the NOFLUSH bit will never have been set.
*/
void kaiser_flush_tlb_on_return_to_user(void)
{
- this_cpu_write(x86_cr3_pcid_user,
+ if (this_cpu_has(X86_FEATURE_PCID))
+ this_cpu_write(x86_cr3_pcid_user,
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -39,7 +39,7 @@ static void load_new_mm_cr3(pgd_t *pgdir
{
unsigned long new_mm_cr3 = __pa(pgdir);
- if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID)) {
+ if (kaiser_enabled) {
/*
* We reuse the same PCID for different tasks, so we must
* flush all the entries for the PCID out when we change tasks.
@@ -50,10 +50,10 @@ static void load_new_mm_cr3(pgd_t *pgdir
* do it here, but can only be used if X86_FEATURE_INVPCID is
* available - and many machines support pcid without invpcid.
*
- * The line below is a no-op: X86_CR3_PCID_KERN_FLUSH is now 0;
- * but keep that line in there in case something changes.
+ * If X86_CR3_PCID_KERN_FLUSH actually added something, then it
+ * would be needed in the write_cr3() below - if PCIDs enabled.
*/
- new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
+ BUILD_BUG_ON(X86_CR3_PCID_KERN_FLUSH);
kaiser_flush_tlb_on_return_to_user();
}
This is a note to let you know that I've just added the patch titled
kaiser: kaiser_remove_mapping() move along the pgd
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Mon, 2 Oct 2017 10:57:24 -0700
Subject: kaiser: kaiser_remove_mapping() move along the pgd
From: Hugh Dickins <hughd(a)google.com>
When removing the bogus comment from kaiser_remove_mapping(),
I really ought to have checked the extent of its bogosity: as
Neel points out, there is nothing to stop unmap_pud_range_nofree()
from continuing beyond the end of a pud (and starting in the wrong
position on the next).
Fix kaiser_remove_mapping() to constrain the extent and advance pgd
pointer correctly: use pgd_addr_end() macro as used throughout base
mm (but don't assume page-rounded start and size in this case).
But this bug was very unlikely to trigger in this backport: since
any buddy allocation is contained within a single pud extent, and
we are not using vmapped stacks (and are only mapping one page of
stack anyway), the only way to hit this bug here would be when
freeing a large modified ldt.
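For reference, pgd_addr_end() steps to the next pgd boundary or to the
end of the range, whichever comes first; the base-mm definition is
essentially:

	#define pgd_addr_end(addr, end)					\
	({	unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK; \
		(__boundary - 1 < (end) - 1) ? __boundary : (end);	\
	})

with the "- 1"s keeping the comparison correct even if the boundary
wraps to 0; and note it makes no page-rounding assumption, as wanted here.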
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -319,11 +319,13 @@ void kaiser_remove_mapping(unsigned long
extern void unmap_pud_range_nofree(pgd_t *pgd,
unsigned long start, unsigned long end);
unsigned long end = start + size;
- unsigned long addr;
+ unsigned long addr, next;
+ pgd_t *pgd;
- for (addr = start; addr < end; addr += PGDIR_SIZE) {
- pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(addr));
- unmap_pud_range_nofree(pgd, addr, end);
+ pgd = native_get_shadow_pgd(pgd_offset_k(start));
+ for (addr = start; addr < end; pgd++, addr = next) {
+ next = pgd_addr_end(addr, end);
+ unmap_pud_range_nofree(pgd, addr, next);
}
}
This is a note to let you know that I've just added the patch titled
kaiser: fix unlikely error in alloc_ldt_struct()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Mon, 4 Dec 2017 20:13:35 -0800
Subject: kaiser: fix unlikely error in alloc_ldt_struct()
From: Hugh Dickins <hughd(a)google.com>
An error from kaiser_add_mapping() here is not at all likely, but
Eric Biggers rightly points out that __free_ldt_struct() relies on
new_ldt->size being initialized: move that up.
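The dependency is in the error path: __free_ldt_struct() chooses its
deallocator from the size field, roughly like this (paraphrased from
this tree, for illustration):

	static void __free_ldt_struct(struct ldt_struct *ldt)
	{
		if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
			vfree(ldt->entries);	/* large LDTs come from vzalloc() */
		else
			free_page((unsigned long)ldt->entries);
		kfree(ldt);
	}

With size still zero on the error path, a vzalloc'ed entries array
would have been handed to free_page().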
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/ldt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -79,11 +79,11 @@ static struct ldt_struct *alloc_ldt_stru
ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
__PAGE_KERNEL);
+ new_ldt->size = size;
if (ret) {
__free_ldt_struct(new_ldt);
return NULL;
}
- new_ldt->size = size;
return new_ldt;
}
This is a note to let you know that I've just added the patch titled
kaiser: fix regs to do_nmi() ifndef CONFIG_KAISER
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Thu, 21 Sep 2017 20:39:56 -0700
Subject: kaiser: fix regs to do_nmi() ifndef CONFIG_KAISER
From: Hugh Dickins <hughd(a)google.com>
pjt has observed that nmi's second (nmi_from_kernel) call to do_nmi()
adjusted the %rdi regs arg, rightly when CONFIG_KAISER, but wrongly
when not CONFIG_KAISER.
Although the minimal change is to add an #ifdef CONFIG_KAISER around
the addq line, that looks cluttered, and I prefer how the first call
to do_nmi() handled it: prepare args in %rdi and %rsi before getting
into the CONFIG_KAISER block, since it does not touch them at all.
And while we're here, place the "#ifdef CONFIG_KAISER" that follows
each, to enclose the "Unconditionally restore CR3" comment: matching
how the "Unconditionally use kernel CR3" comment above is enclosed.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1323,12 +1323,13 @@ ENTRY(nmi)
movq %rax, %cr3
#endif
call do_nmi
+
+#ifdef CONFIG_KAISER
/*
* Unconditionally restore CR3. I know we return to
* kernel code that needs user CR3, but do we ever return
* to "user mode" where we need the kernel CR3?
*/
-#ifdef CONFIG_KAISER
popq %rax
mov %rax, %cr3
#endif
@@ -1552,6 +1553,8 @@ end_repeat_nmi:
SWAPGS
xorl %ebx, %ebx
1:
+ movq %rsp, %rdi
+ movq $-1, %rsi
#ifdef CONFIG_KAISER
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
@@ -1564,16 +1567,14 @@ end_repeat_nmi:
#endif
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
- movq %rsp, %rdi
- addq $8, %rdi /* point %rdi at ptregs, fixed up for CR3 */
- movq $-1, %rsi
call do_nmi
+
+#ifdef CONFIG_KAISER
/*
* Unconditionally restore CR3. We might be returning to
* kernel code that needs user CR3, like just before
* a sysret.
*/
-#ifdef CONFIG_KAISER
popq %rax
mov %rax, %cr3
#endif
This is a note to let you know that I've just added the patch titled
kaiser: KAISER depends on SMP
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-kaiser-depends-on-smp.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Wed, 13 Sep 2017 14:03:10 -0700
Subject: kaiser: KAISER depends on SMP
From: Hugh Dickins <hughd(a)google.com>
It is absurd that KAISER should depend on SMP, but apparently nobody
has tried a UP build before: which breaks on implicit declaration of
function 'per_cpu_offset' in arch/x86/mm/kaiser.c.
Now, you would expect that to be trivially fixed up; but looking at
the System.map when that block is #ifdef'ed out of kaiser_init(),
I see that in a UP build __per_cpu_user_mapped_end is precisely at
__per_cpu_user_mapped_start, and the items carefully gathered into
that section for user-mapping on SMP, dispersed elsewhere on UP.
So, some other kind of section assignment will be needed on UP,
but implementing that is not a priority: just make KAISER depend
on SMP for now.
Also inserted a blank line before the option, tidied up the
brief Kconfig help message, and added an "If unsure, Y".
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/Kconfig | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -30,14 +30,16 @@ config SECURITY
model will be used.
If you are unsure how to answer this question, answer N.
+
config KAISER
bool "Remove the kernel mapping in user mode"
default y
- depends on X86_64
- depends on !PARAVIRT
+ depends on X86_64 && SMP && !PARAVIRT
help
- This enforces a strict kernel and user space isolation in order to close
- hardware side channels on kernel address information.
+ This enforces a strict kernel and user space isolation, in order
+ to close hardware side channels on kernel address information.
+
+ If you are unsure how to answer this question, answer Y.
config KAISER_REAL_SWITCH
bool "KAISER: actually switch page tables"
This is a note to let you know that I've just added the patch titled
kaiser: ENOMEM if kaiser_pagetable_walk() NULL
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-enomem-if-kaiser_pagetable_walk-null.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 18:48:02 -0700
Subject: kaiser: ENOMEM if kaiser_pagetable_walk() NULL
From: Hugh Dickins <hughd(a)google.com>
kaiser_add_user_map() took no notice when kaiser_pagetable_walk() failed;
fix that, and avoid the walk's might_sleep() when atomic (though the
atomic path is at present unused).
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -98,11 +98,11 @@ static pte_t *kaiser_pagetable_walk(unsi
pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
- might_sleep();
if (is_atomic) {
gfp &= ~GFP_KERNEL;
gfp |= __GFP_HIGH | __GFP_ATOMIC;
- }
+ } else
+ might_sleep();
if (pgd_none(*pgd)) {
WARN_ONCE(1, "All shadow pgds should have been populated");
@@ -159,13 +159,17 @@ int kaiser_add_user_map(const void *__st
unsigned long end_addr = PAGE_ALIGN(start_addr + size);
unsigned long target_address;
- for (;address < end_addr; address += PAGE_SIZE) {
+ for (; address < end_addr; address += PAGE_SIZE) {
target_address = get_pa_from_mapping(address);
if (target_address == -1) {
ret = -EIO;
break;
}
pte = kaiser_pagetable_walk(address, false);
+ if (!pte) {
+ ret = -ENOMEM;
+ break;
+ }
if (pte_none(*pte)) {
set_pte(pte, __pte(flags | target_address));
} else {
This is a note to let you know that I've just added the patch titled
kaiser: enhanced by kernel and user PCIDs
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-enhanced-by-kernel-and-user-pcids.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Wed, 30 Aug 2017 16:23:00 -0700
Subject: kaiser: enhanced by kernel and user PCIDs
From: Hugh Dickins <hughd(a)google.com>
Merged performance improvements to Kaiser, using distinct kernel
and user Process Context Identifiers to minimize the TLB flushing.
[This work actually all from Dave Hansen 2017-08-30:
still omitting trackswitch mods, and KAISER_REAL_SWITCH deleted.]
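The scheme in outline: CR3 bits 11:0 carry the PCID (ASID), and bit 63
on a CR3 write asks the CPU not to flush that PCID's TLB entries. One
ASID is dedicated to each side of the split; an illustration of how the
two CR3 values are composed (hypothetical helpers, constants as
introduced in the pgtable_types.h hunk below):

	#define X86_CR3_PCID_NOFLUSH	(1UL << 63)

	static inline unsigned long kaiser_kern_cr3(unsigned long pgd_pa)
	{
		return pgd_pa | X86_CR3_PCID_NOFLUSH | 0x4;	/* kernel ASID */
	}

	static inline unsigned long kaiser_user_cr3(unsigned long pgd_pa)
	{
		/* shadow pgd one page up, user ASID, normally no flush */
		return pgd_pa | 0x1000 | X86_CR3_PCID_NOFLUSH | 0x6;
	}

With distinct ASIDs, kernel<->user transitions stop invalidating each
other's TLB entries; a flush happens only when a CR3 value is written
with bit 63 clear.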
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 10 ++++-
arch/x86/entry/entry_64_compat.S | 1
arch/x86/include/asm/cpufeatures.h | 1
arch/x86/include/asm/kaiser.h | 15 ++++++-
arch/x86/include/asm/pgtable_types.h | 26 +++++++++++++
arch/x86/include/asm/tlbflush.h | 54 +++++++++++++++++++++++-----
arch/x86/include/uapi/asm/processor-flags.h | 3 +
arch/x86/kernel/cpu/common.c | 34 +++++++++++++++++
arch/x86/kvm/x86.c | 3 +
arch/x86/mm/kaiser.c | 7 +++
arch/x86/mm/tlb.c | 46 ++++++++++++++++++++++-
11 files changed, 182 insertions(+), 18 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1317,7 +1317,10 @@ ENTRY(nmi)
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
- andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ /* Add back kernel PCID and "no flush" bit */
+ orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
call do_nmi
@@ -1558,7 +1561,10 @@ end_repeat_nmi:
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
- andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
+ /* mask off "user" bit of pgd address and 12 PCID bits: */
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ /* Add back kernel PCID and "no flush" bit */
+ orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -13,6 +13,7 @@
#include <asm/irqflags.h>
#include <asm/asm.h>
#include <asm/smap.h>
+#include <asm/pgtable_types.h>
#include <asm/kaiser.h>
#include <linux/linkage.h>
#include <linux/err.h>
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -189,6 +189,7 @@
#define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */
#define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
+#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 4) /* Effectively INVPCID && CR4.PCIDE=1 */
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -1,5 +1,8 @@
#ifndef _ASM_X86_KAISER_H
#define _ASM_X86_KAISER_H
+
+#include <uapi/asm/processor-flags.h> /* For PCID constants */
+
/*
* This file includes the definitions for the KAISER feature.
* KAISER is a counter measure against x86_64 side channel attacks on
@@ -21,13 +24,21 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
-andq $(~KAISER_SHADOW_PGD_OFFSET), \reg
+andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
+orq X86_CR3_PCID_KERN_VAR, \reg
movq \reg, %cr3
.endm
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
-orq $(KAISER_SHADOW_PGD_OFFSET), \reg
+andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
+/*
+ * This can obviously be one instruction by putting the
+ * KAISER_SHADOW_PGD_OFFSET bit in the X86_CR3_PCID_USER_VAR.
+ * But, just leave it now for simplicity.
+ */
+orq X86_CR3_PCID_USER_VAR, \reg
+orq $(KAISER_SHADOW_PGD_OFFSET), \reg
movq \reg, %cr3
.endm
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -141,6 +141,32 @@
_PAGE_SOFT_DIRTY)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+/* The ASID is the lower 12 bits of CR3 */
+#define X86_CR3_PCID_ASID_MASK (_AC((1<<12)-1,UL))
+
+/* Mask for all the PCID-related bits in CR3: */
+#define X86_CR3_PCID_MASK (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+#if defined(CONFIG_KAISER) && defined(CONFIG_X86_64)
+#define X86_CR3_PCID_ASID_KERN (_AC(0x4,UL))
+#define X86_CR3_PCID_ASID_USER (_AC(0x6,UL))
+
+#define X86_CR3_PCID_KERN_FLUSH (X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_FLUSH (X86_CR3_PCID_ASID_USER)
+#define X86_CR3_PCID_KERN_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_KERN)
+#define X86_CR3_PCID_USER_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_USER)
+#else
+#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
+#define X86_CR3_PCID_ASID_USER (_AC(0x0,UL))
+/*
+ * PCIDs are unsupported on 32-bit and none of these bits can be
+ * set in CR3:
+ */
+#define X86_CR3_PCID_KERN_FLUSH (0)
+#define X86_CR3_PCID_USER_FLUSH (0)
+#define X86_CR3_PCID_KERN_NOFLUSH (0)
+#define X86_CR3_PCID_USER_NOFLUSH (0)
+#endif
+
/*
* The cache modes defined here are used to translate between pure SW usage
* and the HW defined cache mode bits and/or PAT entries.
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -13,7 +13,6 @@ static inline void __invpcid(unsigned lo
unsigned long type)
{
struct { u64 d[2]; } desc = { { pcid, addr } };
-
/*
* The memory clobber is because the whole point is to invalidate
* stale TLB entries and, especially if we're flushing global
@@ -134,14 +133,25 @@ static inline void cr4_set_bits_and_upda
static inline void __native_flush_tlb(void)
{
+ if (!cpu_feature_enabled(X86_FEATURE_INVPCID)) {
+ /*
+ * If current->mm == NULL then we borrow a mm which may change during a
+ * task switch and therefore we must not be preempted while we write CR3
+ * back:
+ */
+ preempt_disable();
+ native_write_cr3(native_read_cr3());
+ preempt_enable();
+ return;
+ }
/*
- * If current->mm == NULL then we borrow a mm which may change during a
- * task switch and therefore we must not be preempted while we write CR3
- * back:
- */
- preempt_disable();
- native_write_cr3(native_read_cr3());
- preempt_enable();
+ * We are no longer using globals with KAISER, so a
+ * "nonglobals" flush would work too. But, this is more
+ * conservative.
+ *
+ * Note, this works with CR4.PCIDE=0 or 1.
+ */
+ invpcid_flush_all();
}
static inline void __native_flush_tlb_global_irq_disabled(void)
@@ -163,6 +173,8 @@ static inline void __native_flush_tlb_gl
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
+ *
+ * Note, this works with CR4.PCIDE=0 or 1.
*/
invpcid_flush_all();
return;
@@ -182,7 +194,31 @@ static inline void __native_flush_tlb_gl
static inline void __native_flush_tlb_single(unsigned long addr)
{
- asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ /*
+ * SIMICS #GP's if you run INVPCID with type 2/3
+ * and X86_CR4_PCIDE clear. Shame!
+ *
+ * The ASIDs used below are hard-coded. But, we must not
+ * call invpcid(type=1/2) before CR4.PCIDE=1. Just call
+ * invpcid in the case we are called early.
+ */
+ if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+ asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+ return;
+ }
+ /* Flush the address out of both PCIDs. */
+ /*
+ * An optimization here might be to determine addresses
+ * that are only kernel-mapped and only flush the kernel
+ * ASID. But, userspace flushes are probably much more
+ * important performance-wise.
+ *
+ * Make sure to do only a single invpcid when KAISER is
+ * disabled and we have only a single ASID.
+ */
+ if (X86_CR3_PCID_ASID_KERN != X86_CR3_PCID_ASID_USER)
+ invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
+ invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
}
static inline void __flush_tlb_all(void)
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -77,7 +77,8 @@
#define X86_CR3_PWT _BITUL(X86_CR3_PWT_BIT)
#define X86_CR3_PCD_BIT 4 /* Page Cache Disable */
#define X86_CR3_PCD _BITUL(X86_CR3_PCD_BIT)
-#define X86_CR3_PCID_MASK _AC(0x00000fff,UL) /* PCID Mask */
+#define X86_CR3_PCID_NOFLUSH_BIT 63 /* Preserve old PCID */
+#define X86_CR3_PCID_NOFLUSH _BITULL(X86_CR3_PCID_NOFLUSH_BIT)
/*
* Intel CPU features in CR4
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -324,11 +324,45 @@ static __always_inline void setup_smap(s
}
}
+/*
+ * These can have bit 63 set, so we can not just use a plain "or"
+ * instruction to get their value or'd into CR3. It would take
+ * another register. So, we use a memory reference to these
+ * instead.
+ *
+ * This is also handy because systems that do not support
+ * PCIDs just end up or'ing a 0 into their CR3, which does
+ * no harm.
+ */
+__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR = 0;
+__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_USER_VAR = 0;
+
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
if (cpu_has(c, X86_FEATURE_PGE)) {
cr4_set_bits(X86_CR4_PCIDE);
+ /*
+ * These variables are used by the entry/exit
+ * code to change PCIDs.
+ */
+#ifdef CONFIG_KAISER
+ X86_CR3_PCID_KERN_VAR = X86_CR3_PCID_KERN_NOFLUSH;
+ X86_CR3_PCID_USER_VAR = X86_CR3_PCID_USER_NOFLUSH;
+#endif
+ /*
+ * INVPCID has two "groups" of types:
+ * 1/2: Invalidate an individual address
+ * 3/4: Invalidate all contexts
+ *
+ * 1/2 take a PCID, but 3/4 do not. So, 3/4
+ * ignore the PCID argument in the descriptor.
+ * But, we have to be careful not to call 1/2
+ * with an actual non-zero PCID in them before
+ * we do the above cr4_set_bits().
+ */
+ if (cpu_has(c, X86_FEATURE_INVPCID))
+ set_cpu_cap(c, X86_FEATURE_INVPCID_SINGLE);
} else {
/*
* flush_tlb_all(), as currently implemented, won't
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -773,7 +773,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, u
return 1;
/* PCID can not be enabled when cr3[11:0]!=000H or EFER.LMA=0 */
- if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_MASK) || !is_long_mode(vcpu))
+ if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_ASID_MASK) ||
+ !is_long_mode(vcpu))
return 1;
}
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -239,6 +239,8 @@ static void __init kaiser_init_all_pgds(
} while (0)
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+extern unsigned long X86_CR3_PCID_KERN_VAR;
+extern unsigned long X86_CR3_PCID_USER_VAR;
/*
* If anything in here fails, we will likely die on one of the
* first kernel->user transitions and init will die. But, we
@@ -289,6 +291,11 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&debug_idt_table,
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
+
+ kaiser_add_user_map_early(&X86_CR3_PCID_KERN_VAR, PAGE_SIZE,
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&X86_CR3_PCID_USER_VAR, PAGE_SIZE,
+ __PAGE_KERNEL);
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -34,6 +34,46 @@ struct flush_tlb_info {
unsigned long flush_end;
};
+static void load_new_mm_cr3(pgd_t *pgdir)
+{
+ unsigned long new_mm_cr3 = __pa(pgdir);
+
+ /*
+ * KAISER, plus PCIDs needs some extra work here. But,
+ * if either of features is not present, we need no
+ * PCIDs here and just do a normal, full TLB flush with
+ * the write_cr3()
+ */
+ if (!IS_ENABLED(CONFIG_KAISER) ||
+ !cpu_feature_enabled(X86_FEATURE_PCID))
+ goto out_set_cr3;
+ /*
+ * We reuse the same PCID for different tasks, so we must
+ * flush all the entires for the PCID out when we change
+ * tasks.
+ */
+ new_mm_cr3 = X86_CR3_PCID_KERN_FLUSH | __pa(pgdir);
+
+ /*
+ * The flush from load_cr3() may leave old TLB entries
+ * for userspace in place. We must flush that context
+ * separately. We can theoretically delay doing this
+ * until we actually load up the userspace CR3, but
+ * that's a bit tricky. We have to have the "need to
+ * flush userspace PCID" bit per-cpu and check it in the
+ * exit-to-userspace paths.
+ */
+ invpcid_flush_single_context(X86_CR3_PCID_ASID_USER);
+
+out_set_cr3:
+ /*
+ * Caution: many callers of this function expect
+ * that load_cr3() is serializing and orders TLB
+ * fills with respect to the mm_cpumask writes.
+ */
+ write_cr3(new_mm_cr3);
+}
+
/*
* We cannot call mmdrop() because we are in interrupt context,
* instead update mm->cpu_vm_mask.
@@ -45,7 +85,7 @@ void leave_mm(int cpu)
BUG();
if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
- load_cr3(swapper_pg_dir);
+ load_new_mm_cr3(swapper_pg_dir);
/*
* This gets called in the idle path where RCU
* functions differently. Tracing normally
@@ -120,7 +160,7 @@ void switch_mm_irqs_off(struct mm_struct
* ordering guarantee we need.
*
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
@@ -167,7 +207,7 @@ void switch_mm_irqs_off(struct mm_struct
* As above, load_cr3() is serializing and orders TLB
* fills with respect to the mm_cpumask write.
*/
- load_cr3(next->pgd);
+ load_new_mm_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
load_mm_cr4(next);
load_mm_ldt(next);
This is a note to let you know that I've just added the patch titled
kaiser: drop is_atomic arg to kaiser_pagetable_walk()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 29 Oct 2017 11:36:19 -0700
Subject: kaiser: drop is_atomic arg to kaiser_pagetable_walk()
From: Hugh Dickins <hughd(a)google.com>
I have not observed a might_sleep() warning from setup_fixmap_gdt()'s
use of kaiser_add_mapping() in our tree (why not?), but like upstream
we have not provided a way for that to pass is_atomic true down to
kaiser_pagetable_walk(), and at startup it's far from a likely source
of trouble: so just delete the walk's is_atomic arg and might_sleep().
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -107,19 +107,13 @@ static inline unsigned long get_pa_from_
*
* Returns a pointer to a PTE on success, or NULL on failure.
*/
-static pte_t *kaiser_pagetable_walk(unsigned long address, bool is_atomic)
+static pte_t *kaiser_pagetable_walk(unsigned long address)
{
pmd_t *pmd;
pud_t *pud;
pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
- if (is_atomic) {
- gfp &= ~GFP_KERNEL;
- gfp |= __GFP_HIGH | __GFP_ATOMIC;
- } else
- might_sleep();
-
if (pgd_none(*pgd)) {
WARN_ONCE(1, "All shadow pgds should have been populated");
return NULL;
@@ -194,7 +188,7 @@ static int kaiser_add_user_map(const voi
ret = -EIO;
break;
}
- pte = kaiser_pagetable_walk(address, false);
+ pte = kaiser_pagetable_walk(address);
if (!pte) {
ret = -ENOMEM;
break;
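For readers unfamiliar with the pattern, kaiser_pagetable_walk() descends the shadow page tables and allocates any missing intermediate level as it goes (zeroed, via __GFP_ZERO), returning NULL on allocation failure, which the caller above turns into -ENOMEM. A minimal userspace model of that allocate-as-you-walk shape, with illustrative types rather than the kernel's:
#include <stdlib.h>
#define ENTRIES 512
struct table { void *slot[ENTRIES]; };
static void **walk(struct table *top, unsigned int i, unsigned int j)
{
	if (!top->slot[i]) {
		top->slot[i] = calloc(1, sizeof(struct table)); /* zeroed level */
		if (!top->slot[i])
			return NULL;	/* caller maps this to -ENOMEM */
	}
	return &((struct table *)top->slot[i])->slot[j];
}
int main(void)
{
	struct table top = { { 0 } };
	return walk(&top, 3, 7) ? 0 : 1;
}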
This is a note to let you know that I've just added the patch titled
kaiser: fix build and FIXME in alloc_ldt_struct()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 17:09:44 -0700
Subject: kaiser: fix build and FIXME in alloc_ldt_struct()
From: Hugh Dickins <hughd(a)google.com>
Include linux/kaiser.h instead of asm/kaiser.h to build ldt.c without
CONFIG_KAISER. kaiser_add_mapping() already returns an error code,
so fix the FIXME.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/ldt.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -16,9 +16,9 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
+#include <linux/kaiser.h>
#include <asm/ldt.h>
-#include <asm/kaiser.h>
#include <asm/desc.h>
#include <asm/mmu_context.h>
#include <asm/syscalls.h>
@@ -49,7 +49,7 @@ static struct ldt_struct *alloc_ldt_stru
{
struct ldt_struct *new_ldt;
int alloc_size;
- int ret = 0;
+ int ret;
if (size > LDT_ENTRIES)
return NULL;
@@ -77,10 +77,8 @@ static struct ldt_struct *alloc_ldt_stru
return NULL;
}
- // FIXME: make kaiser_add_mapping() return an error code
- // when it fails
- kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
- __PAGE_KERNEL);
+ ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+ __PAGE_KERNEL);
if (ret) {
__free_ldt_struct(new_ldt);
return NULL;
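The resulting allocation path follows the usual kernel shape: call the helper, and on failure undo the partial allocation before reporting the error. A tiny standalone rendition of that shape, with hypothetical names standing in for the kernel functions:
#include <stdlib.h>
static int add_mapping_stub(void *p)	/* stands in for kaiser_add_mapping() */
{
	return p ? 0 : -12;		/* 0 on success, -ENOMEM otherwise */
}
static void *alloc_obj(void)
{
	void *obj = malloc(64);
	if (!obj)
		return NULL;
	if (add_mapping_stub(obj)) {	/* check the result, don't ignore it */
		free(obj);		/* undo the partial allocation */
		return NULL;
	}
	return obj;
}
int main(void)
{
	void *obj = alloc_obj();
	int ok = obj != NULL;
	free(obj);
	return ok ? 0 : 1;
}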
This is a note to let you know that I've just added the patch titled
kaiser: do not set _PAGE_NX on pgd_none
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-do-not-set-_page_nx-on-pgd_none.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Tue, 5 Sep 2017 12:05:01 -0700
Subject: kaiser: do not set _PAGE_NX on pgd_none
From: Hugh Dickins <hughd(a)google.com>
native_pgd_clear() uses native_set_pgd(), so native_set_pgd() must
avoid setting the _PAGE_NX bit on an otherwise pgd_none() entry:
usually that just generated a warning on exit, but sometimes it
caused more mysterious and damaging failures (our production machines
could not complete booting).
The original fix to this just avoided adding _PAGE_NX to
an empty entry; but eventually more problems surfaced with kexec,
and the EFI mapping was expected to be a problem too. So now instead
change native_set_pgd() to update shadow only if _PAGE_USER:
A few places (kernel/machine_kexec_64.c, platform/efi/efi_64.c for sure)
use set_pgd() to set up a temporary internal virtual address space, with
physical pages remapped at what Kaiser regards as userspace addresses:
Kaiser then assumes a shadow pgd follows, which it will try to corrupt.
This appears to be responsible for the recent kexec and kdump failures;
though it's unclear how those did not manifest as a problem before.
Ah, the shadow pgd will only be assumed to "follow" if the requested
pgd is on an even-numbered page: so I suppose it was going wrong 50%
of the time all along.
What we need is a flag to set_pgd(), to tell it we're dealing with
userspace. Er, isn't that what the pgd's _PAGE_USER bit is saying?
Add a test for that. But we cannot do the same for pgd_clear()
(which may be called to clear corrupted entries - set aside the
question of "corrupt in which pgd?" until later), so there we just
rely on pgd_clear() not being called in the problematic cases -
with a WARN_ON_ONCE() which should fire half the time if it is.
But this is getting too big for an inline function: move it into
arch/x86/mm/kaiser.c (which then demands a boot/compressed mod);
and de-void and de-space native_get_shadow/normal_pgd() while here.
Also make an unnecessary change to KASLR's init_trampoline(): it was
using set_pgd() to assign a pgd-value to a global variable (not in a
pg directory page), which was rather scary given Kaiser's previous
set_pgd() implementation: not a problem now, but too scary to leave
as it was: it could easily blow up if we have to change set_pgd() again.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/boot/compressed/misc.h | 1
arch/x86/include/asm/pgtable_64.h | 51 +++++++++-----------------------------
arch/x86/mm/kaiser.c | 42 +++++++++++++++++++++++++++++++
arch/x86/mm/kaslr.c | 4 +-
4 files changed, 58 insertions(+), 40 deletions(-)
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -9,6 +9,7 @@
*/
#undef CONFIG_PARAVIRT
#undef CONFIG_PARAVIRT_SPINLOCKS
+#undef CONFIG_KAISER
#undef CONFIG_KASAN
#include <linux/linkage.h>
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -107,61 +107,36 @@ static inline void native_pud_clear(pud_
}
#ifdef CONFIG_KAISER
-static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+extern pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd);
+
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
{
- return (pgd_t *)(void*)((unsigned long)(void*)pgdp | (unsigned long)PAGE_SIZE);
+ return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
}
-static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
{
- return (pgd_t *)(void*)((unsigned long)(void*)pgdp & ~(unsigned long)PAGE_SIZE);
+ return (pgd_t *)((unsigned long)pgdp & ~(unsigned long)PAGE_SIZE);
}
#else
-static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ return pgd;
+}
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
{
BUILD_BUG_ON(1);
return NULL;
}
-static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
{
return pgdp;
}
#endif /* CONFIG_KAISER */
-/*
- * Page table pages are page-aligned. The lower half of the top
- * level is used for userspace and the top half for the kernel.
- * This returns true for user pages that need to get copied into
- * both the user and kernel copies of the page tables, and false
- * for kernel pages that should only be in the kernel copy.
- */
-static inline bool is_userspace_pgd(void *__ptr)
-{
- unsigned long ptr = (unsigned long)__ptr;
-
- return ((ptr % PAGE_SIZE) < (PAGE_SIZE / 2));
-}
-
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
{
-#ifdef CONFIG_KAISER
- pteval_t extra_kern_pgd_flags = 0;
- /* Do we need to also populate the shadow pgd? */
- if (is_userspace_pgd(pgdp)) {
- native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
- /*
- * Even if the entry is *mapping* userspace, ensure
- * that userspace can not use it. This way, if we
- * get out to userspace running on the kernel CR3,
- * userspace will crash instead of running.
- */
- extra_kern_pgd_flags = _PAGE_NX;
- }
- pgdp->pgd = pgd.pgd;
- pgdp->pgd |= extra_kern_pgd_flags;
-#else /* CONFIG_KAISER */
- *pgdp = pgd;
-#endif
+ *pgdp = kaiser_set_shadow_pgd(pgdp, pgd);
}
static inline void native_pgd_clear(pgd_t *pgd)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -302,4 +302,46 @@ void kaiser_remove_mapping(unsigned long
unmap_pud_range_nofree(pgd, addr, end);
}
}
+
+/*
+ * Page table pages are page-aligned. The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ * This returns true for user pages that need to get copied into
+ * both the user and kernel copies of the page tables, and false
+ * for kernel pages that should only be in the kernel copy.
+ */
+static inline bool is_userspace_pgd(pgd_t *pgdp)
+{
+ return ((unsigned long)pgdp % PAGE_SIZE) < (PAGE_SIZE / 2);
+}
+
+pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ /*
+ * Do we need to also populate the shadow pgd? Check _PAGE_USER to
+ * skip cases like kexec and EFI which make temporary low mappings.
+ */
+ if (pgd.pgd & _PAGE_USER) {
+ if (is_userspace_pgd(pgdp)) {
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ /*
+ * Even if the entry is *mapping* userspace, ensure
+ * that userspace can not use it. This way, if we
+ * get out to userspace running on the kernel CR3,
+ * userspace will crash instead of running.
+ */
+ pgd.pgd |= _PAGE_NX;
+ }
+ } else if (!pgd.pgd) {
+ /*
+ * pgd_clear() cannot check _PAGE_USER, and is even used to
+ * clear corrupted pgd entries: so just rely on cases like
+ * kexec and EFI never to be using pgd_clear().
+ */
+ if (!WARN_ON_ONCE((unsigned long)pgdp & PAGE_SIZE) &&
+ is_userspace_pgd(pgdp))
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ }
+ return pgd;
+}
#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -189,6 +189,6 @@ void __meminit init_trampoline(void)
*pud_tramp = *pud;
}
- set_pgd(&trampoline_pgd_entry,
- __pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
+ /* Avoid set_pgd(), in case it's complicated by CONFIG_KAISER */
+ trampoline_pgd_entry = __pgd(_KERNPG_TABLE | __pa(pud_page_tramp));
}
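The "even-numbered page" remark becomes concrete from how native_get_shadow_pgd() works: per the later cleanup in this queue, the kernel pgd and its shadow are the two 4k halves of one 8k, 8k-aligned order-1 allocation, so OR-ing PAGE_SIZE (bit 12) into the pointer selects the shadow half. A pgd that already sits on an odd page has that bit set, and no shadow follows it. A minimal userspace sketch of the bit trick; the allocation here is ordinary heap memory, not a real pgd:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#define PAGE_SIZE 4096UL
int main(void)
{
	void *pgd;
	/* order-1, 8k-aligned block: kernel half followed by shadow half */
	if (posix_memalign(&pgd, 2 * PAGE_SIZE, 2 * PAGE_SIZE))
		return 1;
	uintptr_t kern = (uintptr_t)pgd;
	uintptr_t shadow = kern | PAGE_SIZE;	/* native_get_shadow_pgd() */
	printf("kernel pgd %#lx -> shadow pgd %#lx\n",
	       (unsigned long)kern, (unsigned long)shadow);
	/* on an odd page the bit is already set: no distinct shadow */
	printf("shadow of shadow: %#lx\n", (unsigned long)(shadow | PAGE_SIZE));
	free(pgd);
	return 0;
}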
This is a note to let you know that I've just added the patch titled
kaiser: disabled on Xen PV
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-disabled-on-xen-pv.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Jiri Kosina <jkosina(a)suse.cz>
Date: Tue, 2 Jan 2018 14:19:49 +0100
Subject: kaiser: disabled on Xen PV
From: Jiri Kosina <jkosina(a)suse.cz>
Kaiser cannot be used on paravirtualized MMUs, where reading and writing CR3
is mediated by the hypervisor. The CR3 switch to and from the user-space PGD
does not work with KAISER, as it would require mapping the whole XEN_PV
machinery into both page tables.
More importantly, enabling KAISER on Xen PV makes little sense anyway, as PV
guests already use distinct %cr3 values for kernel and user.
Signed-off-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -263,6 +263,9 @@ void __init kaiser_check_boottime_disabl
char arg[5];
int ret;
+ if (boot_cpu_has(X86_FEATURE_XENPV))
+ goto silent_disable;
+
ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
if (ret > 0) {
if (!strncmp(arg, "on", 2))
@@ -290,6 +293,8 @@ enable:
disable:
pr_info("Kernel/User page tables isolation: disabled\n");
+
+silent_disable:
kaiser_enabled = 0;
setup_clear_cpu_cap(X86_FEATURE_KAISER);
}
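The control flow after this patch is easiest to see flattened out: Xen PV takes the new silent_disable exit, skipping the pr_info that the explicit disable path prints. A stubbed sketch of that flow, with the cmdline lookup reduced to a string compare for illustration:
#include <stdio.h>
#include <string.h>
static int kaiser_enabled = 1;
static void check_boottime_disable(int xenpv, const char *pti_arg)
{
	if (xenpv)
		goto silent_disable;	/* PV guests already split %cr3 */
	if (pti_arg && !strcmp(pti_arg, "off"))
		goto disable;
	return;				/* kaiser stays enabled */
disable:
	printf("Kernel/User page tables isolation: disabled\n");
silent_disable:
	kaiser_enabled = 0;
}
int main(void)
{
	check_boottime_disable(1, NULL);	/* Xen PV: disabled, no message */
	printf("kaiser_enabled=%d\n", kaiser_enabled);
	return 0;
}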
Patches currently in stable-queue which might be from jkosina(a)suse.cz are
queue-4.9/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.9/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.9/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.9/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.9/kaiser-disabled-on-xen-pv.patch
queue-4.9/kaiser-kernel-address-isolation.patch
queue-4.9/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
This is a note to let you know that I've just added the patch titled
kaiser: asm/tlbflush.h handle noPGE at lower level
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 4 Nov 2017 18:23:24 -0700
Subject: kaiser: asm/tlbflush.h handle noPGE at lower level
From: Hugh Dickins <hughd(a)google.com>
I found asm/tlbflush.h too twisty, and think it safer not to avoid
__native_flush_tlb_global_irq_disabled() in the kaiser_enabled case,
but instead let it handle kaiser_enabled along with cr3: it can just
use __native_flush_tlb() for that, no harm in re-disabling preemption.
(This is not the same change as Kirill and Dave have suggested for
upstream, flipping PGE in cr4: that's neat, but needs a cpu_has_pge
check; cr3 is enough for kaiser, and thought to be cheaper than cr4.)
Also delete the X86_FEATURE_INVPCID invpcid_flush_all_nonglobals()
preference from __native_flush_tlb(): unlike the invpcid_flush_all()
preference in __native_flush_tlb_global(), it's not seen in upstream
4.14, and was recently reported to be surprisingly slow.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/tlbflush.h | 27 +++------------------------
1 file changed, 3 insertions(+), 24 deletions(-)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -152,14 +152,6 @@ static inline void kaiser_flush_tlb_on_r
static inline void __native_flush_tlb(void)
{
- if (this_cpu_has(X86_FEATURE_INVPCID)) {
- /*
- * Note, this works with CR4.PCIDE=0 or 1.
- */
- invpcid_flush_all_nonglobals();
- return;
- }
-
/*
* If current->mm == NULL then we borrow a mm which may change during a
* task switch and therefore we must not be preempted while we write CR3
@@ -183,11 +175,8 @@ static inline void __native_flush_tlb_gl
/* restore PGE as it was before */
native_write_cr4(cr4);
} else {
- /*
- * x86_64 microcode update comes this way when CR4.PGE is not
- * enabled, and it's safer for all callers to allow this case.
- */
- native_write_cr3(native_read_cr3());
+ /* do it with cr3, letting kaiser flush user PCID */
+ __native_flush_tlb();
}
}
@@ -195,12 +184,6 @@ static inline void __native_flush_tlb_gl
{
unsigned long flags;
- if (kaiser_enabled) {
- /* Globals are not used at all */
- __native_flush_tlb();
- return;
- }
-
if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
@@ -256,11 +239,7 @@ static inline void __native_flush_tlb_si
static inline void __flush_tlb_all(void)
{
- if (boot_cpu_has(X86_FEATURE_PGE))
- __flush_tlb_global();
- else
- __flush_tlb();
-
+ __flush_tlb_global();
/*
* Note: if we somehow had PCID but not PGE, then this wouldn't work --
* we'd end up flushing kernel translations for the current ASID but
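The resulting flush policy is simple to state: __flush_tlb_all() always takes the global flush, which toggles CR4.PGE when PGE is on and otherwise falls back to a CR3 write, which under kaiser also flushes the user PCID. A sketch with the register accesses stubbed out as prints, purely to show the decision:
#include <stdio.h>
#define X86_CR4_PGE (1UL << 7)
static unsigned long cr4;
static void write_cr4(unsigned long v)
{
	cr4 = v;
	printf("cr4 <- %#lx\n", v);
}
static void flush_tlb_global_irq_disabled(void)
{
	unsigned long v = cr4;
	if (v & X86_CR4_PGE) {
		write_cr4(v & ~X86_CR4_PGE);	/* clearing PGE flushes everything */
		write_cr4(v);			/* restore PGE as it was before */
	} else {
		/* noPGE (or kaiser) path: reload cr3 instead */
		printf("cr3 <- cr3 (non-global flush, user PCID too)\n");
	}
}
int main(void)
{
	cr4 = X86_CR4_PGE;
	flush_tlb_global_irq_disabled();	/* PGE toggle path */
	cr4 = 0;
	flush_tlb_global_irq_disabled();	/* CR3 fallback path */
	return 0;
}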
This is a note to let you know that I've just added the patch titled
kaiser: delete KAISER_REAL_SWITCH option
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-delete-kaiser_real_switch-option.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 18:30:43 -0700
Subject: kaiser: delete KAISER_REAL_SWITCH option
From: Hugh Dickins <hughd(a)google.com>
We fail to see what CONFIG_KAISER_REAL_SWITCH is for: it seems to be
left over from early development, and now just obscures tricky parts
of the code. Delete it before adding PCIDs or the nokaiser boot option.
(Or if there is some good reason to keep the option, then it needs
a help text - and a "depends on KAISER", so that all those without
KAISER are not asked the question. But we'd much rather delete it.)
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 4 ----
arch/x86/include/asm/kaiser.h | 4 ----
security/Kconfig | 4 ----
3 files changed, 12 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1317,9 +1317,7 @@ ENTRY(nmi)
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
-#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
-#endif
movq %rax, %cr3
#endif
call do_nmi
@@ -1560,9 +1558,7 @@ end_repeat_nmi:
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
-#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
-#endif
movq %rax, %cr3
#endif
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -21,17 +21,13 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
-#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~KAISER_SHADOW_PGD_OFFSET), \reg
-#endif
movq \reg, %cr3
.endm
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
-#ifdef CONFIG_KAISER_REAL_SWITCH
orq $(KAISER_SHADOW_PGD_OFFSET), \reg
-#endif
movq \reg, %cr3
.endm
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -41,10 +41,6 @@ config KAISER
If you are unsure how to answer this question, answer Y.
-config KAISER_REAL_SWITCH
- bool "KAISER: actually switch page tables"
- default y
-
config SECURITYFS
bool "Enable the securityfs filesystem"
help
This is a note to let you know that I've just added the patch titled
kaiser: align addition to x86/mm/Makefile
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-align-addition-to-x86-mm-makefile.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 19:51:10 -0700
Subject: kaiser: align addition to x86/mm/Makefile
From: Hugh Dickins <hughd(a)google.com>
Use a tab, not spaces, so the entries line up properly; adjust kaslr.o likewise.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -37,5 +37,5 @@ obj-$(CONFIG_NUMA_EMU) += numa_emulatio
obj-$(CONFIG_X86_INTEL_MPX) += mpx.o
obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
-obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
-obj-$(CONFIG_KAISER) += kaiser.o
+obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_KAISER) += kaiser.o
This is a note to let you know that I've just added the patch titled
kaiser: add "nokaiser" boot option, using ALTERNATIVE
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-add-nokaiser-boot-option-using-alternative.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 20:37:21 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 24 Sep 2017 16:59:49 -0700
Subject: kaiser: add "nokaiser" boot option, using ALTERNATIVE
From: Hugh Dickins <hughd(a)google.com>
Added "nokaiser" boot option: an early param like "noinvpcid".
Most places now check int kaiser_enabled (#defined 0 when not
CONFIG_KAISER) instead of #ifdef CONFIG_KAISER; but entry_64.S
and entry_64_compat.S are using the ALTERNATIVE technique, which
patches in the preferred instructions at runtime. That technique
is tied to x86 cpu features, so X86_FEATURE_KAISER is fabricated.
Prior to "nokaiser", Kaiser #defined _PAGE_GLOBAL 0: revert that,
but be careful with both _PAGE_GLOBAL and CR4.PGE: set them when
nokaiser, just as when !CONFIG_KAISER, but set neither when kaiser is
enabled - neither matters on its own, but it's hard to be sure that
_PAGE_GLOBAL won't get set in some obscure corner, or that something
will add PGE into CR4.
By omitting _PAGE_GLOBAL from __supported_pte_mask when kaiser_enabled,
all page table setup which uses pte_pfn() masks it out of the ptes.
It's slightly shameful that the same declaration versus definition of
kaiser_enabled appears in not one, not two, but in three header files
(asm/kaiser.h, asm/pgtable.h, asm/tlbflush.h). I felt safer that way,
than with #including any of those in any of the others; and did not
feel it worth an asm/kaiser_enabled.h - kernel/cpu/common.c includes
them all, so we shall hear about it if they get out of synch.
Cleanups while in the area: removed the silly #ifdef CONFIG_KAISER
from kaiser.c; removed the unused native_get_normal_pgd(); removed
the spurious reg clutter from SWITCH_*_CR3 macro stubs; corrected some
comments. But more interestingly, set CR4.PSE in secondary_startup_64:
the manual is clear that it does not matter whether it's 0 or 1 when
4-level-pts are enabled, but I was distracted to find cr4 different on
BSP and auxiliaries - BSP alone was adding PSE, in probe_page_size_mask().
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/kernel-parameters.txt | 2 +
arch/x86/entry/entry_64.S | 15 ++++++-----
arch/x86/include/asm/cpufeatures.h | 3 ++
arch/x86/include/asm/kaiser.h | 27 +++++++++++++++------
arch/x86/include/asm/pgtable.h | 20 +++++++++++----
arch/x86/include/asm/pgtable_64.h | 13 +++-------
arch/x86/include/asm/pgtable_types.h | 4 ---
arch/x86/include/asm/tlbflush.h | 39 +++++++++++++++++++------------
arch/x86/kernel/cpu/common.c | 28 +++++++++++++++++++++-
arch/x86/kernel/espfix_64.c | 3 +-
arch/x86/kernel/head_64.S | 4 +--
arch/x86/mm/init.c | 2 -
arch/x86/mm/init_64.c | 10 +++++++
arch/x86/mm/kaiser.c | 26 +++++++++++++++++---
arch/x86/mm/pgtable.c | 8 +-----
arch/x86/mm/tlb.c | 4 ---
tools/arch/x86/include/asm/cpufeatures.h | 3 ++
17 files changed, 146 insertions(+), 65 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2763,6 +2763,8 @@ bytes respectively. Such letter suffixes
nojitter [IA-64] Disables jitter checking for ITC timers.
+ nokaiser [X86-64] Disable KAISER isolation of kernel from user.
+
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
no-kvmapf [X86,KVM] Disable paravirtualized asynchronous page
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1079,7 +1079,7 @@ ENTRY(paranoid_entry)
* unconditionally, but we need to find out whether the reverse
* should be done on return (conveyed to paranoid_exit in %ebx).
*/
- movq %cr3, %rax
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
testl $KAISER_SHADOW_PGD_OFFSET, %eax
jz 2f
orl $2, %ebx
@@ -1111,6 +1111,7 @@ ENTRY(paranoid_exit)
TRACE_IRQS_OFF_DEBUG
TRACE_IRQS_IRETQ_DEBUG
#ifdef CONFIG_KAISER
+ /* No ALTERNATIVE for X86_FEATURE_KAISER: paranoid_entry sets %ebx */
testl $2, %ebx /* SWITCH_USER_CR3 needed? */
jz paranoid_exit_no_switch
SWITCH_USER_CR3
@@ -1341,13 +1342,14 @@ ENTRY(nmi)
#ifdef CONFIG_KAISER
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
- movq %cr3, %rax
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
movq %rax, %cr3
+2:
#endif
call do_nmi
@@ -1357,8 +1359,7 @@ ENTRY(nmi)
* kernel code that needs user CR3, but do we ever return
* to "user mode" where we need the kernel CR3?
*/
- popq %rax
- mov %rax, %cr3
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
#endif
/*
@@ -1585,13 +1586,14 @@ end_repeat_nmi:
#ifdef CONFIG_KAISER
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
- movq %cr3, %rax
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
movq %rax, %cr3
+2:
#endif
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
@@ -1603,8 +1605,7 @@ end_repeat_nmi:
* kernel code that needs user CR3, like just just before
* a sysret.
*/
- popq %rax
- mov %rax, %cr3
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
#endif
testl %ebx, %ebx /* swapgs needed? */
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -198,6 +198,9 @@
#define X86_FEATURE_AVX512_4VNNIW (7*32+16) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (7*32+17) /* AVX-512 Multiply Accumulation Single precision */
+/* Because the ALTERNATIVE scheme is for members of the X86_FEATURE club... */
+#define X86_FEATURE_KAISER ( 7*32+31) /* CONFIG_KAISER w/o nokaiser */
+
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -46,28 +46,33 @@ movq \reg, %cr3
.endm
.macro SWITCH_KERNEL_CR3
-pushq %rax
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
_SWITCH_TO_KERNEL_CR3 %rax
popq %rax
+8:
.endm
.macro SWITCH_USER_CR3
-pushq %rax
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
_SWITCH_TO_USER_CR3 %rax %al
popq %rax
+8:
.endm
.macro SWITCH_KERNEL_CR3_NO_STACK
-movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
+ALTERNATIVE "jmp 8f", \
+ __stringify(movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)), \
+ X86_FEATURE_KAISER
_SWITCH_TO_KERNEL_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
+8:
.endm
#else /* CONFIG_KAISER */
-.macro SWITCH_KERNEL_CR3 reg
+.macro SWITCH_KERNEL_CR3
.endm
-.macro SWITCH_USER_CR3 reg regb
+.macro SWITCH_USER_CR3
.endm
.macro SWITCH_KERNEL_CR3_NO_STACK
.endm
@@ -90,6 +95,16 @@ DECLARE_PER_CPU(unsigned long, x86_cr3_p
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+extern int kaiser_enabled;
+#else
+#define kaiser_enabled 0
+#endif /* CONFIG_KAISER */
+
+/*
+ * Kaiser function prototypes are needed even when CONFIG_KAISER is not set,
+ * so as to build with tests on kaiser_enabled instead of #ifdefs.
+ */
+
/**
* kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
* @addr: the start address of the range
@@ -119,8 +134,6 @@ extern void kaiser_remove_mapping(unsign
*/
extern void kaiser_init(void);
-#endif /* CONFIG_KAISER */
-
#endif /* __ASSEMBLY */
#endif /* _ASM_X86_KAISER_H */
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -18,6 +18,12 @@
#ifndef __ASSEMBLY__
#include <asm/x86_init.h>
+#ifdef CONFIG_KAISER
+extern int kaiser_enabled;
+#else
+#define kaiser_enabled 0
+#endif
+
void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd);
void ptdump_walk_pgd_level_checkwx(void);
@@ -697,7 +703,7 @@ static inline int pgd_bad(pgd_t pgd)
* page table by accident; it will fault on the first
* instruction it tries to run. See native_set_pgd().
*/
- if (IS_ENABLED(CONFIG_KAISER))
+ if (kaiser_enabled)
ignore_flags |= _PAGE_NX;
return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
@@ -913,12 +919,14 @@ static inline void pmdp_set_wrprotect(st
*/
static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
{
- memcpy(dst, src, count * sizeof(pgd_t));
+ memcpy(dst, src, count * sizeof(pgd_t));
#ifdef CONFIG_KAISER
- /* Clone the shadow pgd part as well */
- memcpy(native_get_shadow_pgd(dst),
- native_get_shadow_pgd(src),
- count * sizeof(pgd_t));
+ if (kaiser_enabled) {
+ /* Clone the shadow pgd part as well */
+ memcpy(native_get_shadow_pgd(dst),
+ native_get_shadow_pgd(src),
+ count * sizeof(pgd_t));
+ }
#endif
}
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -111,13 +111,12 @@ extern pgd_t kaiser_set_shadow_pgd(pgd_t
static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
{
+#ifdef CONFIG_DEBUG_VM
+ /* linux/mmdebug.h may not have been included at this point */
+ BUG_ON(!kaiser_enabled);
+#endif
return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
}
-
-static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
-{
- return (pgd_t *)((unsigned long)pgdp & ~(unsigned long)PAGE_SIZE);
-}
#else
static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
{
@@ -128,10 +127,6 @@ static inline pgd_t *native_get_shadow_p
BUILD_BUG_ON(1);
return NULL;
}
-static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
-{
- return pgdp;
-}
#endif /* CONFIG_KAISER */
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -45,11 +45,7 @@
#define _PAGE_ACCESSED (_AT(pteval_t, 1) << _PAGE_BIT_ACCESSED)
#define _PAGE_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_DIRTY)
#define _PAGE_PSE (_AT(pteval_t, 1) << _PAGE_BIT_PSE)
-#ifdef CONFIG_KAISER
-#define _PAGE_GLOBAL (_AT(pteval_t, 0))
-#else
#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
-#endif
#define _PAGE_SOFTW1 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
#define _PAGE_SOFTW2 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW2)
#define _PAGE_PAT (_AT(pteval_t, 1) << _PAGE_BIT_PAT)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -137,9 +137,11 @@ static inline void cr4_set_bits_and_upda
* to avoid the need for asm/kaiser.h in unexpected places.
*/
#ifdef CONFIG_KAISER
+extern int kaiser_enabled;
extern void kaiser_setup_pcid(void);
extern void kaiser_flush_tlb_on_return_to_user(void);
#else
+#define kaiser_enabled 0
static inline void kaiser_setup_pcid(void)
{
}
@@ -164,7 +166,7 @@ static inline void __native_flush_tlb(vo
* back:
*/
preempt_disable();
- if (this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
kaiser_flush_tlb_on_return_to_user();
native_write_cr3(native_read_cr3());
preempt_enable();
@@ -175,20 +177,30 @@ static inline void __native_flush_tlb_gl
unsigned long cr4;
cr4 = this_cpu_read(cpu_tlbstate.cr4);
- /* clear PGE */
- native_write_cr4(cr4 & ~X86_CR4_PGE);
- /* write old PGE again and flush TLBs */
- native_write_cr4(cr4);
+ if (cr4 & X86_CR4_PGE) {
+ /* clear PGE and flush TLB of all entries */
+ native_write_cr4(cr4 & ~X86_CR4_PGE);
+ /* restore PGE as it was before */
+ native_write_cr4(cr4);
+ } else {
+ /*
+ * x86_64 microcode update comes this way when CR4.PGE is not
+ * enabled, and it's safer for all callers to allow this case.
+ */
+ native_write_cr3(native_read_cr3());
+ }
}
static inline void __native_flush_tlb_global(void)
{
-#ifdef CONFIG_KAISER
- /* Globals are not used at all */
- __native_flush_tlb();
-#else
unsigned long flags;
+ if (kaiser_enabled) {
+ /* Globals are not used at all */
+ __native_flush_tlb();
+ return;
+ }
+
if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
@@ -208,7 +220,6 @@ static inline void __native_flush_tlb_gl
raw_local_irq_save(flags);
__native_flush_tlb_global_irq_disabled();
raw_local_irq_restore(flags);
-#endif
}
static inline void __native_flush_tlb_single(unsigned long addr)
@@ -223,7 +234,7 @@ static inline void __native_flush_tlb_si
*/
if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
- if (this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
kaiser_flush_tlb_on_return_to_user();
asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
return;
@@ -238,9 +249,9 @@ static inline void __native_flush_tlb_si
* Make sure to do only a single invpcid when KAISER is
* disabled and we have only a single ASID.
*/
- if (X86_CR3_PCID_ASID_KERN != X86_CR3_PCID_ASID_USER)
- invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
- invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
+ if (kaiser_enabled)
+ invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
+ invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
}
static inline void __flush_tlb_all(void)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -179,6 +179,20 @@ static int __init x86_pcid_setup(char *s
return 1;
}
__setup("nopcid", x86_pcid_setup);
+
+static int __init x86_nokaiser_setup(char *s)
+{
+ /* nokaiser doesn't accept parameters */
+ if (s)
+ return -EINVAL;
+#ifdef CONFIG_KAISER
+ kaiser_enabled = 0;
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
+ pr_info("nokaiser: KAISER feature disabled\n");
+#endif
+ return 0;
+}
+early_param("nokaiser", x86_nokaiser_setup);
#endif
static int __init x86_noinvpcid_setup(char *s)
@@ -327,7 +341,7 @@ static __always_inline void setup_smap(s
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
- if (cpu_has(c, X86_FEATURE_PGE)) {
+ if (cpu_has(c, X86_FEATURE_PGE) || kaiser_enabled) {
cr4_set_bits(X86_CR4_PCIDE);
/*
* INVPCID has two "groups" of types:
@@ -799,6 +813,10 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
init_scattered_cpuid_features(c);
+#ifdef CONFIG_KAISER
+ if (kaiser_enabled)
+ set_cpu_cap(c, X86_FEATURE_KAISER);
+#endif
}
static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
@@ -1537,6 +1555,14 @@ void cpu_init(void)
* try to read it.
*/
cr4_init_shadow();
+ if (!kaiser_enabled) {
+ /*
+ * secondary_startup_64() deferred setting PGE in cr4:
+ * probe_page_size_mask() sets it on the boot cpu,
+ * but it needs to be set on each secondary cpu.
+ */
+ cr4_set_bits(X86_CR4_PGE);
+ }
/*
* Load microcode on this cpu if a valid microcode is available.
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -132,9 +132,10 @@ void __init init_espfix_bsp(void)
* area to ensure it is mapped into the shadow user page
* tables.
*/
- if (IS_ENABLED(CONFIG_KAISER))
+ if (kaiser_enabled) {
set_pgd(native_get_shadow_pgd(pgd_p),
__pgd(_KERNPG_TABLE | __pa((pud_t *)espfix_pud_page)));
+ }
/* Randomize the locations */
init_espfix_random();
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -190,8 +190,8 @@ ENTRY(secondary_startup_64)
movq $(init_level4_pgt - __START_KERNEL_map), %rax
1:
- /* Enable PAE mode and PGE */
- movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx
+ /* Enable PAE and PSE, but defer PGE until kaiser_enabled is decided */
+ movl $(X86_CR4_PAE | X86_CR4_PSE), %ecx
movq %rcx, %cr4
/* Setup early boot stage 4 level pagetables. */
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -177,7 +177,7 @@ static void __init probe_page_size_mask(
cr4_set_bits_and_update_boot(X86_CR4_PSE);
/* Enable PGE if available */
- if (boot_cpu_has(X86_FEATURE_PGE)) {
+ if (boot_cpu_has(X86_FEATURE_PGE) && !kaiser_enabled) {
cr4_set_bits_and_update_boot(X86_CR4_PGE);
__supported_pte_mask |= _PAGE_GLOBAL;
} else
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -324,6 +324,16 @@ void __init cleanup_highmap(void)
continue;
if (vaddr < (unsigned long) _text || vaddr > end)
set_pmd(pmd, __pmd(0));
+ else if (kaiser_enabled) {
+ /*
+ * level2_kernel_pgt is initialized with _PAGE_GLOBAL:
+ * clear that now. This is not important, so long as
+ * CR4.PGE remains clear, but it removes an anomaly.
+ * Physical mapping setup below avoids _PAGE_GLOBAL
+ * by use of massage_pgprot() inside pfn_pte() etc.
+ */
+ set_pmd(pmd, pmd_clear_flags(*pmd, _PAGE_GLOBAL));
+ }
}
}
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -16,7 +16,9 @@
#include <asm/pgalloc.h>
#include <asm/desc.h>
-#ifdef CONFIG_KAISER
+int kaiser_enabled __read_mostly = 1;
+EXPORT_SYMBOL(kaiser_enabled); /* for inlined TLB flush functions */
+
__visible
DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
@@ -167,8 +169,8 @@ static pte_t *kaiser_pagetable_walk(unsi
return pte_offset_kernel(pmd, address);
}
-int kaiser_add_user_map(const void *__start_addr, unsigned long size,
- unsigned long flags)
+static int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+ unsigned long flags)
{
int ret = 0;
pte_t *pte;
@@ -177,6 +179,15 @@ int kaiser_add_user_map(const void *__st
unsigned long end_addr = PAGE_ALIGN(start_addr + size);
unsigned long target_address;
+ /*
+ * It is convenient for callers to pass in __PAGE_KERNEL etc,
+ * and there is no actual harm from setting _PAGE_GLOBAL, so
+ * long as CR4.PGE is not set. But it is nonetheless troubling
+ * to see Kaiser itself setting _PAGE_GLOBAL (now that "nokaiser"
+ * requires that not to be #defined to 0): so mask it off here.
+ */
+ flags &= ~_PAGE_GLOBAL;
+
for (; address < end_addr; address += PAGE_SIZE) {
target_address = get_pa_from_mapping(address);
if (target_address == -1) {
@@ -263,6 +274,8 @@ void __init kaiser_init(void)
{
int cpu;
+ if (!kaiser_enabled)
+ return;
kaiser_init_all_pgds();
for_each_possible_cpu(cpu) {
@@ -311,6 +324,8 @@ void __init kaiser_init(void)
/* Add a mapping to the shadow mapping, and synchronize the mappings */
int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
{
+ if (!kaiser_enabled)
+ return 0;
return kaiser_add_user_map((const void *)addr, size, flags);
}
@@ -322,6 +337,8 @@ void kaiser_remove_mapping(unsigned long
unsigned long addr, next;
pgd_t *pgd;
+ if (!kaiser_enabled)
+ return;
pgd = native_get_shadow_pgd(pgd_offset_k(start));
for (addr = start; addr < end; pgd++, addr = next) {
next = pgd_addr_end(addr, end);
@@ -343,6 +360,8 @@ static inline bool is_userspace_pgd(pgd_
pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
{
+ if (!kaiser_enabled)
+ return pgd;
/*
* Do we need to also populate the shadow pgd? Check _PAGE_USER to
* skip cases like kexec and EFI which make temporary low mappings.
@@ -399,4 +418,3 @@ void kaiser_flush_tlb_on_return_to_user(
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
-#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -345,16 +345,12 @@ static inline void _pgd_free(pgd_t *pgd)
}
#else
-#ifdef CONFIG_KAISER
/*
- * Instead of one pmd, we aquire two pmds. Being order-1, it is
+ * Instead of one pgd, Kaiser acquires two pgds. Being order-1, it is
* both 8k in size and 8k-aligned. That lets us just flip bit 12
* in a pointer to swap between the two 4k halves.
*/
-#define PGD_ALLOCATION_ORDER 1
-#else
-#define PGD_ALLOCATION_ORDER 0
-#endif
+#define PGD_ALLOCATION_ORDER kaiser_enabled
static inline pgd_t *_pgd_alloc(void)
{
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -39,8 +39,7 @@ static void load_new_mm_cr3(pgd_t *pgdir
{
unsigned long new_mm_cr3 = __pa(pgdir);
-#ifdef CONFIG_KAISER
- if (this_cpu_has(X86_FEATURE_PCID)) {
+ if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID)) {
/*
* We reuse the same PCID for different tasks, so we must
* flush all the entries for the PCID out when we change tasks.
@@ -57,7 +56,6 @@ static void load_new_mm_cr3(pgd_t *pgdir
new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
kaiser_flush_tlb_on_return_to_user();
}
-#endif /* CONFIG_KAISER */
/*
* Caution: many callers of this function expect
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -197,6 +197,9 @@
#define X86_FEATURE_AVX512_4VNNIW (7*32+16) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (7*32+17) /* AVX-512 Multiply Accumulation Single precision */
+/* Because the ALTERNATIVE scheme is for members of the X86_FEATURE club... */
+#define X86_FEATURE_KAISER ( 7*32+31) /* CONFIG_KAISER w/o nokaiser */
+
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
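Much of the entry-code churn above leans on ALTERNATIVE: the default text is a jmp over the CR3 instructions, and the real instructions are only patched in at boot when X86_FEATURE_KAISER is set, so a nokaiser kernel pays essentially nothing per entry. As a loose userspace analogy only - the kernel patches instructions in place rather than using a pointer - the path can be selected once at startup instead of branching on every call:
#include <stdio.h>
static void switch_cr3_kaiser(void)
{
	printf("push %%rax; rewrite cr3; pop %%rax\n");	/* the guarded body */
}
static void switch_cr3_nop(void)
{
	/* patched away: nothing to do */
}
static void (*switch_kernel_cr3)(void) = switch_cr3_nop;
static void apply_alternatives(int feature_kaiser)
{
	if (feature_kaiser)		/* X86_FEATURE_KAISER set at boot */
		switch_kernel_cr3 = switch_cr3_kaiser;
}
int main(void)
{
	int kaiser_enabled = 1;		/* flips to 0 with "nokaiser" */
	apply_alternatives(kaiser_enabled);
	switch_kernel_cr3();		/* real work only when the feature is set */
	return 0;
}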
This is a note to let you know that I've just added the patch titled
x86/paravirt: Dont patch flush_tlb_single
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-paravirt-dont-patch-flush_tlb_single.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Mon, 4 Dec 2017 15:07:30 +0100
Subject: x86/paravirt: Dont patch flush_tlb_single
From: Thomas Gleixner <tglx(a)linutronix.de>
commit a035795499ca1c2bd1928808d1a156eda1420383 upstream
native_flush_tlb_single() will be changed with the upcoming
PAGE_TABLE_ISOLATION feature. That requires more code in
there than just INVLPG.
Remove the paravirt patching for it.
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe(a)redhat.com>
Reviewed-by: Juergen Gross <jgross(a)suse.com>
Acked-by: Peter Zijlstra <peterz(a)infradead.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Borislav Petkov <bpetkov(a)suse.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Laight <David.Laight(a)aculab.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: Eduardo Valentin <eduval(a)amazon.com>
Cc: Greg KH <gregkh(a)linuxfoundation.org>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: aliguori(a)amazon.com
Cc: daniel.gruss(a)iaik.tugraz.at
Cc: hughd(a)google.com
Cc: keescook(a)google.com
Cc: linux-mm(a)kvack.org
Cc: michael.schwarz(a)iaik.tugraz.at
Cc: moritz.lipp(a)iaik.tugraz.at
Cc: richard.fellner(a)student.tugraz.at
Link: https://lkml.kernel.org/r/20171204150606.828111617@linutronix.de
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Acked-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/paravirt_patch_64.c | 2 --
1 file changed, 2 deletions(-)
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -9,7 +9,6 @@ DEF_NATIVE(pv_irq_ops, save_fl, "pushfq;
DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax");
DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax");
DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3");
-DEF_NATIVE(pv_mmu_ops, flush_tlb_single, "invlpg (%rdi)");
DEF_NATIVE(pv_cpu_ops, clts, "clts");
DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
@@ -62,7 +61,6 @@ unsigned native_patch(u8 type, u16 clobb
PATCH_SITE(pv_mmu_ops, read_cr3);
PATCH_SITE(pv_mmu_ops, write_cr3);
PATCH_SITE(pv_cpu_ops, clts);
- PATCH_SITE(pv_mmu_ops, flush_tlb_single);
PATCH_SITE(pv_cpu_ops, wbinvd);
#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUED_SPINLOCKS)
case PARAVIRT_PATCH(pv_lock_ops.queued_spin_unlock):
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
This is a note to let you know that I've just added the patch titled
x86/kaiser: Move feature detection up
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-move-feature-detection-up.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Mon, 25 Dec 2017 13:57:16 +0100
Subject: x86/kaiser: Move feature detection up
From: Borislav Petkov <bp(a)suse.de>
Move the feature detection up before the first use of kaiser_enabled,
as otherwise funky things happen:
about to get started...
(XEN) d0v0 Unhandled page fault fault/trap [#14, ec=0000]
(XEN) Pagetable walk from ffff88022a449090:
(XEN) L4[0x110] = 0000000229e0e067 0000000000001e0e
(XEN) L3[0x008] = 0000000000000000 ffffffffffffffff
(XEN) domain_crash_sync called from entry.S: fault at ffff82d08033fd08
entry.o#create_bounce_frame+0x135/0x14d
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.9.1_02-3.21 x86_64 debug=n Not tainted ]----
(XEN) CPU: 0
(XEN) RIP: e033:[<ffffffff81007460>]
(XEN) RFLAGS: 0000000000000286 EM: 1 CONTEXT: pv guest (d0v0)
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 2 ++
arch/x86/kernel/setup.c | 7 +++++++
arch/x86/mm/kaiser.c | 2 --
3 files changed, 9 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -96,8 +96,10 @@ DECLARE_PER_CPU(unsigned long, x86_cr3_p
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
extern int kaiser_enabled;
+extern void __init kaiser_check_boottime_disable(void);
#else
#define kaiser_enabled 0
+static inline void __init kaiser_check_boottime_disable(void) {}
#endif /* CONFIG_KAISER */
/*
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -112,6 +112,7 @@
#include <asm/alternative.h>
#include <asm/prom.h>
#include <asm/microcode.h>
+#include <asm/kaiser.h>
/*
* max_low_pfn_mapped: highest direct mapped pfn under 4GB
@@ -1016,6 +1017,12 @@ void __init setup_arch(char **cmdline_p)
*/
init_hypervisor_platform();
+ /*
+ * This needs to happen right after XENPV is set on xen and
+ * kaiser_enabled is checked below in cleanup_highmap().
+ */
+ kaiser_check_boottime_disable();
+
x86_init.resources.probe_roms();
/* after parse_early_param, so could debug it */
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -311,8 +311,6 @@ void __init kaiser_init(void)
{
int cpu;
- kaiser_check_boottime_disable();
-
if (!kaiser_enabled)
return;
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/x86-kaiser-reenable-paravirt.patch
queue-4.4/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.4/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.4/x86-kaiser-move-feature-detection-up.patch
This is a note to let you know that I've just added the patch titled
x86/kaiser: Reenable PARAVIRT
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-reenable-paravirt.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Tue, 2 Jan 2018 14:19:49 +0100
Subject: x86/kaiser: Reenable PARAVIRT
From: Borislav Petkov <bp(a)suse.de>
Now that the required bits have been addressed, reenable
PARAVIRT.
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -34,7 +34,7 @@ config SECURITY
config KAISER
bool "Remove the kernel mapping in user mode"
default y
- depends on X86_64 && SMP && !PARAVIRT
+ depends on X86_64 && SMP
help
This enforces a strict kernel and user space isolation, in order
to close hardware side channels on kernel address information.
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/x86-kaiser-reenable-paravirt.patch
queue-4.4/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.4/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.4/x86-kaiser-move-feature-detection-up.patch
This is a note to let you know that I've just added the patch titled
x86/kaiser: Check boottime cmdline params
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-kaiser-check-boottime-cmdline-params.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Borislav Petkov <bp(a)suse.de>
Date: Tue, 2 Jan 2018 14:19:48 +0100
Subject: x86/kaiser: Check boottime cmdline params
From: Borislav Petkov <bp(a)suse.de>
AMD (and possibly other vendors) are not affected by the leak
KAISER is protecting against.
Keep the "nopti" for traditional reasons and add pti=<on|off|auto>
like upstream.
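The resulting precedence can be sketched in plain userspace C (strstr()
stands in for cmdline_find_option()/cmdline_find_option_bool(), which
also handle word boundaries; the flow mirrors the patch below):

#include <stdio.h>
#include <string.h>

static int pti_enabled(const char *cmdline, int cpu_is_amd)
{
        const char *arg = strstr(cmdline, "pti=");

        if (arg) {
                if (!strncmp(arg + 4, "on", 2))
                        return 1;               /* force-enable */
                if (!strncmp(arg + 4, "off", 3))
                        return 0;               /* force-disable */
                if (!strncmp(arg + 4, "auto", 4))
                        goto skip;              /* default policy */
        }

        if (strstr(cmdline, "nopti"))
                return 0;                       /* traditional switch */
skip:
        return cpu_is_amd ? 0 : 1;              /* auto: off on AMD */
}

int main(void)
{
        printf("%d\n", pti_enabled("quiet pti=off", 0));  /* 0 */
        printf("%d\n", pti_enabled("quiet nopti", 0));    /* 0 */
        printf("%d\n", pti_enabled("quiet", 1));          /* 0: AMD */
        printf("%d\n", pti_enabled("quiet pti=on", 1));   /* 1 */
        return 0;
}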
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/kernel-parameters.txt | 6 +++
arch/x86/mm/kaiser.c | 59 +++++++++++++++++++++++++-----------
2 files changed, 47 insertions(+), 18 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3056,6 +3056,12 @@ bytes respectively. Such letter suffixes
pt. [PARIDE]
See Documentation/blockdev/paride.txt.
+ pti= [X86_64]
+ Control KAISER user/kernel address space isolation:
+ on - enable
+ off - disable
+ auto - default setting
+
pty.legacy_count=
[KNL] Number of legacy pty's. Overwrites compiled-in
default number.
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -16,6 +16,7 @@
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/desc.h>
+#include <asm/cmdline.h>
int kaiser_enabled __read_mostly = 1;
EXPORT_SYMBOL(kaiser_enabled); /* for inlined TLB flush functions */
@@ -264,6 +265,43 @@ static void __init kaiser_init_all_pgds(
WARN_ON(__ret); \
} while (0)
+void __init kaiser_check_boottime_disable(void)
+{
+ bool enable = true;
+ char arg[5];
+ int ret;
+
+ ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
+ if (ret > 0) {
+ if (!strncmp(arg, "on", 2))
+ goto enable;
+
+ if (!strncmp(arg, "off", 3))
+ goto disable;
+
+ if (!strncmp(arg, "auto", 4))
+ goto skip;
+ }
+
+ if (cmdline_find_option_bool(boot_command_line, "nopti"))
+ goto disable;
+
+skip:
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+ goto disable;
+
+enable:
+ if (enable)
+ setup_force_cpu_cap(X86_FEATURE_KAISER);
+
+ return;
+
+disable:
+ pr_info("Kernel/User page tables isolation: disabled\n");
+ kaiser_enabled = 0;
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
+}
+
/*
* If anything in here fails, we will likely die on one of the
* first kernel->user transitions and init will die. But, we
@@ -275,12 +313,10 @@ void __init kaiser_init(void)
{
int cpu;
- if (!kaiser_enabled) {
- setup_clear_cpu_cap(X86_FEATURE_KAISER);
- return;
- }
+ kaiser_check_boottime_disable();
- setup_force_cpu_cap(X86_FEATURE_KAISER);
+ if (!kaiser_enabled)
+ return;
kaiser_init_all_pgds();
@@ -424,16 +460,3 @@ void kaiser_flush_tlb_on_return_to_user(
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
-
-static int __init x86_nokaiser_setup(char *s)
-{
- /* nopti doesn't accept parameters */
- if (s)
- return -EINVAL;
-
- kaiser_enabled = 0;
- pr_info("Kernel/User page tables isolation: disabled\n");
-
- return 0;
-}
-early_param("nopti", x86_nokaiser_setup);
Patches currently in stable-queue which might be from bp(a)suse.de are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/x86-kaiser-reenable-paravirt.patch
queue-4.4/x86-kaiser-rename-and-simplify-x86_feature_kaiser-handling.patch
queue-4.4/x86-kaiser-check-boottime-cmdline-params.patch
queue-4.4/x86-kaiser-move-feature-detection-up.patch
This is a note to let you know that I've just added the patch titled
kaiser: x86_cr3_pcid_noflush and x86_cr3_pcid_user
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 27 Aug 2017 16:24:27 -0700
Subject: kaiser: x86_cr3_pcid_noflush and x86_cr3_pcid_user
From: Hugh Dickins <hughd(a)google.com>
Mostly this commit is just unshouting X86_CR3_PCID_KERN_VAR and
X86_CR3_PCID_USER_VAR: we usually name variables in lower-case.
But why does x86_cr3_pcid_noflush need to be __aligned(PAGE_SIZE)?
Ah, it's a leftover from when kaiser_add_user_map() once complained
about mapping the same page twice. Make it __read_mostly instead.
(I'm a little uneasy about all the unrelated data which shares its
page getting user-mapped too, but that was so before, and not a big
deal: though we call it user-mapped, it's not mapped with _PAGE_USER.)
And there is a little change around the two calls to do_nmi().
Previously they set the NOFLUSH bit (if PCID supported) when
forcing to kernel context before do_nmi(); now they also have the
NOFLUSH bit set (if PCID supported) when restoring context after:
nothing done in do_nmi() should require a TLB to be flushed here.
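In other words (a minimal userspace sketch of the arithmetic; the
constants mirror the kernel's, the pgd address is made up):

#include <stdio.h>
#include <stdint.h>

#define X86_CR3_PCID_NOFLUSH     (1ULL << 63)
#define X86_CR3_PCID_ASID_MASK   0xFFFULL     /* the 12 PCID bits */
#define KAISER_SHADOW_PGD_OFFSET 0x1000ULL

int main(void)
{
        /* Entered the NMI on the user (shadow) pgd with user PCID 128: */
        uint64_t cr3 = 0x12344000ULL | KAISER_SHADOW_PGD_OFFSET | 0x80;

        cr3 |= X86_CR3_PCID_NOFLUSH;  /* orq x86_cr3_pcid_noflush, %rax */
        uint64_t saved = cr3;         /* pushq %rax: restored after do_nmi() */
        cr3 &= ~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET);

        printf("kernel cr3 for do_nmi: %#llx\n", (unsigned long long)cr3);
        printf("cr3 restored after:   %#llx\n", (unsigned long long)saved);
        /* Both have bit 63 set: no TLB flush in either direction. */
        return 0;
}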
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 8 ++++----
arch/x86/include/asm/kaiser.h | 11 +++++------
arch/x86/mm/kaiser.c | 13 +++++++------
3 files changed, 16 insertions(+), 16 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1290,11 +1290,11 @@ ENTRY(nmi)
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
- /* Add back kernel PCID and "no flush" bit */
- orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
call do_nmi
@@ -1534,11 +1534,11 @@ end_repeat_nmi:
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
+ /* If PCID enabled, NOFLUSH now and NOFLUSH on return */
+ orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
- /* Add back kernel PCID and "no flush" bit */
- orq X86_CR3_PCID_KERN_VAR, %rax
movq %rax, %cr3
#endif
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -25,7 +25,7 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-orq X86_CR3_PCID_KERN_VAR, \reg
+orq x86_cr3_pcid_noflush, \reg
movq \reg, %cr3
.endm
@@ -37,11 +37,10 @@ movq \reg, %cr3
* not enabled): so that the one register can update both memory and cr3.
*/
movq %cr3, \reg
-andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-orq PER_CPU_VAR(X86_CR3_PCID_USER_VAR), \reg
+orq PER_CPU_VAR(x86_cr3_pcid_user), \reg
js 9f
/* FLUSH this time, reset to NOFLUSH for next time (if PCID enabled) */
-movb \regb, PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
+movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)
9:
movq \reg, %cr3
.endm
@@ -94,8 +93,8 @@ movq PER_CPU_VAR(unsafe_stack_register_b
*/
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
-extern unsigned long X86_CR3_PCID_KERN_VAR;
-DECLARE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
+extern unsigned long x86_cr3_pcid_noflush;
+DECLARE_PER_CPU(unsigned long, x86_cr3_pcid_user);
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -29,8 +29,8 @@ DEFINE_PER_CPU_USER_MAPPED(unsigned long
* This is also handy because systems that do not support PCIDs
* just end up or'ing a 0 into their CR3, which does no harm.
*/
-__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR;
-DEFINE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
+unsigned long x86_cr3_pcid_noflush __read_mostly;
+DEFINE_PER_CPU(unsigned long, x86_cr3_pcid_user);
/*
* At runtime, the only things we map are some things for CPU
@@ -304,7 +304,8 @@ void __init kaiser_init(void)
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
- kaiser_add_user_map_early(&X86_CR3_PCID_KERN_VAR, PAGE_SIZE,
+ kaiser_add_user_map_early(&x86_cr3_pcid_noflush,
+ sizeof(x86_cr3_pcid_noflush),
__PAGE_KERNEL);
}
@@ -384,8 +385,8 @@ void kaiser_setup_pcid(void)
* These variables are used by the entry/exit
* code to change PCID and pgd and TLB flushing.
*/
- X86_CR3_PCID_KERN_VAR = kern_cr3;
- this_cpu_write(X86_CR3_PCID_USER_VAR, user_cr3);
+ x86_cr3_pcid_noflush = kern_cr3;
+ this_cpu_write(x86_cr3_pcid_user, user_cr3);
}
/*
@@ -395,7 +396,7 @@ void kaiser_setup_pcid(void)
*/
void kaiser_flush_tlb_on_return_to_user(void)
{
- this_cpu_write(X86_CR3_PCID_USER_VAR,
+ this_cpu_write(x86_cr3_pcid_user,
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: vmstat show NR_KAISERTABLE as nr_overhead
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 9 Sep 2017 21:27:32 -0700
Subject: kaiser: vmstat show NR_KAISERTABLE as nr_overhead
From: Hugh Dickins <hughd(a)google.com>
The kaiser update made an interesting choice, never to free any shadow
page tables. Contention on the global spinlock was worrying, particularly
with it held across page table scans when freeing. Something had to be
done: I was going to add refcounting; but simply never to free them is
an appealing choice, minimizing contention without complicating the code
(the more often a page table is found already present, the less the
spinlock is used).
But leaking pages in this way is also a worry: can we get away with it?
At the very least, we need a count to show how bad it actually gets:
in principle, one might end up wasting about 1/256 of memory that way
(1/512 for when direct-mapped pages have to be user-mapped, plus 1/512
for when they are user-mapped from the vmalloc area on another occasion;
but we don't have vmalloc'ed stacks, so only large ldts are vmalloc'ed).
Add per-cpu stat NR_KAISERTABLE: including 256 at startup for the
shared pgd entries, and 1 for each intermediate page table added
thereafter for user-mapping - but leave out the 1 per mm, for its
shadow pgd, because that distracts from the monotonic increase.
Shown in /proc/vmstat as nr_overhead (0 if kaiser not enabled).
In practice, it doesn't look so bad so far: more like 1/12000 after
nine hours of gtests below; and movable pageblock segregation should
tend to cluster the kaiser tables into a subset of the address space
(if not, they will be bad for compaction too). But production may
tell a different story: keep an eye on this number, and bring back
lighter freeing if it gets out of control (maybe a shrinker).
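The 1/256 worst case is just arithmetic, assuming x86-64's 512 entries
per 4KB page table:

#include <stdio.h>

int main(void)
{
        /* Each shadow pte page (4KB) maps 512 user-mapped 4KB pages,
         * so shadow tables cost ~1/512 of the memory they map; the
         * direct map and the vmalloc area can each need their own. */
        double per_mapping = 1.0 / 512;

        printf("worst case: %.5f (~1/%.0f) of memory\n",
               2 * per_mapping, 1 / (2 * per_mapping));  /* ~1/256 */
        return 0;
}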
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 16 +++++++++++-----
include/linux/mmzone.h | 3 ++-
mm/vmstat.c | 1 +
3 files changed, 14 insertions(+), 6 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -122,9 +122,11 @@ static pte_t *kaiser_pagetable_walk(unsi
if (!new_pmd_page)
return NULL;
spin_lock(&shadow_table_allocation_lock);
- if (pud_none(*pud))
+ if (pud_none(*pud)) {
set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
- else
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pmd_page), NR_KAISERTABLE);
+ } else
free_page(new_pmd_page);
spin_unlock(&shadow_table_allocation_lock);
}
@@ -140,9 +142,11 @@ static pte_t *kaiser_pagetable_walk(unsi
if (!new_pte_page)
return NULL;
spin_lock(&shadow_table_allocation_lock);
- if (pmd_none(*pmd))
+ if (pmd_none(*pmd)) {
set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
- else
+ __inc_zone_page_state(virt_to_page((void *)
+ new_pte_page), NR_KAISERTABLE);
+ } else
free_page(new_pte_page);
spin_unlock(&shadow_table_allocation_lock);
}
@@ -206,11 +210,13 @@ static void __init kaiser_init_all_pgds(
pgd = native_get_shadow_pgd(pgd_offset_k((unsigned long )0));
for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
pgd_t new_pgd;
- pud_t *pud = pud_alloc_one(&init_mm, PAGE_OFFSET + i * PGDIR_SIZE);
+ pud_t *pud = pud_alloc_one(&init_mm,
+ PAGE_OFFSET + i * PGDIR_SIZE);
if (!pud) {
WARN_ON(1);
break;
}
+ inc_zone_page_state(virt_to_page(pud), NR_KAISERTABLE);
new_pgd = __pgd(_KERNPG_TABLE |__pa(pud));
/*
* Make sure not to stomp on some other pgd entry.
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -131,8 +131,9 @@ enum zone_stat_item {
NR_SLAB_RECLAIMABLE,
NR_SLAB_UNRECLAIMABLE,
NR_PAGETABLE, /* used for pagetables */
- NR_KERNEL_STACK,
/* Second 128 byte cacheline */
+ NR_KERNEL_STACK,
+ NR_KAISERTABLE,
NR_UNSTABLE_NFS, /* NFS unstable pages */
NR_BOUNCE,
NR_VMSCAN_WRITE,
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -736,6 +736,7 @@ const char * const vmstat_text[] = {
"nr_slab_unreclaimable",
"nr_page_table_pages",
"nr_kernel_stack",
+ "nr_overhead",
"nr_unstable",
"nr_bounce",
"nr_vmscan_write",
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: use ALTERNATIVE instead of x86_cr3_pcid_noflush
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Tue, 3 Oct 2017 20:49:04 -0700
Subject: kaiser: use ALTERNATIVE instead of x86_cr3_pcid_noflush
From: Hugh Dickins <hughd(a)google.com>
Now that we're playing the ALTERNATIVE game, use that more efficient
method, instead of user-mapping an extra page and reading an extra
cacheline each time for x86_cr3_pcid_noflush.
Neel has found that __stringify(bts $X86_CR3_PCID_NOFLUSH_BIT, %rax)
is a working substitute for the "bts $63, %rax" in these ALTERNATIVEs;
but the one-liner with $63 in it looks clearer, so let's stick with that.
Worried about what happens with an ALTERNATIVE between the jump and
jump label in another ALTERNATIVE? I was, but have checked the
combinations in SWITCH_KERNEL_CR3_NO_STACK at entry_SYSCALL_64,
and it does a good job.
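A quick userspace check that the two spellings agree (assumes x86-64
and GNU C inline asm; not kernel code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t cr3 = 0x12344000ULL;
        uint64_t via_or = cr3 | (1ULL << 63);   /* X86_CR3_PCID_NOFLUSH */
        uint64_t via_bts = cr3;

        asm("btsq $63, %0" : "+r" (via_bts));   /* the ALTERNATIVE body */

        printf("%s\n", via_or == via_bts ? "agree" : "disagree");
        return 0;
}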
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 7 ++++---
arch/x86/include/asm/kaiser.h | 6 +++---
arch/x86/mm/kaiser.c | 11 +----------
3 files changed, 8 insertions(+), 16 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1056,7 +1056,8 @@ ENTRY(paranoid_entry)
jz 2f
orl $2, %ebx
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
- orq x86_cr3_pcid_noflush, %rax
+ /* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
movq %rax, %cr3
2:
#endif
@@ -1318,7 +1319,7 @@ ENTRY(nmi)
/* %rax is saved above, so OK to clobber here */
ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
- orq x86_cr3_pcid_noflush, %rax
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
@@ -1562,7 +1563,7 @@ end_repeat_nmi:
/* %rax is saved above, so OK to clobber here */
ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
- orq x86_cr3_pcid_noflush, %rax
+ ALTERNATIVE "", "bts $63, %rax", X86_FEATURE_PCID
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -25,7 +25,8 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-orq x86_cr3_pcid_noflush, \reg
+/* If PCID enabled, set X86_CR3_PCID_NOFLUSH_BIT */
+ALTERNATIVE "", "bts $63, \reg", X86_FEATURE_PCID
movq \reg, %cr3
.endm
@@ -39,7 +40,7 @@ movq \reg, %cr3
movq %cr3, \reg
orq PER_CPU_VAR(x86_cr3_pcid_user), \reg
js 9f
-/* FLUSH this time, reset to NOFLUSH for next time (if PCID enabled) */
+/* If PCID enabled, FLUSH this time, reset to NOFLUSH for next time */
movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)
9:
movq \reg, %cr3
@@ -90,7 +91,6 @@ movq PER_CPU_VAR(unsafe_stack_register_b
*/
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
-extern unsigned long x86_cr3_pcid_noflush;
DECLARE_PER_CPU(unsigned long, x86_cr3_pcid_user);
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -32,7 +32,6 @@ DEFINE_PER_CPU_USER_MAPPED(unsigned long
* This is also handy because systems that do not support PCIDs
* just end up or'ing a 0 into their CR3, which does no harm.
*/
-unsigned long x86_cr3_pcid_noflush __read_mostly;
DEFINE_PER_CPU(unsigned long, x86_cr3_pcid_user);
/*
@@ -357,10 +356,6 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&debug_idt_table,
sizeof(gate_desc) * NR_VECTORS,
__PAGE_KERNEL);
-
- kaiser_add_user_map_early(&x86_cr3_pcid_noflush,
- sizeof(x86_cr3_pcid_noflush),
- __PAGE_KERNEL);
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
@@ -434,18 +429,14 @@ pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp,
void kaiser_setup_pcid(void)
{
- unsigned long kern_cr3 = 0;
unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
- if (this_cpu_has(X86_FEATURE_PCID)) {
- kern_cr3 |= X86_CR3_PCID_KERN_NOFLUSH;
+ if (this_cpu_has(X86_FEATURE_PCID))
user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;
- }
/*
* These variables are used by the entry/exit
* code to change PCID and pgd and TLB flushing.
*/
- x86_cr3_pcid_noflush = kern_cr3;
this_cpu_write(x86_cr3_pcid_user, user_cr3);
}
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: tidied up kaiser_add/remove_mapping slightly
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 19:23:08 -0700
Subject: kaiser: tidied up kaiser_add/remove_mapping slightly
From: Hugh Dickins <hughd(a)google.com>
Yes, unmap_pud_range_nofree()'s declaration ought to be in a
header file really, but I'm not sure we want to use it anyway:
so for now just declare it inside kaiser_remove_mapping().
And there doesn't seem to be such a thing as unmap_p4d_range(),
even in a 5-level paging tree.
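The block-scope extern is plain C; a tiny standalone illustration (the
callee would normally live in another file, as unmap_pud_range_nofree()
does):

#include <stdio.h>

void unmap_range(unsigned long start, unsigned long end)
{
        printf("unmap %#lx..%#lx\n", start, end);
}

void remove_mapping(unsigned long start, unsigned long size)
{
        /* visible only inside this function, not the whole file */
        extern void unmap_range(unsigned long start, unsigned long end);

        unmap_range(start, start + size);
}

int main(void)
{
        remove_mapping(0x1000, 0x2000);
        return 0;
}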
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -286,8 +286,7 @@ void __init kaiser_init(void)
__PAGE_KERNEL);
}
-extern void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end);
-// add a mapping to the shadow-mapping, and synchronize the mappings
+/* Add a mapping to the shadow mapping, and synchronize the mappings */
int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
{
return kaiser_add_user_map((const void *)addr, size, flags);
@@ -295,15 +294,13 @@ int kaiser_add_mapping(unsigned long add
void kaiser_remove_mapping(unsigned long start, unsigned long size)
{
+ extern void unmap_pud_range_nofree(pgd_t *pgd,
+ unsigned long start, unsigned long end);
unsigned long end = start + size;
unsigned long addr;
for (addr = start; addr < end; addr += PGDIR_SIZE) {
pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(addr));
- /*
- * unmap_p4d_range() handles > P4D_SIZE unmaps,
- * so no need to trim 'end'.
- */
unmap_pud_range_nofree(pgd, addr, end);
}
}
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: tidied up asm/kaiser.h somewhat
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-tidied-up-asm-kaiser.h-somewhat.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 19:18:07 -0700
Subject: kaiser: tidied up asm/kaiser.h somewhat
From: Hugh Dickins <hughd(a)google.com>
Mainly deleting a surfeit of blank lines, and reflowing the header comment.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 32 +++++++++++++-------------------
1 file changed, 13 insertions(+), 19 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -1,15 +1,17 @@
#ifndef _ASM_X86_KAISER_H
#define _ASM_X86_KAISER_H
-
-/* This file includes the definitions for the KAISER feature.
- * KAISER is a counter measure against x86_64 side channel attacks on the kernel virtual memory.
- * It has a shodow-pgd for every process. the shadow-pgd has a minimalistic kernel-set mapped,
- * but includes the whole user memory. Within a kernel context switch, or when an interrupt is handled,
- * the pgd is switched to the normal one. When the system switches to user mode, the shadow pgd is enabled.
- * By this, the virtual memory chaches are freed, and the user may not attack the whole kernel memory.
+/*
+ * This file includes the definitions for the KAISER feature.
+ * KAISER is a counter measure against x86_64 side channel attacks on
+ * the kernel virtual memory. It has a shadow pgd for every process: the
+ * shadow pgd has a minimalistic kernel-set mapped, but includes the whole
+ * user memory. Within a kernel context switch, or when an interrupt is handled,
+ * the pgd is switched to the normal one. When the system switches to user mode,
+ * the shadow pgd is enabled. By this, the virtual memory caches are freed,
+ * and the user may not attack the whole kernel memory.
*
- * A minimalistic kernel mapping holds the parts needed to be mapped in user mode, as the entry/exit functions
- * of the user space, or the stacks.
+ * A minimalistic kernel mapping holds the parts needed to be mapped in user
+ * mode, such as the entry/exit functions of the user space, or the stacks.
*/
#ifdef __ASSEMBLY__
#ifdef CONFIG_KAISER
@@ -48,13 +50,10 @@ _SWITCH_TO_KERNEL_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
.endm
-
.macro SWITCH_USER_CR3_NO_STACK
-
movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
_SWITCH_TO_USER_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
-
.endm
#else /* CONFIG_KAISER */
@@ -72,7 +71,6 @@ movq PER_CPU_VAR(unsafe_stack_register_b
#else /* __ASSEMBLY__ */
-
#ifdef CONFIG_KAISER
/*
* Upon kernel/user mode switch, it may happen that the address
@@ -80,7 +78,6 @@ movq PER_CPU_VAR(unsafe_stack_register_b
* stored. To change the address space, another register is
* needed. A register therefore has to be stored/restored.
*/
-
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
/**
@@ -95,7 +92,6 @@ DECLARE_PER_CPU_USER_MAPPED(unsigned lon
*/
extern int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
-
/**
* kaiser_remove_mapping - unmap a virtual memory part of the shadow mapping
* @addr: the start address of the range
@@ -104,12 +100,12 @@ extern int kaiser_add_mapping(unsigned l
extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
/**
- * kaiser_initialize_mapping - Initalize the shadow mapping
+ * kaiser_init - Initialize the shadow mapping
*
* Most parts of the shadow mapping can be mapped upon boot
* time. Only per-process things like the thread stacks
* or a new LDT have to be mapped at runtime. These boot-
- * time mappings are permanent and nevertunmapped.
+ * time mappings are permanent and never unmapped.
*/
extern void kaiser_init(void);
@@ -117,6 +113,4 @@ extern void kaiser_init(void);
#endif /* __ASSEMBLY */
-
-
#endif /* _ASM_X86_KAISER_H */
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: stack map PAGE_SIZE at THREAD_SIZE-PAGE_SIZE
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-stack-map-page_size-at-thread_size-page_size.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 18:57:03 -0700
Subject: kaiser: stack map PAGE_SIZE at THREAD_SIZE-PAGE_SIZE
From: Hugh Dickins <hughd(a)google.com>
Kaiser only needs to map one page of the stack; and
kernel/fork.c did not build on powerpc (no __PAGE_KERNEL).
It's all cleaner if linux/kaiser.h provides kaiser_map_thread_stack()
and kaiser_unmap_thread_stack() wrappers around asm/kaiser.h's
kaiser_add_mapping() and kaiser_remove_mapping(). And use
linux/kaiser.h in init/main.c to avoid the #ifdefs there.
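The wrapper's arithmetic, as a standalone sketch (assumes the 4.4
x86-64 defaults of 4KB pages and 16KB stacks; the stack address is
made up):

#include <stdio.h>

#define PAGE_SIZE   4096UL
#define THREAD_SIZE (4 * PAGE_SIZE)

int main(void)
{
        unsigned long stack = 0xffff880012340000UL;
        unsigned long top_page = stack + THREAD_SIZE - PAGE_SIZE;

        /* only this one page is entered from user context */
        printf("map %#lx..%#lx\n", top_page, top_page + PAGE_SIZE - 1);
        return 0;
}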
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/kaiser.h | 40 +++++++++++++++++++++++++++++++++-------
init/main.c | 6 +-----
kernel/fork.c | 7 ++-----
3 files changed, 36 insertions(+), 17 deletions(-)
--- a/include/linux/kaiser.h
+++ b/include/linux/kaiser.h
@@ -1,26 +1,52 @@
-#ifndef _INCLUDE_KAISER_H
-#define _INCLUDE_KAISER_H
+#ifndef _LINUX_KAISER_H
+#define _LINUX_KAISER_H
#ifdef CONFIG_KAISER
#include <asm/kaiser.h>
+
+static inline int kaiser_map_thread_stack(void *stack)
+{
+ /*
+ * Map that page of kernel stack on which we enter from user context.
+ */
+ return kaiser_add_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE, __PAGE_KERNEL);
+}
+
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+ /*
+ * Note: may be called even when kaiser_map_thread_stack() failed.
+ */
+ kaiser_remove_mapping((unsigned long)stack +
+ THREAD_SIZE - PAGE_SIZE, PAGE_SIZE);
+}
#else
/*
* These stubs are used whenever CONFIG_KAISER is off, which
- * includes architectures that support KAISER, but have it
- * disabled.
+ * includes architectures that support KAISER, but have it disabled.
*/
static inline void kaiser_init(void)
{
}
-static inline void kaiser_remove_mapping(unsigned long start, unsigned long size)
+static inline int kaiser_add_mapping(unsigned long addr,
+ unsigned long size, unsigned long flags)
+{
+ return 0;
+}
+static inline void kaiser_remove_mapping(unsigned long start,
+ unsigned long size)
{
}
-static inline int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+static inline int kaiser_map_thread_stack(void *stack)
{
return 0;
}
+static inline void kaiser_unmap_thread_stack(void *stack)
+{
+}
#endif /* !CONFIG_KAISER */
-#endif /* _INCLUDE_KAISER_H */
+#endif /* _LINUX_KAISER_H */
--- a/init/main.c
+++ b/init/main.c
@@ -81,15 +81,13 @@
#include <linux/integrity.h>
#include <linux/proc_ns.h>
#include <linux/io.h>
+#include <linux/kaiser.h>
#include <asm/io.h>
#include <asm/bugs.h>
#include <asm/setup.h>
#include <asm/sections.h>
#include <asm/cacheflush.h>
-#ifdef CONFIG_KAISER
-#include <asm/kaiser.h>
-#endif
static int kernel_init(void *);
@@ -495,9 +493,7 @@ static void __init mm_init(void)
pgtable_init();
vmalloc_init();
ioremap_huge_init();
-#ifdef CONFIG_KAISER
kaiser_init();
-#endif
}
asmlinkage __visible void __init start_kernel(void)
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -168,12 +168,9 @@ static struct thread_info *alloc_thread_
return page ? page_address(page) : NULL;
}
-extern void kaiser_remove_mapping(unsigned long start_addr, unsigned long size);
static inline void free_thread_info(struct thread_info *ti)
{
-#ifdef CONFIG_KAISER
- kaiser_remove_mapping((unsigned long)ti, THREAD_SIZE);
-#endif
+ kaiser_unmap_thread_stack(ti);
free_kmem_pages((unsigned long)ti, THREAD_SIZE_ORDER);
}
# else
@@ -358,7 +355,7 @@ static struct task_struct *dup_task_stru
tsk->stack = ti;
- err= kaiser_add_mapping((unsigned long)tsk->stack, THREAD_SIZE, __PAGE_KERNEL);
+ err = kaiser_map_thread_stack(tsk->stack);
if (err)
goto free_ti;
#ifdef CONFIG_SECCOMP
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: PCID 0 for kernel and 128 for user
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-pcid-0-for-kernel-and-128-for-user.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Fri, 8 Sep 2017 19:26:30 -0700
Subject: kaiser: PCID 0 for kernel and 128 for user
From: Hugh Dickins <hughd(a)google.com>
Why was 4 chosen for kernel PCID and 6 for user PCID?
No good reason in a backport where PCIDs are only used for Kaiser.
If we continue with those, then we shall need to add Andy Lutomirski's
4.13 commit 6c690ee1039b ("x86/mm: Split read_cr3() into read_cr3_pa()
and __read_cr3()"), which deals with the problem of read_cr3() callers
finding stray bits in the cr3 that they expected to be page-aligned;
and for hibernation, his 4.14 commit f34902c5c6c0 ("x86/hibernate/64:
Mask off CR3's PCID bits in the saved CR3").
But if 0 is used for kernel PCID, then there's no need to add in those
commits - whenever the kernel looks, it sees 0 in the lower bits; and
0 for kernel seems an obvious choice.
And I naughtily propose 128 for user PCID, because there's a place
in _SWITCH_TO_USER_CR3 where it takes note of the need for TLB FLUSH,
but needs to reset that to NOFLUSH for the next occasion. Currently
it does so with a "movb $(0x80)" into the high byte of the per-cpu
quadword, but that will cause a machine without PCID support to crash.
Now, if %al just happened to have 0x80 in it at that point, on a
machine with PCID support, but 0 on a machine without PCID support...
(That will go badly wrong once the pgd can be at a physical address
above 2^56, but even with 5-level paging, physical goes up to 2^52.)
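The byte trick, demonstrated standalone (assumes little-endian, true on
x86: byte 7 of a quadword is its most significant byte, so storing 0x80
there sets bit 63, the NOFLUSH bit):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
        /* after a flush was requested: shadow offset | ASID, bit 63 clear */
        uint64_t x86_cr3_pcid_user = 0x1000ULL | 0x80;
        uint8_t al = x86_cr3_pcid_user & 0xff;  /* 0x80 with PCID, else 0x00 */

        /* the "movb \regb, PER_CPU_VAR(x86_cr3_pcid_user+7)" step: */
        memcpy((uint8_t *)&x86_cr3_pcid_user + 7, &al, 1);

        printf("next time: %#llx (NOFLUSH %s)\n",
               (unsigned long long)x86_cr3_pcid_user,
               x86_cr3_pcid_user >> 63 ? "set" : "clear");
        return 0;
}

On a machine without PCID support, %al is 0x00 at that point, so the
store is a harmless no-op rather than the crash the old $(0x80)
immediate would have caused.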
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 19 ++++++++++++-------
arch/x86/include/asm/pgtable_types.h | 7 ++++---
arch/x86/mm/tlb.c | 3 +++
3 files changed, 19 insertions(+), 10 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -29,14 +29,19 @@ orq X86_CR3_PCID_KERN_VAR, \reg
movq \reg, %cr3
.endm
-.macro _SWITCH_TO_USER_CR3 reg
+.macro _SWITCH_TO_USER_CR3 reg regb
+/*
+ * regb must be the low byte portion of reg: because we have arranged
+ * for the low byte of the user PCID to serve as the high byte of NOFLUSH
+ * (0x80 for each when PCID is enabled, or 0x00 when PCID and NOFLUSH are
+ * not enabled): so that the one register can update both memory and cr3.
+ */
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
orq PER_CPU_VAR(X86_CR3_PCID_USER_VAR), \reg
js 9f
-// FLUSH this time, reset to NOFLUSH for next time
-// But if nopcid? Consider using 0x80 for user pcid?
-movb $(0x80), PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
+/* FLUSH this time, reset to NOFLUSH for next time (if PCID enabled) */
+movb \regb, PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
9:
movq \reg, %cr3
.endm
@@ -49,7 +54,7 @@ popq %rax
.macro SWITCH_USER_CR3
pushq %rax
-_SWITCH_TO_USER_CR3 %rax
+_SWITCH_TO_USER_CR3 %rax %al
popq %rax
.endm
@@ -61,7 +66,7 @@ movq PER_CPU_VAR(unsafe_stack_register_b
.macro SWITCH_USER_CR3_NO_STACK
movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
-_SWITCH_TO_USER_CR3 %rax
+_SWITCH_TO_USER_CR3 %rax %al
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
.endm
@@ -69,7 +74,7 @@ movq PER_CPU_VAR(unsafe_stack_register_b
.macro SWITCH_KERNEL_CR3 reg
.endm
-.macro SWITCH_USER_CR3 reg
+.macro SWITCH_USER_CR3 reg regb
.endm
.macro SWITCH_USER_CR3_NO_STACK
.endm
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -111,16 +111,17 @@
/* Mask for all the PCID-related bits in CR3: */
#define X86_CR3_PCID_MASK (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
+
#if defined(CONFIG_KAISER) && defined(CONFIG_X86_64)
-#define X86_CR3_PCID_ASID_KERN (_AC(0x4,UL))
-#define X86_CR3_PCID_ASID_USER (_AC(0x6,UL))
+/* Let X86_CR3_PCID_ASID_USER be usable for the X86_CR3_PCID_NOFLUSH bit */
+#define X86_CR3_PCID_ASID_USER (_AC(0x80,UL))
#define X86_CR3_PCID_KERN_FLUSH (X86_CR3_PCID_ASID_KERN)
#define X86_CR3_PCID_USER_FLUSH (X86_CR3_PCID_ASID_USER)
#define X86_CR3_PCID_KERN_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_KERN)
#define X86_CR3_PCID_USER_NOFLUSH (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_USER)
#else
-#define X86_CR3_PCID_ASID_KERN (_AC(0x0,UL))
#define X86_CR3_PCID_ASID_USER (_AC(0x0,UL))
/*
* PCIDs are unsupported on 32-bit and none of these bits can be
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -50,6 +50,9 @@ static void load_new_mm_cr3(pgd_t *pgdir
* invpcid_flush_single_context(X86_CR3_PCID_ASID_USER) could
* do it here, but can only be used if X86_FEATURE_INVPCID is
* available - and many machines support pcid without invpcid.
+ *
+ * The line below is a no-op: X86_CR3_PCID_KERN_FLUSH is now 0;
+ * but keep that line in there in case something changes.
*/
new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
kaiser_flush_tlb_on_return_to_user();
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: paranoid_entry pass cr3 need to paranoid_exit
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Tue, 26 Sep 2017 18:43:07 -0700
Subject: kaiser: paranoid_entry pass cr3 need to paranoid_exit
From: Hugh Dickins <hughd(a)google.com>
Neel Natu points out that paranoid_entry() was wrong to assume that
an entry that did not need swapgs would not need SWITCH_KERNEL_CR3:
paranoid_entry (used for debug breakpoint, int3, double fault or MCE;
though I think it's only the MCE case that is cause for concern here)
can break in at an awkward time, between cr3 switch and swapgs, but
its handling always needs kernel gs and kernel cr3.
Easy to fix in itself, but paranoid_entry() also needs to convey to
paranoid_exit() (and my reading of macro idtentry says paranoid_entry
and paranoid_exit are always paired) how to restore the prior state.
The swapgs state is already conveyed by %ebx (0 or 1), so extend that
also to convey when SWITCH_USER_CR3 will be needed (2 or 3).
(Yes, I'd much prefer that 0 meant no swapgs, whereas it's the other
way round: and a convention shared with error_entry() and error_exit(),
which I don't want to touch. Perhaps I should have inverted the bit
for switch cr3 too, but did not.)
paranoid_exit() would be straightforward, except for TRACE_IRQS: it
did TRACE_IRQS_IRETQ when doing swapgs, but TRACE_IRQS_IRETQ_DEBUG
when not: which is it supposed to use when SWITCH_USER_CR3 is split
apart from that? As best as I can determine, commit 5963e317b1e9
("ftrace/x86: Do not change stacks in DEBUG when calling lockdep")
missed the swapgs case, and should have used TRACE_IRQS_IRETQ_DEBUG
there too (the discrepancy has nothing to do with the liberal use
of _NO_STACK and _UNSAFE_STACK hereabouts: TRACE_IRQS_OFF_DEBUG has
just been used in all cases); discrepancy lovingly preserved across
several paranoid_exit() cleanups, but I'm now removing it.
Neel further indicates that to use SWITCH_USER_CR3_NO_STACK there in
paranoid_exit() is now not only unnecessary but unsafe: might corrupt
syscall entry's unsafe_stack_register_backup of %rax. Just use
SWITCH_USER_CR3: and delete SWITCH_USER_CR3_NO_STACK altogether,
before we make the mistake of using it again.
hughd adds: this commit fixes an issue in the Kaiser-without-PCIDs
part of the series, and ought to be moved earlier, should you decide
to make a release of Kaiser-without-PCIDs.
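The %ebx convention, decoded as C for clarity (a sketch only: the real
restore path is of course not a printf):

#include <stdio.h>

#define EBX_NO_SWAPGS        1  /* bit shared with error_entry/exit */
#define EBX_SWITCH_USER_CR3  2

static void paranoid_exit(unsigned int ebx)
{
        if (ebx & EBX_SWITCH_USER_CR3)
                printf("SWITCH_USER_CR3\n");
        if (!(ebx & EBX_NO_SWAPGS))
                printf("swapgs\n");
        printf("restore regs, iretq\n");
}

int main(void)
{
        paranoid_exit(0);  /* swapgs, no cr3 switch */
        paranoid_exit(1);  /* neither */
        paranoid_exit(2);  /* both */
        paranoid_exit(3);  /* cr3 switch, no swapgs */
        return 0;
}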
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 46 ++++++++++++++++++++++++++++++++----------
arch/x86/include/asm/kaiser.h | 8 -------
2 files changed, 36 insertions(+), 18 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1025,7 +1025,11 @@ idtentry machine_check has_error_cod
/*
* Save all registers in pt_regs, and switch gs if needed.
* Use slow, but surefire "are we in kernel?" check.
- * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ *
+ * Return: ebx=0: needs swapgs but not SWITCH_USER_CR3 in paranoid_exit
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3 in paranoid_exit
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3 in paranoid_exit
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs in paranoid_exit
*/
ENTRY(paranoid_entry)
cld
@@ -1037,9 +1041,26 @@ ENTRY(paranoid_entry)
testl %edx, %edx
js 1f /* negative -> in kernel */
SWAPGS
- SWITCH_KERNEL_CR3
xorl %ebx, %ebx
-1: ret
+1:
+#ifdef CONFIG_KAISER
+ /*
+ * We might have come in between a swapgs and a SWITCH_KERNEL_CR3
+ * on entry, or between a SWITCH_USER_CR3 and a swapgs on exit.
+ * Do a conditional SWITCH_KERNEL_CR3: this could safely be done
+ * unconditionally, but we need to find out whether the reverse
+ * should be done on return (conveyed to paranoid_exit in %ebx).
+ */
+ movq %cr3, %rax
+ testl $KAISER_SHADOW_PGD_OFFSET, %eax
+ jz 2f
+ orl $2, %ebx
+ andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
+ orq x86_cr3_pcid_noflush, %rax
+ movq %rax, %cr3
+2:
+#endif
+ ret
END(paranoid_entry)
/*
@@ -1052,20 +1073,25 @@ END(paranoid_entry)
* be complicated. Fortunately, we there's no good reason
* to try to handle preemption here.
*
- * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
+ * On entry: ebx=0: needs swapgs but not SWITCH_USER_CR3
+ * ebx=1: needs neither swapgs nor SWITCH_USER_CR3
+ * ebx=2: needs both swapgs and SWITCH_USER_CR3
+ * ebx=3: needs SWITCH_USER_CR3 but not swapgs
*/
ENTRY(paranoid_exit)
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF_DEBUG
- testl %ebx, %ebx /* swapgs needed? */
+ TRACE_IRQS_IRETQ_DEBUG
+#ifdef CONFIG_KAISER
+ testl $2, %ebx /* SWITCH_USER_CR3 needed? */
+ jz paranoid_exit_no_switch
+ SWITCH_USER_CR3
+paranoid_exit_no_switch:
+#endif
+ testl $1, %ebx /* swapgs needed? */
jnz paranoid_exit_no_swapgs
- TRACE_IRQS_IRETQ
- SWITCH_USER_CR3_NO_STACK
SWAPGS_UNSAFE_STACK
- jmp paranoid_exit_restore
paranoid_exit_no_swapgs:
- TRACE_IRQS_IRETQ_DEBUG
-paranoid_exit_restore:
RESTORE_EXTRA_REGS
RESTORE_C_REGS
REMOVE_PT_GPREGS_FROM_STACK 8
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -63,20 +63,12 @@ _SWITCH_TO_KERNEL_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
.endm
-.macro SWITCH_USER_CR3_NO_STACK
-movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
-_SWITCH_TO_USER_CR3 %rax %al
-movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
-.endm
-
#else /* CONFIG_KAISER */
.macro SWITCH_KERNEL_CR3 reg
.endm
.macro SWITCH_USER_CR3 reg regb
.endm
-.macro SWITCH_USER_CR3_NO_STACK
-.endm
.macro SWITCH_KERNEL_CR3_NO_STACK
.endm
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: name that 0x1000 KAISER_SHADOW_PGD_OFFSET
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 9 Sep 2017 17:31:18 -0700
Subject: kaiser: name that 0x1000 KAISER_SHADOW_PGD_OFFSET
From: Hugh Dickins <hughd(a)google.com>
There's a 0x1000 in various places, which looks better with a name.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 4 ++--
arch/x86/include/asm/kaiser.h | 7 +++++--
2 files changed, 7 insertions(+), 4 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1292,7 +1292,7 @@ ENTRY(nmi)
movq %cr3, %rax
pushq %rax
#ifdef CONFIG_KAISER_REAL_SWITCH
- andq $(~0x1000), %rax
+ andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
#endif
movq %rax, %cr3
#endif
@@ -1535,7 +1535,7 @@ end_repeat_nmi:
movq %cr3, %rax
pushq %rax
#ifdef CONFIG_KAISER_REAL_SWITCH
- andq $(~0x1000), %rax
+ andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
#endif
movq %rax, %cr3
#endif
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -13,13 +13,16 @@
* A minimalistic kernel mapping holds the parts needed to be mapped in user
* mode, such as the entry/exit functions of the user space, or the stacks.
*/
+
+#define KAISER_SHADOW_PGD_OFFSET 0x1000
+
#ifdef __ASSEMBLY__
#ifdef CONFIG_KAISER
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
#ifdef CONFIG_KAISER_REAL_SWITCH
-andq $(~0x1000), \reg
+andq $(~KAISER_SHADOW_PGD_OFFSET), \reg
#endif
movq \reg, %cr3
.endm
@@ -27,7 +30,7 @@ movq \reg, %cr3
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
#ifdef CONFIG_KAISER_REAL_SWITCH
-orq $(0x1000), \reg
+orq $(KAISER_SHADOW_PGD_OFFSET), \reg
#endif
movq \reg, %cr3
.endm
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: merged update
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-merged-update.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Dave Hansen <dave.hansen(a)linux.intel.com>
Date: Wed, 30 Aug 2017 16:23:00 -0700
Subject: kaiser: merged update
From: Dave Hansen <dave.hansen(a)linux.intel.com>
Merged fixes and cleanups, rebased to 4.4.89 tree (no 5-level paging).
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 106 ++++++++++-
arch/x86/include/asm/kaiser.h | 43 ++--
arch/x86/include/asm/pgtable.h | 18 +
arch/x86/include/asm/pgtable_64.h | 48 ++++-
arch/x86/include/asm/pgtable_types.h | 6
arch/x86/kernel/espfix_64.c | 13 -
arch/x86/kernel/head_64.S | 19 +-
arch/x86/kernel/ldt.c | 27 ++
arch/x86/kernel/tracepoint.c | 2
arch/x86/mm/kaiser.c | 318 +++++++++++++++++++++++++----------
arch/x86/mm/pageattr.c | 63 +++++-
arch/x86/mm/pgtable.c | 40 +---
include/linux/kaiser.h | 26 ++
kernel/fork.c | 9
security/Kconfig | 5
15 files changed, 553 insertions(+), 190 deletions(-)
create mode 100644 include/linux/kaiser.h
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -212,6 +212,13 @@ entry_SYSCALL_64_fastpath:
movq RIP(%rsp), %rcx
movq EFLAGS(%rsp), %r11
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
SWITCH_USER_CR3
movq RSP(%rsp), %rsp
/*
@@ -350,11 +357,25 @@ GLOBAL(int_ret_from_sys_call)
syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
RESTORE_C_REGS_EXCEPT_RCX_R11
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
SWITCH_USER_CR3
movq RSP(%rsp), %rsp
USERGS_SYSRET64
opportunistic_sysret_failed:
+ /*
+ * This opens a window where we have a user CR3, but are
+ * running in the kernel. This makes using the CS
+ * register useless for telling whether or not we need to
+ * switch CR3 in NMIs. Normal interrupts are OK because
+ * they are off here.
+ */
SWITCH_USER_CR3
SWAPGS
jmp restore_c_regs_and_iret
@@ -1059,6 +1080,13 @@ ENTRY(error_entry)
cld
SAVE_C_REGS 8
SAVE_EXTRA_REGS 8
+ /*
+ * error_entry() always returns with a kernel gsbase and
+ * CR3. We must also have a kernel CR3/gsbase before
+ * calling TRACE_IRQS_*. Just unconditionally switch to
+ * the kernel CR3 here.
+ */
+ SWITCH_KERNEL_CR3
xorl %ebx, %ebx
testb $3, CS+8(%rsp)
jz .Lerror_kernelspace
@@ -1069,7 +1097,6 @@ ENTRY(error_entry)
* from user mode due to an IRET fault.
*/
SWAPGS
- SWITCH_KERNEL_CR3
.Lerror_entry_from_usermode_after_swapgs:
/*
@@ -1122,7 +1149,7 @@ ENTRY(error_entry)
* Switch to kernel gsbase:
*/
SWAPGS
- SWITCH_KERNEL_CR3
+
/*
* Pretend that the exception came from user mode: set up pt_regs
* as if we faulted immediately after IRET and clear EBX so that
@@ -1222,7 +1249,10 @@ ENTRY(nmi)
*/
SWAPGS_UNSAFE_STACK
- SWITCH_KERNEL_CR3_NO_STACK
+ /*
+ * percpu variables are mapped with user CR3, so no need
+ * to switch CR3 here.
+ */
cld
movq %rsp, %rdx
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
@@ -1256,14 +1286,33 @@ ENTRY(nmi)
movq %rsp, %rdi
movq $-1, %rsi
+#ifdef CONFIG_KAISER
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ movq %cr3, %rax
+ pushq %rax
+#ifdef CONFIG_KAISER_REAL_SWITCH
+ andq $(~0x1000), %rax
+#endif
+ movq %rax, %cr3
+#endif
call do_nmi
+ /*
+ * Unconditionally restore CR3. I know we return to
+ * kernel code that needs user CR3, but do we ever return
+ * to "user mode" where we need the kernel CR3?
+ */
+#ifdef CONFIG_KAISER
+ popq %rax
+ mov %rax, %cr3
+#endif
/*
* Return back to user mode. We must *not* do the normal exit
- * work, because we don't want to enable interrupts. Fortunately,
- * do_nmi doesn't modify pt_regs.
+ * work, because we don't want to enable interrupts. Do not
+ * switch to user CR3: we might be going back to kernel code
+ * that had a user CR3 set.
*/
- SWITCH_USER_CR3
SWAPGS
jmp restore_c_regs_and_iret
@@ -1459,23 +1508,54 @@ end_repeat_nmi:
ALLOC_PT_GPREGS_ON_STACK
/*
- * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
- * as we should not be calling schedule in NMI context.
- * Even with normal interrupts enabled. An NMI should not be
- * setting NEED_RESCHED or anything that normal interrupts and
- * exceptions might do.
+ * Use the same approach as paranoid_entry to handle SWAPGS, but
+ * without CR3 handling since we do that differently in NMIs. No
+ * need to use paranoid_exit as we should not be calling schedule
+ * in NMI context. Even with normal interrupts enabled. An NMI
+ * should not be setting NEED_RESCHED or anything that normal
+ * interrupts and exceptions might do.
*/
- call paranoid_entry
+ cld
+ SAVE_C_REGS
+ SAVE_EXTRA_REGS
+ movl $1, %ebx
+ movl $MSR_GS_BASE, %ecx
+ rdmsr
+ testl %edx, %edx
+ js 1f /* negative -> in kernel */
+ SWAPGS
+ xorl %ebx, %ebx
+1:
+#ifdef CONFIG_KAISER
+ /* Unconditionally use kernel CR3 for do_nmi() */
+ /* %rax is saved above, so OK to clobber here */
+ movq %cr3, %rax
+ pushq %rax
+#ifdef CONFIG_KAISER_REAL_SWITCH
+ andq $(~0x1000), %rax
+#endif
+ movq %rax, %cr3
+#endif
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
movq %rsp, %rdi
+ addq $8, %rdi /* point %rdi at ptregs, fixed up for CR3 */
movq $-1, %rsi
call do_nmi
+ /*
+ * Unconditionally restore CR3. We might be returning to
+	 * kernel code that needs user CR3, like just before
+ * a sysret.
+ */
+#ifdef CONFIG_KAISER
+ popq %rax
+ mov %rax, %cr3
+#endif
testl %ebx, %ebx /* swapgs needed? */
jnz nmi_restore
nmi_swapgs:
- SWITCH_USER_CR3_NO_STACK
+ /* We fixed up CR3 above, so no need to switch it here */
SWAPGS_UNSAFE_STACK
nmi_restore:
RESTORE_EXTRA_REGS
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -16,13 +16,17 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
+#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~0x1000), \reg
+#endif
movq \reg, %cr3
.endm
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
+#ifdef CONFIG_KAISER_REAL_SWITCH
orq $(0x1000), \reg
+#endif
movq \reg, %cr3
.endm
@@ -65,48 +69,53 @@ movq PER_CPU_VAR(unsafe_stack_register_b
.endm
#endif /* CONFIG_KAISER */
+
#else /* __ASSEMBLY__ */
#ifdef CONFIG_KAISER
-// Upon kernel/user mode switch, it may happen that
-// the address space has to be switched before the registers have been stored.
-// To change the address space, another register is needed.
-// A register therefore has to be stored/restored.
-//
-DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+/*
+ * Upon kernel/user mode switch, it may happen that the address
+ * space has to be switched before the registers have been
+ * stored. To change the address space, another register is
+ * needed. A register therefore has to be stored/restored.
+*/
-#endif /* CONFIG_KAISER */
+DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
/**
- * shadowmem_add_mapping - map a virtual memory part to the shadow mapping
+ * kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
* @addr: the start address of the range
* @size: the size of the range
* @flags: The mapping flags of the pages
*
- * the mapping is done on a global scope, so no bigger synchronization has to be done.
- * the pages have to be manually unmapped again when they are not needed any longer.
+ * The mapping is done on a global scope, so no bigger
+ * synchronization has to be done. The pages have to be
+ * manually unmapped again when they are not needed any longer.
*/
-extern void kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
+extern int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
/**
- * shadowmem_remove_mapping - unmap a virtual memory part of the shadow mapping
+ * kaiser_remove_mapping - unmap a virtual memory part of the shadow mapping
* @addr: the start address of the range
* @size: the size of the range
*/
extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
/**
- * shadowmem_initialize_mapping - Initalize the shadow mapping
+ * kaiser_initialize_mapping - Initialize the shadow mapping
*
- * most parts of the shadow mapping can be mapped upon boot time.
- * only the thread stacks have to be mapped on runtime.
- * the mapped regions are not unmapped at all.
+ * Most parts of the shadow mapping can be mapped upon boot
+ * time. Only per-process things like the thread stacks
+ * or a new LDT have to be mapped at runtime. These boot-
+ * time mappings are permanent and never unmapped.
*/
extern void kaiser_init(void);
-#endif
+#endif /* CONFIG_KAISER */
+
+#endif /* __ASSEMBLY */
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -653,7 +653,17 @@ static inline pud_t *pud_offset(pgd_t *p
static inline int pgd_bad(pgd_t pgd)
{
- return (pgd_flags(pgd) & ~_PAGE_USER) != _KERNPG_TABLE;
+ pgdval_t ignore_flags = _PAGE_USER;
+ /*
+ * We set NX on KAISER pgds that map userspace memory so
+ * that userspace can not meaningfully use the kernel
+ * page table by accident; it will fault on the first
+ * instruction it tries to run. See native_set_pgd().
+ */
+ if (IS_ENABLED(CONFIG_KAISER))
+ ignore_flags |= _PAGE_NX;
+
+ return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
}
static inline int pgd_none(pgd_t pgd)
@@ -857,8 +867,10 @@ static inline void clone_pgd_range(pgd_t
{
memcpy(dst, src, count * sizeof(pgd_t));
#ifdef CONFIG_KAISER
- // clone the shadow pgd part as well
- memcpy(native_get_shadow_pgd(dst), native_get_shadow_pgd(src), count * sizeof(pgd_t));
+ /* Clone the shadow pgd part as well */
+ memcpy(native_get_shadow_pgd(dst),
+ native_get_shadow_pgd(src),
+ count * sizeof(pgd_t));
#endif
}
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -107,26 +107,58 @@ static inline void native_pud_clear(pud_
}
#ifdef CONFIG_KAISER
-static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp) {
+static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+{
return (pgd_t *)(void*)((unsigned long)(void*)pgdp | (unsigned long)PAGE_SIZE);
}
-static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp) {
+static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+{
return (pgd_t *)(void*)((unsigned long)(void*)pgdp & ~(unsigned long)PAGE_SIZE);
}
+#else
+static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+{
+ BUILD_BUG_ON(1);
+ return NULL;
+}
+static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+{
+ return pgdp;
+}
#endif /* CONFIG_KAISER */
+/*
+ * Page table pages are page-aligned. The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ * This returns true for user pages that need to get copied into
+ * both the user and kernel copies of the page tables, and false
+ * for kernel pages that should only be in the kernel copy.
+ */
+static inline bool is_userspace_pgd(void *__ptr)
+{
+ unsigned long ptr = (unsigned long)__ptr;
+
+ return ((ptr % PAGE_SIZE) < (PAGE_SIZE / 2));
+}
+
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
{
#ifdef CONFIG_KAISER
- // We know that a pgd is page aligned.
- // Therefore the lower indices have to be mapped to user space.
- // These pages are mapped to the shadow mapping.
- if ((((unsigned long)pgdp) % PAGE_SIZE) < (PAGE_SIZE / 2)) {
+ pteval_t extra_kern_pgd_flags = 0;
+ /* Do we need to also populate the shadow pgd? */
+ if (is_userspace_pgd(pgdp)) {
native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ /*
+ * Even if the entry is *mapping* userspace, ensure
+ * that userspace can not use it. This way, if we
+ * get out to userspace running on the kernel CR3,
+ * userspace will crash instead of running.
+ */
+ extra_kern_pgd_flags = _PAGE_NX;
}
-
- pgdp->pgd = pgd.pgd & ~_PAGE_USER;
+ pgdp->pgd = pgd.pgd;
+ pgdp->pgd |= extra_kern_pgd_flags;
#else /* CONFIG_KAISER */
*pgdp = pgd;
#endif
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -42,7 +42,7 @@
#ifdef CONFIG_KAISER
#define _PAGE_GLOBAL (_AT(pteval_t, 0))
#else
-#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
+#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
#endif
#define _PAGE_SOFTW1 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
#define _PAGE_SOFTW2 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW2)
@@ -93,11 +93,7 @@
#define _PAGE_NX (_AT(pteval_t, 0))
#endif
-#ifdef CONFIG_KAISER
-#define _PAGE_PROTNONE (_AT(pteval_t, 0))
-#else
#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
-#endif
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
_PAGE_ACCESSED | _PAGE_DIRTY)
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -127,11 +127,14 @@ void __init init_espfix_bsp(void)
/* Install the espfix pud into the kernel page directory */
pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page);
-#ifdef CONFIG_KAISER
- // add the esp stack pud to the shadow mapping here.
- // This can be done directly, because the fixup stack has its own pud
- set_pgd(native_get_shadow_pgd(pgd_p), __pgd(_PAGE_TABLE | __pa((pud_t *)espfix_pud_page)));
-#endif
+ /*
+ * Just copy the top-level PGD that is mapping the espfix
+ * area to ensure it is mapped into the shadow user page
+ * tables.
+ */
+ if (IS_ENABLED(CONFIG_KAISER))
+ set_pgd(native_get_shadow_pgd(pgd_p),
+ __pgd(_KERNPG_TABLE | __pa((pud_t *)espfix_pud_page)));
/* Randomize the locations */
init_espfix_random();
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -442,11 +442,24 @@ early_idt_ripmsg:
GLOBAL(name)
#ifdef CONFIG_KAISER
+/*
+ * Each PGD needs to be 8k long and 8k aligned. We do not
+ * ever go out to userspace with these, so we do not
+ * strictly *need* the second page, but this allows us to
+ * have a single set_pgd() implementation that does not
+ * need to worry about whether it has 4k or 8k to work
+ * with.
+ *
+ * This ensures PGDs are 8k long:
+ */
+#define KAISER_USER_PGD_FILL 512
+/* This ensures they are 8k-aligned: */
#define NEXT_PGD_PAGE(name) \
.balign 2 * PAGE_SIZE; \
GLOBAL(name)
#else
#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define KAISER_USER_PGD_FILL 0
#endif
/* Automate the creation of 1 to 1 mapping pmd entries */
@@ -461,6 +474,7 @@ GLOBAL(name)
NEXT_PGD_PAGE(early_level4_pgt)
.fill 511,8,0
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(early_dynamic_pgts)
.fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -469,7 +483,8 @@ NEXT_PAGE(early_dynamic_pgts)
#ifndef CONFIG_XEN
NEXT_PGD_PAGE(init_level4_pgt)
- .fill 2*512,8,0
+ .fill 512,8,0
+ .fill KAISER_USER_PGD_FILL,8,0
#else
NEXT_PGD_PAGE(init_level4_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
@@ -478,6 +493,7 @@ NEXT_PGD_PAGE(init_level4_pgt)
.org init_level4_pgt + L4_START_KERNEL*8, 0
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_ident_pgt)
.quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
@@ -488,6 +504,7 @@ NEXT_PAGE(level2_ident_pgt)
*/
PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
#endif
+ .fill KAISER_USER_PGD_FILL,8,0
NEXT_PAGE(level3_kernel_pgt)
.fill L3_START_KERNEL,8,0
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -18,6 +18,7 @@
#include <linux/uaccess.h>
#include <asm/ldt.h>
+#include <asm/kaiser.h>
#include <asm/desc.h>
#include <asm/mmu_context.h>
#include <asm/syscalls.h>
@@ -34,11 +35,21 @@ static void flush_ldt(void *current_mm)
set_ldt(pc->ldt->entries, pc->ldt->size);
}
+static void __free_ldt_struct(struct ldt_struct *ldt)
+{
+ if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
+ vfree(ldt->entries);
+ else
+ free_page((unsigned long)ldt->entries);
+ kfree(ldt);
+}
+
/* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
static struct ldt_struct *alloc_ldt_struct(int size)
{
struct ldt_struct *new_ldt;
int alloc_size;
+ int ret = 0;
if (size > LDT_ENTRIES)
return NULL;
@@ -66,6 +77,14 @@ static struct ldt_struct *alloc_ldt_stru
return NULL;
}
+ // FIXME: make kaiser_add_mapping() return an error code
+ // when it fails
+ kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+ __PAGE_KERNEL);
+ if (ret) {
+ __free_ldt_struct(new_ldt);
+ return NULL;
+ }
new_ldt->size = size;
return new_ldt;
}
@@ -92,12 +111,10 @@ static void free_ldt_struct(struct ldt_s
if (likely(!ldt))
return;
+ kaiser_remove_mapping((unsigned long)ldt->entries,
+ ldt->size * LDT_ENTRY_SIZE);
paravirt_free_ldt(ldt->entries, ldt->size);
- if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
- vfree(ldt->entries);
- else
- free_page((unsigned long)ldt->entries);
- kfree(ldt);
+ __free_ldt_struct(ldt);
}
/*
--- a/arch/x86/kernel/tracepoint.c
+++ b/arch/x86/kernel/tracepoint.c
@@ -9,10 +9,12 @@
#include <linux/atomic.h>
atomic_t trace_idt_ctr = ATOMIC_INIT(0);
+__aligned(PAGE_SIZE)
struct desc_ptr trace_idt_descr = { NR_VECTORS * 16 - 1,
(unsigned long) trace_idt_table };
/* No need to be aligned, but done to keep all IDTs defined the same way. */
+__aligned(PAGE_SIZE)
gate_desc trace_idt_table[NR_VECTORS] __page_aligned_bss;
static int trace_irq_vector_refcount;
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -1,160 +1,306 @@
-
-
+#include <linux/bug.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/bug.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
-
#include <linux/uaccess.h>
+#include <linux/ftrace.h>
+
+#include <asm/kaiser.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/desc.h>
#ifdef CONFIG_KAISER
__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+/*
+ * At runtime, the only things we map are some things for CPU
+ * hotplug, and stacks for new processes. No two CPUs will ever
+ * be populating the same addresses, so we only need to ensure
+ * that we protect between two CPUs trying to allocate and
+ * populate the same page table page.
+ *
+ * Only take this lock when doing a set_p[4um]d(), but it is not
+ * needed for doing a set_pte(). We assume that only the *owner*
+ * of a given allocation will be doing this for _their_
+ * allocation.
+ *
+ * This ensures that once a system has been running for a while
+ * and there have been stacks all over and these page tables
+ * are fully populated, there will be no further acquisitions of
+ * this lock.
+ */
+static DEFINE_SPINLOCK(shadow_table_allocation_lock);
-/**
- * Get the real ppn from a address in kernel mapping.
- * @param address The virtual adrress
- * @return the physical address
+/*
+ * Returns -1 on error.
*/
-static inline unsigned long get_pa_from_mapping (unsigned long address)
+static inline unsigned long get_pa_from_mapping(unsigned long vaddr)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
- pgd = pgd_offset_k(address);
- BUG_ON(pgd_none(*pgd) || pgd_large(*pgd));
+ pgd = pgd_offset_k(vaddr);
+ /*
+ * We made all the kernel PGDs present in kaiser_init().
+ * We expect them to stay that way.
+ */
+ BUG_ON(pgd_none(*pgd));
+ /*
+ * PGDs are either 512GB or 128TB on all x86_64
+ * configurations. We don't handle these.
+ */
+ BUG_ON(pgd_large(*pgd));
+
+ pud = pud_offset(pgd, vaddr);
+ if (pud_none(*pud)) {
+ WARN_ON_ONCE(1);
+ return -1;
+ }
- pud = pud_offset(pgd, address);
- BUG_ON(pud_none(*pud));
+ if (pud_large(*pud))
+ return (pud_pfn(*pud) << PAGE_SHIFT) | (vaddr & ~PUD_PAGE_MASK);
- if (pud_large(*pud)) {
- return (pud_pfn(*pud) << PAGE_SHIFT) | (address & ~PUD_PAGE_MASK);
+ pmd = pmd_offset(pud, vaddr);
+ if (pmd_none(*pmd)) {
+ WARN_ON_ONCE(1);
+ return -1;
}
- pmd = pmd_offset(pud, address);
- BUG_ON(pmd_none(*pmd));
+ if (pmd_large(*pmd))
+ return (pmd_pfn(*pmd) << PAGE_SHIFT) | (vaddr & ~PMD_PAGE_MASK);
- if (pmd_large(*pmd)) {
- return (pmd_pfn(*pmd) << PAGE_SHIFT) | (address & ~PMD_PAGE_MASK);
+ pte = pte_offset_kernel(pmd, vaddr);
+ if (pte_none(*pte)) {
+ WARN_ON_ONCE(1);
+ return -1;
}
- pte = pte_offset_kernel(pmd, address);
- BUG_ON(pte_none(*pte));
-
- return (pte_pfn(*pte) << PAGE_SHIFT) | (address & ~PAGE_MASK);
+ return (pte_pfn(*pte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
}
-void _kaiser_copy (unsigned long start_addr, unsigned long size,
- unsigned long flags)
+/*
+ * This is a relatively normal page table walk, except that it
+ * also tries to allocate page tables pages along the way.
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+static pte_t *kaiser_pagetable_walk(unsigned long address, bool is_atomic)
{
- pgd_t *pgd;
- pud_t *pud;
pmd_t *pmd;
- pte_t *pte;
- unsigned long address;
- unsigned long end_addr = start_addr + size;
- unsigned long target_address;
+ pud_t *pud;
+ pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
+ gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
- for (address = PAGE_ALIGN(start_addr - (PAGE_SIZE - 1));
- address < PAGE_ALIGN(end_addr); address += PAGE_SIZE) {
- target_address = get_pa_from_mapping(address);
+ might_sleep();
+ if (is_atomic) {
+ gfp &= ~GFP_KERNEL;
+ gfp |= __GFP_HIGH | __GFP_ATOMIC;
+ }
- pgd = native_get_shadow_pgd(pgd_offset_k(address));
+ if (pgd_none(*pgd)) {
+ WARN_ONCE(1, "All shadow pgds should have been populated");
+ return NULL;
+ }
+ BUILD_BUG_ON(pgd_large(*pgd) != 0);
- BUG_ON(pgd_none(*pgd) && "All shadow pgds should be mapped at this time\n");
- BUG_ON(pgd_large(*pgd));
+ pud = pud_offset(pgd, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pud_large(*pud)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pud_none(*pud)) {
+ unsigned long new_pmd_page = __get_free_page(gfp);
+ if (!new_pmd_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pud_none(*pud))
+ set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+ else
+ free_page(new_pmd_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
- pud = pud_offset(pgd, address);
- if (pud_none(*pud)) {
- set_pud(pud, __pud(_PAGE_TABLE | __pa(pmd_alloc_one(0, address))));
- }
- BUG_ON(pud_large(*pud));
+ pmd = pmd_offset(pud, address);
+ /* The shadow page tables do not use large mappings: */
+ if (pmd_large(*pmd)) {
+ WARN_ON(1);
+ return NULL;
+ }
+ if (pmd_none(*pmd)) {
+ unsigned long new_pte_page = __get_free_page(gfp);
+ if (!new_pte_page)
+ return NULL;
+ spin_lock(&shadow_table_allocation_lock);
+ if (pmd_none(*pmd))
+ set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
+ else
+ free_page(new_pte_page);
+ spin_unlock(&shadow_table_allocation_lock);
+ }
- pmd = pmd_offset(pud, address);
- if (pmd_none(*pmd)) {
- set_pmd(pmd, __pmd(_PAGE_TABLE | __pa(pte_alloc_one_kernel(0, address))));
- }
- BUG_ON(pmd_large(*pmd));
+ return pte_offset_kernel(pmd, address);
+}
- pte = pte_offset_kernel(pmd, address);
+int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+ unsigned long flags)
+{
+ int ret = 0;
+ pte_t *pte;
+ unsigned long start_addr = (unsigned long )__start_addr;
+ unsigned long address = start_addr & PAGE_MASK;
+ unsigned long end_addr = PAGE_ALIGN(start_addr + size);
+ unsigned long target_address;
+
+ for (;address < end_addr; address += PAGE_SIZE) {
+ target_address = get_pa_from_mapping(address);
+ if (target_address == -1) {
+ ret = -EIO;
+ break;
+ }
+ pte = kaiser_pagetable_walk(address, false);
if (pte_none(*pte)) {
set_pte(pte, __pte(flags | target_address));
} else {
- BUG_ON(__pa(pte_page(*pte)) != target_address);
+ pte_t tmp;
+ set_pte(&tmp, __pte(flags | target_address));
+ WARN_ON_ONCE(!pte_same(*pte, tmp));
}
}
+ return ret;
}
-// at first, add a pmd for every pgd entry in the shadowmem-kernel-part of the kernel mapping
-static inline void __init _kaiser_init(void)
+static int kaiser_add_user_map_ptrs(const void *start, const void *end, unsigned long flags)
+{
+ unsigned long size = end - start;
+
+ return kaiser_add_user_map(start, size, flags);
+}
+
+/*
+ * Ensure that the top level of the (shadow) page tables are
+ * entirely populated. This ensures that all processes that get
+ * forked have the same entries. This way, we do not have to
+ * ever go set up new entries in older processes.
+ *
+ * Note: we never free these, so there are no updates to them
+ * after this.
+ */
+static void __init kaiser_init_all_pgds(void)
{
pgd_t *pgd;
int i = 0;
pgd = native_get_shadow_pgd(pgd_offset_k((unsigned long )0));
for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
- set_pgd(pgd + i, __pgd(_PAGE_TABLE |__pa(pud_alloc_one(0, 0))));
+ pgd_t new_pgd;
+ pud_t *pud = pud_alloc_one(&init_mm, PAGE_OFFSET + i * PGDIR_SIZE);
+ if (!pud) {
+ WARN_ON(1);
+ break;
+ }
+ new_pgd = __pgd(_KERNPG_TABLE |__pa(pud));
+ /*
+ * Make sure not to stomp on some other pgd entry.
+ */
+ if (!pgd_none(pgd[i])) {
+ WARN_ON(1);
+ continue;
+ }
+ set_pgd(pgd + i, new_pgd);
}
}
+#define kaiser_add_user_map_early(start, size, flags) do { \
+ int __ret = kaiser_add_user_map(start, size, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
+#define kaiser_add_user_map_ptrs_early(start, end, flags) do { \
+ int __ret = kaiser_add_user_map_ptrs(start, end, flags); \
+ WARN_ON(__ret); \
+} while (0)
+
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
-spinlock_t shadow_table_lock;
+/*
+ * If anything in here fails, we will likely die on one of the
+ * first kernel->user transitions and init will die. But, we
+ * will have most of the kernel up by then and should be able to
+ * get a clean warning out of it. If we BUG_ON() here, we run
+ * the risk of being before we have good console output.
+ */
void __init kaiser_init(void)
{
int cpu;
- spin_lock_init(&shadow_table_lock);
-
- spin_lock(&shadow_table_lock);
- _kaiser_init();
+ kaiser_init_all_pgds();
for_each_possible_cpu(cpu) {
- // map the per cpu user variables
- _kaiser_copy(
- (unsigned long) (__per_cpu_user_mapped_start + per_cpu_offset(cpu)),
- (unsigned long) __per_cpu_user_mapped_end - (unsigned long) __per_cpu_user_mapped_start,
- __PAGE_KERNEL);
- }
-
- // map the entry/exit text section, which is responsible to switch between user- and kernel mode
- _kaiser_copy(
- (unsigned long) __entry_text_start,
- (unsigned long) __entry_text_end - (unsigned long) __entry_text_start,
- __PAGE_KERNEL_RX);
-
- // the fixed map address of the idt_table
- _kaiser_copy(
- (unsigned long) idt_descr.address,
- sizeof(gate_desc) * NR_VECTORS,
- __PAGE_KERNEL_RO);
+ void *percpu_vaddr = __per_cpu_user_mapped_start +
+ per_cpu_offset(cpu);
+ unsigned long percpu_sz = __per_cpu_user_mapped_end -
+ __per_cpu_user_mapped_start;
+ kaiser_add_user_map_early(percpu_vaddr, percpu_sz,
+ __PAGE_KERNEL);
+ }
- spin_unlock(&shadow_table_lock);
+ /*
+ * Map the entry/exit text section, which is needed at
+ * switches from user to and from kernel.
+ */
+ kaiser_add_user_map_ptrs_early(__entry_text_start, __entry_text_end,
+ __PAGE_KERNEL_RX);
+
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN)
+ kaiser_add_user_map_ptrs_early(__irqentry_text_start,
+ __irqentry_text_end,
+ __PAGE_KERNEL_RX);
+#endif
+ kaiser_add_user_map_early((void *)idt_descr.address,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL_RO);
+#ifdef CONFIG_TRACING
+ kaiser_add_user_map_early(&trace_idt_descr,
+ sizeof(trace_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&trace_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
+#endif
+ kaiser_add_user_map_early(&debug_idt_descr, sizeof(debug_idt_descr),
+ __PAGE_KERNEL);
+ kaiser_add_user_map_early(&debug_idt_table,
+ sizeof(gate_desc) * NR_VECTORS,
+ __PAGE_KERNEL);
}
+extern void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end);
// add a mapping to the shadow-mapping, and synchronize the mappings
-void kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
{
- spin_lock(&shadow_table_lock);
- _kaiser_copy(addr, size, flags);
- spin_unlock(&shadow_table_lock);
+ return kaiser_add_user_map((const void *)addr, size, flags);
}
-extern void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end);
void kaiser_remove_mapping(unsigned long start, unsigned long size)
{
- pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(start));
- spin_lock(&shadow_table_lock);
- do {
- unmap_pud_range(pgd, start, start + size);
- } while (pgd++ != native_get_shadow_pgd(pgd_offset_k(start + size)));
- spin_unlock(&shadow_table_lock);
+ unsigned long end = start + size;
+ unsigned long addr;
+
+ for (addr = start; addr < end; addr += PGDIR_SIZE) {
+ pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(addr));
+ /*
+ * unmap_p4d_range() handles > P4D_SIZE unmaps,
+ * so no need to trim 'end'.
+ */
+ unmap_pud_range_nofree(pgd, addr, end);
+ }
}
#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -52,6 +52,7 @@ static DEFINE_SPINLOCK(cpa_lock);
#define CPA_FLUSHTLB 1
#define CPA_ARRAY 2
#define CPA_PAGES_ARRAY 4
+#define CPA_FREE_PAGETABLES 8
#ifdef CONFIG_PROC_FS
static unsigned long direct_pages_count[PG_LEVEL_NUM];
@@ -723,10 +724,13 @@ static int split_large_page(struct cpa_d
return 0;
}
-static bool try_to_free_pte_page(pte_t *pte)
+static bool try_to_free_pte_page(struct cpa_data *cpa, pte_t *pte)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PTE; i++)
if (!pte_none(pte[i]))
return false;
@@ -735,10 +739,13 @@ static bool try_to_free_pte_page(pte_t *
return true;
}
-static bool try_to_free_pmd_page(pmd_t *pmd)
+static bool try_to_free_pmd_page(struct cpa_data *cpa, pmd_t *pmd)
{
int i;
+ if (!(cpa->flags & CPA_FREE_PAGETABLES))
+ return false;
+
for (i = 0; i < PTRS_PER_PMD; i++)
if (!pmd_none(pmd[i]))
return false;
@@ -759,7 +766,9 @@ static bool try_to_free_pud_page(pud_t *
return true;
}
-static bool unmap_pte_range(pmd_t *pmd, unsigned long start, unsigned long end)
+static bool unmap_pte_range(struct cpa_data *cpa, pmd_t *pmd,
+ unsigned long start,
+ unsigned long end)
{
pte_t *pte = pte_offset_kernel(pmd, start);
@@ -770,22 +779,23 @@ static bool unmap_pte_range(pmd_t *pmd,
pte++;
}
- if (try_to_free_pte_page((pte_t *)pmd_page_vaddr(*pmd))) {
+ if (try_to_free_pte_page(cpa, (pte_t *)pmd_page_vaddr(*pmd))) {
pmd_clear(pmd);
return true;
}
return false;
}
-static void __unmap_pmd_range(pud_t *pud, pmd_t *pmd,
+static void __unmap_pmd_range(struct cpa_data *cpa, pud_t *pud, pmd_t *pmd,
unsigned long start, unsigned long end)
{
- if (unmap_pte_range(pmd, start, end))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (unmap_pte_range(cpa, pmd, start, end))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-static void unmap_pmd_range(pud_t *pud, unsigned long start, unsigned long end)
+static void unmap_pmd_range(struct cpa_data *cpa, pud_t *pud,
+ unsigned long start, unsigned long end)
{
pmd_t *pmd = pmd_offset(pud, start);
@@ -796,7 +806,7 @@ static void unmap_pmd_range(pud_t *pud,
unsigned long next_page = (start + PMD_SIZE) & PMD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- __unmap_pmd_range(pud, pmd, start, pre_end);
+ __unmap_pmd_range(cpa, pud, pmd, start, pre_end);
start = pre_end;
pmd++;
@@ -809,7 +819,8 @@ static void unmap_pmd_range(pud_t *pud,
if (pmd_large(*pmd))
pmd_clear(pmd);
else
- __unmap_pmd_range(pud, pmd, start, start + PMD_SIZE);
+ __unmap_pmd_range(cpa, pud, pmd,
+ start, start + PMD_SIZE);
start += PMD_SIZE;
pmd++;
@@ -819,17 +830,19 @@ static void unmap_pmd_range(pud_t *pud,
* 4K leftovers?
*/
if (start < end)
- return __unmap_pmd_range(pud, pmd, start, end);
+ return __unmap_pmd_range(cpa, pud, pmd, start, end);
/*
* Try again to free the PMD page if haven't succeeded above.
*/
if (!pud_none(*pud))
- if (try_to_free_pmd_page((pmd_t *)pud_page_vaddr(*pud)))
+ if (try_to_free_pmd_page(cpa, (pmd_t *)pud_page_vaddr(*pud)))
pud_clear(pud);
}
-void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+static void __unmap_pud_range(struct cpa_data *cpa, pgd_t *pgd,
+ unsigned long start,
+ unsigned long end)
{
pud_t *pud = pud_offset(pgd, start);
@@ -840,7 +853,7 @@ void unmap_pud_range(pgd_t *pgd, unsigne
unsigned long next_page = (start + PUD_SIZE) & PUD_MASK;
unsigned long pre_end = min_t(unsigned long, end, next_page);
- unmap_pmd_range(pud, start, pre_end);
+ unmap_pmd_range(cpa, pud, start, pre_end);
start = pre_end;
pud++;
@@ -854,7 +867,7 @@ void unmap_pud_range(pgd_t *pgd, unsigne
if (pud_large(*pud))
pud_clear(pud);
else
- unmap_pmd_range(pud, start, start + PUD_SIZE);
+ unmap_pmd_range(cpa, pud, start, start + PUD_SIZE);
start += PUD_SIZE;
pud++;
@@ -864,7 +877,7 @@ void unmap_pud_range(pgd_t *pgd, unsigne
* 2M leftovers?
*/
if (start < end)
- unmap_pmd_range(pud, start, end);
+ unmap_pmd_range(cpa, pud, start, end);
/*
* No need to try to free the PUD page because we'll free it in
@@ -872,6 +885,24 @@ void unmap_pud_range(pgd_t *pgd, unsigne
*/
}
+static void unmap_pud_range(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = CPA_FREE_PAGETABLES,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
+void unmap_pud_range_nofree(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+ struct cpa_data cpa = {
+ .flags = 0,
+ };
+
+ __unmap_pud_range(&cpa, pgd, start, end);
+}
+
static void unmap_pgd_range(pgd_t *root, unsigned long addr, unsigned long end)
{
pgd_t *pgd_entry = root + pgd_index(addr);
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -340,40 +340,26 @@ static inline void _pgd_free(pgd_t *pgd)
kmem_cache_free(pgd_cache, pgd);
}
#else
-static inline pgd_t *_pgd_alloc(void)
-{
-#ifdef CONFIG_KAISER
- // Instead of one PML4, we aquire two PML4s and, thus, an 8kb-aligned memory
- // block. Therefore, we have to allocate at least 3 pages. However, the
- // __get_free_pages returns us 4 pages. Hence, we store the base pointer at
- // the beginning of the page of our 8kb-aligned memory block in order to
- // correctly free it afterwars.
- unsigned long pages = __get_free_pages(PGALLOC_GFP, get_order(4*PAGE_SIZE));
-
- if(native_get_normal_pgd((pgd_t*) pages) == (pgd_t*) pages)
- {
- *((unsigned long*)(pages + 2 * PAGE_SIZE)) = pages;
- return (pgd_t *) pages;
- }
- else
- {
- *((unsigned long*)(pages + 3 * PAGE_SIZE)) = pages;
- return (pgd_t *) (pages + PAGE_SIZE);
- }
+#ifdef CONFIG_KAISER
+/*
+ * Instead of one pgd, we acquire two pgds. Being order-1, it is
+ * both 8k in size and 8k-aligned. That lets us just flip bit 12
+ * in a pointer to swap between the two 4k halves.
+ */
+#define PGD_ALLOCATION_ORDER 1
#else
- return (pgd_t *)__get_free_page(PGALLOC_GFP);
+#define PGD_ALLOCATION_ORDER 0
#endif
+
+static inline pgd_t *_pgd_alloc(void)
+{
+ return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
}
static inline void _pgd_free(pgd_t *pgd)
{
-#ifdef CONFIG_KAISER
- unsigned long pages = *((unsigned long*) ((char*) pgd + 2 * PAGE_SIZE));
- free_pages(pages, get_order(4*PAGE_SIZE));
-#else
- free_page((unsigned long)pgd);
-#endif
+ free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
}
#endif /* CONFIG_X86_PAE */
--- /dev/null
+++ b/include/linux/kaiser.h
@@ -0,0 +1,26 @@
+#ifndef _INCLUDE_KAISER_H
+#define _INCLUDE_KAISER_H
+
+#ifdef CONFIG_KAISER
+#include <asm/kaiser.h>
+#else
+
+/*
+ * These stubs are used whenever CONFIG_KAISER is off, which
+ * includes architectures that support KAISER, but have it
+ * disabled.
+ */
+
+static inline void kaiser_init(void)
+{
+}
+static inline void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+}
+static inline int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
+{
+ return 0;
+}
+
+#endif /* !CONFIG_KAISER */
+#endif /* _INCLUDE_KAISER_H */
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -58,6 +58,7 @@
#include <linux/tsacct_kern.h>
#include <linux/cn_proc.h>
#include <linux/freezer.h>
+#include <linux/kaiser.h>
#include <linux/delayacct.h>
#include <linux/taskstats_kern.h>
#include <linux/random.h>
@@ -335,7 +336,6 @@ void set_task_stack_end_magic(struct tas
*stackend = STACK_END_MAGIC; /* for overflow detection */
}
-extern void kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags);
static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
{
struct task_struct *tsk;
@@ -357,9 +357,10 @@ static struct task_struct *dup_task_stru
goto free_ti;
tsk->stack = ti;
-#ifdef CONFIG_KAISER
- kaiser_add_mapping((unsigned long)tsk->stack, THREAD_SIZE, __PAGE_KERNEL);
-#endif
+
+	err = kaiser_add_mapping((unsigned long)tsk->stack, THREAD_SIZE, __PAGE_KERNEL);
+ if (err)
+ goto free_ti;
#ifdef CONFIG_SECCOMP
/*
* We must handle setting up seccomp filters once we're under
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -32,12 +32,17 @@ config SECURITY
If you are unsure how to answer this question, answer N.
config KAISER
bool "Remove the kernel mapping in user mode"
+ default y
depends on X86_64
depends on !PARAVIRT
help
This enforces a strict kernel and user space isolation in order to close
hardware side channels on kernel address information.
+config KAISER_REAL_SWITCH
+ bool "KAISER: actually switch page tables"
+ default y
+
config SECURITYFS
bool "Enable the securityfs filesystem"
help
Patches currently in stable-queue which might be from dave.hansen(a)linux.intel.com are
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
This is a note to let you know that I've just added the patch titled
kaiser: load_new_mm_cr3() let SWITCH_USER_CR3 flush user
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Thu, 17 Aug 2017 15:00:37 -0700
Subject: kaiser: load_new_mm_cr3() let SWITCH_USER_CR3 flush user
From: Hugh Dickins <hughd(a)google.com>
We have many machines (Westmere, Sandybridge, Ivybridge) supporting
PCID but not INVPCID: on these load_new_mm_cr3() simply crashed.
Flushing user context inside load_new_mm_cr3() without the use of
invpcid is difficult: momentarily switch from kernel to user context
and back to do so? I'm not sure whether that can be safely done at
all, and would risk polluting user context with kernel internals,
and kernel context with stale user externals.
Instead, follow the hint in the comment that was there: change
X86_CR3_PCID_USER_VAR to be a per-cpu variable, then load_new_mm_cr3()
can leave a note in it, for SWITCH_USER_CR3 on return to userspace to
flush user context TLB, instead of default X86_CR3_PCID_USER_NOFLUSH.
Which works well enough that there's no need to do it this way only
when invpcid is unsupported: it's a good alternative to invpcid here.
But there's a couple of inlines in asm/tlbflush.h that need to do the
same trick, so it's best to localize all this per-cpu business in
mm/kaiser.c: moving that part of the initialization from setup_pcid()
to kaiser_setup_pcid(); with kaiser_flush_tlb_on_return_to_user() the
function for noting an X86_CR3_PCID_USER_FLUSH. And let's keep a
KAISER_SHADOW_PGD_OFFSET in there, to avoid the extra OR on exit.
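(A condensed sketch of that handshake, using the names from the diff
below — illustrative only, not part of the patch:

	/* load_new_mm_cr3() leaves a note that user context needs a flush */
	this_cpu_write(X86_CR3_PCID_USER_VAR,
		       X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
	/* _SWITCH_TO_USER_CR3 then or's that note into %cr3 on return to
	 * userspace, and resets it to the NOFLUSH default for next time */

The note is per-cpu, so each cpu flushes user context exactly when its
own return to userspace needs it.)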
I did try to make the feature tests in asm/tlbflush.h more consistent
with each other: there seem to be far too many ways of performing such
tests, and I don't have a good grasp of their differences. At first
I converted them all to be static_cpu_has(): but that proved to be a
mistake, as the comment in __native_flush_tlb_single() hints; so then
I reversed and made them all this_cpu_has(). Probably all gratuitous
change, but that's the way it's working at present.
I am slightly bothered by the way non-per-cpu X86_CR3_PCID_KERN_VAR
gets re-initialized by each cpu (before and after these changes):
no problem when (as usual) all cpus on a machine have the same
features, but in principle incorrect. However, my experiment
to per-cpu-ify that one did not end well...
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/kaiser.h | 18 +++++++-----
arch/x86/include/asm/tlbflush.h | 58 +++++++++++++++++++++++++++-------------
arch/x86/kernel/cpu/common.c | 22 ---------------
arch/x86/mm/kaiser.c | 50 ++++++++++++++++++++++++++++++----
arch/x86/mm/tlb.c | 46 ++++++++++++-------------------
5 files changed, 114 insertions(+), 80 deletions(-)
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -32,13 +32,12 @@ movq \reg, %cr3
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), \reg
-/*
- * This can obviously be one instruction by putting the
- * KAISER_SHADOW_PGD_OFFSET bit in the X86_CR3_PCID_USER_VAR.
- * But, just leave it now for simplicity.
- */
-orq X86_CR3_PCID_USER_VAR, \reg
-orq $(KAISER_SHADOW_PGD_OFFSET), \reg
+orq PER_CPU_VAR(X86_CR3_PCID_USER_VAR), \reg
+js 9f
+// FLUSH this time, reset to NOFLUSH for next time
+// But if nopcid? Consider using 0x80 for user pcid?
+movb $(0x80), PER_CPU_VAR(X86_CR3_PCID_USER_VAR+7)
+9:
movq \reg, %cr3
.endm
@@ -90,6 +89,11 @@ movq PER_CPU_VAR(unsafe_stack_register_b
*/
DECLARE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+extern unsigned long X86_CR3_PCID_KERN_VAR;
+DECLARE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
+
+extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+
/**
* kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
* @addr: the start address of the range
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -12,6 +12,7 @@ static inline void __invpcid(unsigned lo
unsigned long type)
{
struct { u64 d[2]; } desc = { { pcid, addr } };
+
/*
* The memory clobber is because the whole point is to invalidate
* stale TLB entries and, especially if we're flushing global
@@ -130,27 +131,42 @@ static inline void cr4_set_bits_and_upda
cr4_set_bits(mask);
}
+/*
+ * Declare a couple of kaiser interfaces here for convenience,
+ * to avoid the need for asm/kaiser.h in unexpected places.
+ */
+#ifdef CONFIG_KAISER
+extern void kaiser_setup_pcid(void);
+extern void kaiser_flush_tlb_on_return_to_user(void);
+#else
+static inline void kaiser_setup_pcid(void)
+{
+}
+static inline void kaiser_flush_tlb_on_return_to_user(void)
+{
+}
+#endif
+
static inline void __native_flush_tlb(void)
{
- if (!cpu_feature_enabled(X86_FEATURE_INVPCID)) {
- /*
- * If current->mm == NULL then we borrow a mm which may change during a
- * task switch and therefore we must not be preempted while we write CR3
- * back:
+ if (this_cpu_has(X86_FEATURE_INVPCID)) {
+ /*
+ * Note, this works with CR4.PCIDE=0 or 1.
*/
- preempt_disable();
- native_write_cr3(native_read_cr3());
- preempt_enable();
+ invpcid_flush_all_nonglobals();
return;
}
+
/*
- * We are no longer using globals with KAISER, so a
- * "nonglobals" flush would work too. But, this is more
- * conservative.
- *
- * Note, this works with CR4.PCIDE=0 or 1.
+ * If current->mm == NULL then we borrow a mm which may change during a
+ * task switch and therefore we must not be preempted while we write CR3
+ * back:
*/
- invpcid_flush_all();
+ preempt_disable();
+ if (this_cpu_has(X86_FEATURE_PCID))
+ kaiser_flush_tlb_on_return_to_user();
+ native_write_cr3(native_read_cr3());
+ preempt_enable();
}
static inline void __native_flush_tlb_global_irq_disabled(void)
@@ -166,9 +182,13 @@ static inline void __native_flush_tlb_gl
static inline void __native_flush_tlb_global(void)
{
+#ifdef CONFIG_KAISER
+ /* Globals are not used at all */
+ __native_flush_tlb();
+#else
unsigned long flags;
- if (static_cpu_has(X86_FEATURE_INVPCID)) {
+ if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
@@ -185,10 +205,9 @@ static inline void __native_flush_tlb_gl
* be called from deep inside debugging code.)
*/
raw_local_irq_save(flags);
-
__native_flush_tlb_global_irq_disabled();
-
raw_local_irq_restore(flags);
+#endif
}
static inline void __native_flush_tlb_single(unsigned long addr)
@@ -199,9 +218,12 @@ static inline void __native_flush_tlb_si
*
* The ASIDs used below are hard-coded. But, we must not
* call invpcid(type=1/2) before CR4.PCIDE=1. Just call
- * invpcid in the case we are called early.
+ * invlpg in the case we are called early.
*/
+
if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+ if (this_cpu_has(X86_FEATURE_PCID))
+ kaiser_flush_tlb_on_return_to_user();
asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
return;
}
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -321,33 +321,12 @@ static __always_inline void setup_smap(s
}
}
-/*
- * These can have bit 63 set, so we can not just use a plain "or"
- * instruction to get their value or'd into CR3. It would take
- * another register. So, we use a memory reference to these
- * instead.
- *
- * This is also handy because systems that do not support
- * PCIDs just end up or'ing a 0 into their CR3, which does
- * no harm.
- */
-__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR = 0;
-__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_USER_VAR = 0;
-
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
if (cpu_has(c, X86_FEATURE_PGE)) {
cr4_set_bits(X86_CR4_PCIDE);
/*
- * These variables are used by the entry/exit
- * code to change PCIDs.
- */
-#ifdef CONFIG_KAISER
- X86_CR3_PCID_KERN_VAR = X86_CR3_PCID_KERN_NOFLUSH;
- X86_CR3_PCID_USER_VAR = X86_CR3_PCID_USER_NOFLUSH;
-#endif
- /*
* INVPCID has two "groups" of types:
* 1/2: Invalidate an individual address
* 3/4: Invalidate all contexts
@@ -372,6 +351,7 @@ static void setup_pcid(struct cpuinfo_x8
clear_cpu_cap(c, X86_FEATURE_PCID);
}
}
+ kaiser_setup_pcid();
}
/*
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -12,12 +12,26 @@
#include <linux/ftrace.h>
#include <asm/kaiser.h>
+#include <asm/tlbflush.h> /* to verify its kaiser declarations */
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/desc.h>
+
#ifdef CONFIG_KAISER
+__visible
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
+
+/*
+ * These can have bit 63 set, so we can not just use a plain "or"
+ * instruction to get their value or'd into CR3. It would take
+ * another register. So, we use a memory reference to these instead.
+ *
+ * This is also handy because systems that do not support PCIDs
+ * just end up or'ing a 0 into their CR3, which does no harm.
+ */
+__aligned(PAGE_SIZE) unsigned long X86_CR3_PCID_KERN_VAR;
+DEFINE_PER_CPU(unsigned long, X86_CR3_PCID_USER_VAR);
-__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
/*
* At runtime, the only things we map are some things for CPU
* hotplug, and stacks for new processes. No two CPUs will ever
@@ -239,9 +253,6 @@ static void __init kaiser_init_all_pgds(
WARN_ON(__ret); \
} while (0)
-extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
-extern unsigned long X86_CR3_PCID_KERN_VAR;
-extern unsigned long X86_CR3_PCID_USER_VAR;
/*
* If anything in here fails, we will likely die on one of the
* first kernel->user transitions and init will die. But, we
@@ -295,8 +306,6 @@ void __init kaiser_init(void)
kaiser_add_user_map_early(&X86_CR3_PCID_KERN_VAR, PAGE_SIZE,
__PAGE_KERNEL);
- kaiser_add_user_map_early(&X86_CR3_PCID_USER_VAR, PAGE_SIZE,
- __PAGE_KERNEL);
}
/* Add a mapping to the shadow mapping, and synchronize the mappings */
@@ -361,4 +370,33 @@ pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp,
}
return pgd;
}
+
+void kaiser_setup_pcid(void)
+{
+ unsigned long kern_cr3 = 0;
+ unsigned long user_cr3 = KAISER_SHADOW_PGD_OFFSET;
+
+ if (this_cpu_has(X86_FEATURE_PCID)) {
+ kern_cr3 |= X86_CR3_PCID_KERN_NOFLUSH;
+ user_cr3 |= X86_CR3_PCID_USER_NOFLUSH;
+ }
+ /*
+ * These variables are used by the entry/exit
+ * code to change PCID and pgd and TLB flushing.
+ */
+ X86_CR3_PCID_KERN_VAR = kern_cr3;
+ this_cpu_write(X86_CR3_PCID_USER_VAR, user_cr3);
+}
+
+/*
+ * Make a note that this cpu will need to flush USER tlb on return to user.
+ * Caller checks whether this_cpu_has(X86_FEATURE_PCID) before calling:
+ * if cpu does not, then the NOFLUSH bit will never have been set.
+ */
+void kaiser_flush_tlb_on_return_to_user(void)
+{
+ this_cpu_write(X86_CR3_PCID_USER_VAR,
+ X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
+}
+EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -6,13 +6,14 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/cpu.h>
+#include <linux/debugfs.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/cache.h>
#include <asm/apic.h>
#include <asm/uv/uv.h>
-#include <linux/debugfs.h>
+#include <asm/kaiser.h>
/*
* TLB flushing, formerly SMP-only
@@ -38,34 +39,23 @@ static void load_new_mm_cr3(pgd_t *pgdir
{
unsigned long new_mm_cr3 = __pa(pgdir);
- /*
- * KAISER, plus PCIDs needs some extra work here. But,
- * if either of features is not present, we need no
- * PCIDs here and just do a normal, full TLB flush with
- * the write_cr3()
- */
- if (!IS_ENABLED(CONFIG_KAISER) ||
- !cpu_feature_enabled(X86_FEATURE_PCID))
- goto out_set_cr3;
- /*
- * We reuse the same PCID for different tasks, so we must
- * flush all the entires for the PCID out when we change
- * tasks.
- */
- new_mm_cr3 = X86_CR3_PCID_KERN_FLUSH | __pa(pgdir);
-
- /*
- * The flush from load_cr3() may leave old TLB entries
- * for userspace in place. We must flush that context
- * separately. We can theoretically delay doing this
- * until we actually load up the userspace CR3, but
- * that's a bit tricky. We have to have the "need to
- * flush userspace PCID" bit per-cpu and check it in the
- * exit-to-userspace paths.
- */
- invpcid_flush_single_context(X86_CR3_PCID_ASID_USER);
+#ifdef CONFIG_KAISER
+ if (this_cpu_has(X86_FEATURE_PCID)) {
+ /*
+ * We reuse the same PCID for different tasks, so we must
+ * flush all the entries for the PCID out when we change tasks.
+ * Flush KERN below, flush USER when returning to userspace in
+ * kaiser's SWITCH_USER_CR3 (_SWITCH_TO_USER_CR3) macro.
+ *
+ * invpcid_flush_single_context(X86_CR3_PCID_ASID_USER) could
+ * do it here, but can only be used if X86_FEATURE_INVPCID is
+ * available - and many machines support pcid without invpcid.
+ */
+ new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
+ kaiser_flush_tlb_on_return_to_user();
+ }
+#endif /* CONFIG_KAISER */
-out_set_cr3:
/*
* Caution: many callers of this function expect
* that load_cr3() is serializing and orders TLB
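A minimal sketch of what the entry/exit code effectively computes after this
patch (illustration only, using the symbols introduced above; the real work
is done in assembly by OR'ing these memory variables into CR3):

/* Bit 63 of CR3 is the "do not flush" bit, which is why these values
 * live in memory rather than being OR'ed in as immediates. */
static unsigned long sketch_kernel_cr3(unsigned long pgdir_pa)
{
	return pgdir_pa | X86_CR3_PCID_KERN_VAR;	/* kernel PCID, NOFLUSH */
}

static unsigned long sketch_user_cr3(unsigned long pgdir_pa)
{
	/* The per-cpu value already carries KAISER_SHADOW_PGD_OFFSET
	 * (selecting the shadow pgd page), plus USER_NOFLUSH normally,
	 * or USER_FLUSH once kaiser_flush_tlb_on_return_to_user() has
	 * asked for a flush. */
	return pgdir_pa | this_cpu_read(X86_CR3_PCID_USER_VAR);
}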
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: kaiser_remove_mapping() move along the pgd
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Mon, 2 Oct 2017 10:57:24 -0700
Subject: kaiser: kaiser_remove_mapping() move along the pgd
From: Hugh Dickins <hughd(a)google.com>
When removing the bogus comment from kaiser_remove_mapping(),
I really ought to have checked the extent of its bogosity: as
Neel points out, there is nothing to stop unmap_pud_range_nofree()
from continuing beyond the end of a pud (and starting in the wrong
position on the next).
Fix kaiser_remove_mapping() to constrain the extent and advance pgd
pointer correctly: use pgd_addr_end() macro as used throughout base
mm (but don't assume page-rounded start and size in this case).
But this bug was very unlikely to trigger in this backport: any buddy
allocation is contained within a single pud extent, and we are not using
vmapped stacks (and are only mapping one page of stack anyway); so the
only way to hit this bug here would be when freeing a large modified ldt.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -297,11 +297,13 @@ void kaiser_remove_mapping(unsigned long
extern void unmap_pud_range_nofree(pgd_t *pgd,
unsigned long start, unsigned long end);
unsigned long end = start + size;
- unsigned long addr;
+ unsigned long addr, next;
+ pgd_t *pgd;
- for (addr = start; addr < end; addr += PGDIR_SIZE) {
- pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(addr));
- unmap_pud_range_nofree(pgd, addr, end);
+ pgd = native_get_shadow_pgd(pgd_offset_k(start));
+ for (addr = start; addr < end; pgd++, addr = next) {
+ next = pgd_addr_end(addr, end);
+ unmap_pud_range_nofree(pgd, addr, next);
}
}
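For reference, pgd_addr_end() clamps each step to the end of the current
pgd extent; the generic definition (include/asm-generic/pgtable.h of this
era, quoted from memory) is roughly:

#define pgd_addr_end(addr, end)						\
({	unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;	\
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})

The "- 1" comparison keeps the macro safe even when the boundary wraps to
zero, and it does not assume page-rounded start or size, which is what the
loop above relies on.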
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: kaiser_flush_tlb_on_return_to_user() check PCID
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 4 Nov 2017 18:43:06 -0700
Subject: kaiser: kaiser_flush_tlb_on_return_to_user() check PCID
From: Hugh Dickins <hughd(a)google.com>
Let kaiser_flush_tlb_on_return_to_user() do the X86_FEATURE_PCID
check, instead of each caller doing it inline first: nobody needs
to optimize for the noPCID case; it's clearer this way and better
suits later changes. Replace those no-op X86_CR3_PCID_KERN_FLUSH lines
by a BUILD_BUG_ON() in load_new_mm_cr3(), in case something changes.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/tlbflush.h | 4 ++--
arch/x86/mm/kaiser.c | 6 +++---
arch/x86/mm/tlb.c | 8 ++++----
3 files changed, 9 insertions(+), 9 deletions(-)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -157,7 +157,7 @@ static inline void __native_flush_tlb(vo
* back:
*/
preempt_disable();
- if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled)
kaiser_flush_tlb_on_return_to_user();
native_write_cr3(native_read_cr3());
preempt_enable();
@@ -216,7 +216,7 @@ static inline void __native_flush_tlb_si
*/
if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
- if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled)
kaiser_flush_tlb_on_return_to_user();
asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
return;
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -436,12 +436,12 @@ void kaiser_setup_pcid(void)
/*
* Make a note that this cpu will need to flush USER tlb on return to user.
- * Caller checks whether this_cpu_has(X86_FEATURE_PCID) before calling:
- * if cpu does not, then the NOFLUSH bit will never have been set.
+ * If cpu does not have PCID, then the NOFLUSH bit will never have been set.
*/
void kaiser_flush_tlb_on_return_to_user(void)
{
- this_cpu_write(x86_cr3_pcid_user,
+ if (this_cpu_has(X86_FEATURE_PCID))
+ this_cpu_write(x86_cr3_pcid_user,
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -39,7 +39,7 @@ static void load_new_mm_cr3(pgd_t *pgdir
{
unsigned long new_mm_cr3 = __pa(pgdir);
- if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID)) {
+ if (kaiser_enabled) {
/*
* We reuse the same PCID for different tasks, so we must
* flush all the entries for the PCID out when we change tasks.
@@ -50,10 +50,10 @@ static void load_new_mm_cr3(pgd_t *pgdir
* do it here, but can only be used if X86_FEATURE_INVPCID is
* available - and many machines support pcid without invpcid.
*
- * The line below is a no-op: X86_CR3_PCID_KERN_FLUSH is now 0;
- * but keep that line in there in case something changes.
+ * If X86_CR3_PCID_KERN_FLUSH actually added something, then it
+ * would be needed in the write_cr3() below - if PCIDs enabled.
*/
- new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
+ BUILD_BUG_ON(X86_CR3_PCID_KERN_FLUSH);
kaiser_flush_tlb_on_return_to_user();
}
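Since X86_CR3_PCID_KERN_FLUSH is currently 0, the BUILD_BUG_ON() turns the
former runtime no-op into a compile-time guarantee; a minimal illustration
of the pattern (hypothetical stand-in constant):

#include <linux/bug.h>

#define EXAMPLE_FLAG 0UL	/* stand-in for X86_CR3_PCID_KERN_FLUSH */

static void example(void)
{
	BUILD_BUG_ON(EXAMPLE_FLAG);	/* compiles only while this is 0 */
}

If someone later gives the constant a nonzero value, the build breaks here,
forcing the write_cr3() path to be revisited.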
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: KAISER depends on SMP
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-kaiser-depends-on-smp.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Wed, 13 Sep 2017 14:03:10 -0700
Subject: kaiser: KAISER depends on SMP
From: Hugh Dickins <hughd(a)google.com>
It is absurd that KAISER should depend on SMP, but apparently nobody
has tried a UP build before: it breaks on an implicit declaration of
the function 'per_cpu_offset' in arch/x86/mm/kaiser.c.
Now, you would expect that to be trivially fixed up; but looking at
the System.map when that block is #ifdef'ed out of kaiser_init(),
I see that in a UP build __per_cpu_user_mapped_end is precisely at
__per_cpu_user_mapped_start, and the items carefully gathered into
that section for user-mapping on SMP, dispersed elsewhere on UP.
So, some other kind of section assignment will be needed on UP,
but implementing that is not a priority: just make KAISER depend
on SMP for now.
Also inserted a blank line before the option, tidied up the
brief Kconfig help message, and added an "If unsure, Y".
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/Kconfig | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -30,14 +30,16 @@ config SECURITY
model will be used.
If you are unsure how to answer this question, answer N.
+
config KAISER
bool "Remove the kernel mapping in user mode"
default y
- depends on X86_64
- depends on !PARAVIRT
+ depends on X86_64 && SMP && !PARAVIRT
help
- This enforces a strict kernel and user space isolation in order to close
- hardware side channels on kernel address information.
+ This enforces a strict kernel and user space isolation, in order
+ to close hardware side channels on kernel address information.
+
+ If you are unsure how to answer this question, answer Y.
config KAISER_REAL_SWITCH
bool "KAISER: actually switch page tables"
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: fix unlikely error in alloc_ldt_struct()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Mon, 4 Dec 2017 20:13:35 -0800
Subject: kaiser: fix unlikely error in alloc_ldt_struct()
From: Hugh Dickins <hughd(a)google.com>
An error from kaiser_add_mapping() here is not at all likely, but
Eric Biggers rightly points out that __free_ldt_struct() relies on
new_ldt->size being initialized: move that up.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/ldt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -79,11 +79,11 @@ static struct ldt_struct *alloc_ldt_stru
ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
__PAGE_KERNEL);
+ new_ldt->size = size;
if (ret) {
__free_ldt_struct(new_ldt);
return NULL;
}
- new_ldt->size = size;
return new_ldt;
}
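The reliance being fixed: __free_ldt_struct() chooses its free routine
based on new_ldt->size, so size must be valid before the error path runs.
Roughly, from memory of the 4.4-era code:

static void __free_ldt_struct(struct ldt_struct *ldt)
{
	/* size decides whether entries came from vmalloc or a page */
	if (ldt->size * LDT_ENTRY_SIZE > PAGE_SIZE)
		vfree(ldt->entries);
	else
		free_page((unsigned long)ldt->entries);
	kfree(ldt);
}

With size left uninitialized, the error path could pick the wrong free
routine for entries.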
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: fix regs to do_nmi() ifndef CONFIG_KAISER
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Thu, 21 Sep 2017 20:39:56 -0700
Subject: kaiser: fix regs to do_nmi() ifndef CONFIG_KAISER
From: Hugh Dickins <hughd(a)google.com>
pjt has observed that nmi's second (nmi_from_kernel) call to do_nmi()
adjusted the %rdi regs arg, rightly when CONFIG_KAISER, but wrongly
when not CONFIG_KAISER.
Although the minimal change is to add an #ifdef CONFIG_KAISER around
the addq line, that looks cluttered, and I prefer how the first call
to do_nmi() handled it: prepare args in %rdi and %rsi before getting
into the CONFIG_KAISER block, since it does not touch them at all.
And while we're here, move the "#ifdef CONFIG_KAISER" that follows each
call, so that it encloses the "Unconditionally restore CR3" comment:
matching how the "Unconditionally use kernel CR3" comment above is
enclosed.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1297,12 +1297,13 @@ ENTRY(nmi)
movq %rax, %cr3
#endif
call do_nmi
+
+#ifdef CONFIG_KAISER
/*
* Unconditionally restore CR3. I know we return to
* kernel code that needs user CR3, but do we ever return
* to "user mode" where we need the kernel CR3?
*/
-#ifdef CONFIG_KAISER
popq %rax
mov %rax, %cr3
#endif
@@ -1526,6 +1527,8 @@ end_repeat_nmi:
SWAPGS
xorl %ebx, %ebx
1:
+ movq %rsp, %rdi
+ movq $-1, %rsi
#ifdef CONFIG_KAISER
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
@@ -1538,16 +1541,14 @@ end_repeat_nmi:
#endif
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
- movq %rsp, %rdi
- addq $8, %rdi /* point %rdi at ptregs, fixed up for CR3 */
- movq $-1, %rsi
call do_nmi
+
+#ifdef CONFIG_KAISER
/*
* Unconditionally restore CR3. We might be returning to
* kernel code that needs user CR3, like just just before
* a sysret.
*/
-#ifdef CONFIG_KAISER
popq %rax
mov %rax, %cr3
#endif
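For orientation, both call sites marshal the same two arguments for the C
handler (4.4-era declaration, quoted from memory):

/* %rdi -> regs (points at the saved pt_regs), %rsi -> error_code (-1) */
dotraplinkage void do_nmi(struct pt_regs *regs, long error_code);

Preparing %rdi/%rsi before the CONFIG_KAISER block works because the CR3
juggling in that block touches neither register.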
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: fix build and FIXME in alloc_ldt_struct()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 17:09:44 -0700
Subject: kaiser: fix build and FIXME in alloc_ldt_struct()
From: Hugh Dickins <hughd(a)google.com>
Include linux/kaiser.h instead of asm/kaiser.h to build ldt.c without
CONFIG_KAISER. kaiser_add_mapping() does already return an error code,
so fix the FIXME.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/ldt.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -16,9 +16,9 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
+#include <linux/kaiser.h>
#include <asm/ldt.h>
-#include <asm/kaiser.h>
#include <asm/desc.h>
#include <asm/mmu_context.h>
#include <asm/syscalls.h>
@@ -49,7 +49,7 @@ static struct ldt_struct *alloc_ldt_stru
{
struct ldt_struct *new_ldt;
int alloc_size;
- int ret = 0;
+ int ret;
if (size > LDT_ENTRIES)
return NULL;
@@ -77,10 +77,8 @@ static struct ldt_struct *alloc_ldt_stru
return NULL;
}
- // FIXME: make kaiser_add_mapping() return an error code
- // when it fails
- kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
- __PAGE_KERNEL);
+ ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+ __PAGE_KERNEL);
if (ret) {
__free_ldt_struct(new_ldt);
return NULL;
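Including <linux/kaiser.h> builds without CONFIG_KAISER because that header
provides no-op stubs; a sketch of the pattern (assumed shape, matching the
stub style used elsewhere in these patches):

#ifndef CONFIG_KAISER
static inline int kaiser_add_mapping(unsigned long addr, unsigned long size,
				     unsigned long flags)
{
	return 0;	/* nothing to map: report success */
}

static inline void kaiser_remove_mapping(unsigned long start,
					 unsigned long size)
{
}
#endif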
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: ENOMEM if kaiser_pagetable_walk() NULL
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-enomem-if-kaiser_pagetable_walk-null.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 18:48:02 -0700
Subject: kaiser: ENOMEM if kaiser_pagetable_walk() NULL
From: Hugh Dickins <hughd(a)google.com>
kaiser_add_user_map() took no notice when kaiser_pagetable_walk() failed:
return -ENOMEM in that case. Also avoid the walk's might_sleep() when
atomic (though the atomic case is at present unused).
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -99,11 +99,11 @@ static pte_t *kaiser_pagetable_walk(unsi
pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
- might_sleep();
if (is_atomic) {
gfp &= ~GFP_KERNEL;
gfp |= __GFP_HIGH | __GFP_ATOMIC;
- }
+ } else
+ might_sleep();
if (pgd_none(*pgd)) {
WARN_ONCE(1, "All shadow pgds should have been populated");
@@ -160,13 +160,17 @@ int kaiser_add_user_map(const void *__st
unsigned long end_addr = PAGE_ALIGN(start_addr + size);
unsigned long target_address;
- for (;address < end_addr; address += PAGE_SIZE) {
+ for (; address < end_addr; address += PAGE_SIZE) {
target_address = get_pa_from_mapping(address);
if (target_address == -1) {
ret = -EIO;
break;
}
pte = kaiser_pagetable_walk(address, false);
+ if (!pte) {
+ ret = -ENOMEM;
+ break;
+ }
if (pte_none(*pte)) {
set_pte(pte, __pte(flags | target_address));
} else {
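On the atomic path above, the gfp juggling strips the may-sleep reclaim
flags and substitutes access to emergency reserves; restated with comments
(illustration only):

gfp_t gfp = GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO;
if (is_atomic) {
	gfp &= ~GFP_KERNEL;			/* drop the may-sleep bits */
	gfp |= __GFP_HIGH | __GFP_ATOMIC;	/* may dip into reserves */
} else {
	might_sleep();	/* assert sleepability only on the sleeping path */
}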
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: drop is_atomic arg to kaiser_pagetable_walk()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 29 Oct 2017 11:36:19 -0700
Subject: kaiser: drop is_atomic arg to kaiser_pagetable_walk()
From: Hugh Dickins <hughd(a)google.com>
I have not observed a might_sleep() warning from setup_fixmap_gdt()'s
use of kaiser_add_mapping() in our tree (why not?), but like upstream
we have not provided a way for that to pass is_atomic true down to
kaiser_pagetable_walk(), and at startup it's far from a likely source
of trouble: so just delete the walk's is_atomic arg and might_sleep().
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -108,19 +108,13 @@ static inline unsigned long get_pa_from_
*
* Returns a pointer to a PTE on success, or NULL on failure.
*/
-static pte_t *kaiser_pagetable_walk(unsigned long address, bool is_atomic)
+static pte_t *kaiser_pagetable_walk(unsigned long address)
{
pmd_t *pmd;
pud_t *pud;
pgd_t *pgd = native_get_shadow_pgd(pgd_offset_k(address));
gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
- if (is_atomic) {
- gfp &= ~GFP_KERNEL;
- gfp |= __GFP_HIGH | __GFP_ATOMIC;
- } else
- might_sleep();
-
if (pgd_none(*pgd)) {
WARN_ONCE(1, "All shadow pgds should have been populated");
return NULL;
@@ -195,7 +189,7 @@ static int kaiser_add_user_map(const voi
ret = -EIO;
break;
}
- pte = kaiser_pagetable_walk(address, false);
+ pte = kaiser_pagetable_walk(address);
if (!pte) {
ret = -ENOMEM;
break;
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: do not set _PAGE_NX on pgd_none
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-do-not-set-_page_nx-on-pgd_none.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Tue, 5 Sep 2017 12:05:01 -0700
Subject: kaiser: do not set _PAGE_NX on pgd_none
From: Hugh Dickins <hughd(a)google.com>
native_pgd_clear() uses native_set_pgd(), so native_set_pgd() must
avoid setting the _PAGE_NX bit on an otherwise pgd_none() entry:
usually that just generated a warning on exit, but sometimes caused
more mysterious and damaging failures (our production machines
could not complete booting).
The original fix to this just avoided adding _PAGE_NX to
an empty entry; but eventually more problems surfaced with kexec,
and EFI mapping expected to be a problem too. So now instead
change native_set_pgd() to update shadow only if _PAGE_USER:
A few places (kernel/machine_kexec_64.c, platform/efi/efi_64.c for sure)
use set_pgd() to set up a temporary internal virtual address space, with
physical pages remapped at what Kaiser regards as userspace addresses:
Kaiser then assumes a shadow pgd follows, which it will try to corrupt.
This appears to be responsible for the recent kexec and kdump failures;
though it's unclear how those did not manifest as a problem before.
Ah, the shadow pgd will only be assumed to "follow" if the requested
pgd is on an even-numbered page: so I suppose it was going wrong 50%
of the time all along.
What we need is a flag to set_pgd(), to tell it we're dealing with
userspace. Er, isn't that what the pgd's _PAGE_USER bit is saying?
Add a test for that. But we cannot do the same for pgd_clear()
(which may be called to clear corrupted entries - set aside the
question of "corrupt in which pgd?" until later), so there just
rely on pgd_clear() not being called in the problematic cases -
with a WARN_ON_ONCE() which should fire half the time if it is.
But this is getting too big for an inline function: move it into
arch/x86/mm/kaiser.c (which then demands a boot/compressed mod);
and de-void and de-space native_get_shadow/normal_pgd() while here.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/boot/compressed/misc.h | 1
arch/x86/include/asm/pgtable_64.h | 51 +++++++++-----------------------------
arch/x86/mm/kaiser.c | 42 +++++++++++++++++++++++++++++++
3 files changed, 56 insertions(+), 38 deletions(-)
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -9,6 +9,7 @@
*/
#undef CONFIG_PARAVIRT
#undef CONFIG_PARAVIRT_SPINLOCKS
+#undef CONFIG_KAISER
#undef CONFIG_KASAN
#include <linux/linkage.h>
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -107,61 +107,36 @@ static inline void native_pud_clear(pud_
}
#ifdef CONFIG_KAISER
-static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+extern pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd);
+
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
{
- return (pgd_t *)(void*)((unsigned long)(void*)pgdp | (unsigned long)PAGE_SIZE);
+ return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
}
-static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
{
- return (pgd_t *)(void*)((unsigned long)(void*)pgdp & ~(unsigned long)PAGE_SIZE);
+ return (pgd_t *)((unsigned long)pgdp & ~(unsigned long)PAGE_SIZE);
}
#else
-static inline pgd_t * native_get_shadow_pgd(pgd_t *pgdp)
+static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ return pgd;
+}
+static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
{
BUILD_BUG_ON(1);
return NULL;
}
-static inline pgd_t * native_get_normal_pgd(pgd_t *pgdp)
+static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
{
return pgdp;
}
#endif /* CONFIG_KAISER */
-/*
- * Page table pages are page-aligned. The lower half of the top
- * level is used for userspace and the top half for the kernel.
- * This returns true for user pages that need to get copied into
- * both the user and kernel copies of the page tables, and false
- * for kernel pages that should only be in the kernel copy.
- */
-static inline bool is_userspace_pgd(void *__ptr)
-{
- unsigned long ptr = (unsigned long)__ptr;
-
- return ((ptr % PAGE_SIZE) < (PAGE_SIZE / 2));
-}
-
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
{
-#ifdef CONFIG_KAISER
- pteval_t extra_kern_pgd_flags = 0;
- /* Do we need to also populate the shadow pgd? */
- if (is_userspace_pgd(pgdp)) {
- native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
- /*
- * Even if the entry is *mapping* userspace, ensure
- * that userspace can not use it. This way, if we
- * get out to userspace running on the kernel CR3,
- * userspace will crash instead of running.
- */
- extra_kern_pgd_flags = _PAGE_NX;
- }
- pgdp->pgd = pgd.pgd;
- pgdp->pgd |= extra_kern_pgd_flags;
-#else /* CONFIG_KAISER */
- *pgdp = pgd;
-#endif
+ *pgdp = kaiser_set_shadow_pgd(pgdp, pgd);
}
static inline void native_pgd_clear(pgd_t *pgd)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -303,4 +303,46 @@ void kaiser_remove_mapping(unsigned long
unmap_pud_range_nofree(pgd, addr, end);
}
}
+
+/*
+ * Page table pages are page-aligned. The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ * This returns true for user pages that need to get copied into
+ * both the user and kernel copies of the page tables, and false
+ * for kernel pages that should only be in the kernel copy.
+ */
+static inline bool is_userspace_pgd(pgd_t *pgdp)
+{
+ return ((unsigned long)pgdp % PAGE_SIZE) < (PAGE_SIZE / 2);
+}
+
+pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+ /*
+ * Do we need to also populate the shadow pgd? Check _PAGE_USER to
+ * skip cases like kexec and EFI which make temporary low mappings.
+ */
+ if (pgd.pgd & _PAGE_USER) {
+ if (is_userspace_pgd(pgdp)) {
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ /*
+ * Even if the entry is *mapping* userspace, ensure
+ * that userspace can not use it. This way, if we
+ * get out to userspace running on the kernel CR3,
+ * userspace will crash instead of running.
+ */
+ pgd.pgd |= _PAGE_NX;
+ }
+ } else if (!pgd.pgd) {
+ /*
+ * pgd_clear() cannot check _PAGE_USER, and is even used to
+ * clear corrupted pgd entries: so just rely on cases like
+ * kexec and EFI never to be using pgd_clear().
+ */
+ if (!WARN_ON_ONCE((unsigned long)pgdp & PAGE_SIZE) &&
+ is_userspace_pgd(pgdp))
+ native_get_shadow_pgd(pgdp)->pgd = pgd.pgd;
+ }
+ return pgd;
+}
#endif /* CONFIG_KAISER */
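The "even-numbered page" observation in the changelog follows directly from
how the shadow pgd is located (see native_get_shadow_pgd() above):

/* The shadow pgd page is assumed to be the very next page, found by
 * OR'ing PAGE_SIZE into the pointer - only a distinct page when the
 * kernel pgd page is even-numbered. For a temporary pgd (kexec, EFI)
 * that happens to sit on an even page, the shadow write lands on an
 * unrelated page: the corruption described above. */
pgd_t *shadow = (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);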
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: delete KAISER_REAL_SWITCH option
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-delete-kaiser_real_switch-option.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 3 Sep 2017 18:30:43 -0700
Subject: kaiser: delete KAISER_REAL_SWITCH option
From: Hugh Dickins <hughd(a)google.com>
We fail to see what CONFIG_KAISER_REAL_SWITCH is for: it seems to be
left over from early development, and now just obscures tricky parts
of the code. Delete it before adding PCIDs, or nokaiser boot option.
(Or if there is some good reason to keep the option, then it needs
a help text - and a "depends on KAISER", so that all those without
KAISER are not asked the question.)
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 4 ----
arch/x86/include/asm/kaiser.h | 4 ----
security/Kconfig | 4 ----
3 files changed, 12 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1291,9 +1291,7 @@ ENTRY(nmi)
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
-#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
-#endif
movq %rax, %cr3
#endif
call do_nmi
@@ -1534,9 +1532,7 @@ end_repeat_nmi:
/* %rax is saved above, so OK to clobber here */
movq %cr3, %rax
pushq %rax
-#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~KAISER_SHADOW_PGD_OFFSET), %rax
-#endif
movq %rax, %cr3
#endif
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -21,17 +21,13 @@
.macro _SWITCH_TO_KERNEL_CR3 reg
movq %cr3, \reg
-#ifdef CONFIG_KAISER_REAL_SWITCH
andq $(~KAISER_SHADOW_PGD_OFFSET), \reg
-#endif
movq \reg, %cr3
.endm
.macro _SWITCH_TO_USER_CR3 reg
movq %cr3, \reg
-#ifdef CONFIG_KAISER_REAL_SWITCH
orq $(KAISER_SHADOW_PGD_OFFSET), \reg
-#endif
movq \reg, %cr3
.endm
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -41,10 +41,6 @@ config KAISER
If you are unsure how to answer this question, answer Y.
-config KAISER_REAL_SWITCH
- bool "KAISER: actually switch page tables"
- default y
-
config SECURITYFS
bool "Enable the securityfs filesystem"
help
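With the option gone, the switch macros are unconditional; their effect in
C terms (a sketch, not code from this tree):

/* KAISER_SHADOW_PGD_OFFSET selects which of the paired pgd pages CR3
 * points at: cleared for the kernel copy, set for the shadow copy. */
static inline void sketch_switch_to_kernel_cr3(void)
{
	native_write_cr3(native_read_cr3() & ~KAISER_SHADOW_PGD_OFFSET);
}

static inline void sketch_switch_to_user_cr3(void)
{
	native_write_cr3(native_read_cr3() | KAISER_SHADOW_PGD_OFFSET);
}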
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: disabled on Xen PV
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-disabled-on-xen-pv.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Jiri Kosina <jkosina(a)suse.cz>
Date: Tue, 2 Jan 2018 14:19:49 +0100
Subject: kaiser: disabled on Xen PV
From: Jiri Kosina <jkosina(a)suse.cz>
Kaiser cannot be used on paravirtualized MMUs, where reading and writing
CR3 is itself paravirtualized. It does not work with KAISER, as the CR3
switch from and to the user space PGD would require mapping the whole
XEN_PV machinery into both page tables.
More importantly, enabling KAISER on Xen PV doesn't make too much sense, as PV
guests use distinct %cr3 values for kernel and user already.
Signed-off-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/kaiser.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -264,6 +264,9 @@ void __init kaiser_check_boottime_disabl
char arg[5];
int ret;
+ if (boot_cpu_has(X86_FEATURE_XENPV))
+ goto silent_disable;
+
ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
if (ret > 0) {
if (!strncmp(arg, "on", 2))
@@ -291,6 +294,8 @@ enable:
disable:
pr_info("Kernel/User page tables isolation: disabled\n");
+
+silent_disable:
kaiser_enabled = 0;
setup_clear_cpu_cap(X86_FEATURE_KAISER);
}
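For orientation, the boot-time override parsed above works like this
(usage sketch; only the "pti=off" spelling is shown):

char arg[5];
int ret = cmdline_find_option(boot_command_line, "pti", arg, sizeof(arg));
if (ret > 0 && !strncmp(arg, "off", 3))
	goto disable;	/* prints the "disabled" message, clears the cap */

Xen PV now jumps straight to silent_disable, before any command-line
parsing.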
Patches currently in stable-queue which might be from jkosina(a)suse.cz are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-disabled-on-xen-pv.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: asm/tlbflush.h handle noPGE at lower level
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sat, 4 Nov 2017 18:23:24 -0700
Subject: kaiser: asm/tlbflush.h handle noPGE at lower level
From: Hugh Dickins <hughd(a)google.com>
I found asm/tlbflush.h too twisty, and think it safer not to avoid
__native_flush_tlb_global_irq_disabled() in the kaiser_enabled case,
but instead let it handle kaiser_enabled along with cr3: it can just
use __native_flush_tlb() for that, no harm in re-disabling preemption.
(This is not the same change as Kirill and Dave have suggested for
upstream, flipping PGE in cr4: that's neat, but needs a cpu_has_pge
check; cr3 is enough for kaiser, and thought to be cheaper than cr4.)
Also delete the X86_FEATURE_INVPCID invpcid_flush_all_nonglobals()
preference from __native_flush_tlb(): unlike the invpcid_flush_all()
preference in __native_flush_tlb_global(), it's not seen in upstream
4.14, and was recently reported to be surprisingly slow.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/tlbflush.h | 27 +++------------------------
1 file changed, 3 insertions(+), 24 deletions(-)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -151,14 +151,6 @@ static inline void kaiser_flush_tlb_on_r
static inline void __native_flush_tlb(void)
{
- if (this_cpu_has(X86_FEATURE_INVPCID)) {
- /*
- * Note, this works with CR4.PCIDE=0 or 1.
- */
- invpcid_flush_all_nonglobals();
- return;
- }
-
/*
* If current->mm == NULL then we borrow a mm which may change during a
* task switch and therefore we must not be preempted while we write CR3
@@ -182,11 +174,8 @@ static inline void __native_flush_tlb_gl
/* restore PGE as it was before */
native_write_cr4(cr4);
} else {
- /*
- * x86_64 microcode update comes this way when CR4.PGE is not
- * enabled, and it's safer for all callers to allow this case.
- */
- native_write_cr3(native_read_cr3());
+ /* do it with cr3, letting kaiser flush user PCID */
+ __native_flush_tlb();
}
}
@@ -194,12 +183,6 @@ static inline void __native_flush_tlb_gl
{
unsigned long flags;
- if (kaiser_enabled) {
- /* Globals are not used at all */
- __native_flush_tlb();
- return;
- }
-
if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
@@ -255,11 +238,7 @@ static inline void __native_flush_tlb_si
static inline void __flush_tlb_all(void)
{
- if (cpu_has_pge)
- __flush_tlb_global();
- else
- __flush_tlb();
-
+ __flush_tlb_global();
/*
* Note: if we somehow had PCID but not PGE, then this wouldn't work --
* we'd end up flushing kernel translations for the current ASID but
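For readers tracking the logic: the two flush primitives above differ only in whether global (PGE) entries are touched, and the patch leans on that. A minimal sketch of the mechanics (kernel-style C, using the 4.4-era <asm/special_insns.h> helpers; illustrative only, not compilable outside a kernel tree):
/*
 * Writing CR3 back to itself flushes all non-global TLB entries.
 * Toggling CR4.PGE off and on flushes global entries as well; and
 * when PGE was never set, the CR3 write alone already flushes
 * everything, which is why kaiser can stay on the CR3 path.
 */
static inline void flush_nonglobal_sketch(void)
{
	preempt_disable();	/* CR3 must not change under us */
	native_write_cr3(native_read_cr3());
	preempt_enable();
}
static inline void flush_all_sketch(unsigned long cr4)
{
	if (cr4 & X86_CR4_PGE) {
		native_write_cr4(cr4 & ~X86_CR4_PGE);	/* drops globals */
		native_write_cr4(cr4);			/* restores PGE */
	} else {
		native_write_cr3(native_read_cr3());	/* no globals to drop */
	}
}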
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: add "nokaiser" boot option, using ALTERNATIVE
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-add-nokaiser-boot-option-using-alternative.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Sun, 24 Sep 2017 16:59:49 -0700
Subject: kaiser: add "nokaiser" boot option, using ALTERNATIVE
From: Hugh Dickins <hughd(a)google.com>
Added "nokaiser" boot option: an early param like "noinvpcid".
Most places now check int kaiser_enabled (#defined 0 when not
CONFIG_KAISER) instead of #ifdef CONFIG_KAISER; but entry_64.S
and entry_64_compat.S are using the ALTERNATIVE technique, which
patches in the preferred instructions at runtime. That technique
is tied to x86 cpu features, so X86_FEATURE_KAISER is fabricated.
Prior to "nokaiser", Kaiser #defined _PAGE_GLOBAL 0: revert that,
but be careful with both _PAGE_GLOBAL and CR4.PGE: setting them when
nokaiser like when !CONFIG_KAISER, but not setting either when kaiser -
neither matters on its own, but it's hard to be sure that _PAGE_GLOBAL
won't get set in some obscure corner, or something might add PGE into CR4.
By omitting _PAGE_GLOBAL from __supported_pte_mask when kaiser_enabled,
all page table setup which uses pte_pfn() masks it out of the ptes.
It's slightly shameful that the same declaration versus definition of
kaiser_enabled appears in not one, not two, but in three header files
(asm/kaiser.h, asm/pgtable.h, asm/tlbflush.h). I felt safer that way,
than with #including any of those in any of the others; and did not
feel it worth an asm/kaiser_enabled.h - kernel/cpu/common.c includes
them all, so we shall hear about it if they get out of synch.
Cleanups while in the area: removed the silly #ifdef CONFIG_KAISER
from kaiser.c; removed the unused native_get_normal_pgd(); removed
the spurious reg clutter from SWITCH_*_CR3 macro stubs; corrected some
comments. But more interestingly, set CR4.PSE in secondary_startup_64:
the manual is clear that it does not matter whether it's 0 or 1 when
4-level-pts are enabled, but I was distracted to find cr4 different on
BSP and auxiliaries - BSP alone was adding PSE, in probe_page_size_mask().
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/kernel-parameters.txt | 2 +
arch/x86/entry/entry_64.S | 15 +++++++------
arch/x86/include/asm/cpufeature.h | 3 ++
arch/x86/include/asm/kaiser.h | 27 +++++++++++++++++-------
arch/x86/include/asm/pgtable.h | 20 ++++++++++++-----
arch/x86/include/asm/pgtable_64.h | 13 +++--------
arch/x86/include/asm/pgtable_types.h | 4 ---
arch/x86/include/asm/tlbflush.h | 39 ++++++++++++++++++++++-------------
arch/x86/kernel/cpu/common.c | 28 ++++++++++++++++++++++++-
arch/x86/kernel/espfix_64.c | 3 +-
arch/x86/kernel/head_64.S | 4 +--
arch/x86/mm/init.c | 2 -
arch/x86/mm/init_64.c | 10 ++++++++
arch/x86/mm/kaiser.c | 26 +++++++++++++++++++----
arch/x86/mm/pgtable.c | 8 +------
arch/x86/mm/tlb.c | 4 ---
16 files changed, 143 insertions(+), 65 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2523,6 +2523,8 @@ bytes respectively. Such letter suffixes
nojitter [IA-64] Disables jitter checking for ITC timers.
+ nokaiser [X86-64] Disable KAISER isolation of kernel from user.
+
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
no-kvmapf [X86,KVM] Disable paravirtualized asynchronous page
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1051,7 +1051,7 @@ ENTRY(paranoid_entry)
* unconditionally, but we need to find out whether the reverse
* should be done on return (conveyed to paranoid_exit in %ebx).
*/
- movq %cr3, %rax
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
testl $KAISER_SHADOW_PGD_OFFSET, %eax
jz 2f
orl $2, %ebx
@@ -1083,6 +1083,7 @@ ENTRY(paranoid_exit)
TRACE_IRQS_OFF_DEBUG
TRACE_IRQS_IRETQ_DEBUG
#ifdef CONFIG_KAISER
+ /* No ALTERNATIVE for X86_FEATURE_KAISER: paranoid_entry sets %ebx */
testl $2, %ebx /* SWITCH_USER_CR3 needed? */
jz paranoid_exit_no_switch
SWITCH_USER_CR3
@@ -1315,13 +1316,14 @@ ENTRY(nmi)
#ifdef CONFIG_KAISER
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
- movq %cr3, %rax
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
movq %rax, %cr3
+2:
#endif
call do_nmi
@@ -1331,8 +1333,7 @@ ENTRY(nmi)
* kernel code that needs user CR3, but do we ever return
* to "user mode" where we need the kernel CR3?
*/
- popq %rax
- mov %rax, %cr3
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
#endif
/*
@@ -1559,13 +1560,14 @@ end_repeat_nmi:
#ifdef CONFIG_KAISER
/* Unconditionally use kernel CR3 for do_nmi() */
/* %rax is saved above, so OK to clobber here */
- movq %cr3, %rax
+ ALTERNATIVE "jmp 2f", "movq %cr3, %rax", X86_FEATURE_KAISER
/* If PCID enabled, NOFLUSH now and NOFLUSH on return */
orq x86_cr3_pcid_noflush, %rax
pushq %rax
/* mask off "user" bit of pgd address and 12 PCID bits: */
andq $(~(X86_CR3_PCID_ASID_MASK | KAISER_SHADOW_PGD_OFFSET)), %rax
movq %rax, %cr3
+2:
#endif
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
@@ -1577,8 +1579,7 @@ end_repeat_nmi:
* kernel code that needs user CR3, like just just before
* a sysret.
*/
- popq %rax
- mov %rax, %cr3
+ ALTERNATIVE "", "popq %rax; movq %rax, %cr3", X86_FEATURE_KAISER
#endif
testl %ebx, %ebx /* swapgs needed? */
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -200,6 +200,9 @@
#define X86_FEATURE_HWP_PKG_REQ ( 7*32+14) /* Intel HWP_PKG_REQ */
#define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */
+/* Because the ALTERNATIVE scheme is for members of the X86_FEATURE club... */
+#define X86_FEATURE_KAISER ( 7*32+31) /* CONFIG_KAISER w/o nokaiser */
+
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -46,28 +46,33 @@ movq \reg, %cr3
.endm
.macro SWITCH_KERNEL_CR3
-pushq %rax
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
_SWITCH_TO_KERNEL_CR3 %rax
popq %rax
+8:
.endm
.macro SWITCH_USER_CR3
-pushq %rax
+ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
_SWITCH_TO_USER_CR3 %rax %al
popq %rax
+8:
.endm
.macro SWITCH_KERNEL_CR3_NO_STACK
-movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)
+ALTERNATIVE "jmp 8f", \
+ __stringify(movq %rax, PER_CPU_VAR(unsafe_stack_register_backup)), \
+ X86_FEATURE_KAISER
_SWITCH_TO_KERNEL_CR3 %rax
movq PER_CPU_VAR(unsafe_stack_register_backup), %rax
+8:
.endm
#else /* CONFIG_KAISER */
-.macro SWITCH_KERNEL_CR3 reg
+.macro SWITCH_KERNEL_CR3
.endm
-.macro SWITCH_USER_CR3 reg regb
+.macro SWITCH_USER_CR3
.endm
.macro SWITCH_KERNEL_CR3_NO_STACK
.endm
@@ -90,6 +95,16 @@ DECLARE_PER_CPU(unsigned long, x86_cr3_p
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+extern int kaiser_enabled;
+#else
+#define kaiser_enabled 0
+#endif /* CONFIG_KAISER */
+
+/*
+ * Kaiser function prototypes are needed even when CONFIG_KAISER is not set,
+ * so as to build with tests on kaiser_enabled instead of #ifdefs.
+ */
+
/**
* kaiser_add_mapping - map a virtual memory part to the shadow (user) mapping
* @addr: the start address of the range
@@ -119,8 +134,6 @@ extern void kaiser_remove_mapping(unsign
*/
extern void kaiser_init(void);
-#endif /* CONFIG_KAISER */
-
#endif /* __ASSEMBLY */
#endif /* _ASM_X86_KAISER_H */
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -18,6 +18,12 @@
#ifndef __ASSEMBLY__
#include <asm/x86_init.h>
+#ifdef CONFIG_KAISER
+extern int kaiser_enabled;
+#else
+#define kaiser_enabled 0
+#endif
+
void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd);
void ptdump_walk_pgd_level_checkwx(void);
@@ -660,7 +666,7 @@ static inline int pgd_bad(pgd_t pgd)
* page table by accident; it will fault on the first
* instruction it tries to run. See native_set_pgd().
*/
- if (IS_ENABLED(CONFIG_KAISER))
+ if (kaiser_enabled)
ignore_flags |= _PAGE_NX;
return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
@@ -865,12 +871,14 @@ static inline void pmdp_set_wrprotect(st
*/
static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
{
- memcpy(dst, src, count * sizeof(pgd_t));
+ memcpy(dst, src, count * sizeof(pgd_t));
#ifdef CONFIG_KAISER
- /* Clone the shadow pgd part as well */
- memcpy(native_get_shadow_pgd(dst),
- native_get_shadow_pgd(src),
- count * sizeof(pgd_t));
+ if (kaiser_enabled) {
+ /* Clone the shadow pgd part as well */
+ memcpy(native_get_shadow_pgd(dst),
+ native_get_shadow_pgd(src),
+ count * sizeof(pgd_t));
+ }
#endif
}
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -111,13 +111,12 @@ extern pgd_t kaiser_set_shadow_pgd(pgd_t
static inline pgd_t *native_get_shadow_pgd(pgd_t *pgdp)
{
+#ifdef CONFIG_DEBUG_VM
+ /* linux/mmdebug.h may not have been included at this point */
+ BUG_ON(!kaiser_enabled);
+#endif
return (pgd_t *)((unsigned long)pgdp | (unsigned long)PAGE_SIZE);
}
-
-static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
-{
- return (pgd_t *)((unsigned long)pgdp & ~(unsigned long)PAGE_SIZE);
-}
#else
static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
{
@@ -128,10 +127,6 @@ static inline pgd_t *native_get_shadow_p
BUILD_BUG_ON(1);
return NULL;
}
-static inline pgd_t *native_get_normal_pgd(pgd_t *pgdp)
-{
- return pgdp;
-}
#endif /* CONFIG_KAISER */
static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -39,11 +39,7 @@
#define _PAGE_ACCESSED (_AT(pteval_t, 1) << _PAGE_BIT_ACCESSED)
#define _PAGE_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_DIRTY)
#define _PAGE_PSE (_AT(pteval_t, 1) << _PAGE_BIT_PSE)
-#ifdef CONFIG_KAISER
-#define _PAGE_GLOBAL (_AT(pteval_t, 0))
-#else
#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
-#endif
#define _PAGE_SOFTW1 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
#define _PAGE_SOFTW2 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW2)
#define _PAGE_PAT (_AT(pteval_t, 1) << _PAGE_BIT_PAT)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -136,9 +136,11 @@ static inline void cr4_set_bits_and_upda
* to avoid the need for asm/kaiser.h in unexpected places.
*/
#ifdef CONFIG_KAISER
+extern int kaiser_enabled;
extern void kaiser_setup_pcid(void);
extern void kaiser_flush_tlb_on_return_to_user(void);
#else
+#define kaiser_enabled 0
static inline void kaiser_setup_pcid(void)
{
}
@@ -163,7 +165,7 @@ static inline void __native_flush_tlb(vo
* back:
*/
preempt_disable();
- if (this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
kaiser_flush_tlb_on_return_to_user();
native_write_cr3(native_read_cr3());
preempt_enable();
@@ -174,20 +176,30 @@ static inline void __native_flush_tlb_gl
unsigned long cr4;
cr4 = this_cpu_read(cpu_tlbstate.cr4);
- /* clear PGE */
- native_write_cr4(cr4 & ~X86_CR4_PGE);
- /* write old PGE again and flush TLBs */
- native_write_cr4(cr4);
+ if (cr4 & X86_CR4_PGE) {
+ /* clear PGE and flush TLB of all entries */
+ native_write_cr4(cr4 & ~X86_CR4_PGE);
+ /* restore PGE as it was before */
+ native_write_cr4(cr4);
+ } else {
+ /*
+ * x86_64 microcode update comes this way when CR4.PGE is not
+ * enabled, and it's safer for all callers to allow this case.
+ */
+ native_write_cr3(native_read_cr3());
+ }
}
static inline void __native_flush_tlb_global(void)
{
-#ifdef CONFIG_KAISER
- /* Globals are not used at all */
- __native_flush_tlb();
-#else
unsigned long flags;
+ if (kaiser_enabled) {
+ /* Globals are not used at all */
+ __native_flush_tlb();
+ return;
+ }
+
if (this_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
@@ -207,7 +219,6 @@ static inline void __native_flush_tlb_gl
raw_local_irq_save(flags);
__native_flush_tlb_global_irq_disabled();
raw_local_irq_restore(flags);
-#endif
}
static inline void __native_flush_tlb_single(unsigned long addr)
@@ -222,7 +233,7 @@ static inline void __native_flush_tlb_si
*/
if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
- if (this_cpu_has(X86_FEATURE_PCID))
+ if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
kaiser_flush_tlb_on_return_to_user();
asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
return;
@@ -237,9 +248,9 @@ static inline void __native_flush_tlb_si
* Make sure to do only a single invpcid when KAISER is
* disabled and we have only a single ASID.
*/
- if (X86_CR3_PCID_ASID_KERN != X86_CR3_PCID_ASID_USER)
- invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
- invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
+ if (kaiser_enabled)
+ invpcid_flush_one(X86_CR3_PCID_ASID_USER, addr);
+ invpcid_flush_one(X86_CR3_PCID_ASID_KERN, addr);
}
static inline void __flush_tlb_all(void)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -178,6 +178,20 @@ static int __init x86_pcid_setup(char *s
return 1;
}
__setup("nopcid", x86_pcid_setup);
+
+static int __init x86_nokaiser_setup(char *s)
+{
+ /* nokaiser doesn't accept parameters */
+ if (s)
+ return -EINVAL;
+#ifdef CONFIG_KAISER
+ kaiser_enabled = 0;
+ setup_clear_cpu_cap(X86_FEATURE_KAISER);
+ pr_info("nokaiser: KAISER feature disabled\n");
+#endif
+ return 0;
+}
+early_param("nokaiser", x86_nokaiser_setup);
#endif
static int __init x86_noinvpcid_setup(char *s)
@@ -324,7 +338,7 @@ static __always_inline void setup_smap(s
static void setup_pcid(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PCID)) {
- if (cpu_has(c, X86_FEATURE_PGE)) {
+ if (cpu_has(c, X86_FEATURE_PGE) || kaiser_enabled) {
cr4_set_bits(X86_CR4_PCIDE);
/*
* INVPCID has two "groups" of types:
@@ -747,6 +761,10 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
c->x86_power = cpuid_edx(0x80000007);
init_scattered_cpuid_features(c);
+#ifdef CONFIG_KAISER
+ if (kaiser_enabled)
+ set_cpu_cap(c, X86_FEATURE_KAISER);
+#endif
}
static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
@@ -1406,6 +1424,14 @@ void cpu_init(void)
* try to read it.
*/
cr4_init_shadow();
+ if (!kaiser_enabled) {
+ /*
+ * secondary_startup_64() deferred setting PGE in cr4:
+ * probe_page_size_mask() sets it on the boot cpu,
+ * but it needs to be set on each secondary cpu.
+ */
+ cr4_set_bits(X86_CR4_PGE);
+ }
/*
* Load microcode on this cpu if a valid microcode is available.
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -132,9 +132,10 @@ void __init init_espfix_bsp(void)
* area to ensure it is mapped into the shadow user page
* tables.
*/
- if (IS_ENABLED(CONFIG_KAISER))
+ if (kaiser_enabled) {
set_pgd(native_get_shadow_pgd(pgd_p),
__pgd(_KERNPG_TABLE | __pa((pud_t *)espfix_pud_page)));
+ }
/* Randomize the locations */
init_espfix_random();
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -183,8 +183,8 @@ ENTRY(secondary_startup_64)
movq $(init_level4_pgt - __START_KERNEL_map), %rax
1:
- /* Enable PAE mode and PGE */
- movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx
+ /* Enable PAE and PSE, but defer PGE until kaiser_enabled is decided */
+ movl $(X86_CR4_PAE | X86_CR4_PSE), %ecx
movq %rcx, %cr4
/* Setup early boot stage 4 level pagetables. */
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -165,7 +165,7 @@ static void __init probe_page_size_mask(
cr4_set_bits_and_update_boot(X86_CR4_PSE);
/* Enable PGE if available */
- if (cpu_has_pge) {
+ if (cpu_has_pge && !kaiser_enabled) {
cr4_set_bits_and_update_boot(X86_CR4_PGE);
__supported_pte_mask |= _PAGE_GLOBAL;
} else
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -395,6 +395,16 @@ void __init cleanup_highmap(void)
continue;
if (vaddr < (unsigned long) _text || vaddr > end)
set_pmd(pmd, __pmd(0));
+ else if (kaiser_enabled) {
+ /*
+ * level2_kernel_pgt is initialized with _PAGE_GLOBAL:
+ * clear that now. This is not important, so long as
+ * CR4.PGE remains clear, but it removes an anomaly.
+ * Physical mapping setup below avoids _PAGE_GLOBAL
+ * by use of massage_pgprot() inside pfn_pte() etc.
+ */
+ set_pmd(pmd, pmd_clear_flags(*pmd, _PAGE_GLOBAL));
+ }
}
}
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -17,7 +17,9 @@
#include <asm/pgalloc.h>
#include <asm/desc.h>
-#ifdef CONFIG_KAISER
+int kaiser_enabled __read_mostly = 1;
+EXPORT_SYMBOL(kaiser_enabled); /* for inlined TLB flush functions */
+
__visible
DEFINE_PER_CPU_USER_MAPPED(unsigned long, unsafe_stack_register_backup);
@@ -168,8 +170,8 @@ static pte_t *kaiser_pagetable_walk(unsi
return pte_offset_kernel(pmd, address);
}
-int kaiser_add_user_map(const void *__start_addr, unsigned long size,
- unsigned long flags)
+static int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+ unsigned long flags)
{
int ret = 0;
pte_t *pte;
@@ -178,6 +180,15 @@ int kaiser_add_user_map(const void *__st
unsigned long end_addr = PAGE_ALIGN(start_addr + size);
unsigned long target_address;
+ /*
+ * It is convenient for callers to pass in __PAGE_KERNEL etc,
+ * and there is no actual harm from setting _PAGE_GLOBAL, so
+ * long as CR4.PGE is not set. But it is nonetheless troubling
+ * to see Kaiser itself setting _PAGE_GLOBAL (now that "nokaiser"
+ * requires that not to be #defined to 0): so mask it off here.
+ */
+ flags &= ~_PAGE_GLOBAL;
+
for (; address < end_addr; address += PAGE_SIZE) {
target_address = get_pa_from_mapping(address);
if (target_address == -1) {
@@ -264,6 +275,8 @@ void __init kaiser_init(void)
{
int cpu;
+ if (!kaiser_enabled)
+ return;
kaiser_init_all_pgds();
for_each_possible_cpu(cpu) {
@@ -312,6 +325,8 @@ void __init kaiser_init(void)
/* Add a mapping to the shadow mapping, and synchronize the mappings */
int kaiser_add_mapping(unsigned long addr, unsigned long size, unsigned long flags)
{
+ if (!kaiser_enabled)
+ return 0;
return kaiser_add_user_map((const void *)addr, size, flags);
}
@@ -323,6 +338,8 @@ void kaiser_remove_mapping(unsigned long
unsigned long addr, next;
pgd_t *pgd;
+ if (!kaiser_enabled)
+ return;
pgd = native_get_shadow_pgd(pgd_offset_k(start));
for (addr = start; addr < end; pgd++, addr = next) {
next = pgd_addr_end(addr, end);
@@ -344,6 +361,8 @@ static inline bool is_userspace_pgd(pgd_
pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
{
+ if (!kaiser_enabled)
+ return pgd;
/*
* Do we need to also populate the shadow pgd? Check _PAGE_USER to
* skip cases like kexec and EFI which make temporary low mappings.
@@ -400,4 +419,3 @@ void kaiser_flush_tlb_on_return_to_user(
X86_CR3_PCID_USER_FLUSH | KAISER_SHADOW_PGD_OFFSET);
}
EXPORT_SYMBOL(kaiser_flush_tlb_on_return_to_user);
-#endif /* CONFIG_KAISER */
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -341,16 +341,12 @@ static inline void _pgd_free(pgd_t *pgd)
}
#else
-#ifdef CONFIG_KAISER
/*
- * Instead of one pmd, we aquire two pmds. Being order-1, it is
+ * Instead of one pgd, Kaiser acquires two pgds. Being order-1, it is
* both 8k in size and 8k-aligned. That lets us just flip bit 12
* in a pointer to swap between the two 4k halves.
*/
-#define PGD_ALLOCATION_ORDER 1
-#else
-#define PGD_ALLOCATION_ORDER 0
-#endif
+#define PGD_ALLOCATION_ORDER kaiser_enabled
static inline pgd_t *_pgd_alloc(void)
{
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -39,8 +39,7 @@ static void load_new_mm_cr3(pgd_t *pgdir
{
unsigned long new_mm_cr3 = __pa(pgdir);
-#ifdef CONFIG_KAISER
- if (this_cpu_has(X86_FEATURE_PCID)) {
+ if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID)) {
/*
* We reuse the same PCID for different tasks, so we must
* flush all the entries for the PCID out when we change tasks.
@@ -57,7 +56,6 @@ static void load_new_mm_cr3(pgd_t *pgdir
new_mm_cr3 |= X86_CR3_PCID_KERN_FLUSH;
kaiser_flush_tlb_on_return_to_user();
}
-#endif /* CONFIG_KAISER */
/*
* Caution: many callers of this function expect
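The ALTERNATIVE lines in the diff are the whole trick: apply_alternatives() rewrites the instruction stream once at boot, so a nokaiser kernel pays for a single patched-out jump instead of a runtime branch on every kernel entry and exit. The pattern, extracted from the SWITCH_KERNEL_CR3 hunk above (x86 assembly; the first operand runs when X86_FEATURE_KAISER is clear, the second is patched in when it is set):
	ALTERNATIVE "jmp 8f", "pushq %rax", X86_FEATURE_KAISER
	_SWITCH_TO_KERNEL_CR3 %rax	/* kaiser-only work, skipped by the jmp */
	popq %rax
8:
So when "nokaiser" clears the synthetic feature bit, the jump to the local label 8: skips the CR3 switch entirely, at the cost of one unconditional jmp.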
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
This is a note to let you know that I've just added the patch titled
kaiser: _pgd_alloc() without __GFP_REPEAT to avoid stalls
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Wed Jan 3 18:58:12 CET 2018
From: Hugh Dickins <hughd(a)google.com>
Date: Fri, 13 Oct 2017 12:10:00 -0700
Subject: kaiser: _pgd_alloc() without __GFP_REPEAT to avoid stalls
From: Hugh Dickins <hughd(a)google.com>
Synthetic filesystem mempressure testing has shown softlockups, with
hour-long page allocation stalls, and pgd_alloc() trying for order:1
with __GFP_REPEAT in one of the backtraces each time.
That's _pgd_alloc() going for a Kaiser double-pgd, using the __GFP_REPEAT
common to all page table allocations, but actually having no effect on
order:0 (see should_alloc_oom() and should_continue_reclaim() in this
tree, but beware that ports to another tree might behave differently).
Order:1 stack allocation has been working satisfactorily without
__GFP_REPEAT forever, and page table allocation only asks __GFP_REPEAT
for awkward occasions in a long-running process: it's not appropriate
at fork or exec time, and seems to be doing much more harm than good:
getting those contiguous pages under very heavy mempressure can be
hard (though even without it, Kaiser does generate more mempressure).
Mask out that __GFP_REPEAT inside _pgd_alloc(). Why not take it out
of the PGALLOC_GFP altogether, as v4.7 commit a3a9a59d2067 ("x86: get
rid of superfluous __GFP_REPEAT") did? Because I think that might
make a difference to our page table memcg charging, which I'd prefer
not to interfere with at this time.
hughd adds: __alloc_pages_slowpath() in the 4.4.89-stable tree handles
__GFP_REPEAT a little differently than in prod kernel or 3.18.72-stable,
so it may not always be exactly a no-op on order:0 pages, as said above;
but I think still appropriate to omit it from Kaiser or non-Kaiser pgd.
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/mm/pgtable.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -6,7 +6,7 @@
#include <asm/fixmap.h>
#include <asm/mtrr.h>
-#define PGALLOC_GFP GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO
+#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
#ifdef CONFIG_HIGHPTE
#define PGALLOC_USER_GFP __GFP_HIGHMEM
@@ -354,7 +354,9 @@ static inline void _pgd_free(pgd_t *pgd)
static inline pgd_t *_pgd_alloc(void)
{
- return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
+ /* No __GFP_REPEAT: to avoid page allocation stalls in order-1 case */
+ return (pgd_t *)__get_free_pages(PGALLOC_GFP & ~__GFP_REPEAT,
+ PGD_ALLOCATION_ORDER);
}
static inline void _pgd_free(pgd_t *pgd)
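The order:0 versus order:1 distinction that drives this is just pages per allocation: __get_free_pages(gfp, order) requests 2^order physically contiguous pages, so the Kaiser double pgd needs a contiguous 8k pair where a plain pgd needs one 4k page. A hedged sketch of the resulting allocator (kernel-style C, reusing the PGALLOC_GFP and PGD_ALLOCATION_ORDER definitions from the diffs above; the helper name is illustrative):
static pgd_t *pgd_alloc_sketch(void)
{
	/*
	 * PGD_ALLOCATION_ORDER is kaiser_enabled in this tree: order 1
	 * (two contiguous pages) with Kaiser, order 0 without.  Masking
	 * out __GFP_REPEAT lets reclaim fail fast instead of stalling.
	 */
	return (pgd_t *)__get_free_pages(PGALLOC_GFP & ~__GFP_REPEAT,
					 PGD_ALLOCATION_ORDER);
}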
Patches currently in stable-queue which might be from hughd(a)google.com are
queue-4.4/kaiser-vmstat-show-nr_kaisertable-as-nr_overhead.patch
queue-4.4/kaiser-add-nokaiser-boot-option-using-alternative.patch
queue-4.4/kaiser-fix-unlikely-error-in-alloc_ldt_struct.patch
queue-4.4/kaiser-_pgd_alloc-without-__gfp_repeat-to-avoid-stalls.patch
queue-4.4/kaiser-kaiser_flush_tlb_on_return_to_user-check-pcid.patch
queue-4.4/x86-paravirt-dont-patch-flush_tlb_single.patch
queue-4.4/kaiser-merged-update.patch
queue-4.4/kaiser-delete-kaiser_real_switch-option.patch
queue-4.4/kaiser-kaiser_remove_mapping-move-along-the-pgd.patch
queue-4.4/kaiser-fix-perf-crashes.patch
queue-4.4/kaiser-drop-is_atomic-arg-to-kaiser_pagetable_walk.patch
queue-4.4/kaiser-load_new_mm_cr3-let-switch_user_cr3-flush-user.patch
queue-4.4/kaiser-enhanced-by-kernel-and-user-pcids.patch
queue-4.4/kaiser-x86_cr3_pcid_noflush-and-x86_cr3_pcid_user.patch
queue-4.4/kaiser-use-alternative-instead-of-x86_cr3_pcid_noflush.patch
queue-4.4/kaiser-stack-map-page_size-at-thread_size-page_size.patch
queue-4.4/kaiser-name-that-0x1000-kaiser_shadow_pgd_offset.patch
queue-4.4/kaiser-fix-regs-to-do_nmi-ifndef-config_kaiser.patch
queue-4.4/kaiser-do-not-set-_page_nx-on-pgd_none.patch
queue-4.4/kaiser-tidied-up-asm-kaiser.h-somewhat.patch
queue-4.4/kaiser-cleanups-while-trying-for-gold-link.patch
queue-4.4/kaiser-tidied-up-kaiser_add-remove_mapping-slightly.patch
queue-4.4/kaiser-fix-build-and-fixme-in-alloc_ldt_struct.patch
queue-4.4/kaiser-kernel-address-isolation.patch
queue-4.4/kaiser-enomem-if-kaiser_pagetable_walk-null.patch
queue-4.4/kaiser-asm-tlbflush.h-handle-nopge-at-lower-level.patch
queue-4.4/kaiser-paranoid_entry-pass-cr3-need-to-paranoid_exit.patch
queue-4.4/kaiser-kaiser-depends-on-smp.patch
queue-4.4/kaiser-pcid-0-for-kernel-and-128-for-user.patch
On Wed, Jan 03, 2018 at 03:41:21PM +0000, Mark Brown wrote:
> On Wed, Jan 03, 2018 at 04:32:35PM +0100, Arnd Bergmann wrote:
>
> > It might be a problem with different 'make' versions, or something else
> > in the environment that leads to "-Wno-frame-address" not being set
> > on the command line instead.
>
> I'm using "GNU Make 4.1" - this is all just the default Debian stuff,
> I've not customized my x86 host environment.
Make 4.2.1 here with gcc 7.2.1, and I get tons of build warnings on my
3.18.y tree on my laptop; but on my build machine running Fedora (make
4.2.1 and gcc 7.2.1) I get no build warnings at all.
I've given up trying to track it down for my laptop, something is
preventing the compiler flag from being added to the gcc command line,
and I don't care anymore...
thanks,
greg k-h
Commit-ID: f24c4d478013d82bd1b943df566fff3561d52864
Gitweb: https://git.kernel.org/tip/f24c4d478013d82bd1b943df566fff3561d52864
Author: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
AuthorDate: Tue, 2 Jan 2018 17:21:10 +0000
Committer: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Wed, 3 Jan 2018 13:54:31 +0100
efi/capsule-loader: Reinstate virtual capsule mapping
Commit:
82c3768b8d68 ("efi/capsule-loader: Use a cached copy of the capsule header")
... refactored the capsule loading code that maps the capsule header,
to avoid having to map it several times.
However, as it turns out, the vmap() call we ended up removing did not
just map the header, but the entire capsule image, and dropping this
virtual mapping breaks capsules that are processed by the firmware
immediately (i.e., without a reboot).
Unfortunately, that change was part of a larger refactor that allowed
a quirk to be implemented for Quark, which has a non-standard memory
layout for capsules, and we have slightly painted ourselves into a
corner by allowing quirk code to mangle the capsule header and memory
layout.
So we need to fix this without breaking Quark. Fortunately, Quark does
not appear to care about the virtual mapping, and so we can simply
do a partial revert of commit:
2a457fb31df6 ("efi/capsule-loader: Use page addresses rather than struct page pointers")
... and create a vmap() mapping of the entire capsule (including header)
based on the reinstated struct page array, unless running on Quark, in
which case we pass the capsule header copy as before.
Reported-by: Ge Song <ge.song(a)hxt-semitech.com>
Tested-by: Bryan O'Donoghue <pure.logic(a)nexus-software.ie>
Tested-by: Ge Song <ge.song(a)hxt-semitech.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Cc: <stable(a)vger.kernel.org>
Cc: Dave Young <dyoung(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Matt Fleming <matt(a)codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: linux-efi(a)vger.kernel.org
Fixes: 82c3768b8d68 ("efi/capsule-loader: Use a cached copy of the capsule header")
Link: http://lkml.kernel.org/r/20180102172110.17018-3-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
---
arch/x86/platform/efi/quirks.c | 13 +++++++++-
drivers/firmware/efi/capsule-loader.c | 45 ++++++++++++++++++++++++++++-------
include/linux/efi.h | 4 +++-
3 files changed, 52 insertions(+), 10 deletions(-)
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index 8a99a2e..5b513cc 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -592,7 +592,18 @@ static int qrk_capsule_setup_info(struct capsule_info *cap_info, void **pkbuff,
/*
* Update the first page pointer to skip over the CSH header.
*/
- cap_info->pages[0] += csh->headersize;
+ cap_info->phys[0] += csh->headersize;
+
+ /*
+ * cap_info->capsule should point at a virtual mapping of the entire
+ * capsule, starting at the capsule header. Our image has the Quark
+ * security header prepended, so we cannot rely on the default vmap()
+ * mapping created by the generic capsule code.
+ * Given that the Quark firmware does not appear to care about the
+ * virtual mapping, let's just point cap_info->capsule at our copy
+ * of the capsule header.
+ */
+ cap_info->capsule = &cap_info->header;
return 1;
}
diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
index ec8ac5c..055e2e8 100644
--- a/drivers/firmware/efi/capsule-loader.c
+++ b/drivers/firmware/efi/capsule-loader.c
@@ -20,10 +20,6 @@
#define NO_FURTHER_WRITE_ACTION -1
-#ifndef phys_to_page
-#define phys_to_page(x) pfn_to_page((x) >> PAGE_SHIFT)
-#endif
-
/**
* efi_free_all_buff_pages - free all previous allocated buffer pages
* @cap_info: pointer to current instance of capsule_info structure
@@ -35,7 +31,7 @@
static void efi_free_all_buff_pages(struct capsule_info *cap_info)
{
while (cap_info->index > 0)
- __free_page(phys_to_page(cap_info->pages[--cap_info->index]));
+ __free_page(cap_info->pages[--cap_info->index]);
cap_info->index = NO_FURTHER_WRITE_ACTION;
}
@@ -71,6 +67,14 @@ int __efi_capsule_setup_info(struct capsule_info *cap_info)
cap_info->pages = temp_page;
+ temp_page = krealloc(cap_info->phys,
+ pages_needed * sizeof(phys_addr_t *),
+ GFP_KERNEL | __GFP_ZERO);
+ if (!temp_page)
+ return -ENOMEM;
+
+ cap_info->phys = temp_page;
+
return 0;
}
@@ -105,9 +109,24 @@ int __weak efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff,
**/
static ssize_t efi_capsule_submit_update(struct capsule_info *cap_info)
{
+ bool do_vunmap = false;
int ret;
- ret = efi_capsule_update(&cap_info->header, cap_info->pages);
+ /*
+ * cap_info->capsule may have been assigned already by a quirk
+ * handler, so only overwrite it if it is NULL
+ */
+ if (!cap_info->capsule) {
+ cap_info->capsule = vmap(cap_info->pages, cap_info->index,
+ VM_MAP, PAGE_KERNEL);
+ if (!cap_info->capsule)
+ return -ENOMEM;
+ do_vunmap = true;
+ }
+
+ ret = efi_capsule_update(cap_info->capsule, cap_info->phys);
+ if (do_vunmap)
+ vunmap(cap_info->capsule);
if (ret) {
pr_err("capsule update failed\n");
return ret;
@@ -165,10 +184,12 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff,
goto failed;
}
- cap_info->pages[cap_info->index++] = page_to_phys(page);
+ cap_info->pages[cap_info->index] = page;
+ cap_info->phys[cap_info->index] = page_to_phys(page);
cap_info->page_bytes_remain = PAGE_SIZE;
+ cap_info->index++;
} else {
- page = phys_to_page(cap_info->pages[cap_info->index - 1]);
+ page = cap_info->pages[cap_info->index - 1];
}
kbuff = kmap(page);
@@ -252,6 +273,7 @@ static int efi_capsule_release(struct inode *inode, struct file *file)
struct capsule_info *cap_info = file->private_data;
kfree(cap_info->pages);
+ kfree(cap_info->phys);
kfree(file->private_data);
file->private_data = NULL;
return 0;
@@ -281,6 +303,13 @@ static int efi_capsule_open(struct inode *inode, struct file *file)
return -ENOMEM;
}
+ cap_info->phys = kzalloc(sizeof(void *), GFP_KERNEL);
+ if (!cap_info->phys) {
+ kfree(cap_info->pages);
+ kfree(cap_info);
+ return -ENOMEM;
+ }
+
file->private_data = cap_info;
return 0;
diff --git a/include/linux/efi.h b/include/linux/efi.h
index d813f7b..29fdf80 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -140,11 +140,13 @@ struct efi_boot_memmap {
struct capsule_info {
efi_capsule_header_t header;
+ efi_capsule_header_t *capsule;
int reset_type;
long index;
size_t count;
size_t total_size;
- phys_addr_t *pages;
+ struct page **pages;
+ phys_addr_t *phys;
size_t page_bytes_remain;
};
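The fix rests on the vmap()/vunmap() pair: vmap() builds one virtually contiguous kernel mapping over an array of (possibly scattered) struct page pointers, which is what lets the firmware read the whole capsule through a single header pointer. The bare pattern, apart from the driver specifics (kernel-style C; the function names here are illustrative, not from the patch):
#include <linux/vmalloc.h>
static void *capsule_map_sketch(struct page **pages, unsigned int nr_pages)
{
	/* one contiguous virtual window over nr_pages scattered pages */
	return vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
}
static void capsule_unmap_sketch(void *capsule)
{
	vunmap(capsule);	/* unmaps only; the pages themselves remain */
}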
The patch
ASoC: skl: Fix kernel warning due to zero NHTL entry
has been applied to the asoc tree at
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.
If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark
>From 20a1ea2222e7cbf96e9bf8579362e971491e6aea Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai(a)suse.de>
Date: Wed, 3 Jan 2018 16:38:46 +0100
Subject: [PATCH] ASoC: skl: Fix kernel warning due to zero NHTL entry
I got the following kernel warning when loading snd-soc-skl module on
Dell Latitude 7270 laptop:
memremap attempted on mixed range 0x0000000000000000 size: 0x0
WARNING: CPU: 0 PID: 484 at kernel/memremap.c:98 memremap+0x8a/0x180
Call Trace:
skl_nhlt_init+0x82/0xf0 [snd_soc_skl]
skl_probe+0x2ee/0x7c0 [snd_soc_skl]
....
It seems that a machine that doesn't support the SKL DSP gives an empty
NHLT entry, and this triggers the warning. To avoid it, do a zero check
before calling memremap().
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
sound/soc/intel/skylake/skl-nhlt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/sound/soc/intel/skylake/skl-nhlt.c b/sound/soc/intel/skylake/skl-nhlt.c
index d14c50a60289..1ce414d86d8a 100644
--- a/sound/soc/intel/skylake/skl-nhlt.c
+++ b/sound/soc/intel/skylake/skl-nhlt.c
@@ -43,7 +43,8 @@ struct nhlt_acpi_table *skl_nhlt_init(struct device *dev)
obj = acpi_evaluate_dsm(handle, &osc_guid, 1, 1, NULL);
if (obj && obj->type == ACPI_TYPE_BUFFER) {
nhlt_ptr = (struct nhlt_resource_desc *)obj->buffer.pointer;
- nhlt_table = (struct nhlt_acpi_table *)
+ if (nhlt_ptr->length)
+ nhlt_table = (struct nhlt_acpi_table *)
memremap(nhlt_ptr->min_addr, nhlt_ptr->length,
MEMREMAP_WB);
ACPI_FREE(obj);
--
2.15.1
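The shape of the fix generalizes to any firmware-described region: validate the descriptor's length before mapping, and the mapping afterwards, since memremap() of a zero-sized or bogus range at best warns and returns NULL. A sketch of the guarded lookup (kernel-style C, simplified from the patch; names follow skl-nhlt.c):
static struct nhlt_acpi_table *nhlt_map_sketch(struct nhlt_resource_desc *desc)
{
	struct nhlt_acpi_table *table = NULL;
	/* machines without the DSP hand back an all-zero descriptor */
	if (desc->length)
		table = memremap(desc->min_addr, desc->length, MEMREMAP_WB);
	return table;	/* NULL if absent, or if memremap() itself failed */
}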
On Wed, Jan 3, 2018 at 3:59 PM, Mark Brown <broonie(a)kernel.org> wrote:
> On Wed, Jan 03, 2018 at 03:52:12PM +0100, Arnd Bergmann wrote:
>
>> Greg and Mark, which x86-64 compiler versions do you use?
>
> gcc (Debian 6.3.0-18) 6.3.0 20170516
I installed that version here; as with my other gcc-6 versions, the
"-Wno-frame-address" option gets set and interpreted correctly:
$ git checkout v3.18.91 ; touch kernel/sched/core.c ; make V=1
defconfig kernel/sched/core.o CC=gcc-6 2>&1 | tail
...
gcc-6 -Wp,-MD,kernel/sched/.core.o.d -nostdinc -isystem
/usr/lib/gcc/x86_64-linux-gnu/6/include -I./arch/x86/include
-Iarch/x86/include/generated -Iinclude -I./arch/x86/include/uapi
-Iarch/x86/include/generated/uapi -I./include/uapi
-Iinclude/generated/uapi -include ./include/linux/kconfig.h
-D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs
-fno-strict-aliasing -fno-common -Werror-implicit-function-declaration
-Wno-format-security -std=gnu89 -m64 -mno-80387 -mno-fp-ret-in-387
-mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time
-maccumulate-outgoing-args -DCONFIG_AS_CFI=1
-DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1
-DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_CRC32=1 -DCONFIG_AS_AVX=1
-DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare
-fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow
-mno-avx -fno-delete-null-pointer-checks -Wno-frame-address -fno-PIE
-O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048
-fno-stack-protector -Wno-unused-but-set-variable
-Wno-unused-const-variable -fno-omit-frame-pointer
-fno-optimize-sibling-calls -fno-var-tracking-assignments
-Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow
-fno-stack-check -fconserve-stack -Werror=implicit-int
-Werror=strict-prototypes -Werror=date-time -DCC_HAVE_ASM_GOTO
-D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(core)"
-D"KBUILD_MODNAME=KBUILD_STR(core)" -c -o kernel/sched/core.o
kernel/sched/core.c
It might be a problem with different 'make' versions, or something else
in the environment that leads to "-Wno-frame-address" not being set
on the command line instead.
Arnd
This is a note to let you know that I've just added the patch titled
x86/boot: Add early cmdline parsing for options with arguments
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e505371dd83963caae1a37ead9524e8d997341be Mon Sep 17 00:00:00 2001
From: Tom Lendacky <thomas.lendacky(a)amd.com>
Date: Mon, 17 Jul 2017 16:10:33 -0500
Subject: x86/boot: Add early cmdline parsing for options with arguments
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Tom Lendacky <thomas.lendacky(a)amd.com>
commit e505371dd83963caae1a37ead9524e8d997341be upstream.
Add a cmdline_find_option() function to look for cmdline options that
take arguments. The argument is returned in a supplied buffer and the
argument length (regardless of whether it fits in the supplied buffer)
is returned, with -1 indicating not found.
Signed-off-by: Tom Lendacky <thomas.lendacky(a)amd.com>
Reviewed-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Alexander Potapenko <glider(a)google.com>
Cc: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brijesh Singh <brijesh.singh(a)amd.com>
Cc: Dave Young <dyoung(a)redhat.com>
Cc: Dmitry Vyukov <dvyukov(a)google.com>
Cc: Jonathan Corbet <corbet(a)lwn.net>
Cc: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Cc: Larry Woodman <lwoodman(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Matt Fleming <matt(a)codeblueprint.co.uk>
Cc: Michael S. Tsirkin <mst(a)redhat.com>
Cc: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Radim Krčmář <rkrcmar(a)redhat.com>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Toshimitsu Kani <toshi.kani(a)hpe.com>
Cc: kasan-dev(a)googlegroups.com
Cc: kvm(a)vger.kernel.org
Cc: linux-arch(a)vger.kernel.org
Cc: linux-doc(a)vger.kernel.org
Cc: linux-efi(a)vger.kernel.org
Cc: linux-mm(a)kvack.org
Link: http://lkml.kernel.org/r/36b5f97492a9745dce27682305f990fc20e5cf8a.150031921…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/cmdline.h | 2
arch/x86/lib/cmdline.c | 105 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 107 insertions(+)
--- a/arch/x86/include/asm/cmdline.h
+++ b/arch/x86/include/asm/cmdline.h
@@ -2,5 +2,7 @@
#define _ASM_X86_CMDLINE_H
int cmdline_find_option_bool(const char *cmdline_ptr, const char *option);
+int cmdline_find_option(const char *cmdline_ptr, const char *option,
+ char *buffer, int bufsize);
#endif /* _ASM_X86_CMDLINE_H */
--- a/arch/x86/lib/cmdline.c
+++ b/arch/x86/lib/cmdline.c
@@ -82,3 +82,108 @@ int cmdline_find_option_bool(const char
return 0; /* Buffer overrun */
}
+
+/*
+ * Find a non-boolean option (i.e. option=argument). In accordance with
+ * standard Linux practice, if this option is repeated, this returns the
+ * last instance on the command line.
+ *
+ * @cmdline: the cmdline string
+ * @max_cmdline_size: the maximum size of cmdline
+ * @option: option string to look for
+ * @buffer: memory buffer to return the option argument
+ * @bufsize: size of the supplied memory buffer
+ *
+ * Returns the length of the argument (regardless of if it was
+ * truncated to fit in the buffer), or -1 on not found.
+ */
+static int
+__cmdline_find_option(const char *cmdline, int max_cmdline_size,
+ const char *option, char *buffer, int bufsize)
+{
+ char c;
+ int pos = 0, len = -1;
+ const char *opptr = NULL;
+ char *bufptr = buffer;
+ enum {
+ st_wordstart = 0, /* Start of word/after whitespace */
+ st_wordcmp, /* Comparing this word */
+ st_wordskip, /* Miscompare, skip */
+ st_bufcpy, /* Copying this to buffer */
+ } state = st_wordstart;
+
+ if (!cmdline)
+ return -1; /* No command line */
+
+ /*
+ * This 'pos' check ensures we do not overrun
+ * a non-NULL-terminated 'cmdline'
+ */
+ while (pos++ < max_cmdline_size) {
+ c = *(char *)cmdline++;
+ if (!c)
+ break;
+
+ switch (state) {
+ case st_wordstart:
+ if (myisspace(c))
+ break;
+
+ state = st_wordcmp;
+ opptr = option;
+ /* fall through */
+
+ case st_wordcmp:
+ if ((c == '=') && !*opptr) {
+ /*
+ * We matched all the way to the end of the
+ * option we were looking for, prepare to
+ * copy the argument.
+ */
+ len = 0;
+ bufptr = buffer;
+ state = st_bufcpy;
+ break;
+ } else if (c == *opptr++) {
+ /*
+ * We are currently matching, so continue
+ * to the next character on the cmdline.
+ */
+ break;
+ }
+ state = st_wordskip;
+ /* fall through */
+
+ case st_wordskip:
+ if (myisspace(c))
+ state = st_wordstart;
+ break;
+
+ case st_bufcpy:
+ if (myisspace(c)) {
+ state = st_wordstart;
+ } else {
+ /*
+ * Increment len, but don't overrun the
+ * supplied buffer and leave room for the
+ * NULL terminator.
+ */
+ if (++len < bufsize)
+ *bufptr++ = c;
+ }
+ break;
+ }
+ }
+
+ if (bufsize)
+ *bufptr = '\0';
+
+ return len;
+}
+
+int cmdline_find_option(const char *cmdline, const char *option, char *buffer,
+ int bufsize)
+{
+ return __cmdline_find_option(cmdline, COMMAND_LINE_SIZE, option,
+ buffer, bufsize);
+}
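The return convention is the part callers have to get right: the full argument length comes back even when the copy was truncated, so comparing the result against the buffer size detects truncation, while -1 means the option was absent. A hypothetical caller (illustrative only; "console" and the helper name are not from the patch):
#include <linux/init.h>		/* boot_command_line */
#include <asm/cmdline.h>
static void parse_console_sketch(void)
{
	char arg[32];
	int len;
	/* for a cmdline containing "console=ttyS0,115200" */
	len = cmdline_find_option(boot_command_line, "console",
				  arg, sizeof(arg));
	if (len < 0)
		return;			/* option not found */
	if (len >= (int)sizeof(arg))
		return;			/* argument was truncated */
	/* arg now holds "ttyS0,115200", NUL-terminated */
}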
Patches currently in stable-queue which might be from thomas.lendacky(a)amd.com are
queue-4.4/x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
This is a note to let you know that I've just added the patch titled
x86/boot: Add early cmdline parsing for options with arguments
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e505371dd83963caae1a37ead9524e8d997341be Mon Sep 17 00:00:00 2001
From: Tom Lendacky <thomas.lendacky(a)amd.com>
Date: Mon, 17 Jul 2017 16:10:33 -0500
Subject: x86/boot: Add early cmdline parsing for options with arguments
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Tom Lendacky <thomas.lendacky(a)amd.com>
commit e505371dd83963caae1a37ead9524e8d997341be upstream.
Add a cmdline_find_option() function to look for cmdline options that
take arguments. The argument is returned in a supplied buffer and the
argument length (regardless of whether it fits in the supplied buffer)
is returned, with -1 indicating not found.
Signed-off-by: Tom Lendacky <thomas.lendacky(a)amd.com>
Reviewed-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Alexander Potapenko <glider(a)google.com>
Cc: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brijesh Singh <brijesh.singh(a)amd.com>
Cc: Dave Young <dyoung(a)redhat.com>
Cc: Dmitry Vyukov <dvyukov(a)google.com>
Cc: Jonathan Corbet <corbet(a)lwn.net>
Cc: Konrad Rzeszutek Wilk <konrad.wilk(a)oracle.com>
Cc: Larry Woodman <lwoodman(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Matt Fleming <matt(a)codeblueprint.co.uk>
Cc: Michael S. Tsirkin <mst(a)redhat.com>
Cc: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Radim Krčmář <rkrcmar(a)redhat.com>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Toshimitsu Kani <toshi.kani(a)hpe.com>
Cc: kasan-dev(a)googlegroups.com
Cc: kvm(a)vger.kernel.org
Cc: linux-arch(a)vger.kernel.org
Cc: linux-doc(a)vger.kernel.org
Cc: linux-efi(a)vger.kernel.org
Cc: linux-mm(a)kvack.org
Link: http://lkml.kernel.org/r/36b5f97492a9745dce27682305f990fc20e5cf8a.150031921…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/cmdline.h | 2
arch/x86/lib/cmdline.c | 105 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 107 insertions(+)
--- a/arch/x86/include/asm/cmdline.h
+++ b/arch/x86/include/asm/cmdline.h
@@ -2,5 +2,7 @@
#define _ASM_X86_CMDLINE_H
int cmdline_find_option_bool(const char *cmdline_ptr, const char *option);
+int cmdline_find_option(const char *cmdline_ptr, const char *option,
+ char *buffer, int bufsize);
#endif /* _ASM_X86_CMDLINE_H */
--- a/arch/x86/lib/cmdline.c
+++ b/arch/x86/lib/cmdline.c
@@ -104,7 +104,112 @@ __cmdline_find_option_bool(const char *c
return 0; /* Buffer overrun */
}
+/*
+ * Find a non-boolean option (i.e. option=argument). In accordance with
+ * standard Linux practice, if this option is repeated, this returns the
+ * last instance on the command line.
+ *
+ * @cmdline: the cmdline string
+ * @max_cmdline_size: the maximum size of cmdline
+ * @option: option string to look for
+ * @buffer: memory buffer to return the option argument
+ * @bufsize: size of the supplied memory buffer
+ *
+ * Returns the length of the argument (regardless of if it was
+ * truncated to fit in the buffer), or -1 on not found.
+ */
+static int
+__cmdline_find_option(const char *cmdline, int max_cmdline_size,
+ const char *option, char *buffer, int bufsize)
+{
+ char c;
+ int pos = 0, len = -1;
+ const char *opptr = NULL;
+ char *bufptr = buffer;
+ enum {
+ st_wordstart = 0, /* Start of word/after whitespace */
+ st_wordcmp, /* Comparing this word */
+ st_wordskip, /* Miscompare, skip */
+ st_bufcpy, /* Copying this to buffer */
+ } state = st_wordstart;
+
+ if (!cmdline)
+ return -1; /* No command line */
+
+ /*
+ * This 'pos' check ensures we do not overrun
+ * a non-NULL-terminated 'cmdline'
+ */
+ while (pos++ < max_cmdline_size) {
+ c = *(char *)cmdline++;
+ if (!c)
+ break;
+
+ switch (state) {
+ case st_wordstart:
+ if (myisspace(c))
+ break;
+
+ state = st_wordcmp;
+ opptr = option;
+ /* fall through */
+
+ case st_wordcmp:
+ if ((c == '=') && !*opptr) {
+ /*
+ * We matched all the way to the end of the
+ * option we were looking for, prepare to
+ * copy the argument.
+ */
+ len = 0;
+ bufptr = buffer;
+ state = st_bufcpy;
+ break;
+ } else if (c == *opptr++) {
+ /*
+ * We are currently matching, so continue
+ * to the next character on the cmdline.
+ */
+ break;
+ }
+ state = st_wordskip;
+ /* fall through */
+
+ case st_wordskip:
+ if (myisspace(c))
+ state = st_wordstart;
+ break;
+
+ case st_bufcpy:
+ if (myisspace(c)) {
+ state = st_wordstart;
+ } else {
+ /*
+ * Increment len, but don't overrun the
+ * supplied buffer and leave room for the
+ * NULL terminator.
+ */
+ if (++len < bufsize)
+ *bufptr++ = c;
+ }
+ break;
+ }
+ }
+
+ if (bufsize)
+ *bufptr = '\0';
+
+ return len;
+}
+
int cmdline_find_option_bool(const char *cmdline, const char *option)
{
return __cmdline_find_option_bool(cmdline, COMMAND_LINE_SIZE, option);
}
+
+int cmdline_find_option(const char *cmdline, const char *option, char *buffer,
+ int bufsize)
+{
+ return __cmdline_find_option(cmdline, COMMAND_LINE_SIZE, option,
+ buffer, bufsize);
+}
Patches currently in stable-queue which might be from thomas.lendacky(a)amd.com are
queue-4.9/x86-boot-add-early-cmdline-parsing-for-options-with-arguments.patch
If we are asked for the number of entries at an offset bigger than the
sg list, we should not crash.
Cc: stable(a)vger.kernel.org
Signed-off-by: Gilad Ben-Yossef <gilad(a)benyossef.com>
---
drivers/staging/ccree/ssi_buffer_mgr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 13cb062..9e3f557 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -94,7 +94,7 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
{
unsigned int nents = 0;
- while (nbytes) {
+ while (nbytes && sg_list) {
if (sg_list->length) {
nents++;
/* get the number of bytes in the last entry */
--
2.7.4
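The guarded walk keeps two exit conditions live at once: the byte budget and the list itself, since sg_next() returns NULL past the final entry. A sketch of the loop after the fix (kernel-style C, simplified from cc_get_sgl_nents(); the min() underflow guard is an addition here, not in the driver):
#include <linux/scatterlist.h>
static unsigned int sgl_nents_sketch(struct scatterlist *sg,
				     unsigned int nbytes)
{
	unsigned int nents = 0;
	while (nbytes && sg) {		/* the NULL check is what the fix adds */
		if (sg->length) {
			nents++;
			nbytes -= min(nbytes, sg->length);
		}
		sg = sg_next(sg);
	}
	return nents;
}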
If we run out of DMA pool buffers, we get into the unmap
code path with a NULL buffer. Deal with this by checking
that the virtual mapping is not NULL.
Cc: stable(a)vger.kernel.org
Signed-off-by: Gilad Ben-Yossef <gilad(a)benyossef.com>
---
drivers/staging/ccree/ssi_buffer_mgr.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 5e3cff3..13cb062 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -465,7 +465,8 @@ void cc_unmap_blkcipher_request(struct device *dev, void *ctx,
DMA_TO_DEVICE);
}
/* Release pool */
- if (req_ctx->dma_buf_type == CC_DMA_BUF_MLLI) {
+ if (req_ctx->dma_buf_type == CC_DMA_BUF_MLLI &&
+ req_ctx->mlli_params.mlli_virt_addr) {
dma_pool_free(req_ctx->mlli_params.curr_pool,
req_ctx->mlli_params.mlli_virt_addr,
req_ctx->mlli_params.mlli_dma_addr);
--
2.7.4
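The invariant behind the check: dma_pool_free() must only ever see a (virt, dma) pair that dma_pool_alloc() actually returned, so a request context whose allocation failed, leaving the virtual address NULL, has to be filtered out on the release path. A sketch (kernel-style C; the helper is illustrative, the field roles follow the patch):
#include <linux/dmapool.h>
static void mlli_release_sketch(struct dma_pool *pool, void *virt,
				dma_addr_t dma)
{
	/* virt stays NULL when dma_pool_alloc() failed; freeing it would oops */
	if (virt)
		dma_pool_free(pool, virt, dma);
}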
On Wed, Dec 20, 2017 at 11:49 PM, kernelci.org bot <bot(a)kernelci.org> wrote:
> Errors summary:
> 1 arch/mips/sgi-ip22/Platform:29: *** gcc doesn't support needed option -mr10k-cache-barrier=store. Stop.
This needs a backport of
c018595d83a3 ("MIPS: ip22: Fix ip28 build for modern gcc")
> 6 include/linux/irqdesc.h:94:33: error: 'NR_IRQS' undeclared here (not in a function)
> 1 arch/mips/jz4740/gpio.c:46:32: error: initializer element is not constant
> 1 arch/mips/jz4740/gpio.c:45:32: error: initializer element is not constant
> 1 arch/mips/jz4740/gpio.c:44:32: error: initializer element is not constant
> 1 arch/mips/jz4740/gpio.c:43:32: error: initializer element is not constant
> 1 arch/mips/jz4740/gpio.c:426:14: error: implicit declaration of function 'JZ4740_IRQ_INTC_GPIO' [-Werror=implicit-function-declaration]
> 1 arch/mips/jz4740/gpio.c:268:9: error: implicit declaration of function 'JZ4740_IRQ_GPIO' [-Werror=implicit-function-declaration]
Not sure, may have been fixed by
832f5dacfa0b ("MIPS: Remove all the uses of custom gpio.h")
which doesn't look appropriate for stable backports
> Warnings summary:
> 81 include/linux/blkdev.h:624:26: warning: switch condition has boolean value [-Wswitch-bool]
> 23 drivers/mtd/mtd_blkdevs.c:100:2: warning: switch condition has boolean value [-Wswitch-bool]
10fbd36e362a ("blk: rq_data_dir() should not return a boolean")
> 35 mm/page_alloc.c:5435:34: warning: array subscript is below array bounds [-Warray-bounds]
I think this was fixed by
90cae1fe1c35 ("mm/init: fix zone boundary creation")
which was intended to address another problem but got backported to 4.4
> 20 arch/arm/include/asm/cmpxchg.h:122:3: warning: value computed is not used [-Wunused-value]
> 5 arch/arm/include/asm/cmpxchg.h:205:3: warning: value computed is not used [-Wunused-value]
d975440bf80c ("asm-generic: cmpxchg: avoid warnings from macro-ized
cmpxchg() implementations")
> 10 drivers/rtc/rtc-pcf8563.c:444:5: warning: 'alm_pending' may be used uninitialized in this function [-Wmaybe-uninitialized]
cd1420d3a909 ("rtc: pfc8563: fix uninitialized variable warning")
> 6 crypto/wp512.c:987:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/wp512.c:987:1: warning: the frame size of 1344 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/wp512.c:987:1: warning: the frame size of 1280 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/wp512.c:987:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/wp512.c:987:1: warning: the frame size of 1144 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/wp512.c:987:1: warning: the frame size of 1120 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/serpent_generic.c:436:1: warning: the frame size of 1472 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 1 crypto/serpent_generic.c:436:1: warning: the frame size of 1456 bytes is larger than 1024 bytes [-Wframe-larger-than=]
7d6e9105026 ("crypto: improve gcc optimization flags for serpent and wp512")
> 4 fs/nfsd/nfs4state.c:3962:3: warning: 'old_deny_bmap' may be used uninitialized in this function [-Wmaybe-uninitialized]
6ac75368e1a6 ("nfsd: work around a gcc-5.1 warning")
> 3 drivers/net/wireless/brcm80211/brcmfmac/fwsignal.c:1478:8: warning: 'skb' may be used uninitialized in this function [-Wmaybe-uninitialized]
22f44150aad7 ("brcmfmac: avoid gcc-5.1 warning")
> 2 lib/iommu-common.c:15:24: warning: large integer implicitly truncated to unsigned type [-Woverflow]
447f6a95a9c8 ("lib/iommu-common.c: do not use 0xffffffffffffffffl for
computing align_mask")
> 2 drivers/hid/hid-input.c:1163:67: warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses]
09a5c34e8d6b ("HID: hid-input: Add parentheses to quell gcc warning")
> 2 drivers/atm/iphase.c:1176:12: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
cbb41b91e68a ("atm: iphase: fix misleading indention")
> 2 arch/x86/include/asm/msr.h:209:23: warning: right shift count >= width of type [-Wshift-count-overflow]
cf991de2f614 ("x86/asm/msr: Make wrmsrl_safe() a function")
47edb65178cb ("x86/asm/msr: Make wrmsrl() a function")
> 1 fs/gfs2/dir.c:978:8: warning: 'leaf_no' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 fs/gfs2/dir.c:759:9: warning: 'leaf_no' may be used uninitialized in this function [-Wmaybe-uninitialized]
d39cb4a59729 ("gfs2: avoid uninitialized variable warning")
> 1 drivers/net/ethernet/ti/cpmac.c:1238:2: warning: #warning FIXME: unhardcode gpio&reset bits [-Wcpp]
d43e6fb4ac4a ("cpmac: remove hopeless #warning")
> 1 drivers/scsi/mvsas/mv_sas.c:736:3: warning: this 'else' clause does not guard... [-Wmisleading-indentation]
7789cd39274c ("mvsas: fix misleading indentation")
> 1 drivers/scsi/bfa/bfa_ioc.c:3673:4: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
> 1 drivers/scsi/bfa/bfa_ioc.c:3665:3: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
b7f4d6343820 ("bfa: Fix indentation")
> 1 arch/mips/ralink/rt305x.c:94:13: warning: 'rt305x_wdt_reset' defined but not used [-Wunused-function]
886f9c69fc68 ("MIPS: ralink: Remove unused rt*_wdt_reset functions")
> 1 arch/mips/ralink/prom.c:65:2: warning: 'argv' is used uninitialized in this function [-Wuninitialized]
> 1 arch/mips/ralink/prom.c:65:2: warning: 'argc' is used uninitialized in this function [-Wuninitialized]
9c48568b3692 ("MIPS: ralink: Cosmetic change to prom_init().")
> 1 arch/mips/dec/int-handler.S:198: Warning: macro instruction expanded into multiple instructions in a branch delay slot
> 1 arch/mips/dec/int-handler.S:149: Warning: macro instruction expanded into multiple instructions in a branch delay slot
3021773c7c3e ("MIPS: DEC: Avoid la pseudo-instruction in delay slots")
> 1 arch/arm/kernel/head-nommu.S:167: Warning: Use of r13 as a source register is deprecated when r15 is the destination register.
970d96f9a81b ("ARM: 8383/1: nommu: avoid deprecated source register on mov")
> 1 .config:871:warning: override: NOHIGHMEM changes choice state
> 1 .config:870:warning: override: SLOB changes choice state
> 1 .config:868:warning: override: KERNEL_XZ changes choice state
> 1 .config:829:warning: override: SLOB changes choice state
> 1 .config:827:warning: override: KERNEL_XZ changes choice state
> 1 .config:794:warning: override: SLOB changes choice state
> 1 .config:771:warning: override: SLOB changes choice state
236dec051078 ("kconfig: tinyconfig: provide whole choice blocks to
avoid warnings")
> 1 drivers/scsi/be2iscsi/be_main.c:3168:18: warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses]
dd29dae00d39 ("be2iscsi: Fix bogus WARN_ON length check")
> 1 drivers/scsi/aic94xx/aic94xx_sds.c:597:2: warning: 'offs' may be used uninitialized in this function [-Wmaybe-uninitialized]
36dd5acd1965 ("aic94xx: Skip reading user settings if flash is not found")
> 1 drivers/rtc/rtc-armada38x.c:91:22: warning: unused variable 'flags' [-Wunused-variable]
d80238bbcad3 ("rtc: armada38x: Remove unused variable from
armada38x_rtc_set_time()")
> 1 drivers/net/wireless/iwlegacy/3945.c:1022:5: warning: suggest explicit braces to avoid ambiguous 'else' [-Wparentheses]
2cce76c3fab4 ("iwlegacy: avoid warning about missing braces")
> 1 drivers/net/ethernet/neterion/vxge/vxge-main.c:2149:13: warning: 'adaptive_coalesce_rx_interrupts' defined but not used [-Wunused-function]
> 1 drivers/net/ethernet/neterion/vxge/vxge-main.c:2121:13: warning: 'adaptive_coalesce_tx_interrupts' defined but not used [-Wunused-function]
57e7c8cef224 ("net: vxge: avoid unused function warnings")
> 1 drivers/net/ethernet/dec/tulip/winbond-840.c:910:2: warning: #warning Processor architecture undefined [-Wcpp]
de92718883dd ("net: tulip: turn compile-time warning into dev_warn()")
> 1 drivers/net/ethernet/dec/tulip/uli526x.c:1086:4: warning: this 'else' clause does not guard... [-Wmisleading-indentation]
e1395a321eab ("drivers/net/ethernet/dec/tulip/uli526x.c: fix
misleading indentation in uli526x_timer")
> 1 drivers/mtd/maps/pmcmsp-flash.c:149:30: warning: passing argument 1 of 'strncpy' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
79ad07d45743 ("mtd: pmcmsp-flash: Allocating too much in init_msp_flash()")
906b268477bc ("mtd: pmcmsp: use kstrndup instead of kmalloc+strncpy")
> 1 drivers/media/platform/coda/./trace.h:12:0: warning: "TRACE_SYSTEM_STRING" redefined
> 1 include/trace/ftrace.h:28:0: warning: "TRACE_SYSTEM_STRING" redefined
ae1c75d6ce3a ("[media] coda: remove extraneous TRACE_SYSTEM_STRING")
> 1 drivers/net/fddi/defxx.c:595:10: warning: '*((void *)&bar_len+8)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:595:10: warning: '*((void *)&bar_len+4)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:594:5: warning: '*((void *)&bar_start+8)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:594:5: warning: '*((void *)&bar_start+4)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:586:10: warning: 'bar_start' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:586:10: warning: 'bar_len' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:583:10: warning: 'bar_start' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:583:10: warning: 'bar_len' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:3720:5: warning: '*((void *)&bar_start+8)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 drivers/net/fddi/defxx.c:3720:5: warning: '*((void *)&bar_start+16)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 include/linux/ioport.h:199:37: warning: 'bar_start' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 include/linux/ioport.h:198:33: warning: 'bar_start' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 include/linux/ioport.h:198:33: warning: 'bar_len' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 include/linux/ioport.h:198:33: warning: '*((void *)&bar_len+8)' may be used uninitialized in this function [-Wmaybe-uninitialized]
> 1 include/linux/ioport.h:198:33: warning: '*((void *)&bar_len+16)' may be used uninitialized in this function [-Wmaybe-uninitialized]
62f2aaabcf41 ("defxx: fix build warning")
> 1 include/linux/kernel.h:729:17: warning: comparison of distinct pointer types lacks a cast
> 1 include/asm-generic/div64.h:43:28: warning: comparison of distinct pointer types lacks a cast
30544af54837 ("e1000e: fix call to do_div() to use u64 arg")
> 1 drivers/scsi/megaraid/megaraid_sas_fusion.c:1723:3: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
4a5c814d9339 ("megaraid_sas : Add separate functions for building
sysPD IOs and non RW LDIOs")
Not sure if that is appropriate for stable, or whether it even applies.
Arnd
This is the start of the stable review cycle for the 3.18.91 release.
There are 32 patches in this series, all of which will be posted as a
response to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Wed Jan 3 13:59:44 UTC 2018.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v3.x/stable-review/patch-3.18.91-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-3.18.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 3.18.91-rc1
Linus Torvalds <torvalds(a)linux-foundation.org>
n_tty: fix EXTPROC vs ICANON interaction with TIOCINQ (aka FIONREAD)
Daniel Thompson <daniel.thompson(a)linaro.org>
usb: xhci: Add XHCI_TRUST_TX_LENGTH for Renesas uPD720201
Oliver Neukum <oneukum(a)suse.com>
usb: add RESET_RESUME for ELSA MicroLink 56K
Dmitry Fleytman <dmitry.fleytman(a)gmail.com>
usb: Add device quirk for Logitech HD Pro Webcam C925e
Daniele Palmas <dnlplm(a)gmail.com>
USB: serial: option: add support for Telit ME910 PID 0x1101
Mohamed Ghannam <simo.ghannam(a)gmail.com>
net: ipv4: fix for a race condition in raw_sendmsg
Tonghao Zhang <xiangxia.m.yue(a)gmail.com>
sctp: Replace use of sockets_allocated with specified macro.
Tobias Jordan <Tobias.Jordan(a)elektrobit.com>
net: mvmdio: disable/unprepare clocks in EPROBE_DEFER case
Brian King <brking(a)linux.vnet.ibm.com>
tg3: Fix rx hang on MTU change with 5717/5719
Christoph Paasch <cpaasch(a)apple.com>
tcp md5sig: Use skb's saddr when replying to an incoming segment
Sebastian Sjoholm <ssjoholm(a)mac.com>
net: qmi_wwan: add Sierra EM7565 1199:9091
Kevin Cernekee <cernekee(a)chromium.org>
netlink: Add netns check on taps
Kevin Cernekee <cernekee(a)chromium.org>
net: igmp: Use correct source address on IGMPv3 reports
Eric Dumazet <edumazet(a)google.com>
ipv6: mcast: better catch silly mtu values
Eric Dumazet <edumazet(a)google.com>
ipv4: igmp: guard against silly MTU values
Linus Torvalds <torvalds(a)linux-foundation.org>
kbuild: add '-fno-stack-check' to kernel build options
Johan Hovold <johan(a)kernel.org>
ASoC: twl4030: fix child-node lookup
Steven Rostedt (VMware) <rostedt(a)goodmis.org>
ring-buffer: Mask out the info bits when returning buffer page length
Jing Xia <jing.xia(a)spreadtrum.com>
tracing: Fix crash when it fails to alloc ring buffer
Steven Rostedt (VMware) <rostedt(a)goodmis.org>
tracing: Fix possible double free on failure of allocating trace buffer
Steven Rostedt (VMware) <rostedt(a)goodmis.org>
tracing: Remove extra zeroing out of the ring buffer page
Yelena Krivosheev <yelena(a)marvell.com>
net: mvneta: clear interface link status on port disable
Ravi Bangoria <ravi.bangoria(a)linux.vnet.ibm.com>
powerpc/perf: Dereference BHRB entries safely
Wanpeng Li <wanpeng.li(a)hotmail.com>
KVM: X86: Fix load RFLAGS w/o the fixed bit
Helge Deller <deller(a)gmx.de>
parisc: Hide Diva-built-in serial aux and graphics card
Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
PCI / PM: Force devices to D0 in pci_pm_thaw_noirq()
Takashi Iwai <tiwai(a)suse.de>
ALSA: usb-audio: Fix the missing ctl name suffix at parsing SU
Takashi Iwai <tiwai(a)suse.de>
ALSA: rawmidi: Avoid racy info ioctl via ctl device
Johan Hovold <johan(a)kernel.org>
mfd: twl6040: Fix child-node lookup
Johan Hovold <johan(a)kernel.org>
mfd: twl4030-audio: Fix sibling-node lookup
Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
crypto: mcryptd - protect the per-CPU queue with a lock
Takashi Iwai <tiwai(a)suse.de>
ACPI: APEI / ERST: Fix missing error handling in erst_reader()
-------------
Diffstat:
Makefile | 7 ++++--
arch/powerpc/perf/core-book3s.c | 8 +++++--
arch/x86/kvm/x86.c | 2 +-
crypto/mcryptd.c | 23 ++++++++----------
drivers/acpi/apei/erst.c | 2 +-
drivers/mfd/twl4030-audio.c | 9 +++++--
drivers/mfd/twl6040.c | 12 ++++++----
drivers/net/ethernet/broadcom/tg3.c | 4 +++-
drivers/net/ethernet/marvell/mvmdio.c | 3 ++-
drivers/net/ethernet/marvell/mvneta.c | 4 ++++
drivers/net/usb/qmi_wwan.c | 1 +
drivers/parisc/lba_pci.c | 33 ++++++++++++++++++++++++++
drivers/pci/pci-driver.c | 7 +++++-
drivers/tty/n_tty.c | 4 ++--
drivers/usb/core/quirks.c | 6 ++++-
drivers/usb/host/xhci-pci.c | 3 +++
drivers/usb/serial/option.c | 8 +++++++
include/crypto/mcryptd.h | 1 +
include/net/ip.h | 2 ++
kernel/trace/ring_buffer.c | 6 ++++-
kernel/trace/trace.c | 13 ++++-------
net/ipv4/devinet.c | 2 +-
net/ipv4/igmp.c | 44 +++++++++++++++++++++++++++--------
net/ipv4/ip_tunnel.c | 4 ++--
net/ipv4/raw.c | 15 ++++++++----
net/ipv4/tcp_ipv4.c | 2 +-
net/ipv6/mcast.c | 25 ++++++++++++--------
net/ipv6/tcp_ipv6.c | 2 +-
net/netlink/af_netlink.c | 3 +++
net/sctp/socket.c | 4 ++--
sound/core/rawmidi.c | 15 +++++++++---
sound/soc/codecs/twl4030.c | 4 +++-
sound/usb/mixer.c | 27 ++++++++++++---------
33 files changed, 217 insertions(+), 88 deletions(-)
From: Len Brown <len.brown(a)intel.com>
If native_calibrate_tsc() cannot discover the TSC frequency, either
via CPUID or via its built-in table, it must return without setting
X86_FEATURE_TSC_KNOWN_FREQ. Otherwise, X86_FEATURE_TSC_KNOWN_FREQ
will prevent the refined TSC calibration from running.
This patch allows Linux to correctly support future Intel hardware
that has (cpu_khz != tsc_khz) and supports CPUID.15 without
supporting CPUID.15.crystal_khz.
This patch is needed since X86_FEATURE_TSC_KNOWN_FREQ was added in Linux-4.10:
commit 4ca4df0b7eb0
("x86/tsc: Mark TSC frequency determined by CPUID as known")
If not applied, such systems will run with tsc_khz = cpu_khz,
which may result in an understated TSC rate and time-of-day drift.
Signed-off-by: Len Brown <len.brown(a)intel.com>
Cc: Bin Gao <bin.gao(a)intel.com>
Cc: <stable(a)vger.kernel.org> # v4.10+
---
arch/x86/kernel/tsc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 8ea117f8142e..ce4b71119c36 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -612,6 +612,8 @@ unsigned long native_calibrate_tsc(void)
}
}
+ if (crystal_khz == 0)
+ return 0;
/*
* TSC frequency determined by CPUID is a "hardware reported"
* frequency and is the most accurate one so far we have. This
--
2.14.0-rc0
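Put in context, the change gives native_calibrate_tsc() roughly the
following shape (heavily abridged sketch; the variable names only
approximate the upstream function):

    unsigned long native_calibrate_tsc(void)
    {
            unsigned int crystal_khz = 0;

            /* ... try CPUID.15H, then fall back to a built-in model table ... */

            if (crystal_khz == 0)
                    return 0;       /* unknown: let refined calibration run */

            /* Only now is the frequency really known. */
            setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);

            return crystal_khz * ebx_numerator / eax_denominator;
    }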
Display WA #1183 was recently added to work around
"Failures when enabling DPLL0 with eDP link rate 2.16
or 4.32 GHz and CD clock frequency 308.57 or 617.14 MHz
(CDCLK_CTL CD Frequency Select 10b or 11b) used in this
enabling or in previous enabling."
This workaround was designed to minimize the impact, covering only
the bad case with those link rates. But HW engineers indicated that
it should be safe to apply broadly, although they expected the DPLL0
link rate to be unchanged at runtime.
We need to cover two cases: when we are in fact enabling DPLL0,
and when we are just changing the frequency by a small amount.
This is based on previous patch by Rodrigo Vivi with suggestions
from Ville Syrjälä.
Cc: Arthur J Runyan <arthur.j.runyan(a)intel.com>
Cc: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Lucas De Marchi <lucas.demarchi(a)intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171204232210.4958-1-lucas.d…
[ Lucas: Backport to 4.15, adding back a variable that had been removed
by commits not meant to be backported ]
Signed-off-by: Lucas De Marchi <lucas.demarchi(a)intel.com>
---
drivers/gpu/drm/i915/i915_reg.h | 2 ++
drivers/gpu/drm/i915/intel_cdclk.c | 35 ++++++++++++++++++++++++---------
drivers/gpu/drm/i915/intel_runtime_pm.c | 10 ++++++++++
3 files changed, 38 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 3866c49bc390..333f40bc03bb 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -6977,6 +6977,7 @@ enum {
#define RESET_PCH_HANDSHAKE_ENABLE (1<<4)
#define GEN8_CHICKEN_DCPR_1 _MMIO(0x46430)
+#define SKL_SELECT_ALTERNATE_DC_EXIT (1<<30)
#define MASK_WAKEMEM (1<<13)
#define SKL_DFSM _MMIO(0x51000)
@@ -8522,6 +8523,7 @@ enum skl_power_gate {
#define BXT_CDCLK_CD2X_DIV_SEL_2 (2<<22)
#define BXT_CDCLK_CD2X_DIV_SEL_4 (3<<22)
#define BXT_CDCLK_CD2X_PIPE(pipe) ((pipe)<<20)
+#define CDCLK_DIVMUX_CD_OVERRIDE (1<<19)
#define BXT_CDCLK_CD2X_PIPE_NONE BXT_CDCLK_CD2X_PIPE(3)
#define BXT_CDCLK_SSA_PRECHARGE_ENABLE (1<<16)
#define CDCLK_FREQ_DECIMAL_MASK (0x7ff)
diff --git a/drivers/gpu/drm/i915/intel_cdclk.c b/drivers/gpu/drm/i915/intel_cdclk.c
index b2a6d62b71c0..60cf4e58389a 100644
--- a/drivers/gpu/drm/i915/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/intel_cdclk.c
@@ -860,16 +860,10 @@ static void skl_set_preferred_cdclk_vco(struct drm_i915_private *dev_priv,
static void skl_dpll0_enable(struct drm_i915_private *dev_priv, int vco)
{
- int min_cdclk = skl_calc_cdclk(0, vco);
u32 val;
WARN_ON(vco != 8100000 && vco != 8640000);
- /* select the minimum CDCLK before enabling DPLL 0 */
- val = CDCLK_FREQ_337_308 | skl_cdclk_decimal(min_cdclk);
- I915_WRITE(CDCLK_CTL, val);
- POSTING_READ(CDCLK_CTL);
-
/*
* We always enable DPLL0 with the lowest link rate possible, but still
* taking into account the VCO required to operate the eDP panel at the
@@ -923,7 +917,7 @@ static void skl_set_cdclk(struct drm_i915_private *dev_priv,
{
int cdclk = cdclk_state->cdclk;
int vco = cdclk_state->vco;
- u32 freq_select, pcu_ack;
+ u32 freq_select, pcu_ack, cdclk_ctl;
int ret;
WARN_ON((cdclk == 24000) != (vco == 0));
@@ -940,7 +934,7 @@ static void skl_set_cdclk(struct drm_i915_private *dev_priv,
return;
}
- /* set CDCLK_CTL */
+ /* Choose frequency for this cdclk */
switch (cdclk) {
case 450000:
case 432000:
@@ -968,10 +962,33 @@ static void skl_set_cdclk(struct drm_i915_private *dev_priv,
dev_priv->cdclk.hw.vco != vco)
skl_dpll0_disable(dev_priv);
+ cdclk_ctl = I915_READ(CDCLK_CTL);
+
+ if (dev_priv->cdclk.hw.vco != vco) {
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl &= ~(CDCLK_FREQ_SEL_MASK | CDCLK_FREQ_DECIMAL_MASK);
+ cdclk_ctl |= freq_select | skl_cdclk_decimal(cdclk);
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+ }
+
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl |= CDCLK_DIVMUX_CD_OVERRIDE;
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+ POSTING_READ(CDCLK_CTL);
+
if (dev_priv->cdclk.hw.vco != vco)
skl_dpll0_enable(dev_priv, vco);
- I915_WRITE(CDCLK_CTL, freq_select | skl_cdclk_decimal(cdclk));
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl &= ~(CDCLK_FREQ_SEL_MASK | CDCLK_FREQ_DECIMAL_MASK);
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+
+ cdclk_ctl |= freq_select | skl_cdclk_decimal(cdclk);
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
+
+ /* Wa Display #1183: skl,kbl,cfl */
+ cdclk_ctl &= ~CDCLK_DIVMUX_CD_OVERRIDE;
+ I915_WRITE(CDCLK_CTL, cdclk_ctl);
POSTING_READ(CDCLK_CTL);
/* inform PCU of the change */
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 8af286c63d3b..e0bc2debdad0 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -598,6 +598,11 @@ void gen9_enable_dc5(struct drm_i915_private *dev_priv)
DRM_DEBUG_KMS("Enabling DC5\n");
+ /* Wa Display #1183: skl,kbl,cfl */
+ if (IS_GEN9_BC(dev_priv))
+ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |
+ SKL_SELECT_ALTERNATE_DC_EXIT);
+
gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC5);
}
@@ -625,6 +630,11 @@ void skl_disable_dc6(struct drm_i915_private *dev_priv)
{
DRM_DEBUG_KMS("Disabling DC6\n");
+ /* Wa Display #1183: skl,kbl,cfl */
+ if (IS_GEN9_BC(dev_priv))
+ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |
+ SKL_SELECT_ALTERNATE_DC_EXIT);
+
gen9_set_dc_state(dev_priv, DC_STATE_DISABLE);
}
--
2.14.3
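Condensed, the sequence the diff implements for the two cases reads
roughly as follows (illustrative only; vco_changing stands in for the
dev_priv->cdclk.hw.vco != vco checks, and the PCU handshake is elided):

    cdclk_ctl = I915_READ(CDCLK_CTL);

    if (vco_changing) {                     /* case 1: (re)enabling DPLL0 */
            cdclk_ctl &= ~(CDCLK_FREQ_SEL_MASK | CDCLK_FREQ_DECIMAL_MASK);
            cdclk_ctl |= freq_select | skl_cdclk_decimal(cdclk);
            I915_WRITE(CDCLK_CTL, cdclk_ctl);
    }

    cdclk_ctl |= CDCLK_DIVMUX_CD_OVERRIDE;  /* freeze the CD divider mux */
    I915_WRITE(CDCLK_CTL, cdclk_ctl);
    POSTING_READ(CDCLK_CTL);

    if (vco_changing)
            skl_dpll0_enable(dev_priv, vco);

    cdclk_ctl &= ~(CDCLK_FREQ_SEL_MASK | CDCLK_FREQ_DECIMAL_MASK);
    I915_WRITE(CDCLK_CTL, cdclk_ctl);       /* case 2: program new frequency */
    cdclk_ctl |= freq_select | skl_cdclk_decimal(cdclk);
    I915_WRITE(CDCLK_CTL, cdclk_ctl);

    cdclk_ctl &= ~CDCLK_DIVMUX_CD_OVERRIDE; /* release the override */
    I915_WRITE(CDCLK_CTL, cdclk_ctl);
    POSTING_READ(CDCLK_CTL);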
This is a note to let you know that I've just added the patch titled
tcp_bbr: reset long-term bandwidth sampling on loss recovery undo
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp_bbr-reset-long-term-bandwidth-sampling-on-loss-recovery-undo.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 600647d467c6d04b3954b41a6ee1795b5ae00550 Mon Sep 17 00:00:00 2001
From: Neal Cardwell <ncardwell(a)google.com>
Date: Thu, 7 Dec 2017 12:43:32 -0500
Subject: tcp_bbr: reset long-term bandwidth sampling on loss recovery undo
From: Neal Cardwell <ncardwell(a)google.com>
commit 600647d467c6d04b3954b41a6ee1795b5ae00550 upstream.
Fix BBR so that upon notification of a loss recovery undo BBR resets
long-term bandwidth sampling.
Under high reordering, reordering events can be interpreted as loss.
If the reordering and spurious loss estimates are high enough, this
can cause BBR to spuriously estimate that we are seeing loss rates
high enough to trigger long-term bandwidth estimation. To avoid that
problem, this commit resets long-term bandwidth sampling on loss
recovery undo events.
Signed-off-by: Neal Cardwell <ncardwell(a)google.com>
Reviewed-by: Yuchung Cheng <ycheng(a)google.com>
Acked-by: Soheil Hassas Yeganeh <soheil(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_bbr.c | 1 +
1 file changed, 1 insertion(+)
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -847,6 +847,7 @@ static u32 bbr_undo_cwnd(struct sock *sk
bbr->full_bw = 0; /* spurious slow-down; reset full pipe detection */
bbr->full_bw_cnt = 0;
+ bbr_reset_lt_bw_sampling(sk);
return tcp_sk(sk)->snd_cwnd;
}
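For reference, the helper being added here clears BBR's long-term state
along these lines (simplified sketch; the real bbr_reset_lt_bw_sampling()
in tcp_bbr.c also resets the sampling-interval bookkeeping):

    bbr->lt_bw = 0;                 /* forget the long-term bw estimate */
    bbr->lt_use_bw = 0;             /* stop pacing from that estimate */
    bbr->lt_is_sampling = false;    /* restart sampling from scratch */
    bbr->lt_rtt_cnt = 0;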
Patches currently in stable-queue which might be from ncardwell(a)google.com are
queue-4.9/tcp_bbr-reset-full-pipe-detection-on-loss-recovery-undo.patch
queue-4.9/tcp_bbr-reset-long-term-bandwidth-sampling-on-loss-recovery-undo.patch
This is a note to let you know that I've just added the patch titled
tcp_bbr: reset full pipe detection on loss recovery undo
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp_bbr-reset-full-pipe-detection-on-loss-recovery-undo.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 2f6c498e4f15d27852c04ed46d804a39137ba364 Mon Sep 17 00:00:00 2001
From: Neal Cardwell <ncardwell(a)google.com>
Date: Thu, 7 Dec 2017 12:43:31 -0500
Subject: tcp_bbr: reset full pipe detection on loss recovery undo
From: Neal Cardwell <ncardwell(a)google.com>
commit 2f6c498e4f15d27852c04ed46d804a39137ba364 upstream.
Fix BBR so that upon notification of a loss recovery undo BBR resets
the full pipe detection (STARTUP exit) state machine.
Under high reordering, reordering events can be interpreted as loss.
If the reordering and spurious loss estimates are high enough, this
could previously cause BBR to spuriously estimate that the pipe is
full.
Since spurious loss recovery means that our overall sending will have
slowed down spuriously, this commit gives a flow more time to probe
robustly for bandwidth and decide the pipe is really full.
Signed-off-by: Neal Cardwell <ncardwell(a)google.com>
Reviewed-by: Yuchung Cheng <ycheng(a)google.com>
Acked-by: Soheil Hassas Yeganeh <soheil(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_bbr.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -843,6 +843,10 @@ static u32 bbr_sndbuf_expand(struct sock
*/
static u32 bbr_undo_cwnd(struct sock *sk)
{
+ struct bbr *bbr = inet_csk_ca(sk);
+
+ bbr->full_bw = 0; /* spurious slow-down; reset full pipe detection */
+ bbr->full_bw_cnt = 0;
return tcp_sk(sk)->snd_cwnd;
}
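Taken together with the companion patch earlier in this queue,
bbr_undo_cwnd() ends up as below (recombined from the two diffs):

    static u32 bbr_undo_cwnd(struct sock *sk)
    {
            struct bbr *bbr = inet_csk_ca(sk);

            bbr->full_bw = 0;   /* spurious slow-down; reset full pipe detection */
            bbr->full_bw_cnt = 0;
            bbr_reset_lt_bw_sampling(sk);   /* drop spurious loss-rate evidence */
            return tcp_sk(sk)->snd_cwnd;
    }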
Patches currently in stable-queue which might be from ncardwell(a)google.com are
queue-4.9/tcp_bbr-reset-full-pipe-detection-on-loss-recovery-undo.patch
queue-4.9/tcp_bbr-reset-long-term-bandwidth-sampling-on-loss-recovery-undo.patch