This series addresses one CVE: CVE-2018-5391, kernel: IP fragments with
random offsets allow a remote denial of service (FragmentSmack).
The upstream fix is a merge commit in the Linux kernel tree:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
consisting of the following commits:
7969e5c40dfd04799d4341f1b7cd266b6e47f227 ip: discard IPv4 datagrams with overlapping segments.
385114dec8a49b5e5945e77ba7de6356106713f4 net: modify skb_rbtree_purge to return the truesize of all purged skbs.
fa0f527358bd900ef92f925878ed6bfbd51305cc ip: use rb trees for IP frag queue.
All of the above patches use rb trees to fix this CVE, which is very
similar to CVE-2018-5390, whose fixes I backported to the stable 4.4
branch last year. This patchset backports the patches needed to fix
CVE-2018-5391 using rb trees.
v1->v2: in the patch "ipv6: defrag: drop non-last frags smaller than
min mtu", fix the incorrect return value of nf_ct_frag6_gather().
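As a side note on the mechanism: the rb-tree patches keep received
fragments sorted by offset and drop the whole queue as soon as a new
fragment overlaps an existing one. A minimal user-space sketch of that
overlap rule (not the kernel code; all names here are hypothetical):

#include <stdbool.h>
#include <stddef.h>

struct frag {
        unsigned int offset;    /* start of the fragment in the datagram */
        unsigned int end;       /* one past its last byte */
};

/* frags[] is sorted by offset, the way the rb tree keeps them. */
static bool frag_overlaps(const struct frag *frags, size_t n,
                          const struct frag *incoming)
{
        for (size_t i = 0; i < n; i++) {
                /* half-open ranges [a,b) and [c,d) overlap iff a < d && c < b */
                if (incoming->offset < frags[i].end &&
                    frags[i].offset < incoming->end)
                        return true;    /* overlap: discard the whole datagram */
        }
        return false;
}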
Dan Carpenter (1):
ipv4: frags: precedence bug in ip_expire()
Eric Dumazet (2):
net: speed up skb_rbtree_purge()
inet: frags: get rid of inet_frag_evicting()
Florian Westphal (1):
ipv6: defrag: drop non-last frags smaller than min mtu
Michal Kubecek (1):
net: ipv4: do not handle duplicate fragments as overlapping
Peter Oskolkov (5):
ip: discard IPv4 datagrams with overlapping segments.
net: modify skb_rbtree_purge to return the truesize of all purged
skbs.
ip: use rb trees for IP frag queue.
ip: add helpers to process in-order fragments faster.
ip: process in-order fragments efficiently
Taehee Yoo (1):
ip: frags: fix crash in ip_do_fragment()
include/linux/skbuff.h | 4 +-
include/net/inet_frag.h | 12 +-
include/uapi/linux/snmp.h | 1 +
net/core/skbuff.c | 17 +-
net/ipv4/inet_fragment.c | 16 +-
net/ipv4/ip_fragment.c | 410 +++++++++++++++++++-------------
net/ipv4/proc.c | 1 +
net/ipv6/netfilter/nf_conntrack_reasm.c | 6 +
net/ipv6/reassembly.c | 9 +-
9 files changed, 292 insertions(+), 184 deletions(-)
--
1.8.3.1
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e0a352fabce61f730341d119fbedf71ffdb8663f Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david(a)redhat.com>
Date: Fri, 1 Feb 2019 14:21:19 -0800
Subject: [PATCH] mm: migrate: don't rely on __PageMovable() of newpage after
unlocking it
We had a race in the old balloon compaction code before b1123ea6d3b3
("mm: balloon: use general non-lru movable page feature") refactored it
that became visible after backporting 195a8c43e93d ("virtio-balloon:
deflate via a page list") without the refactoring.
The bug existed from commit d6d86c0a7f8d ("mm/balloon_compaction:
redesign ballooned pages management") till b1123ea6d3b3 ("mm: balloon:
use general non-lru movable page feature"). d6d86c0a7f8d
("mm/balloon_compaction: redesign ballooned pages management") was
backported to 3.12, so the broken kernels are stable kernels [3.12 -
4.7].
There was a subtle race between dropping the page lock of the newpage in
__unmap_and_move() and checking for __is_movable_balloon_page(newpage).
Just after dropping this page lock, virtio-balloon could go ahead and
deflate the newpage, effectively dequeueing it and clearing PageBalloon,
in turn making __is_movable_balloon_page(newpage) fail.
This resulted in dropping the reference of the newpage via
putback_lru_page(newpage) instead of put_page(newpage), leading to
page->lru getting modified and a !LRU page ending up in the LRU lists.
With 195a8c43e93d ("virtio-balloon: deflate via a page list")
backported, one would suddenly get corrupted lists in
release_pages_balloon():
- WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0
- list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100
Nowadays this race is no longer possible, but it is hidden behind very
ugly handling of __ClearPageMovable() and __PageMovable().
__ClearPageMovable() will not make __PageMovable() fail, only
PageMovable(). So the new check (__PageMovable(newpage)) will still
hold even after newpage was dequeued by virtio-balloon.
If anybody would ever change that special handling, the BUG would be
introduced again. So instead, make it explicit and use the information
of the original isolated page before migration.
This patch can be backported fairly easily to stable kernels (in
contrast to the refactoring).
Link: http://lkml.kernel.org/r/20190129233217.10747-1-david@redhat.com
Fixes: d6d86c0a7f8d ("mm/balloon_compaction: redesign ballooned pages management")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reported-by: Vratislav Bendel <vbendel(a)redhat.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Rafael Aquini <aquini(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Jan Kara <jack(a)suse.cz>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Dominik Brodowski <linux(a)dominikbrodowski.net>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Vratislav Bendel <vbendel(a)redhat.com>
Cc: Rafael Aquini <aquini(a)redhat.com>
Cc: Konstantin Khlebnikov <k.khlebnikov(a)samsung.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: <stable(a)vger.kernel.org> [3.12 - 4.7]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
diff --git a/mm/migrate.c b/mm/migrate.c
index 712b231a7376..d4fd680be3b0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1130,10 +1130,13 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
* If migration is successful, decrease refcount of the newpage
* which will not free the page because new page owner increased
* refcounter. As well, if it is LRU page, add the page to LRU
- * list in here.
+ * list in here. Use the old state of the isolated source page to
+ * determine if we migrated a LRU page. newpage was already unlocked
+ * and possibly modified by its owner - don't rely on the page
+ * state.
*/
if (rc == MIGRATEPAGE_SUCCESS) {
- if (unlikely(__PageMovable(newpage)))
+ if (unlikely(!is_lru))
put_page(newpage);
else
putback_lru_page(newpage);
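For context on the check above: is_lru is captured from the source page
near the top of __unmap_and_move(), before any page lock is dropped,
roughly as follows (a sketch of the surrounding upstream code, not part
of this diff):

        /* source page state at isolation time, safe to consult later */
        bool is_lru = !__PageMovable(page);

Because it records the isolated source page's state, the
MIGRATEPAGE_SUCCESS path no longer depends on newpage after it was
unlocked.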
Some H5 boards seem to not have proper trace lengths for eMMC to be able
to use the default setting for the delay chains under HS-DDR mode. These
include the Bananapi M2+ H5 and NanoPi NEO Core2. However, the Libre
Computer ALL-H3-CC-H5 works just fine.
For the H5 (at least for now), default to not enabling HS-DDR modes in
the driver, and expect the device tree to signal HS-DDR capability on
boards that work.
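On boards with good traces, the opt-in would use the generic MMC
bindings that mmc_of_parse() translates into MMC_CAP_1_8V_DDR /
MMC_CAP_3_3V_DDR, along the lines of this hypothetical device tree
fragment (node label and property choice are illustrative):

&mmc2 {
        /* eMMC signal traces verified; allow HS-DDR at 1.8V signaling */
        mmc-ddr-1_8v;
};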
Reported-by: Chris Blake <chrisrblake93(a)gmail.com>
Fixes: 07bafc1e3536 ("mmc: sunxi: Use new timing mode for A64 eMMC controller")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Chen-Yu Tsai <wens(a)csie.org>
---
drivers/mmc/host/sunxi-mmc.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
index 279e326e397e..7415af8c8ff6 100644
--- a/drivers/mmc/host/sunxi-mmc.c
+++ b/drivers/mmc/host/sunxi-mmc.c
@@ -1399,7 +1399,16 @@ static int sunxi_mmc_probe(struct platform_device *pdev)
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ;
- if (host->cfg->clk_delays || host->use_new_timings)
+ /*
+ * Some H5 devices do not have signal traces precise enough to
+ * use HS DDR mode for their eMMC chips.
+ *
+ * We still enable HS DDR modes for all the other controller
+ * variants that support them.
+ */
+ if ((host->cfg->clk_delays || host->use_new_timings) &&
+ !of_device_is_compatible(pdev->dev.of_node,
+ "allwinner,sun50i-h5-emmc"))
mmc->caps |= MMC_CAP_1_8V_DDR | MMC_CAP_3_3V_DDR;
ret = mmc_of_parse(mmc);
--
2.20.1
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e0a352fabce61f730341d119fbedf71ffdb8663f Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david(a)redhat.com>
Date: Fri, 1 Feb 2019 14:21:19 -0800
Subject: [PATCH] mm: migrate: don't rely on __PageMovable() of newpage after
unlocking it
We had a race in the old balloon compaction code before b1123ea6d3b3
("mm: balloon: use general non-lru movable page feature") refactored it
that became visible after backporting 195a8c43e93d ("virtio-balloon:
deflate via a page list") without the refactoring.
The bug existed from commit d6d86c0a7f8d ("mm/balloon_compaction:
redesign ballooned pages management") till b1123ea6d3b3 ("mm: balloon:
use general non-lru movable page feature"). d6d86c0a7f8d
("mm/balloon_compaction: redesign ballooned pages management") was
backported to 3.12, so the broken kernels are stable kernels [3.12 -
4.7].
There was a subtle race between dropping the page lock of the newpage in
__unmap_and_move() and checking for __is_movable_balloon_page(newpage).
Just after dropping this page lock, virtio-balloon could go ahead and
deflate the newpage, effectively dequeueing it and clearing PageBalloon,
in turn making __is_movable_balloon_page(newpage) fail.
This resulted in dropping the reference of the newpage via
putback_lru_page(newpage) instead of put_page(newpage), leading to
page->lru getting modified and a !LRU page ending up in the LRU lists.
With 195a8c43e93d ("virtio-balloon: deflate via a page list")
backported, one would suddenly get corrupted lists in
release_pages_balloon():
- WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0
- list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100
Nowadays this race is no longer possible, but it is hidden behind very
ugly handling of __ClearPageMovable() and __PageMovable().
__ClearPageMovable() will not make __PageMovable() fail, only
PageMovable(). So the new check (__PageMovable(newpage)) will still
hold even after newpage was dequeued by virtio-balloon.
If anybody would ever change that special handling, the BUG would be
introduced again. So instead, make it explicit and use the information
of the original isolated page before migration.
This patch can be backported fairly easily to stable kernels (in
contrast to the refactoring).
Link: http://lkml.kernel.org/r/20190129233217.10747-1-david@redhat.com
Fixes: d6d86c0a7f8d ("mm/balloon_compaction: redesign ballooned pages management")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reported-by: Vratislav Bendel <vbendel(a)redhat.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Rafael Aquini <aquini(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Jan Kara <jack(a)suse.cz>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Dominik Brodowski <linux(a)dominikbrodowski.net>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Vratislav Bendel <vbendel(a)redhat.com>
Cc: Rafael Aquini <aquini(a)redhat.com>
Cc: Konstantin Khlebnikov <k.khlebnikov(a)samsung.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: <stable(a)vger.kernel.org> [3.12 - 4.7]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
diff --git a/mm/migrate.c b/mm/migrate.c
index 712b231a7376..d4fd680be3b0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1130,10 +1130,13 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
* If migration is successful, decrease refcount of the newpage
* which will not free the page because new page owner increased
* refcounter. As well, if it is LRU page, add the page to LRU
- * list in here.
+ * list in here. Use the old state of the isolated source page to
+ * determine if we migrated a LRU page. newpage was already unlocked
+ * and possibly modified by its owner - don't rely on the page
+ * state.
*/
if (rc == MIGRATEPAGE_SUCCESS) {
- if (unlikely(__PageMovable(newpage)))
+ if (unlikely(!is_lru))
put_page(newpage);
else
putback_lru_page(newpage);
From: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
commit 726e41097920a73e4c7c33385dcc0debb1281e18 upstream
For devices with a class, we create a "glue" directory between
the parent device and the new device with the class name.
This directory is never explicitly removed when empty, however; that
is left to the implicit sysfs removal done by kobject_release() when
the object loses its last reference via kobject_put().
This is problematic because as long as it's not been removed from
sysfs, it is still present in the class kset and in sysfs directory
structure.
The presence in the class kset exposes a use-after-free bug fixed by
the previous patch, but the presence in sysfs means that until the
kobject is released, which can take a while (especially with kobject
debugging), any attempt at re-creating the glue dir, such as binding a
new device for that class/parent pair, will result in a sysfs duplicate
file name error.
This fixes it by instead doing an explicit kobject_del() when
the glue dir is empty, by keeping track of the number of
child devices of the glue dir.
This is made easy by the fact that all glue dir operations are
done with a global mutex, and there's already a function
(cleanup_glue_dir) called in all the right places taking that
mutex that can be enhanced for this. It appears that this was
in fact the intent of the function, but the implementation was
wrong.
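With the change applied, cleanup_glue_dir() would read roughly as
follows (reconstructed from the diff below plus the surrounding 4.4
code; the live_in_glue_dir() guard is pre-existing):

static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
{
        /* see if we live in a "glue" directory */
        if (!live_in_glue_dir(glue_dir, dev))
                return;

        mutex_lock(&gdp_mutex);
        /* drop the empty glue dir from sysfs right away rather than
         * waiting for kobject_release(), so a re-create cannot collide */
        if (!kobject_has_children(glue_dir))
                kobject_del(glue_dir);
        kobject_put(glue_dir);
        mutex_unlock(&gdp_mutex);
}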
Backport Note: kref_read() is not present in 4.4. Hence,
use atomic_read(&kref.refcount) instead of kref_read(&kref).
Signed-off-by: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Zubin Mithra <zsm(a)chromium.org>
---
drivers/base/core.c | 2 ++
include/linux/kobject.h | 17 +++++++++++++++++
2 files changed, 19 insertions(+)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 049ccc070ce56..cb5718d2669ee 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -862,6 +862,8 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
return;
mutex_lock(&gdp_mutex);
+ if (!kobject_has_children(glue_dir))
+ kobject_del(glue_dir);
kobject_put(glue_dir);
mutex_unlock(&gdp_mutex);
}
diff --git a/include/linux/kobject.h b/include/linux/kobject.h
index e6284591599ec..5957c6a3fd7f9 100644
--- a/include/linux/kobject.h
+++ b/include/linux/kobject.h
@@ -113,6 +113,23 @@ extern void kobject_put(struct kobject *kobj);
extern const void *kobject_namespace(struct kobject *kobj);
extern char *kobject_get_path(struct kobject *kobj, gfp_t flag);
+/**
+ * kobject_has_children - Returns whether a kobject has children.
+ * @kobj: the object to test
+ *
+ * This will return whether a kobject has other kobjects as children.
+ *
+ * It does NOT account for the presence of attribute files, only sub
+ * directories. It also assumes there is no concurrent addition or
+ * removal of such children, and thus relies on external locking.
+ */
+static inline bool kobject_has_children(struct kobject *kobj)
+{
+ WARN_ON_ONCE(atomic_read(&kobj->kref.refcount) == 0);
+
+ return kobj->sd && kobj->sd->dir.subdirs;
+}
+
struct kobj_type {
void (*release)(struct kobject *kobj);
const struct sysfs_ops *sysfs_ops;
--
2.20.1.495.gaa96b0ce6b-goog
of_cpu_device_node_get() increases the refcount of the device_node, so
it is necessary to call of_node_put() once the node is no longer needed
to release that refcount.
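The pattern being restored, as a minimal sketch (CPU number and error
handling are illustrative):

        struct device_node *np = of_cpu_device_node_get(0); /* takes a reference */

        if (!np)
                return -ENODEV;

        /* ... look up clocks and OPPs under np ... */

        of_node_put(np);        /* balance the reference taken above */
        return 0;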
Fixes: 9eb15dbbfa1a2 ("cpufreq: Add cpufreq driver for Tegra124")
Cc: <stable(a)vger.kernel.org> # 4.4+
Signed-off-by: Yangtao Li <tiny.windzz(a)gmail.com>
---
v2:
- move of_node_put() to the very end
---
drivers/cpufreq/tegra124-cpufreq.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
index 43530254201a..4bb154f6c54c 100644
--- a/drivers/cpufreq/tegra124-cpufreq.c
+++ b/drivers/cpufreq/tegra124-cpufreq.c
@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, priv);
+ of_node_put(np);
+
return 0;
out_switch_to_pllx:
--
2.17.0
Hi,
some of my boot tests in v4.19.y and v4.20.y currently generate tracebacks
as announced in commit 7c528e457d53 ("of: overlay: add missing of_node_put()
after add new node to changeset"). This commit has been applied to both
stable releases. The following patch series fixes the problem.
v4.19.y:
a613b26a5013 of: Convert to using %pOFn instead of device_node.name
144552c78692 of: overlay: add tests to validate kfrees from overlay removal
5b2c2f5a0ea3 of: overlay: add missing of_node_get() in __of_attach_node_sysfs
6b4955ba7bc0 of: overlay: use prop add changeset entry for property in new nodes
8814dc46bd9e of: overlay: do not duplicate properties from overlay for new nodes
v4.20.y:
144552c78692 of: overlay: add tests to validate kfrees from overlay removal
5b2c2f5a0ea3 of: overlay: add missing of_node_get() in __of_attach_node_sysfs
6b4955ba7bc0 of: overlay: use prop add changeset entry for property in new nodes
8814dc46bd9e of: overlay: do not duplicate properties from overlay for new nodes
Not all of those patches fix bugs, but they are required for the series
to apply cleanly.
Please consider applying the above patches to both releases.
Thanks,
Guenter
Commit-ID: 9dff0aa95a324e262ffb03f425d00e4751f3294e
Gitweb: https://git.kernel.org/tip/9dff0aa95a324e262ffb03f425d00e4751f3294e
Author: Mark Rutland <mark.rutland(a)arm.com>
AuthorDate: Thu, 10 Jan 2019 14:27:45 +0000
Committer: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Mon, 4 Feb 2019 08:45:25 +0100
perf/core: Don't WARN() for impossible ring-buffer sizes
The perf tool uses /proc/sys/kernel/perf_event_mlock_kb to determine how
large its ringbuffer mmap should be. This can be configured to arbitrary
values, which can be larger than the maximum possible allocation from
kmalloc.
When this is configured to a suitably large value (e.g. thanks to the
perf fuzzer), attempting to use perf record triggers a WARN_ON_ONCE() in
__alloc_pages_nodemask():
WARNING: CPU: 2 PID: 5666 at mm/page_alloc.c:4511 __alloc_pages_nodemask+0x3f8/0xbc8
Let's avoid this by checking that the requested allocation is possible
before calling kzalloc.
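A user-space sketch of the arithmetic behind the new check (MAX_ORDER
and the header size are illustrative stand-ins for the kernel values):

#include <stdio.h>

#define MAX_ORDER 11    /* typical page-allocator limit */

/* smallest k such that 2^k >= n, like the kernel's order_base_2() */
static unsigned int order_base_2(unsigned long n)
{
        unsigned int k = 0;

        while ((1UL << k) < n)
                k++;
        return k;
}

int main(void)
{
        unsigned long nr_pages = 1UL << 24;     /* absurdly large mmap request */
        unsigned long size = 128 + nr_pages * sizeof(void *);

        if (order_base_2(size) >= MAX_ORDER)
                printf("reject: order %u >= MAX_ORDER\n", order_base_2(size));
        return 0;
}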
Reported-by: Julien Thierry <julien.thierry(a)arm.com>
Signed-off-by: Mark Rutland <mark.rutland(a)arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Reviewed-by: Julien Thierry <julien.thierry(a)arm.com>
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Cc: Jiri Olsa <jolsa(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Namhyung Kim <namhyung(a)kernel.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: <stable(a)vger.kernel.org>
Link: https://lkml.kernel.org/r/20190110142745.25495-1-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
---
kernel/events/ring_buffer.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 4a9937076331..309ef5a64af5 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -734,6 +734,9 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
size = sizeof(struct ring_buffer);
size += nr_pages * sizeof(void *);
+ if (order_base_2(size) >= MAX_ORDER)
+ goto fail;
+
rb = kzalloc(size, GFP_KERNEL);
if (!rb)
goto fail;
Commit-ID: 602cae04c4864bb3487dfe4c2126c8d9e7e1614a
Gitweb: https://git.kernel.org/tip/602cae04c4864bb3487dfe4c2126c8d9e7e1614a
Author: Peter Zijlstra <peterz(a)infradead.org>
AuthorDate: Wed, 19 Dec 2018 17:53:50 +0100
Committer: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Mon, 4 Feb 2019 08:44:51 +0100
perf/x86/intel: Delay memory deallocation until x86_pmu_dead_cpu()
intel_pmu_cpu_prepare() allocated memory for ->shared_regs among other
members of struct cpu_hw_events. This memory is released in
intel_pmu_cpu_dying(), which is wrong. The counterpart of the
intel_pmu_cpu_prepare() callback is x86_pmu_dead_cpu().
Otherwise if the CPU fails on the UP path between CPUHP_PERF_X86_PREPARE
and CPUHP_AP_PERF_X86_STARTING then it won't release the memory but
allocate new memory on the next attempt to online the CPU (leaking the
old memory).
Also, if the CPU down path fails between CPUHP_AP_PERF_X86_STARTING and
CPUHP_PERF_X86_PREPARE then the CPU will go back online but never
allocate the memory that was released in x86_pmu_dying_cpu().
Make the memory allocation/free symmetrical in regard to the CPU hotplug
notifier by moving the deallocation to intel_pmu_cpu_dead().
This started in commit:
a7e3ed1e47011 ("perf: Add support for supplementary event registers").
In principle the bug was introduced in v2.6.39 (!), but it will almost
certainly not backport cleanly across the big CPU hotplug rewrite between v4.7-v4.15...
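The invariant being restored, sketched as a minimal hotplug client (all
names and the PAGE_SIZE allocation are illustrative, not the PMU code;
the pair would be registered together, e.g. via
cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "demo:prepare", demo_prepare,
demo_dead)):

#include <linux/cpuhotplug.h>
#include <linux/percpu.h>
#include <linux/slab.h>

static DEFINE_PER_CPU(void *, demo_buf);

/* PREPARE step: may allocate; runs before the CPU comes up */
static int demo_prepare(unsigned int cpu)
{
        per_cpu(demo_buf, cpu) = kzalloc(PAGE_SIZE, GFP_KERNEL);
        return per_cpu(demo_buf, cpu) ? 0 : -ENOMEM;
}

/* teardown partner of PREPARE (the "dead" callback): memory must be
 * freed here, not in a DYING/STARTING callback, or a failed bring-up
 * leaks it and an aborted tear-down leaves the CPU running without it */
static int demo_dead(unsigned int cpu)
{
        kfree(per_cpu(demo_buf, cpu));
        per_cpu(demo_buf, cpu) = NULL;
        return 0;
}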
[ bigeasy: Added patch description. ]
[ mingo: Added backporting guidance. ]
Reported-by: He Zhe <zhe.he(a)windriver.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org> # With developer hat on
Signed-off-by: Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org> # With maintainer hat on
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Cc: Jiri Olsa <jolsa(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: acme(a)kernel.org
Cc: bp(a)alien8.de
Cc: hpa(a)zytor.com
Cc: jolsa(a)kernel.org
Cc: kan.liang(a)linux.intel.com
Cc: namhyung(a)kernel.org
Cc: <stable(a)vger.kernel.org>
Fixes: a7e3ed1e47011 ("perf: Add support for supplementary event registers").
Link: https://lkml.kernel.org/r/20181219165350.6s3jvyxbibpvlhtq@linutronix.de
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
---
arch/x86/events/intel/core.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 40e12cfc87f6..daafb893449b 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3558,6 +3558,14 @@ static void free_excl_cntrs(int cpu)
}
static void intel_pmu_cpu_dying(int cpu)
+{
+ fini_debug_store_on_cpu(cpu);
+
+ if (x86_pmu.counter_freezing)
+ disable_counter_freeze();
+}
+
+static void intel_pmu_cpu_dead(int cpu)
{
struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
struct intel_shared_regs *pc;
@@ -3570,11 +3578,6 @@ static void intel_pmu_cpu_dying(int cpu)
}
free_excl_cntrs(cpu);
-
- fini_debug_store_on_cpu(cpu);
-
- if (x86_pmu.counter_freezing)
- disable_counter_freeze();
}
static void intel_pmu_sched_task(struct perf_event_context *ctx,
@@ -3663,6 +3666,7 @@ static __initconst const struct x86_pmu core_pmu = {
.cpu_prepare = intel_pmu_cpu_prepare,
.cpu_starting = intel_pmu_cpu_starting,
.cpu_dying = intel_pmu_cpu_dying,
+ .cpu_dead = intel_pmu_cpu_dead,
};
static struct attribute *intel_pmu_attrs[];
@@ -3703,6 +3707,8 @@ static __initconst const struct x86_pmu intel_pmu = {
.cpu_prepare = intel_pmu_cpu_prepare,
.cpu_starting = intel_pmu_cpu_starting,
.cpu_dying = intel_pmu_cpu_dying,
+ .cpu_dead = intel_pmu_cpu_dead,
+
.guest_get_msrs = intel_guest_get_msrs,
.sched_task = intel_pmu_sched_task,
};