This is the start of the stable review cycle for the 4.9.167 release.
There are 56 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Wed Apr 3 17:00:20 UTC 2019.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.167-rc…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 4.9.167-rc1
Eric Biggers <ebiggers(a)google.com>
arm64: support keyctl() system call in 32-bit mode
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Revert "USB: core: only clean up what we allocated"
Mathias Nyman <mathias.nyman(a)linux.intel.com>
xhci: Fix port resume done detection for SS ports with LPM enabled
Radoslav Gerganov <rgerganov(a)vmware.com>
USB: gadget: f_hid: fix deadlock in f_hidg_write()
Sean Christopherson <sean.j.christopherson(a)intel.com>
KVM: x86: Emulate MSR_IA32_ARCH_CAPABILITIES on AMD hosts
Sean Christopherson <sean.j.christopherson(a)intel.com>
KVM: Reject device ioctls from processes other than the VM's creator
Thomas Gleixner <tglx(a)linutronix.de>
x86/smp: Enforce CONFIG_HOTPLUG_CPU when SMP=y
Thomas Gleixner <tglx(a)linutronix.de>
cpu/hotplug: Prevent crash when CPU bringup fails on CONFIG_HOTPLUG_CPU=n
Adrian Hunter <adrian.hunter(a)intel.com>
perf intel-pt: Fix TSC slip
Yasushi Asano <yasano(a)jp.adit-jv.com>
usb: host: xhci-rcar: Add XHCI_TRUST_TX_LENGTH quirk
Fabrizio Castro <fabrizio.castro(a)bp.renesas.com>
usb: common: Consider only available nodes for dr_mode
Axel Lin <axel.lin(a)ingics.com>
gpio: adnp: Fix testing wrong value in adnp_gpio_direction_input
YueHaibing <yuehaibing(a)huawei.com>
fs/proc/proc_sysctl.c: fix NULL pointer dereference in put_links
Wentao Wang <witallwang(a)gmail.com>
Disable kgdboc failed by echo space to /sys/module/kgdboc/parameters/kgdboc
Bjørn Mork <bjorn(a)mork.no>
USB: serial: option: add Olicard 600
Mans Rullgard <mans(a)mansr.com>
USB: serial: option: set driver_info for SIM5218 and compatibles
Lin Yi <teroincn(a)163.com>
USB: serial: mos7720: fix mos_parport refcount imbalance on error path
George McCollister <george.mccollister(a)gmail.com>
USB: serial: ftdi_sio: add additional NovaTech products
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
USB: serial: cp210x: add new device id
Hoan Nguyen An <na-hoan(a)jinso.co.jp>
serial: sh-sci: Fix setting SCSCR_TIE while transferring data
Aditya Pakki <pakki001(a)umn.edu>
serial: max310x: Fix to avoid potential NULL pointer dereference
Malcolm Priestley <tvboxspy(a)gmail.com>
staging: vt6655: Fix interrupt race condition on device start up.
Malcolm Priestley <tvboxspy(a)gmail.com>
staging: vt6655: Remove vif check from vnt_interrupt
Ian Abbott <abbotti(a)mev.co.uk>
staging: comedi: ni_mio_common: Fix divide-by-zero for DIO cmdtest
Kangjie Lu <kjlu(a)umn.edu>
tty: atmel_serial: fix a potential NULL pointer dereference
Steffen Maier <maier(a)linux.ibm.com>
scsi: zfcp: fix scsi_eh host reset with port_forced ERP for non-NPIV FCP devices
Steffen Maier <maier(a)linux.ibm.com>
scsi: zfcp: fix rport unblock if deleted SCSI devices on Scsi_Host
Martin K. Petersen <martin.petersen(a)oracle.com>
scsi: sd: Quiesce warning if device does not report optimal I/O size
Bart Van Assche <bvanassche(a)acm.org>
scsi: sd: Fix a race between closing an sd device and sd I/O
Tetsuo Handa <penguin-kernel(a)I-love.SAKURA.ne.jp>
fs/open.c: allow opening only regular files during execve()
Takashi Iwai <tiwai(a)suse.de>
ALSA: pcm: Don't suspend stream in unrecoverable PCM state
Takashi Iwai <tiwai(a)suse.de>
ALSA: pcm: Fix possible OOB access in PCM oss plugins
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
ALSA: seq: oss: Fix Spectre v1 vulnerability
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
ALSA: rawmidi: Fix potential Spectre v1 vulnerability
Christian Lamparter <chunkeey(a)gmail.com>
net: dsa: qca8k: remove leftover phy accessors
Olga Kornievskaia <kolga(a)netapp.com>
NFSv4.1 don't free interrupted slot on open
Naveen N. Rao <naveen.n.rao(a)linux.vnet.ibm.com>
powerpc: bpf: Fix generation of load/store DW instructions
Kohji Okuno <okuno.kohji(a)jp.panasonic.com>
ARM: imx6q: cpuidle: fix bug that CPU might not wake up at expected time
Andrea Righi <andrea.righi(a)canonical.com>
btrfs: raid56: properly unmap parity page in finish_parity_scrub()
Josef Bacik <josef(a)toxicpanda.com>
btrfs: remove WARN_ON in log_dir_items
Eric Dumazet <edumazet(a)google.com>
tun: add a missing rcu_read_unlock() in error path
Eric Dumazet <edumazet(a)google.com>
tun: properly test for IFF_UP
Finn Thain <fthain(a)telegraphics.com.au>
mac8390: Fix mmio access size probe
Xin Long <lucien.xin(a)gmail.com>
sctp: get sctphdr by offset in sctp_compute_cksum
Zhiqiang Liu <liuzhiqiang26(a)huawei.com>
vxlan: Don't call gro_cells_destroy() before device is unregistered
Eric Dumazet <edumazet(a)google.com>
tcp: do not use ipv6 header for ipv4 flow
Maxime Chevallier <maxime.chevallier(a)bootlin.com>
packets: Always register packet sk in the same order
Eric Dumazet <edumazet(a)google.com>
net: rose: fix a possible stack overflow
Christoph Paasch <cpaasch(a)apple.com>
net/packet: Set __GFP_NOWARN upon allocation in alloc_pg_vec
Bjorn Helgaas <bhelgaas(a)google.com>
mISDN: hfcpci: Test both vendor & device ID for Digium HFC4S
Eric Dumazet <edumazet(a)google.com>
dccp: do not use ipv6 header for ipv4 flow
Bhadram Varka <vbhadram(a)nvidia.com>
stmmac: copy unicast mac address to MAC registers
Johannes Berg <johannes.berg(a)intel.com>
cfg80211: size various nl80211 messages correctly
Christoffer Dall <christoffer.dall(a)linaro.org>
video: fbdev: Set pixclock = 0 in goldfishfb
Marcel Holtmann <marcel(a)holtmann.org>
Bluetooth: Verify that l2cap_get_conf_opt provides large enough buffer
Marcel Holtmann <marcel(a)holtmann.org>
Bluetooth: Check L2CAP option sizes returned from l2cap_get_conf_opt
-------------
Diffstat:
Documentation/virtual/kvm/api.txt | 16 +++--
Makefile | 4 +-
arch/arm/mach-imx/cpuidle-imx6q.c | 27 +++----
arch/arm64/Kconfig | 4 ++
arch/powerpc/include/asm/ppc-opcode.h | 2 +
arch/powerpc/net/bpf_jit.h | 17 ++---
arch/powerpc/net/bpf_jit32.h | 4 ++
arch/powerpc/net/bpf_jit64.h | 20 ++++++
arch/powerpc/net/bpf_jit_comp64.c | 12 ++--
arch/x86/Kconfig | 8 +--
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/vmx.c | 14 ----
arch/x86/kvm/x86.c | 12 ++++
drivers/gpio/gpio-adnp.c | 6 +-
drivers/isdn/hardware/mISDN/hfcmulti.c | 3 +-
drivers/net/dsa/qca8k.c | 18 -----
drivers/net/ethernet/8390/mac8390.c | 19 +++--
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 16 ++++-
drivers/net/tun.c | 16 +++--
drivers/net/vxlan.c | 4 +-
drivers/s390/scsi/zfcp_erp.c | 17 +++++
drivers/s390/scsi/zfcp_ext.h | 2 +
drivers/s390/scsi/zfcp_scsi.c | 4 ++
drivers/scsi/sd.c | 22 ++++--
drivers/staging/comedi/comedidev.h | 2 +
drivers/staging/comedi/drivers.c | 33 +++++++--
drivers/staging/comedi/drivers/ni_mio_common.c | 10 ++-
drivers/staging/vt6655/device_main.c | 11 ++-
drivers/tty/serial/atmel_serial.c | 4 ++
drivers/tty/serial/kgdboc.c | 4 +-
drivers/tty/serial/max310x.c | 2 +
drivers/tty/serial/sh-sci.c | 12 +---
drivers/usb/common/common.c | 2 +
drivers/usb/core/config.c | 9 +--
drivers/usb/gadget/function/f_hid.c | 6 +-
drivers/usb/host/xhci-rcar.c | 1 +
drivers/usb/host/xhci-ring.c | 9 ++-
drivers/usb/host/xhci.h | 1 +
drivers/usb/serial/cp210x.c | 1 +
drivers/usb/serial/ftdi_sio.c | 2 +
drivers/usb/serial/ftdi_sio_ids.h | 4 +-
drivers/usb/serial/mos7720.c | 4 +-
drivers/usb/serial/option.c | 13 ++--
drivers/video/fbdev/goldfishfb.c | 2 +-
fs/btrfs/raid56.c | 3 +-
fs/btrfs/tree-log.c | 11 ++-
fs/nfs/nfs4proc.c | 3 +-
fs/open.c | 6 ++
fs/proc/proc_sysctl.c | 3 +-
include/net/sctp/checksum.h | 2 +-
include/net/sock.h | 6 ++
kernel/cpu.c | 20 +++++-
net/bluetooth/l2cap_core.c | 83 ++++++++++++++--------
net/dccp/ipv6.c | 4 +-
net/ipv6/tcp_ipv6.c | 8 +--
net/packet/af_packet.c | 4 +-
net/rose/rose_subr.c | 21 +++---
net/wireless/nl80211.c | 16 ++---
sound/core/oss/pcm_oss.c | 43 +++++------
sound/core/pcm_native.c | 9 ++-
sound/core/rawmidi.c | 2 +
sound/core/seq/oss/seq_oss_synth.c | 7 +-
.../perf/util/intel-pt-decoder/intel-pt-decoder.c | 20 +++---
virt/kvm/kvm_main.c | 3 +
64 files changed, 422 insertions(+), 252 deletions(-)
When doing re-add, we need to ensure rdev->mddev->pers is not NULL,
to avoid a potential NULL pointer dereference in the following
add_bound_rdev().
Fixes: a6da4ef85cef ("md: re-add a failed disk")
Cc: Xiao Ni <xni(a)redhat.com>
Cc: NeilBrown <neilb(a)suse.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Yufen Yu <yuyufen(a)huawei.com>
---
drivers/md/md.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 875b29ba5926..66b6bdf9f364 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2859,8 +2859,10 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
err = 0;
}
} else if (cmd_match(buf, "re-add")) {
- if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
- rdev->saved_raid_disk >= 0) {
+ if (!rdev->mddev->pers)
+ err = -EINVAL;
+ else if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
+ rdev->saved_raid_disk >= 0) {
/* clear_bit is performed _after_ all the devices
* have their local Faulty bit cleared. If any writes
* happen in the meantime in the local node, they
--
2.16.2.dirty
Since commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
memory.stat reporting") memcg dirty and writeback counters are managed
as:
1) per-memcg per-cpu values in range of [-32..32]
2) per-memcg atomic counter
When a per-cpu counter cannot fit in [-32..32] it's flushed to the
atomic. Stat readers only check the atomic.
Thus readers such as balance_dirty_pages() may see a nontrivial error
margin: 32 pages per cpu.
Assuming 100 cpus:
4k x86 page_size: 100 cpus * 32 pages * 4 KiB ~= 13 MiB error per memcg
64k ppc page_size: 100 cpus * 32 pages * 64 KiB = 200 MiB error per memcg
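The error is easy to reproduce in a minimal userspace model of the split
counter (a sketch, not kernel code: mod_state() loosely mirrors
__mod_memcg_state(), cpu_count[]/atomic_count are stand-ins for the
per-cpu and atomic halves, and the batch of 32 mirrors
MEMCG_CHARGE_BATCH):
#include <stdio.h>
#include <stdlib.h>
#define NCPU  100
#define BATCH 32	/* models MEMCG_CHARGE_BATCH */
static long cpu_count[NCPU];	/* per-cpu halves, bounded by BATCH */
static long atomic_count;	/* the only value stat readers consult */
static void mod_state(int cpu, long val)
{
	long x = cpu_count[cpu] + val;

	if (labs(x) > BATCH) {		/* flush the batch to the atomic */
		atomic_count += x;
		x = 0;
	}
	cpu_count[cpu] = x;		/* residue stays per-cpu, invisible */
}
int main(void)
{
	int cpu;

	/* each cpu dirties exactly BATCH pages: nothing is ever flushed */
	for (cpu = 0; cpu < NCPU; cpu++)
		mod_state(cpu, BATCH);
	printf("reader sees %ld, true count %d\n",
	       atomic_count, NCPU * BATCH);	/* 0 vs 3200 */
	return 0;
}
With 4k pages, those 3200 invisible dirty pages are the ~13 MiB error
above.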
Considering that dirty+writeback are used together for some decisions
the errors double.
This inaccuracy can lead to undeserved oom kills. One nasty case is
when all per-cpu counters hold positive values offsetting an atomic
negative value (i.e. per_cpu[*]=32, atomic=n_cpu*-32).
balance_dirty_pages() only consults the atomic and does not consider
throttling the next n_cpu*32 dirty pages. If the file_lru is in the
13..200 MiB range then there's absolutely no dirty throttling, which
burdens vmscan with only dirty+writeback pages thus resorting to oom
kill.
It could be argued that tiny containers are not supported, but it's more
subtle. It's the amount of space available for the file lru that matters.
If a container has memory.max minus 200 MiB of non-reclaimable memory, then it
will also suffer such oom kills on a 100 cpu machine.
The following test reliably ooms without this patch. This patch avoids
oom kills.
$ cat test
mount -t cgroup2 none /dev/cgroup
cd /dev/cgroup
echo +io +memory > cgroup.subtree_control
mkdir test
cd test
echo 10M > memory.max
(echo $BASHPID > cgroup.procs && exec /memcg-writeback-stress /foo)
(echo $BASHPID > cgroup.procs && exec dd if=/dev/zero of=/foo bs=2M count=100)
$ cat memcg-writeback-stress.c
/*
* Dirty pages from all but one cpu.
* Clean pages from the non dirtying cpu.
* This is to stress per cpu counter imbalance.
* On a 100 cpu machine:
* - per memcg per cpu dirty count is 32 pages for each of 99 cpus
* - per memcg atomic is -99*32 pages
* - thus the complete dirty limit: sum of all counters 0
* - balance_dirty_pages() only sees atomic count -99*32 pages, which
* it max()s to 0.
* - So a workload can dirty -99*32 pages before balance_dirty_pages()
* cares.
*/
#define _GNU_SOURCE
#include <err.h>
#include <fcntl.h>
#include <sched.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysinfo.h>
#include <sys/types.h>
#include <unistd.h>
static char *buf;
static int bufSize;
static void set_affinity(int cpu)
{
cpu_set_t affinity;
CPU_ZERO(&affinity);
CPU_SET(cpu, &affinity);
if (sched_setaffinity(0, sizeof(affinity), &affinity))
err(1, "sched_setaffinity");
}
static void dirty_on(int output_fd, int cpu)
{
int i, wrote;
set_affinity(cpu);
for (i = 0; i < 32; i++) {
for (wrote = 0; wrote < bufSize; ) {
int ret = write(output_fd, buf+wrote, bufSize-wrote);
if (ret == -1)
err(1, "write");
wrote += ret;
}
}
}
int main(int argc, char **argv)
{
int cpu, flush_cpu = 1, output_fd;
const char *output;
if (argc != 2)
errx(1, "usage: output_file");
output = argv[1];
bufSize = getpagesize();
buf = malloc(getpagesize());
if (buf == NULL)
errx(1, "malloc failed");
output_fd = open(output, O_CREAT|O_RDWR, 0644);
if (output_fd == -1)
err(1, "open(%s)", output);
for (cpu = 0; cpu < get_nprocs(); cpu++) {
if (cpu != flush_cpu)
dirty_on(output_fd, cpu);
}
set_affinity(flush_cpu);
if (fsync(output_fd))
err(1, "fsync(%s)", output);
if (close(output_fd))
err(1, "close(%s)", output);
free(buf);
}
Make balance_dirty_pages() and wb_over_bg_thresh() work harder to
collect exact per memcg counters. This avoids the aforementioned oom
kills.
This does not affect the overhead of memory.stat, which still reads the
single atomic counter.
Why not use percpu_counter? memcg already handles cpus going offline,
so no need for that overhead from percpu_counter. And the
percpu_counter spinlocks are more heavyweight than is required.
It probably also makes sense to use exact dirty and writeback counters
in memcg oom reports. But that is saved for later.
Cc: stable(a)vger.kernel.org # v4.16+
Signed-off-by: Greg Thelen <gthelen(a)google.com>
---
Changelog since v1:
- Move memcg_exact_page_state() into memcontrol.c.
- Unconditionally gather exact (per cpu) counters in mem_cgroup_wb_stats(), it's
not called in performance sensitive paths.
- Unconditionally check for underflow regardless of CONFIG_SMP. It's just
easier this way. This isn't performance sensitive.
- Add stable tag.
include/linux/memcontrol.h | 5 ++++-
mm/memcontrol.c | 20 ++++++++++++++++++--
2 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1f3d880b7ca1..dbb6118370c1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -566,7 +566,10 @@ struct mem_cgroup *lock_page_memcg(struct page *page);
void __unlock_page_memcg(struct mem_cgroup *memcg);
void unlock_page_memcg(struct page *page);
-/* idx can be of type enum memcg_stat_item or node_stat_item */
+/*
+ * idx can be of type enum memcg_stat_item or node_stat_item.
+ * Keep in sync with memcg_exact_page_state().
+ */
static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
int idx)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 532e0e2a4817..81a0d3914ec9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3882,6 +3882,22 @@ struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb)
return &memcg->cgwb_domain;
}
+/*
+ * idx can be of type enum memcg_stat_item or node_stat_item.
+ * Keep in sync with memcg_exact_page().
+ */
+static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
+{
+ long x = atomic_long_read(&memcg->stat[idx]);
+ int cpu;
+
+ for_each_online_cpu(cpu)
+ x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
+ if (x < 0)
+ x = 0;
+ return x;
+}
+
/**
* mem_cgroup_wb_stats - retrieve writeback related stats from its memcg
* @wb: bdi_writeback in question
@@ -3907,10 +3923,10 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
struct mem_cgroup *parent;
- *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
+ *pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
/* this should eventually include NR_UNSTABLE_NFS */
- *pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
+ *pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);
*pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
(1 << LRU_ACTIVE_FILE));
*pheadroom = PAGE_COUNTER_MAX;
--
2.21.0.392.gf8f6787159e-goog
The patch titled
Subject: mm: writeback: use exact memcg dirty counts
has been added to the -mm tree. Its filename is
writeback-use-exact-memcg-dirty-counts.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/writeback-use-exact-memcg-dirty-co…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/writeback-use-exact-memcg-dirty-co…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Greg Thelen <gthelen(a)google.com>
Subject: mm: writeback: use exact memcg dirty counts
Since a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
memory.stat reporting") memcg dirty and writeback counters are managed as:
1) per-memcg per-cpu values in range of [-32..32]
2) per-memcg atomic counter
When a per-cpu counter cannot fit in [-32..32] it's flushed to the atomic.
Stat readers only check the atomic. Thus readers such as
balance_dirty_pages() may see a nontrivial error margin: 32 pages per cpu.
Assuming 100 cpus:
4k x86 page_size: 100 cpus * 32 pages * 4 KiB ~= 13 MiB error per memcg
64k ppc page_size: 100 cpus * 32 pages * 64 KiB = 200 MiB error per memcg
Considering that dirty+writeback are used together for some decisions the
errors double.
This inaccuracy can lead to undeserved oom kills. One nasty case is when
all per-cpu counters hold positive values offsetting an atomic negative
value (i.e. per_cpu[*]=32, atomic=n_cpu*-32). balance_dirty_pages() only
consults the atomic and does not consider throttling the next n_cpu*32
dirty pages. If the file_lru is in the 13..200 MiB range then there's
absolutely no dirty throttling, which burdens vmscan with only
dirty+writeback pages thus resorting to oom kill.
It could be argued that tiny containers are not supported, but it's more
subtle. It's the amount of space available for the file lru that matters.
If a container has memory.max minus 200 MiB of non-reclaimable memory, then it
will also suffer such oom kills on a 100 cpu machine.
The following test reliably ooms without this patch. This patch avoids
oom kills.
$ cat test
mount -t cgroup2 none /dev/cgroup
cd /dev/cgroup
echo +io +memory > cgroup.subtree_control
mkdir test
cd test
echo 10M > memory.max
(echo $BASHPID > cgroup.procs && exec /memcg-writeback-stress /foo)
(echo $BASHPID > cgroup.procs && exec dd if=/dev/zero of=/foo bs=2M count=100)
$ cat memcg-writeback-stress.c
/*
* Dirty pages from all but one cpu.
* Clean pages from the non dirtying cpu.
* This is to stress per cpu counter imbalance.
* On a 100 cpu machine:
* - per memcg per cpu dirty count is 32 pages for each of 99 cpus
* - per memcg atomic is -99*32 pages
* - thus the complete dirty limit: sum of all counters 0
* - balance_dirty_pages() only sees atomic count -99*32 pages, which
* it max()s to 0.
* - So a workload can dirty -99*32 pages before balance_dirty_pages()
* cares.
*/
#define _GNU_SOURCE
#include <err.h>
#include <fcntl.h>
#include <sched.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysinfo.h>
#include <sys/types.h>
#include <unistd.h>
static char *buf;
static int bufSize;
static void set_affinity(int cpu)
{
cpu_set_t affinity;
CPU_ZERO(&affinity);
CPU_SET(cpu, &affinity);
if (sched_setaffinity(0, sizeof(affinity), &affinity))
err(1, "sched_setaffinity");
}
static void dirty_on(int output_fd, int cpu)
{
int i, wrote;
set_affinity(cpu);
for (i = 0; i < 32; i++) {
for (wrote = 0; wrote < bufSize; ) {
int ret = write(output_fd, buf+wrote, bufSize-wrote);
if (ret == -1)
err(1, "write");
wrote += ret;
}
}
}
int main(int argc, char **argv)
{
int cpu, flush_cpu = 1, output_fd;
const char *output;
if (argc != 2)
errx(1, "usage: output_file");
output = argv[1];
bufSize = getpagesize();
buf = malloc(getpagesize());
if (buf == NULL)
errx(1, "malloc failed");
output_fd = open(output, O_CREAT|O_RDWR, 0644);
if (output_fd == -1)
err(1, "open(%s)", output);
for (cpu = 0; cpu < get_nprocs(); cpu++) {
if (cpu != flush_cpu)
dirty_on(output_fd, cpu);
}
set_affinity(flush_cpu);
if (fsync(output_fd))
err(1, "fsync(%s)", output);
if (close(output_fd))
err(1, "close(%s)", output);
free(buf);
}
Make balance_dirty_pages() and wb_over_bg_thresh() work harder to collect
exact per memcg counters. This avoids the aforementioned oom kills.
This does not affect the overhead of memory.stat, which still reads the
single atomic counter.
Why not use percpu_counter? memcg already handles cpus going offline, so
no need for that overhead from percpu_counter. And the percpu_counter
spinlocks are more heavyweight than is required.
It probably also makes sense to use exact dirty and writeback counters in
memcg oom reports. But that is saved for later.
Link: http://lkml.kernel.org/r/20190329174609.164344-1-gthelen@google.com
Signed-off-by: Greg Thelen <gthelen(a)google.com>
Reviewed-by: Roman Gushchin <guro(a)fb.com>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: Vladimir Davydov <vdavydov.dev(a)gmail.com>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: <stable(a)vger.kernel.org> [4.16+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/memcontrol.h | 5 ++++-
mm/memcontrol.c | 20 ++++++++++++++++++--
2 files changed, 22 insertions(+), 3 deletions(-)
--- a/include/linux/memcontrol.h~writeback-use-exact-memcg-dirty-counts
+++ a/include/linux/memcontrol.h
@@ -566,7 +566,10 @@ struct mem_cgroup *lock_page_memcg(struc
void __unlock_page_memcg(struct mem_cgroup *memcg);
void unlock_page_memcg(struct page *page);
-/* idx can be of type enum memcg_stat_item or node_stat_item */
+/*
+ * idx can be of type enum memcg_stat_item or node_stat_item.
+ * Keep in sync with memcg_exact_page_state().
+ */
static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
int idx)
{
--- a/mm/memcontrol.c~writeback-use-exact-memcg-dirty-counts
+++ a/mm/memcontrol.c
@@ -3882,6 +3882,22 @@ struct wb_domain *mem_cgroup_wb_domain(s
return &memcg->cgwb_domain;
}
+/*
+ * idx can be of type enum memcg_stat_item or node_stat_item.
+ * Keep in sync with memcg_exact_page().
+ */
+static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
+{
+ long x = atomic_long_read(&memcg->stat[idx]);
+ int cpu;
+
+ for_each_online_cpu(cpu)
+ x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
+ if (x < 0)
+ x = 0;
+ return x;
+}
+
/**
* mem_cgroup_wb_stats - retrieve writeback related stats from its memcg
* @wb: bdi_writeback in question
@@ -3907,10 +3923,10 @@ void mem_cgroup_wb_stats(struct bdi_writ
struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
struct mem_cgroup *parent;
- *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
+ *pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
/* this should eventually include NR_UNSTABLE_NFS */
- *pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
+ *pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);
*pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
(1 << LRU_ACTIVE_FILE));
*pheadroom = PAGE_COUNTER_MAX;
_
Patches currently in -mm which might be from gthelen(a)google.com are
writeback-use-exact-memcg-dirty-counts.patch
The patch titled
Subject: mm/huge_memory.c: fix modifying of page protection by insert_pfn_pmd()
has been added to the -mm tree. Its filename is
mm-fix-modifying-of-page-protection-by-insert_pfn_pmd.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-fix-modifying-of-page-protectio…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-fix-modifying-of-page-protectio…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: "Aneesh Kumar K.V" <aneesh.kumar(a)linux.ibm.com>
Subject: mm/huge_memory.c: fix modifying of page protection by insert_pfn_pmd()
With some architectures like ppc64, set_pmd_at() cannot cope with a
situation where there is already some (different) valid entry present.
Use pmdp_set_access_flags() to modify the pfn instead, since it is built
to deal with modifying existing PMD entries.
This is similar to cae85cb8add3 ("mm/memory.c: fix modifying of page
protection by insert_pfn()")
We also make a similar update w.r.t. insert_pfn_pud(), even though ppc64
doesn't support pud pfn entries now.
Without this patch we also see the following message in the kernel log:
"BUG: non-zero pgtables_bytes on freeing mm:"
Link: http://lkml.kernel.org/r/20190402115125.18803-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar(a)linux.ibm.com>
Reported-by: Chandan Rajendra <chandan(a)linux.ibm.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/huge_memory.c | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
--- a/mm/huge_memory.c~mm-fix-modifying-of-page-protection-by-insert_pfn_pmd
+++ a/mm/huge_memory.c
@@ -755,6 +755,21 @@ static void insert_pfn_pmd(struct vm_are
spinlock_t *ptl;
ptl = pmd_lock(mm, pmd);
+ if (!pmd_none(*pmd)) {
+ if (write) {
+ if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
+ WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
+ goto out_unlock;
+ }
+ entry = pmd_mkyoung(*pmd);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+ if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
+ update_mmu_cache_pmd(vma, addr, pmd);
+ }
+
+ goto out_unlock;
+ }
+
entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
if (pfn_t_devmap(pfn))
entry = pmd_mkdevmap(entry);
@@ -766,11 +781,16 @@ static void insert_pfn_pmd(struct vm_are
if (pgtable) {
pgtable_trans_huge_deposit(mm, pmd, pgtable);
mm_inc_nr_ptes(mm);
+ pgtable = NULL;
}
set_pmd_at(mm, addr, pmd, entry);
update_mmu_cache_pmd(vma, addr, pmd);
+
+out_unlock:
spin_unlock(ptl);
+ if (pgtable)
+ pte_free(mm, pgtable);
}
vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
@@ -821,6 +841,20 @@ static void insert_pfn_pud(struct vm_are
spinlock_t *ptl;
ptl = pud_lock(mm, pud);
+ if (!pud_none(*pud)) {
+ if (write) {
+ if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
+ WARN_ON_ONCE(!is_huge_zero_pud(*pud));
+ goto out_unlock;
+ }
+ entry = pud_mkyoung(*pud);
+ entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
+ if (pudp_set_access_flags(vma, addr, pud, entry, 1))
+ update_mmu_cache_pud(vma, addr, pud);
+ }
+ goto out_unlock;
+ }
+
entry = pud_mkhuge(pfn_t_pud(pfn, prot));
if (pfn_t_devmap(pfn))
entry = pud_mkdevmap(entry);
@@ -830,6 +864,8 @@ static void insert_pfn_pud(struct vm_are
}
set_pud_at(mm, addr, pud, entry);
update_mmu_cache_pud(vma, addr, pud);
+
+out_unlock:
spin_unlock(ptl);
}
_
Patches currently in -mm which might be from aneesh.kumar(a)linux.ibm.com are
mm-fix-modifying-of-page-protection-by-insert_pfn_pmd.patch
mm-page_mkclean-vs-madv_dontneed-race.patch
On Tue, Apr 2, 2019 at 12:48 PM Sasha Levin <sashal(a)kernel.org> wrote:
>
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a -stable tag.
> The stable tag indicates that it's relevant for the following trees: 4.9+
>
> The bot has tested the following trees: v5.0.5, v4.19.32, v4.14.109, v4.9.166.
> How should we proceed with this patch?
I can manually generate versions for the 4.9, 4.14 and 4.19 stable trees
when this one hits mainline.
From: Eric Biggers <ebiggers(a)google.com>
[This is essentially the same as the corresponding patch to GCM.]
CCM instances can be created by either the "ccm" template, which only
allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base",
which allows choosing the ctr and cbcmac implementations, e.g.
"ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
However, a "ccm_base" instance prevents a "ccm" instance from being
registered using the same implementations. Nor will the instance be
found by lookups of "ccm". This can be used as a denial of service.
Moreover, "ccm_base" instances are never tested by the crypto
self-tests, even if there are compatible "ccm" tests.
The root cause of these problems is that instances of the two templates
use different cra_names. Therefore, fix these problems by making
"ccm_base" instances set the same cra_name as "ccm" instances, e.g.
"ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
This requires extracting the block cipher name from the name of the ctr
algorithm, which means starting to require that the stream cipher really
be ctr and not something else. But it would be pretty bizarre if anyone
was actually relying on being able to use a non-ctr stream cipher here.
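The renaming reduces to the string handling below (a standalone sketch,
not the kernel code: derive_ccm_name() is a hypothetical demo helper,
and the 128-byte buffer stands in for CRYPTO_MAX_ALG_NAME):
#include <stdio.h>
#include <string.h>
/* Derive the canonical "ccm(...)" cra_name from the ctr spawn's
 * cra_name, e.g. "ctr(aes)" -> "ccm(aes)". Rejecting anything that
 * does not start with "ctr(" is the new "must really be ctr" check. */
static int derive_ccm_name(char *out, size_t outlen, const char *ctr_name)
{
	if (strncmp(ctr_name, "ctr(", 4) != 0)
		return -1;		/* not ctr: reject */
	/* ctr_name + 4 skips "ctr(", so "ccm(%s" re-closes the parens */
	if (snprintf(out, outlen, "ccm(%s", ctr_name + 4) >= (int)outlen)
		return -1;		/* name too long */
	return 0;
}
int main(void)
{
	char name[128];

	if (derive_ccm_name(name, sizeof(name), "ctr(aes)") == 0)
		printf("%s\n", name);	/* prints "ccm(aes)" */
	return 0;
}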
Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
crypto/ccm.c | 38 ++++++++++++--------------------------
1 file changed, 12 insertions(+), 26 deletions(-)
diff --git a/crypto/ccm.c b/crypto/ccm.c
index 50df8f001c1c9..3bb49776450b7 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -458,7 +458,6 @@ static void crypto_ccm_free(struct aead_instance *inst)
static int crypto_ccm_create_common(struct crypto_template *tmpl,
struct rtattr **tb,
- const char *full_name,
const char *ctr_name,
const char *mac_name)
{
@@ -509,23 +508,22 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
ctr = crypto_spawn_skcipher_alg(&ictx->ctr);
- /* Not a stream cipher? */
+ /* Must be CTR mode with 16-byte blocks. */
err = -EINVAL;
- if (ctr->base.cra_blocksize != 1)
+ if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
+ crypto_skcipher_alg_ivsize(ctr) != 16)
goto err_drop_ctr;
- /* We want the real thing! */
- if (crypto_skcipher_alg_ivsize(ctr) != 16)
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+ "ccm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
goto err_drop_ctr;
- err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"ccm_base(%s,%s)", ctr->base.cra_driver_name,
mac->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_drop_ctr;
- memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
-
inst->alg.base.cra_flags = ctr->base.cra_flags & CRYPTO_ALG_ASYNC;
inst->alg.base.cra_priority = (mac->base.cra_priority +
ctr->base.cra_priority) / 2;
@@ -567,7 +565,6 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
const char *cipher_name;
char ctr_name[CRYPTO_MAX_ALG_NAME];
char mac_name[CRYPTO_MAX_ALG_NAME];
- char full_name[CRYPTO_MAX_ALG_NAME];
cipher_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(cipher_name))
@@ -581,35 +578,24 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
cipher_name) >= CRYPTO_MAX_ALG_NAME)
return -ENAMETOOLONG;
- if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm(%s)", cipher_name) >=
- CRYPTO_MAX_ALG_NAME)
- return -ENAMETOOLONG;
-
- return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
- mac_name);
+ return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
}
static int crypto_ccm_base_create(struct crypto_template *tmpl,
struct rtattr **tb)
{
const char *ctr_name;
- const char *cipher_name;
- char full_name[CRYPTO_MAX_ALG_NAME];
+ const char *mac_name;
ctr_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(ctr_name))
return PTR_ERR(ctr_name);
- cipher_name = crypto_attr_alg_name(tb[2]);
- if (IS_ERR(cipher_name))
- return PTR_ERR(cipher_name);
-
- if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)",
- ctr_name, cipher_name) >= CRYPTO_MAX_ALG_NAME)
- return -ENAMETOOLONG;
+ mac_name = crypto_attr_alg_name(tb[2]);
+ if (IS_ERR(mac_name))
+ return PTR_ERR(mac_name);
- return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
- cipher_name);
+ return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
}
static int crypto_rfc4309_setkey(struct crypto_aead *parent, const u8 *key,
--
2.21.0
Currently, compat tasks running on arm64 can allocate memory up to
TASK_SIZE_32 (UL(0x100000000)).
This means that mmap() allocations, if we treat them as returning an
array, are not compliant with section 6.5.8 of the C standard
(C99) which states that: "If the expression P points to an element of
an array object and the expression Q points to the last element of the
same array object, the pointer expression Q+1 compares greater than P".
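A minimal sketch of the violation (hypothetical addresses; a compat
pointer is modelled as uint32_t and the mapping is assumed to end
exactly at the 4 GiB boundary):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	/* hypothetical last page a compat mmap() could return when the
	 * address space extends all the way to 0x100000000 */
	uint32_t p = 0xfffff000u;	/* start of the mapping (P) */
	uint32_t q = p + 0x1000u;	/* one past the end: wraps to 0 */

	/* C99 6.5.8 requires the one-past-the-end pointer to compare
	 * greater than P; here the comparison prints 0 */
	printf("Q+1 > P? %d\n", q > p);
	return 0;
}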
Redefine TASK_SIZE_32 to address the issue.
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: Jann Horn <jannh(a)google.com>
Cc: <stable(a)vger.kernel.org>
Reported-by: Jann Horn <jannh(a)google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino(a)arm.com>
---
arch/arm64/include/asm/processor.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5d9ce62bdebd..9c831d9d3cd2 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -57,7 +57,15 @@
#define TASK_SIZE_64 (UL(1) << vabits_user)
#ifdef CONFIG_COMPAT
+#ifdef CONFIG_ARM64_64K_PAGES
+/*
+ * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
+ * by the compat vectors page.
+ */
#define TASK_SIZE_32 UL(0x100000000)
+#else
+#define TASK_SIZE_32 (UL(0x100000000) - PAGE_SIZE)
+#endif /* CONFIG_ARM64_64K_PAGES */
#define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \
TASK_SIZE_32 : TASK_SIZE_64)
#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \
--
2.21.0