This is the start of the stable review cycle for the 4.9.118 release. There are 32 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Mon Aug 6 08:26:35 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.118-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Linux 4.9.118-rc1

Tony Battersby <tonyb@cybernetics.com>
    scsi: sg: fix minor memory leak in error path

Boris Brezillon <boris.brezillon@bootlin.com>
    drm/vc4: Reset ->{x, y}_scaling[1] when dealing with uniplanar formats

Herbert Xu <herbert@gondor.apana.org.au>
    crypto: padlock-aes - Fix Nano workaround data corruption

Roman Kagan <rkagan@virtuozzo.com>
    kvm: x86: vmx: fix vpid leak

Jiang Biao <jiang.biao2@zte.com.cn>
    virtio_balloon: fix another race between migration and ballooning

Jeremy Cline <jcline@redhat.com>
    net: socket: fix potential spectre v1 gadget in socketcall

Anton Vasilyev <vasilyev@ispras.ru>
    can: ems_usb: Fix memory leak on ems_usb_disconnect()

Linus Torvalds <torvalds@linux-foundation.org>
    squashfs: more metadata hardenings

Linus Torvalds <torvalds@linux-foundation.org>
    squashfs: more metadata hardening

Jose Abreu <Jose.Abreu@synopsys.com>
    net: stmmac: Fix WoL for PCI-based setups

Jeremy Cline <jcline@redhat.com>
    netlink: Fix spectre v1 gadget in netlink_create()

Florian Fainelli <f.fainelli@gmail.com>
    net: dsa: Do not suspend/resume closed slave_dev

Eric Dumazet <edumazet@google.com>
    ipv4: frags: handle possible skb truesize change

Eric Dumazet <edumazet@google.com>
    inet: frag: enforce memory limits earlier

Eric Dumazet <edumazet@google.com>
    bonding: avoid lockdep confusion in bond_get_stats()

Boqun Feng <boqun.feng@gmail.com>
    sched/wait: Remove the lockless swait_active() check in swake_up*()

Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    pinctrl: intel: Read back TX buffer state

Eric Dumazet <edumazet@google.com>
    tcp: add one more quick ack after after ECN events

Yousuk Seung <ysseung@google.com>
    tcp: refactor tcp_ecn_check_ce to remove sk type cast

Eric Dumazet <edumazet@google.com>
    tcp: do not aggressively quick ack after ECN events

Eric Dumazet <edumazet@google.com>
    tcp: add max_quickacks param to tcp_incr_quickack and tcp_enter_quickack_mode

Eric Dumazet <edumazet@google.com>
    tcp: do not force quickack when receiving out-of-order packets

Dmitry Safonov <dima@arista.com>
    netlink: Don't shift with UB on nlk->ngroups

Dmitry Safonov <dima@arista.com>
    netlink: Do not subscribe to non-existent groups

Xiao Liang <xiliang@redhat.com>
    xen-netfront: wait xenbus state change when load module manually

Neal Cardwell <ncardwell@google.com>
    tcp_bbr: fix bw probing to raise in-flight data for very small BDPs

Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
    NET: stmmac: align DMA stuff to largest cache line length

Anton Vasilyev <vasilyev@ispras.ru>
    net: mdio-mux: bcm-iproc: fix wrong getter and setter pair

Stefan Wahren <stefan.wahren@i2se.com>
    net: lan78xx: fix rx handling before first packet is send

tangpengpeng <tangpengpeng@higon.com>
    net: fix amd-xgbe flow-control issue

Gal Pressman <pressmangal@gmail.com>
    net: ena: Fix use of uninitialized DMA address bits field

Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
    ipv4: remove BUG_ON() from fib_compute_spec_dst
-------------
Diffstat:
 Makefile                                          |  4 +-
 arch/x86/kvm/vmx.c                                |  7 ++--
 drivers/crypto/padlock-aes.c                      |  8 +++-
 drivers/gpu/drm/vc4/vc4_plane.c                   |  3 ++
 drivers/net/bonding/bond_main.c                   | 14 ++++++-
 drivers/net/can/usb/ems_usb.c                     |  1 +
 drivers/net/ethernet/amazon/ena/ena_com.c         |  1 +
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c         |  4 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c |  2 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c  | 40 ++++++++++++++++++-
 drivers/net/phy/mdio-mux-bcm-iproc.c              |  2 +-
 drivers/net/usb/lan78xx.c                         |  2 +
 drivers/net/xen-netfront.c                        |  6 +++
 drivers/pinctrl/intel/pinctrl-intel.c             |  7 +++-
 drivers/scsi/sg.c                                 |  1 +
 drivers/virtio/virtio_balloon.c                   |  2 +
 fs/squashfs/block.c                               |  2 +
 fs/squashfs/fragment.c                            | 13 ++++--
 fs/squashfs/squashfs_fs_sb.h                      |  1 +
 fs/squashfs/super.c                               |  5 ++-
 include/net/tcp.h                                 |  2 +-
 kernel/sched/swait.c                              |  6 ---
 net/dsa/slave.c                                   |  6 +++
 net/ipv4/fib_frontend.c                           |  4 +-
 net/ipv4/inet_fragment.c                          | 10 ++---
 net/ipv4/ip_fragment.c                            |  5 +++
 net/ipv4/tcp_bbr.c                                |  4 ++
 net/ipv4/tcp_dctcp.c                              |  4 +-
 net/ipv4/tcp_input.c                              | 48 ++++++++++++-----------
 net/netlink/af_netlink.c                          |  7 ++++
 net/socket.c                                      |  2 +
 31 files changed, 161 insertions(+), 62 deletions(-)
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
[ Upstream commit 9fc12023d6f51551d6ca9ed7e02ecc19d79caf17 ]
Remove the BUG_ON() from the fib_compute_spec_dst routine and check the in_dev pointer during flowi4 data structure initialization. fib_compute_spec_dst can run concurrently with device removal, where the net_device's ip_ptr pointer is set to NULL. This can happen if userspace enables pkt info on a UDP rx socket and the device is removed while traffic is flowing.
Fixes: 35ebf65e851c ("ipv4: Create and use fib_compute_spec_dst() helper")
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/fib_frontend.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/net/ipv4/fib_frontend.c
+++ b/net/ipv4/fib_frontend.c
@@ -282,19 +282,19 @@ __be32 fib_compute_spec_dst(struct sk_bu
 		return ip_hdr(skb)->daddr;
 
 	in_dev = __in_dev_get_rcu(dev);
-	BUG_ON(!in_dev);
 
 	net = dev_net(dev);
 
 	scope = RT_SCOPE_UNIVERSE;
 	if (!ipv4_is_zeronet(ip_hdr(skb)->saddr)) {
+		bool vmark = in_dev && IN_DEV_SRC_VMARK(in_dev);
 		struct flowi4 fl4 = {
 			.flowi4_iif = LOOPBACK_IFINDEX,
 			.flowi4_oif = l3mdev_master_ifindex_rcu(dev),
 			.daddr = ip_hdr(skb)->saddr,
 			.flowi4_tos = RT_TOS(ip_hdr(skb)->tos),
 			.flowi4_scope = scope,
-			.flowi4_mark = IN_DEV_SRC_VMARK(in_dev) ? skb->mark : 0,
+			.flowi4_mark = vmark ? skb->mark : 0,
 		};
 
 		if (!fib_lookup(net, &fl4, &res, 0))
 			return FIB_RES_PREFSRC(net, res);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gal Pressman <pressmangal@gmail.com>
[ Upstream commit 101f0cd4f2216d32f1b8a75a2154cf3997484ee2 ]
UBSAN triggers the following undefined behaviour warnings:

[...]
[   13.236124] UBSAN: Undefined behaviour in drivers/net/ethernet/amazon/ena/ena_eth_com.c:468:22
[   13.240043] shift exponent 64 is too large for 64-bit type 'long long unsigned int'
[...]
[   13.744769] UBSAN: Undefined behaviour in drivers/net/ethernet/amazon/ena/ena_eth_com.c:373:4
[   13.748694] shift exponent 64 is too large for 64-bit type 'long long unsigned int'
[...]
When splitting the address into its high and low parts, GENMASK_ULL is used to generate a bitmask from the dma_addr_bits field of io_sq (in ena_com_prepare_tx and ena_com_add_single_rx_desc). The problem is that dma_addr_bits is never initialized with a proper value (it is only cleared in ena_com_create_io_queue). Assign dma_addr_bits the correct value that is stored in ena_dev when initializing the SQ.
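For illustration, a minimal user-space sketch of the failure mode (split_addr() and its example values are hypothetical, not the driver's code; the GENMASK_ULL definition matches the kernel's): with dma_addr_bits left at 0, the high bound becomes -1 and the macro's right shift degenerates to "~0ULL >> 64" -- the "shift exponent 64" UBSAN reports.

#include <stdio.h>
#include <stdint.h>

/* Same arithmetic as the kernel's GENMASK_ULL(h, l): bits h..l set.
 * With h == -1 (i.e. dma_addr_bits == 0) the right shift below is
 * a 64-bit shift by 64, which C leaves undefined. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (64 - 1 - (h))))

/* Hypothetical helper splitting a DMA address into a 32-bit low part
 * and a high part bounded by the device's addressable bits. */
static void split_addr(uint64_t addr, int dma_addr_bits)
{
	uint64_t lo = addr & GENMASK_ULL(31, 0);
	uint64_t hi = (addr & GENMASK_ULL(dma_addr_bits - 1, 32)) >> 32;

	printf("lo=%#llx hi=%#llx\n",
	       (unsigned long long)lo, (unsigned long long)hi);
}

int main(void)
{
	split_addr(0x123456789abcULL, 48);	/* ok: mask covers bits 47..32 */
	/* split_addr(0x123456789abcULL, 0) would shift by 64: UB */
	return 0;
}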
Fixes: 1738cd3ed342 ("net: ena: Add a driver for Amazon Elastic Network Adapters (ENA)")
Signed-off-by: Gal Pressman <pressmangal@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/amazon/ena/ena_com.c | 1 +
 1 file changed, 1 insertion(+)
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -331,6 +331,7 @@ static int ena_com_init_io_sq(struct ena
 
 	memset(&io_sq->desc_addr, 0x0, sizeof(struct ena_com_io_desc_addr));
 
+	io_sq->dma_addr_bits = ena_dev->dma_addr_bits;
 	io_sq->desc_entry_size =
 		(io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ?
 		sizeof(struct ena_eth_io_tx_desc) :
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: tangpengpeng <tangpengpeng@higon.com>
[ Upstream commit 7f3fc7ddf719cd6faaf787722c511f6918ac6aab ]
If we enable or disable xgbe flow-control via ethtool, it doesn't work, because the parameter is not properly assigned. We need to adjust the assignment order of the parameters.
Fixes: c1ce2f77366b ("amd-xgbe: Fix flow control setting logic")
Signed-off-by: tangpengpeng <tangpengpeng@higon.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -877,14 +877,14 @@ static void xgbe_phy_adjust_link(struct
 
 		if (pdata->tx_pause != pdata->phy.tx_pause) {
 			new_state = 1;
-			pdata->hw_if.config_tx_flow_control(pdata);
 			pdata->tx_pause = pdata->phy.tx_pause;
+			pdata->hw_if.config_tx_flow_control(pdata);
 		}
 
 		if (pdata->rx_pause != pdata->phy.rx_pause) {
 			new_state = 1;
-			pdata->hw_if.config_rx_flow_control(pdata);
 			pdata->rx_pause = pdata->phy.rx_pause;
+			pdata->hw_if.config_rx_flow_control(pdata);
 		}
 
 		/* Speed support */
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefan Wahren <stefan.wahren@i2se.com>
[ Upstream commit 136f55f660192ce04af091642efc75d85e017364 ]
As long as the bh tasklet hasn't been scheduled once, no packet from the rx path will be handled. Since the tx path also schedules the same tasklet, this situation only persists until the first packet transmission. So fix this issue by scheduling the tasklet after link reset.
Link: https://github.com/raspberrypi/linux/issues/2617
Fixes: 55d7de9de6c3 ("Microchip's LAN7800 family USB 2/3 to 10/100/1000 Ethernet")
Suggested-by: Floris Bos <bos@je-eigen-domein.nl>
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/usb/lan78xx.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -1170,6 +1170,8 @@ static int lan78xx_link_reset(struct lan
 			mod_timer(&dev->stat_monitor,
 				  jiffies + STAT_UPDATE_TIMER);
 		}
+
+		tasklet_schedule(&dev->bh);
 	}
 
 	return ret;
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anton Vasilyev <vasilyev@ispras.ru>
[ Upstream commit b0753408aadf32c7ece9e6b765017881e54af833 ]
mdio_mux_iproc_probe() uses platform_set_drvdata() to store the md pointer in the device, whereas mdio_mux_iproc_remove() restores the md pointer with dev_get_platdata(&pdev->dev). This leads to the wrong resources being released.

The patch replaces the getter with platform_get_drvdata().
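For illustration, a minimal sketch of the pairing (example_probe/example_remove and struct example_state are hypothetical stand-ins, not the driver's code): data stored with platform_set_drvdata() must be read back with platform_get_drvdata(); dev_get_platdata() returns the board-supplied platform_data, which is a different pointer entirely.

#include <linux/platform_device.h>
#include <linux/slab.h>

struct example_state {			/* stand-in for iproc_mdiomux_desc */
	void *mux_handle;
};

static int example_probe(struct platform_device *pdev)
{
	struct example_state *st;

	st = devm_kzalloc(&pdev->dev, sizeof(*st), GFP_KERNEL);
	if (!st)
		return -ENOMEM;

	platform_set_drvdata(pdev, st);	/* store driver state for remove() */
	return 0;
}

static int example_remove(struct platform_device *pdev)
{
	/* Must pair with platform_set_drvdata() above; dev_get_platdata()
	 * would instead return the (possibly NULL) board platform_data. */
	struct example_state *st = platform_get_drvdata(pdev);

	/* ... release st's resources here ... */
	return 0;
}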
Fixes: 98bc865a1ec8 ("net: mdio-mux: Add MDIO mux driver for iProc SoCs")
Signed-off-by: Anton Vasilyev <vasilyev@ispras.ru>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/phy/mdio-mux-bcm-iproc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
+++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
@@ -218,7 +218,7 @@ out:
 
 static int mdio_mux_iproc_remove(struct platform_device *pdev)
 {
-	struct iproc_mdiomux_desc *md = dev_get_platdata(&pdev->dev);
+	struct iproc_mdiomux_desc *md = platform_get_drvdata(pdev);
 
 	mdio_mux_uninit(md->mux_handle);
 	mdiobus_unregister(md->mii_bus);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
[ Upstream commit 9939a46d90c6c76f4533d534dbadfa7b39dc6acc ]
As of today, the STMMAC_ALIGN macro (which is used to align DMA buffers) relies on the L1 cache line length (L1_CACHE_BYTES). This isn't correct on systems with several cache levels where the L1 cache line is shorter than an L2 line. One cache line can then be shared between a DMA buffer and other data, and we can lose that data when the DMA buffer is invalidated before the DMA transaction.
Fix that by using SMP_CACHE_BYTES instead of L1_CACHE_BYTES for aligning.
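As a rough illustration (the line sizes below are hypothetical; real values are platform-specific), aligning to the largest line in the hierarchy is what keeps a DMA buffer from ending mid-line and sharing its last cache line with neighbouring data. The ALIGN_UP arithmetic is the same as the kernel's __ALIGN_KERNEL used by the fix.

#include <stdio.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

#define L1_CACHE_BYTES	32	/* hypothetical L1 line */
#define SMP_CACHE_BYTES	64	/* hypothetical largest (L2) line */

int main(void)
{
	unsigned int len = 96;

	/* L1-based alignment leaves the buffer ending mid-L2-line: */
	printf("L1 aligned:  %u\n", ALIGN_UP(len, L1_CACHE_BYTES));	/* 96 */
	/* largest-line alignment pads out the full 64-byte line: */
	printf("SMP aligned: %u\n", ALIGN_UP(len, SMP_CACHE_BYTES));	/* 128 */
	return 0;
}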
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -55,7 +55,7 @@
 #include <linux/of_mdio.h>
 #include "dwmac1000.h"
 
-#define STMMAC_ALIGN(x)	L1_CACHE_ALIGN(x)
+#define STMMAC_ALIGN(x)	__ALIGN_KERNEL(x, SMP_CACHE_BYTES)
 #define TSO_MAX_BUFF_SIZE	(SZ_16K - 1)
 
 /* Module parameters */
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Neal Cardwell <ncardwell@google.com>
[ Upstream commit 383d470936c05554219094a4d364d964cb324827 ]
For some very small BDPs (with just a few packets) there was a quantization effect where the target number of packets in flight during the super-unity-gain (1.25x) phase of gain cycling was implicitly truncated to a number of packets no larger than the normal unity-gain (1.0x) phase of gain cycling. This meant that in multi-flow scenarios some flows could get stuck with a lower bandwidth, because they did not push enough packets inflight to discover that there was more bandwidth available. This was really only an issue in multi-flow LAN scenarios, where RTTs and BDPs are low enough for this to be an issue.
This fix ensures that gain cycling can raise inflight for small BDPs by ensuring that in PROBE_BW mode target inflight values with a super-unity gain are always greater than inflight values with a gain <= 1. Importantly, this applies whether the inflight value is calculated for use as a cwnd value, or as a target inflight value for the end of the super-unity phase in bbr_is_next_cycle_phase() (both need to be bigger to ensure we can probe with more packets in flight reliably).
This is a candidate fix for stable releases.
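As a simplified illustration of the quantization (the TSO allowance term of the real bbr_target_cwnd() is omitted, so the exact numbers are illustrative only): the ceiling division and round-up-to-even steps can land the 1.25x gain on the same target as the 1.0x gain for a small BDP, which is exactly the stuck-flow effect described above.

#include <stdio.h>

#define BBR_SCALE	8		/* gain is scaled by 2^8, as in tcp_bbr.c */
#define BBR_UNIT	(1 << BBR_SCALE)

/* Simplified model of bbr_target_cwnd(): apply the gain with a
 * ceiling division, then round up to an even number. */
static unsigned int target_cwnd(unsigned int bdp_pkts, unsigned int gain)
{
	unsigned int cwnd = (bdp_pkts * gain + BBR_UNIT - 1) / BBR_UNIT;

	return (cwnd + 1) & ~1U;	/* round up to even */
}

int main(void)
{
	/* BDP of 3 packets: both gains land on the same target (4), so
	 * the 1.25x probing phase cannot put more in flight than the
	 * 1.0x phase. The fix adds 2 to super-unity-gain targets. */
	printf("gain 1.00 -> %u\n", target_cwnd(3, BBR_UNIT));		/* 4 */
	printf("gain 1.25 -> %u\n", target_cwnd(3, BBR_UNIT * 5 / 4));	/* 4 */
	return 0;
}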
Fixes: 0f8782ea1497 ("tcp_bbr: add BBR congestion control")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Priyaranjan Jha <priyarjha@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/tcp_bbr.c | 4 ++++
 1 file changed, 4 insertions(+)
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -324,6 +324,10 @@ static u32 bbr_target_cwnd(struct sock *
 	/* Reduce delayed ACKs by rounding up cwnd to the next even number. */
 	cwnd = (cwnd + 1) & ~1U;
 
+	/* Ensure gain cycling gets inflight above BDP even for small BDPs. */
+	if (bbr->mode == BBR_PROBE_BW && gain > BBR_UNIT)
+		cwnd += 2;
+
 	return cwnd;
 }
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiao Liang <xiliang@redhat.com>
[ Upstream commit 822fb18a82abaf4ee7058793d95d340f5dab7bfc ]
When loading the module manually, after calling xenbus_switch_state to initialize the state of the netfront device, the driver state may not change fast enough, which can lead to no device being created on recent kernels. This patch adds a wait to make sure xenbus knows the driver is not in the closed/unknown state.
Current state:
[vm]# ethtool eth0
Settings for eth0:
	Link detected: yes
[vm]# modprobe -r xen_netfront
[vm]# modprobe xen_netfront
[vm]# ethtool eth0
Settings for eth0:
Cannot get device settings: No such device
Cannot get wake-on-lan settings: No such device
Cannot get message level: No such device
Cannot get link status: No such device
No data available

With the patch installed:
[vm]# ethtool eth0
Settings for eth0:
	Link detected: yes
[vm]# modprobe -r xen_netfront
[vm]# modprobe xen_netfront
[vm]# ethtool eth0
Settings for eth0:
	Link detected: yes
Signed-off-by: Xiao Liang <xiliang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/xen-netfront.c | 6 ++++++
 1 file changed, 6 insertions(+)
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -86,6 +86,7 @@ struct netfront_cb {
 /* IRQ name is queue name with "-tx" or "-rx" appended */
 #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
 
+static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
 static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
 
 struct netfront_stats {
@@ -1349,6 +1350,11 @@ static struct net_device *xennet_create_
 	netif_carrier_off(netdev);
 
 	xenbus_switch_state(dev, XenbusStateInitialising);
+	wait_event(module_load_q,
+		   xenbus_read_driver_state(dev->otherend) !=
+		   XenbusStateClosed &&
+		   xenbus_read_driver_state(dev->otherend) !=
+		   XenbusStateUnknown);
 	return netdev;
 
  exit:
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Safonov <dima@arista.com>
[ Upstream commit 7acf9d4237c46894e0fa0492dd96314a41742e84 ]
Make the ABI more strict about subscribing to groups > ngroups. The code doesn't check for that, which looks bogus (one can subscribe to a non-existent group). It is still possible to bind() to all possible groups with (-1).
Cc: "David S. Miller" davem@davemloft.net Cc: Herbert Xu herbert@gondor.apana.org.au Cc: Steffen Klassert steffen.klassert@secunet.com Cc: netdev@vger.kernel.org Signed-off-by: Dmitry Safonov dima@arista.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/netlink/af_netlink.c | 1 + 1 file changed, 1 insertion(+)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -983,6 +983,7 @@ static int netlink_bind(struct socket *s
 		if (err)
 			return err;
 	}
+	groups &= (1UL << nlk->ngroups) - 1;
 
 	bound = nlk->bound;
 	if (bound) {
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Safonov <dima@arista.com>
[ Upstream commit 61f4b23769f0cc72ae62c9a81cf08f0397d40da8 ]
On i386, nlk->ngroups might be 32 or 0, which leads to undefined behaviour, resulting in a hang during boot. Check for 0 ngroups and use (unsigned long long) as the type to shift.
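A minimal user-space sketch of the problem (the value is illustrative): on a 32-bit target, shifting 1UL by 32 shifts an unsigned long by its full width, which C leaves undefined; doing the shift in a 64-bit type, with 0 special-cased as in the patch, is well-defined.

#include <stdio.h>

int main(void)
{
	unsigned int ngroups = 32;	/* possible value on i386 */

	/* Undefined on a 32-bit target: shift width == type width.
	 * unsigned long bad = (1UL << ngroups) - 1;
	 */

	/* The fix: special-case 0 and shift in a 64-bit type. */
	unsigned long long mask =
		ngroups == 0 ? 0 : (1ULL << ngroups) - 1;

	printf("mask = %#llx\n", mask);	/* 0xffffffff */
	return 0;
}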
Fixes: 7acf9d4237c4 ("netlink: Do not subscribe to non-existent groups")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/netlink/af_netlink.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -983,7 +983,11 @@ static int netlink_bind(struct socket *s
 		if (err)
 			return err;
 	}
-	groups &= (1UL << nlk->ngroups) - 1;
+
+	if (nlk->ngroups == 0)
+		groups = 0;
+	else
+		groups &= (1ULL << nlk->ngroups) - 1;
 
 	bound = nlk->bound;
 	if (bound) {
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit a3893637e1eb0ef5eb1bbc52b3a8d2dfa317a35d ]
As explained in commit 9f9843a751d0 ("tcp: properly handle stretch acks in slow start"), TCP stacks have to consider how many packets are acknowledged in one single ACK, because of GRO, but also because of ACK compression or losses.
We plan to add SACK compression in the following patch; we must therefore not call tcp_enter_quickack_mode().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/tcp_input.c | 2 --
 1 file changed, 2 deletions(-)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4745,8 +4745,6 @@ drop:
 	if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt + tcp_receive_window(tp)))
 		goto out_of_window;
 
-	tcp_enter_quickack_mode(sk);
-
 	if (before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
 		/* Partial packet, seq < rcv_next < end_seq */
 		SOCK_DEBUG(sk, "partial packet: rcv_next %X seq %X - %X\n",
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 9a9c9b51e54618861420093ae6e9b50a961914c5 ]
We want to add finer control of the number of ACK packets sent after ECN events.
This patch is not changing current behavior, it only enables following change.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 include/net/tcp.h    |  2 +-
 net/ipv4/tcp_dctcp.c |  4 ++--
 net/ipv4/tcp_input.c | 24 +++++++++++++-----------
 3 files changed, 16 insertions(+), 14 deletions(-)
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -363,7 +363,7 @@ ssize_t tcp_splice_read(struct socket *s
 			struct pipe_inode_info *pipe, size_t len,
 			unsigned int flags);
 
-void tcp_enter_quickack_mode(struct sock *sk);
+void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks);
 static inline void tcp_dec_quickack_mode(struct sock *sk,
 					 const unsigned int pkts)
 {
--- a/net/ipv4/tcp_dctcp.c
+++ b/net/ipv4/tcp_dctcp.c
@@ -138,7 +138,7 @@ static void dctcp_ce_state_0_to_1(struct
 	 */
 	if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
 		__tcp_send_ack(sk, ca->prior_rcv_nxt);
-	tcp_enter_quickack_mode(sk);
+	tcp_enter_quickack_mode(sk, 1);
 	}
 
 	ca->prior_rcv_nxt = tp->rcv_nxt;
@@ -159,7 +159,7 @@ static void dctcp_ce_state_1_to_0(struct
 	 */
 	if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
 		__tcp_send_ack(sk, ca->prior_rcv_nxt);
-	tcp_enter_quickack_mode(sk);
+	tcp_enter_quickack_mode(sk, 1);
 	}
 
 	ca->prior_rcv_nxt = tp->rcv_nxt;
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -198,21 +198,23 @@ static void tcp_measure_rcv_mss(struct s
 	}
 }
 
-static void tcp_incr_quickack(struct sock *sk)
+static void tcp_incr_quickack(struct sock *sk, unsigned int max_quickacks)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	unsigned int quickacks = tcp_sk(sk)->rcv_wnd / (2 * icsk->icsk_ack.rcv_mss);
 
 	if (quickacks == 0)
 		quickacks = 2;
+	quickacks = min(quickacks, max_quickacks);
 	if (quickacks > icsk->icsk_ack.quick)
-		icsk->icsk_ack.quick = min(quickacks, TCP_MAX_QUICKACKS);
+		icsk->icsk_ack.quick = quickacks;
 }
 
-void tcp_enter_quickack_mode(struct sock *sk)
+void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
-	tcp_incr_quickack(sk);
+
+	tcp_incr_quickack(sk, max_quickacks);
 	icsk->icsk_ack.pingpong = 0;
 	icsk->icsk_ack.ato = TCP_ATO_MIN;
 }
@@ -257,7 +259,7 @@ static void __tcp_ecn_check_ce(struct tc
 		 * it is probably a retransmit.
 		 */
 		if (tp->ecn_flags & TCP_ECN_SEEN)
-			tcp_enter_quickack_mode((struct sock *)tp);
+			tcp_enter_quickack_mode((struct sock *)tp, TCP_MAX_QUICKACKS);
 		break;
 	case INET_ECN_CE:
 		if (tcp_ca_needs_ecn((struct sock *)tp))
@@ -265,7 +267,7 @@ static void __tcp_ecn_check_ce(struct tc
 
 		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
 			/* Better not delay acks, sender can have a very low cwnd */
-			tcp_enter_quickack_mode((struct sock *)tp);
+			tcp_enter_quickack_mode((struct sock *)tp, TCP_MAX_QUICKACKS);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
 		tp->ecn_flags |= TCP_ECN_SEEN;
@@ -675,7 +677,7 @@ static void tcp_event_data_recv(struct s
 		/* The _first_ data packet received, initialize
 		 * delayed ACK engine.
 		 */
-		tcp_incr_quickack(sk);
+		tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
 		icsk->icsk_ack.ato = TCP_ATO_MIN;
 	} else {
 		int m = now - icsk->icsk_ack.lrcvtime;
@@ -691,7 +693,7 @@ static void tcp_event_data_recv(struct s
 			/* Too long gap. Apparently sender failed to
 			 * restart window, so that we send ACKs quickly.
 			 */
-			tcp_incr_quickack(sk);
+			tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
 			sk_mem_reclaim(sk);
 		}
 	}
@@ -4210,7 +4212,7 @@ static void tcp_send_dupack(struct sock
 	if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
 	    before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
-		tcp_enter_quickack_mode(sk);
+		tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
 
 		if (tcp_is_sack(tp) && sysctl_tcp_dsack) {
 			u32 end_seq = TCP_SKB_CB(skb)->end_seq;
@@ -4734,7 +4736,7 @@ queue_and_out:
 		tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq,
 			      TCP_SKB_CB(skb)->end_seq);
 
 out_of_window:
-	tcp_enter_quickack_mode(sk);
+	tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
 	inet_csk_schedule_ack(sk);
 drop:
 	tcp_drop(sk, skb);
@@ -5828,7 +5830,7 @@ static int tcp_rcv_synsent_state_process
 		 * to stand against the temptation 8)     --ANK
 		 */
 		inet_csk_schedule_ack(sk);
-		tcp_enter_quickack_mode(sk);
+		tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK,
 					  TCP_DELACK_MAX, TCP_RTO_MAX);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 522040ea5fdd1c33bbf75e1d7c7c0422b96a94ef ]
ECN signals currently force TCP to enter quickack mode for up to 16 (TCP_MAX_QUICKACKS) following incoming packets.
We believe this is not needed, and only sending one immediate ack for the current packet should be enough.
This should reduce the extra load noticed in DCTCP environments, after congestion events.
This is part 2 of our effort to reduce pure ACK packets.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/tcp_input.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -259,7 +259,7 @@ static void __tcp_ecn_check_ce(struct tc
 		 * it is probably a retransmit.
 		 */
 		if (tp->ecn_flags & TCP_ECN_SEEN)
-			tcp_enter_quickack_mode((struct sock *)tp, TCP_MAX_QUICKACKS);
+			tcp_enter_quickack_mode((struct sock *)tp, 1);
 		break;
 	case INET_ECN_CE:
 		if (tcp_ca_needs_ecn((struct sock *)tp))
@@ -267,7 +267,7 @@ static void __tcp_ecn_check_ce(struct tc
 
 		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
 			/* Better not delay acks, sender can have a very low cwnd */
-			tcp_enter_quickack_mode((struct sock *)tp, TCP_MAX_QUICKACKS);
+			tcp_enter_quickack_mode((struct sock *)tp, 1);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
 		tp->ecn_flags |= TCP_ECN_SEEN;
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yousuk Seung <ysseung@google.com>
[ Upstream commit f4c9f85f3b2cb7669830cd04d0be61192a4d2436 ]
Refactor tcp_ecn_check_ce and __tcp_ecn_check_ce to accept struct sock* instead of tcp_sock* to clean up type casts. This is a pure refactor patch.
Signed-off-by: Yousuk Seung <ysseung@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/tcp_input.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -250,8 +250,10 @@ static void tcp_ecn_withdraw_cwr(struct
 	tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
 }
 
-static void __tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
+static void __tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
 {
+	struct tcp_sock *tp = tcp_sk(sk);
+
 	switch (TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK) {
 	case INET_ECN_NOT_ECT:
 		/* Funny extension: if ECT is not set on a segment,
@@ -259,31 +261,31 @@ static void __tcp_ecn_check_ce(struct tc
 		 * it is probably a retransmit.
 		 */
 		if (tp->ecn_flags & TCP_ECN_SEEN)
-			tcp_enter_quickack_mode((struct sock *)tp, 1);
+			tcp_enter_quickack_mode(sk, 1);
 		break;
 	case INET_ECN_CE:
-		if (tcp_ca_needs_ecn((struct sock *)tp))
-			tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_IS_CE);
+		if (tcp_ca_needs_ecn(sk))
+			tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);
 
 		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
 			/* Better not delay acks, sender can have a very low cwnd */
-			tcp_enter_quickack_mode((struct sock *)tp, 1);
+			tcp_enter_quickack_mode(sk, 1);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
 		tp->ecn_flags |= TCP_ECN_SEEN;
 		break;
 	default:
-		if (tcp_ca_needs_ecn((struct sock *)tp))
-			tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_NO_CE);
+		if (tcp_ca_needs_ecn(sk))
+			tcp_ca_event(sk, CA_EVENT_ECN_NO_CE);
 		tp->ecn_flags |= TCP_ECN_SEEN;
 		break;
 	}
 }
 
-static void tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
+static void tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
 {
-	if (tp->ecn_flags & TCP_ECN_OK)
-		__tcp_ecn_check_ce(tp, skb);
+	if (tcp_sk(sk)->ecn_flags & TCP_ECN_OK)
+		__tcp_ecn_check_ce(sk, skb);
 }
 
 static void tcp_ecn_rcv_synack(struct tcp_sock *tp, const struct tcphdr *th)
@@ -699,7 +701,7 @@ static void tcp_event_data_recv(struct s
 	}
 	icsk->icsk_ack.lrcvtime = now;
 
-	tcp_ecn_check_ce(tp, skb);
+	tcp_ecn_check_ce(sk, skb);
 
 	if (skb->len >= 128)
 		tcp_grow_window(sk, skb);
@@ -4456,7 +4458,7 @@ static void tcp_data_queue_ofo(struct so
 	u32 seq, end_seq;
 	bool fragstolen;
 
-	tcp_ecn_check_ce(tp, skb);
+	tcp_ecn_check_ce(sk, skb);
 
 	if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 15ecbe94a45ef88491ca459b26efdd02f91edb6d ]
Larry Brakmo's proposal (https://patchwork.ozlabs.org/patch/935233/, "tcp: force cwnd at least 2 in tcp_cwnd_reduction") made us rethink our recent patch removing ~16 quick acks after ECN events.
tcp_enter_quickack_mode(sk, 1) makes sure one immediate ack is sent, but in the case the sender cwnd was lowered to 1, we do not want to have a delayed ack for the next packet we will receive.
Fixes: 522040ea5fdd ("tcp: do not aggressively quick ack after ECN events")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Neal Cardwell <ncardwell@google.com>
Cc: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/tcp_input.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -261,7 +261,7 @@ static void __tcp_ecn_check_ce(struct so
 		 * it is probably a retransmit.
 		 */
 		if (tp->ecn_flags & TCP_ECN_SEEN)
-			tcp_enter_quickack_mode(sk, 1);
+			tcp_enter_quickack_mode(sk, 2);
 		break;
 	case INET_ECN_CE:
 		if (tcp_ca_needs_ecn(sk))
@@ -269,7 +269,7 @@ static void __tcp_ecn_check_ce(struct so
 
 		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
 			/* Better not delay acks, sender can have a very low cwnd */
-			tcp_enter_quickack_mode(sk, 1);
+			tcp_enter_quickack_mode(sk, 2);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
 		tp->ecn_flags |= TCP_ECN_SEEN;
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Date: Thu, 24 Aug 2017 11:19:33 +0300
Subject: [PATCH 4.9 16/32] pinctrl: intel: Read back TX buffer state
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
commit d68b42e30bbacd24354d644f430d088435b15e83 upstream.
In the same way as is done in pinctrl-cherryview.c, provide a readback of the TX buffer state.
Fixes: 17fab473693 ("pinctrl: intel: Set pin direction properly")
Reported-by: "Bourque, Francis" <francis.bourque@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Tested-by: "Bourque, Francis" <francis.bourque@intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Anthony de Boer <adb@adb.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/pinctrl/intel/pinctrl-intel.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/drivers/pinctrl/intel/pinctrl-intel.c
+++ b/drivers/pinctrl/intel/pinctrl-intel.c
@@ -604,12 +604,17 @@ static int intel_gpio_get(struct gpio_ch
 {
 	struct intel_pinctrl *pctrl = gpiochip_get_data(chip);
 	void __iomem *reg;
+	u32 padcfg0;
 
 	reg = intel_get_padcfg(pctrl, offset, PADCFG0);
 	if (!reg)
 		return -EINVAL;
 
-	return !!(readl(reg) & PADCFG0_GPIORXSTATE);
+	padcfg0 = readl(reg);
+	if (!(padcfg0 & PADCFG0_GPIOTXDIS))
+		return !!(padcfg0 & PADCFG0_GPIOTXSTATE);
+
+	return !!(padcfg0 & PADCFG0_GPIORXSTATE);
 }
 
 static void intel_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boqun Feng <boqun.feng@gmail.com>
commit 35a2897c2a306cca344ca5c0b43416707018f434 upstream.
Steven Rostedt reported a potential race in RCU core because of swake_up():
	CPU0				CPU1
	----				----
				__call_rcu_core() {

				 spin_lock(rnp_root)
				 need_wake = __rcu_start_gp() {
				  rcu_start_gp_advanced() {
				   gp_flags = FLAG_INIT
				  }
				 }

 rcu_gp_kthread() {
   swait_event_interruptible(wq,
	 gp_flags & FLAG_INIT) {
     spin_lock(q->lock)

				*fetch wq->task_list here! *

     list_add(wq->task_list, q->task_list)
     spin_unlock(q->lock);

				*fetch old value of gp_flags here *

				 spin_unlock(rnp_root)

				 rcu_gp_kthread_wake() {
				  swake_up(wq) {
				   swait_active(wq) {
				    list_empty(wq->task_list)

				   } * return false *

   if (condition) * false *
	 schedule();
In this case, a wakeup is missed, which could cause the rcu_gp_kthread to wait for a long time.

The reason for this is that we do a lockless swait_active() check in swake_up(). To fix this, we can either 1) add an smp_mb() in swake_up() before swait_active() to provide the proper ordering, or 2) simply remove the swait_active() check in swake_up().

Solution 2 not only fixes this problem but also keeps the swait and wait APIs as close as possible, as wake_up() doesn't provide a full barrier and doesn't do a lockless check of the wait queue either. Moreover, there are users already using swait_active() to do their quick checks for the wait queues, so it makes less sense for swake_up() and swake_up_all() to do this on their own.
This patch then removes the lockless swait_active() check in swake_up() and swake_up_all().
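For contrast, a sketch of option 1 above (the fix that was not taken): keep the lockless check, but order it behind a full barrier so the waker cannot read a stale task_list before the waiter's enqueue becomes visible. The function body otherwise matches the swake_up() shown in the diff below.

void swake_up(struct swait_queue_head *q)
{
	unsigned long flags;

	smp_mb();	/* order the condition store vs. the task_list load */
	if (!swait_active(q))
		return;

	raw_spin_lock_irqsave(&q->lock, flags);
	swake_up_locked(q);
	raw_spin_unlock_irqrestore(&q->lock, flags);
}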
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Krister Johansen <kjlx@templeofstupid.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170615041828.zk3a3sfyudm5p6nl@tardis
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: David Chen <david.chen@nutanix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/sched/swait.c | 6 ------
 1 file changed, 6 deletions(-)

--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -33,9 +33,6 @@ void swake_up(struct swait_queue_head *q
 {
 	unsigned long flags;
 
-	if (!swait_active(q))
-		return;
-
 	raw_spin_lock_irqsave(&q->lock, flags);
 	swake_up_locked(q);
 	raw_spin_unlock_irqrestore(&q->lock, flags);
@@ -51,9 +48,6 @@ void swake_up_all(struct swait_queue_hea
 	struct swait_queue *curr;
 	LIST_HEAD(tmp);
 
-	if (!swait_active(q))
-		return;
-
 	raw_spin_lock_irq(&q->lock);
 	list_splice_init(&q->task_list, &tmp);
 	while (!list_empty(&tmp)) {
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 7e2556e40026a1b0c16f37446ab398d5a5a892e4 ]
syzbot found that the following sequence produces a LOCKDEP splat [1]
ip link add bond10 type bond
ip link add bond11 type bond
ip link set bond11 master bond10
To fix this, we can use the already provided nest_level.
This patch also provides correct nesting for dev->addr_list_lock
[1]
WARNING: possible recursive locking detected
4.18.0-rc6+ #167 Not tainted
--------------------------------------------
syz-executor751/4439 is trying to acquire lock:
(____ptrval____) (&(&bond->stats_lock)->rlock){+.+.}, at: spin_lock include/linux/spinlock.h:310 [inline]
(____ptrval____) (&(&bond->stats_lock)->rlock){+.+.}, at: bond_get_stats+0xb4/0x560 drivers/net/bonding/bond_main.c:3426

but task is already holding lock:
(____ptrval____) (&(&bond->stats_lock)->rlock){+.+.}, at: spin_lock include/linux/spinlock.h:310 [inline]
(____ptrval____) (&(&bond->stats_lock)->rlock){+.+.}, at: bond_get_stats+0xb4/0x560 drivers/net/bonding/bond_main.c:3426

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&bond->stats_lock)->rlock);
  lock(&(&bond->stats_lock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by syz-executor751/4439:
 #0: (____ptrval____) (rtnl_mutex){+.+.}, at: rtnl_lock+0x17/0x20 net/core/rtnetlink.c:77
 #1: (____ptrval____) (&(&bond->stats_lock)->rlock){+.+.}, at: spin_lock include/linux/spinlock.h:310 [inline]
 #1: (____ptrval____) (&(&bond->stats_lock)->rlock){+.+.}, at: bond_get_stats+0xb4/0x560 drivers/net/bonding/bond_main.c:3426
 #2: (____ptrval____) (rcu_read_lock){....}, at: bond_get_stats+0x0/0x560 include/linux/compiler.h:215

stack backtrace:
CPU: 0 PID: 4439 Comm: syz-executor751 Not tainted 4.18.0-rc6+ #167
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1c9/0x2b4 lib/dump_stack.c:113
 print_deadlock_bug kernel/locking/lockdep.c:1765 [inline]
 check_deadlock kernel/locking/lockdep.c:1809 [inline]
 validate_chain kernel/locking/lockdep.c:2405 [inline]
 __lock_acquire.cold.64+0x1fb/0x486 kernel/locking/lockdep.c:3435
 lock_acquire+0x1e4/0x540 kernel/locking/lockdep.c:3924
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:144
 spin_lock include/linux/spinlock.h:310 [inline]
 bond_get_stats+0xb4/0x560 drivers/net/bonding/bond_main.c:3426
 dev_get_stats+0x10f/0x470 net/core/dev.c:8316
 bond_get_stats+0x232/0x560 drivers/net/bonding/bond_main.c:3432
 dev_get_stats+0x10f/0x470 net/core/dev.c:8316
 rtnl_fill_stats+0x4d/0xac0 net/core/rtnetlink.c:1169
 rtnl_fill_ifinfo+0x1aa6/0x3fb0 net/core/rtnetlink.c:1611
 rtmsg_ifinfo_build_skb+0xc8/0x190 net/core/rtnetlink.c:3268
 rtmsg_ifinfo_event.part.30+0x45/0xe0 net/core/rtnetlink.c:3300
 rtmsg_ifinfo_event net/core/rtnetlink.c:3297 [inline]
 rtnetlink_event+0x144/0x170 net/core/rtnetlink.c:4716
 notifier_call_chain+0x180/0x390 kernel/notifier.c:93
 __raw_notifier_call_chain kernel/notifier.c:394 [inline]
 raw_notifier_call_chain+0x2d/0x40 kernel/notifier.c:401
 call_netdevice_notifiers_info+0x3f/0x90 net/core/dev.c:1735
 call_netdevice_notifiers net/core/dev.c:1753 [inline]
 netdev_features_change net/core/dev.c:1321 [inline]
 netdev_change_features+0xb3/0x110 net/core/dev.c:7759
 bond_compute_features.isra.47+0x585/0xa50 drivers/net/bonding/bond_main.c:1120
 bond_enslave+0x1b25/0x5da0 drivers/net/bonding/bond_main.c:1755
 bond_do_ioctl+0x7cb/0xae0 drivers/net/bonding/bond_main.c:3528
 dev_ifsioc+0x43c/0xb30 net/core/dev_ioctl.c:327
 dev_ioctl+0x1b5/0xcc0 net/core/dev_ioctl.c:493
 sock_do_ioctl+0x1d3/0x3e0 net/socket.c:992
 sock_ioctl+0x30d/0x680 net/socket.c:1093
 vfs_ioctl fs/ioctl.c:46 [inline]
 file_ioctl fs/ioctl.c:500 [inline]
 do_vfs_ioctl+0x1de/0x1720 fs/ioctl.c:684
 ksys_ioctl+0xa9/0xd0 fs/ioctl.c:701
 __do_sys_ioctl fs/ioctl.c:708 [inline]
 __se_sys_ioctl fs/ioctl.c:706 [inline]
 __x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:706
 do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x440859
Code: e8 2c af 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 3b 10 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ffc51a92878 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000440859
RDX: 0000000020000040 RSI: 0000000000008990 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00000000004002c8 R09: 00000000004002c8
R10: 00000000022d5880 R11: 0000000000000213 R12: 0000000000007390
R13: 0000000000401db0 R14: 0000000000000000 R15: 0000000000000000
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jay Vosburgh <j.vosburgh@gmail.com>
Cc: Veaceslav Falico <vfalico@gmail.com>
Cc: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/bonding/bond_main.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1682,6 +1682,8 @@ int bond_enslave(struct net_device *bond
 		goto err_upper_unlink;
 	}
 
+	bond->nest_level = dev_get_nest_level(bond_dev) + 1;
+
 	/* If the mode uses primary, then the following is handled by
 	 * bond_change_active_slave().
 	 */
@@ -1729,7 +1731,6 @@ int bond_enslave(struct net_device *bond
 	if (bond_mode_uses_xmit_hash(bond))
 		bond_update_slave_arr(bond, NULL);
 
-	bond->nest_level = dev_get_nest_level(bond_dev);
 
 	netdev_info(bond_dev, "Enslaving %s as %s interface with %s link\n",
 		    slave_dev->name,
@@ -3359,6 +3360,13 @@ static void bond_fold_stats(struct rtnl_
 	}
 }
 
+static int bond_get_nest_level(struct net_device *bond_dev)
+{
+	struct bonding *bond = netdev_priv(bond_dev);
+
+	return bond->nest_level;
+}
+
 static struct rtnl_link_stats64 *bond_get_stats(struct net_device *bond_dev,
 						struct rtnl_link_stats64 *stats)
 {
@@ -3367,7 +3375,7 @@ static struct rtnl_link_stats64 *bond_ge
 	struct list_head *iter;
 	struct slave *slave;
 
-	spin_lock(&bond->stats_lock);
+	spin_lock_nested(&bond->stats_lock, bond_get_nest_level(bond_dev));
 	memcpy(stats, &bond->bond_stats, sizeof(*stats));
 
 	rcu_read_lock();
@@ -4163,6 +4171,7 @@ static const struct net_device_ops bond_
 	.ndo_neigh_setup	= bond_neigh_setup,
 	.ndo_vlan_rx_add_vid	= bond_vlan_rx_add_vid,
 	.ndo_vlan_rx_kill_vid	= bond_vlan_rx_kill_vid,
+	.ndo_get_lock_subclass	= bond_get_nest_level,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_netpoll_setup	= bond_netpoll_setup,
 	.ndo_netpoll_cleanup	= bond_netpoll_cleanup,
@@ -4655,6 +4664,7 @@ static int bond_init(struct net_device *
 	if (!bond->wq)
 		return -ENOMEM;
 
+	bond->nest_level = SINGLE_DEPTH_NESTING;
 	netdev_lockdep_set_classes(bond_dev);
 
 	list_add_tail(&bond->bond_list, &bn->dev_list);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 56e2c94f055d328f5f6b0a5c1721cca2f2d4e0a1 ]
We currently check the frags memory usage only when a new frag queue is created. This allows attackers to first consume the memory budget (default: 4 MB) creating thousands of frag queues, then send tiny skbs to exceed the high_thresh limit by 2 to 3 orders of magnitude.

Note that before commit 648700f76b03 ("inet: frags: use rhashtables for reassembly units"), the work queue could be starved under DoS, getting no cpu cycles. After commit 648700f76b03, only the per-frag-queue timer can eventually remove an incomplete frag queue and its skbs.
Fixes: b13d3cbfb8e8 ("inet: frag: move eviction of queues to work queue")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jann Horn <jannh@google.com>
Cc: Florian Westphal <fw@strlen.de>
Cc: Peter Oskolkov <posk@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/inet_fragment.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -356,11 +356,6 @@ static struct inet_frag_queue *inet_frag
 {
 	struct inet_frag_queue *q;
 
-	if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh) {
-		inet_frag_schedule_worker(f);
-		return NULL;
-	}
-
 	q = kmem_cache_zalloc(f->frags_cachep, GFP_ATOMIC);
 	if (!q)
 		return NULL;
@@ -397,6 +392,11 @@ struct inet_frag_queue *inet_frag_find(s
 	struct inet_frag_queue *q;
 	int depth = 0;
 
+	if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh) {
+		inet_frag_schedule_worker(f);
+		return NULL;
+	}
+
 	if (frag_mem_limit(nf) > nf->low_thresh)
 		inet_frag_schedule_worker(f);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 4672694bd4f1aebdab0ad763ae4716e89cb15221 ]
ip_frag_queue() might call pskb_pull() on one skb that is already in the fragment queue.
We need to take care of possible truesize change, or we might have an imbalance of the netns frags memory usage.
IPv6 is immune to this bug, because RFC 5722, Section 4, as amended by Erratum ID 3089, states:

    When reassembling an IPv6 datagram, if one or more of its
    constituent fragments is determined to be an overlapping fragment,
    the entire datagram (and any constituent fragments) MUST be
    silently discarded.
Fixes: 158f323b9868 ("net: adjust skb->truesize in pskb_expand_head()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/ip_fragment.c | 5 +++++
 1 file changed, 5 insertions(+)
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -446,11 +446,16 @@ found:
 		int i = end - FRAG_CB(next)->offset; /* overlap is 'i' bytes */
 
 		if (i < next->len) {
+			int delta = -next->truesize;
+
 			/* Eat head of the next overlapped fragment
 			 * and leave the loop. The next ones cannot overlap.
 			 */
 			if (!pskb_pull(next, i))
 				goto err;
+			delta += next->truesize;
+			if (delta)
+				add_frag_mem_limit(qp->q.net, delta);
 			FRAG_CB(next)->offset += i;
 			qp->q.meat -= i;
 			if (next->ip_summed != CHECKSUM_UNNECESSARY)
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Florian Fainelli <f.fainelli@gmail.com>
[ Upstream commit a94c689e6c9e72e722f28339e12dff191ee5a265 ]
If a DSA slave network device was previously disabled, there is no need to suspend or resume it.
Fixes: 2446254915a7 ("net: dsa: allow switch drivers to implement suspend/resume hooks")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/dsa/slave.c | 6 ++++++
 1 file changed, 6 insertions(+)
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -1199,6 +1199,9 @@ int dsa_slave_suspend(struct net_device
 {
 	struct dsa_slave_priv *p = netdev_priv(slave_dev);
 
+	if (!netif_running(slave_dev))
+		return 0;
+
 	netif_device_detach(slave_dev);
 
 	if (p->phy) {
@@ -1216,6 +1219,9 @@ int dsa_slave_resume(struct net_device *
 {
 	struct dsa_slave_priv *p = netdev_priv(slave_dev);
 
+	if (!netif_running(slave_dev))
+		return 0;
+
 	netif_device_attach(slave_dev);
 
 	if (p->phy) {
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jeremy Cline <jcline@redhat.com>
[ Upstream commit bc5b6c0b62b932626a135f516a41838c510c6eba ]
'protocol' is a user-controlled value, so sanitize it after the bounds check to avoid using it for speculative out-of-bounds access to arrays indexed by it.
This addresses the following accesses detected with the help of smatch:
* net/netlink/af_netlink.c:654 __netlink_create() warn: potential spectre issue 'nlk_cb_mutex_keys' [w]
* net/netlink/af_netlink.c:654 __netlink_create() warn: potential spectre issue 'nlk_cb_mutex_key_strings' [w]
* net/netlink/af_netlink.c:685 netlink_create() warn: potential spectre issue 'nl_table' [w] (local cap)
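A stripped-down sketch of the pattern (the function body is simplified and hypothetical; only the bounds check and the array_index_nospec() line mirror the patch): the architectural bounds check alone does not stop speculative execution from racing ahead with an out-of-range index, so the index is clamped to [0, MAX_LINKS) even under speculation.

#include <linux/errno.h>
#include <linux/netlink.h>	/* MAX_LINKS */
#include <linux/nospec.h>

static int example_create(int protocol)
{
	if (protocol < 0 || protocol >= MAX_LINKS)
		return -EPROTONOSUPPORT;
	/* Clamp under speculation; a mispredicted branch above can no
	 * longer be used to index nl_table out of bounds. */
	protocol = array_index_nospec(protocol, MAX_LINKS);

	/* ... nl_table[protocol] is now safe to use ... */
	return 0;
}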
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Jeremy Cline <jcline@redhat.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/netlink/af_netlink.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -62,6 +62,7 @@
 #include <asm/cacheflush.h>
 #include <linux/hash.h>
 #include <linux/genetlink.h>
+#include <linux/nospec.h>
 
 #include <net/net_namespace.h>
 #include <net/sock.h>
@@ -654,6 +655,7 @@ static int netlink_create(struct net *ne
 
 	if (protocol < 0 || protocol >= MAX_LINKS)
 		return -EPROTONOSUPPORT;
+	protocol = array_index_nospec(protocol, MAX_LINKS);
 
 	netlink_lock_table();
 #ifdef CONFIG_MODULES
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jose Abreu <Jose.Abreu@synopsys.com>
[ Upstream commit b7d0f08e9129c45ed41bc0cfa8e77067881e45fd ]
WoL won't work in PCI-based setups because we are not saving the PCI EP state before entering suspend state and not allowing D3 wake.
Fix this by using a wrapper around stmmac_{suspend/resume} which correctly sets the PCI EP state.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c | 40 ++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
@@ -183,7 +183,7 @@ static int stmmac_pci_probe(struct pci_d
 		return -ENOMEM;
 
 	/* Enable pci device */
-	ret = pcim_enable_device(pdev);
+	ret = pci_enable_device(pdev);
 	if (ret) {
 		dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n",
 			__func__);
@@ -232,9 +232,45 @@ static int stmmac_pci_probe(struct pci_d
 static void stmmac_pci_remove(struct pci_dev *pdev)
 {
 	stmmac_dvr_remove(&pdev->dev);
+	pci_disable_device(pdev);
 }
 
-static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_suspend, stmmac_resume);
+static int stmmac_pci_suspend(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+
+	ret = stmmac_suspend(dev);
+	if (ret)
+		return ret;
+
+	ret = pci_save_state(pdev);
+	if (ret)
+		return ret;
+
+	pci_disable_device(pdev);
+	pci_wake_from_d3(pdev, true);
+	return 0;
+}
+
+static int stmmac_pci_resume(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+
+	pci_restore_state(pdev);
+	pci_set_power_state(pdev, PCI_D0);
+
+	ret = pci_enable_device(pdev);
+	if (ret)
+		return ret;
+
+	pci_set_master(pdev);
+
+	return stmmac_resume(dev);
+}
+
+static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_pci_suspend, stmmac_pci_resume);
 
 #define STMMAC_VENDOR_ID 0x700
 #define STMMAC_QUARK_ID  0x0937
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Linus Torvalds <torvalds@linux-foundation.org>
commit d512584780d3e6a7cacb2f482834849453d444a1 upstream.
Anatoly reports another squashfs fuzzing issue, where the decompression parameters themselves are in a compressed block.
This causes squashfs_read_data() to be called in order to read the decompression options before the decompression stream has been set up, making squashfs go sideways.
Reported-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com>
Acked-by: Phillip Lougher <phillip.lougher@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 fs/squashfs/block.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/fs/squashfs/block.c
+++ b/fs/squashfs/block.c
@@ -166,6 +166,8 @@ int squashfs_read_data(struct super_bloc
 	}
 
 	if (compressed) {
+		if (!msblk->stream)
+			goto read_failure;
 		length = squashfs_decompress(msblk, bh, b, offset, length,
 			output);
 		if (length < 0)
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anton Vasilyev <vasilyev@ispras.ru>
commit 72c05f32f4a5055c9c8fe889bb6903ec959c0aad upstream.
ems_usb_probe() allocates memory for dev->tx_msg_buffer, but it is never deallocated in ems_usb_disconnect().
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Anton Vasilyev <vasilyev@ispras.ru>
Cc: stable@vger.kernel.org
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/can/usb/ems_usb.c | 1 +
 1 file changed, 1 insertion(+)
--- a/drivers/net/can/usb/ems_usb.c
+++ b/drivers/net/can/usb/ems_usb.c
@@ -1071,6 +1071,7 @@ static void ems_usb_disconnect(struct us
 		usb_free_urb(dev->intr_urb);
 
 		kfree(dev->intr_in_buffer);
+		kfree(dev->tx_msg_buffer);
 	}
 }
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jeremy Cline <jcline@redhat.com>
commit c8e8cd579bb4265651df8223730105341e61a2d1 upstream.
'call' is a user-controlled value, so sanitize the array index after the bounds check to avoid speculating past the bounds of the 'nargs' array.
Found with the help of Smatch:
net/socket.c:2508 __do_sys_socketcall() warn: potential spectre issue 'nargs' [r] (local cap)
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jeremy Cline <jcline@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/socket.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/net/socket.c
+++ b/net/socket.c
@@ -89,6 +89,7 @@
 #include <linux/magic.h>
 #include <linux/slab.h>
 #include <linux/xattr.h>
+#include <linux/nospec.h>
 
 #include <asm/uaccess.h>
 #include <asm/unistd.h>
@@ -2338,6 +2339,7 @@ SYSCALL_DEFINE2(socketcall, int, call, u
 
 	if (call < 1 || call > SYS_SENDMMSG)
 		return -EINVAL;
+	call = array_index_nospec(call, SYS_SENDMMSG + 1);
 
 	len = nargs[call];
 	if (len > sizeof(a))
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiang Biao <jiang.biao2@zte.com.cn>
commit 89da619bc18d79bca5304724c11d4ba3b67ce2c6 upstream.
Kernel panic under high memory pressure; the call trace looks like:
PID: 21439  TASK: ffff881be3afedd0  CPU: 16  COMMAND: "java"
 #0 [ffff881ec7ed7630] machine_kexec at ffffffff81059beb
 #1 [ffff881ec7ed7690] __crash_kexec at ffffffff81105942
 #2 [ffff881ec7ed7760] crash_kexec at ffffffff81105a30
 #3 [ffff881ec7ed7778] oops_end at ffffffff816902c8
 #4 [ffff881ec7ed77a0] no_context at ffffffff8167ff46
 #5 [ffff881ec7ed77f0] __bad_area_nosemaphore at ffffffff8167ffdc
 #6 [ffff881ec7ed7838] __node_set at ffffffff81680300
 #7 [ffff881ec7ed7860] __do_page_fault at ffffffff8169320f
 #8 [ffff881ec7ed78c0] do_page_fault at ffffffff816932b5
 #9 [ffff881ec7ed78f0] page_fault at ffffffff8168f4c8
    [exception RIP: _raw_spin_lock_irqsave+47]
    RIP: ffffffff8168edef  RSP: ffff881ec7ed79a8  RFLAGS: 00010046
    RAX: 0000000000000246  RBX: ffffea0019740d00  RCX: ffff881ec7ed7fd8
    RDX: 0000000000020000  RSI: 0000000000000016  RDI: 0000000000000008
    RBP: ffff881ec7ed79a8   R8: 0000000000000246   R9: 000000000001a098
    R10: ffff88107ffda000  R11: 0000000000000000  R12: 0000000000000000
    R13: 0000000000000008  R14: ffff881ec7ed7a80  R15: ffff881be3afedd0
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
It happens in the page fault path and results in a double page fault while compacting pages when memory allocation fails.

Analysing the vmcore shows that the page leading to the second page fault is corrupted, with _mapcount=-256 but private=0.

It's caused by a race between migration and ballooning, and a missing lock in virtballoon_migratepage() of the virtio_balloon driver. This patch fixes the bug.
Fixes: e22504296d4f64f ("virtio_balloon: introduce migration primitives to balloon pages")
Cc: stable@vger.kernel.org
Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
Signed-off-by: Huang Chong <huang.chong@zte.com.cn>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/virtio/virtio_balloon.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -493,7 +493,9 @@ static int virtballoon_migratepage(struc
 	tell_host(vb, vb->inflate_vq);
 
 	/* balloon's page migration 2nd step -- deflate "page" */
+	spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
 	balloon_page_delete(page);
+	spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
 	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
 	set_page_pfns(vb, vb->pfns, page);
 	tell_host(vb, vb->deflate_vq);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roman Kagan rkagan@virtuozzo.com
commit 63aff65573d73eb8dda4732ad4ef222dd35e4862 upstream.
A VPID for the nested vcpu is allocated in vmx_create_vcpu() whenever nested VMX is turned on with the module parameter.
However, it is only freed if the L1 guest has executed VMXON, which is not a given.
As a result, on a system with nested==on every creation+deletion of an L1 vcpu without running an L2 guest results in leaking one vpid. Since the total number of vpids is limited to 64k, they can eventually get exhausted, preventing L2 from starting.
Delay allocation of the L2 vpid until VMXON emulation, thus matching its freeing.
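The resulting pairing, sketched from the hunks below (bodies elided;
allocate_vpid()/free_vpid() are the existing helpers, and free_nested()
is where, as I read the 4.9 code, vpid02 is already released):

	/* handle_vmon(): VMXON emulation now allocates the vpid ... */
	vmx->nested.vpid02 = allocate_vpid();
	vmx->nested.vmxon = true;

	/* ... matching the existing release in free_nested() */
	free_vpid(vmx->nested.vpid02);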
Fixes: 5c614b3583e7b6dab0c86356fa36c2bcbb8322a0
Cc: stable@vger.kernel.org
Signed-off-by: Roman Kagan rkagan@virtuozzo.com
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/kvm/vmx.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7085,6 +7085,8 @@ static int handle_vmon(struct kvm_vcpu *
 		     HRTIMER_MODE_REL_PINNED);
 	vmx->nested.preemption_timer.function = vmx_preemption_timer_fn;
 
+	vmx->nested.vpid02 = allocate_vpid();
+
 	vmx->nested.vmxon = true;
 
 	skip_emulated_instruction(vcpu);
@@ -9264,10 +9266,8 @@ static struct kvm_vcpu *vmx_create_vcpu(
 		goto free_vmcs;
 	}
 
-	if (nested) {
+	if (nested)
 		nested_vmx_setup_ctls_msrs(vmx);
-		vmx->nested.vpid02 = allocate_vpid();
-	}
 
 	vmx->nested.posted_intr_nv = -1;
 	vmx->nested.current_vmptr = -1ull;
@@ -9285,7 +9285,6 @@ static struct kvm_vcpu *vmx_create_vcpu(
 	return &vmx->vcpu;
 
 free_vmcs:
-	free_vpid(vmx->nested.vpid02);
 	free_loaded_vmcs(vmx->loaded_vmcs);
 free_msrs:
 	kfree(vmx->guest_msrs);
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Herbert Xu herbert@gondor.apana.org.au
commit 46d8c4b28652d35dc6cfb5adf7f54e102fc04384 upstream.
This was detected by the self-test thanks to Ard's chunking patch.
I finally got around to testing this out on my ancient Via box. It turns out that the workaround got the assembly wrong and we end up doing count + initial cycles of the loop instead of just count.
This obviously causes corruption, either by overwriting source data that has yet to be processed, or by writing past the end of the buffer.
On CPUs that don't require the workaround only ECB is affected. On Nano CPUs both ECB and CBC are affected.
This patch fixes it by doing the subtraction prior to the assembly.
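Concretely, with illustrative numbers (not from the commit): for
count = 8 blocks and initial = 2, the broken workaround executed
2 + 8 = 10 block operations instead of 8; subtracting first gives the
intended 2 + (8 - 2) = 8. The control flow after the fix, with the
inline asm abstracted behind a hypothetical xcrypt_blocks() helper:

	count -= initial;	/* leave only the remainder for the bulk loop */

	if (initial)
		xcrypt_blocks(&input, &output, initial);	/* prefetch-workaround chunk */
	xcrypt_blocks(&input, &output, count);			/* now count, not count + initial */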
Fixes: a76c1c23d0c3 ("crypto: padlock-aes - work around Nano CPU...")
Cc: stable@vger.kernel.org
Reported-by: Jamie Heilman jamie@audible.transient.net
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/crypto/padlock-aes.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/drivers/crypto/padlock-aes.c
+++ b/drivers/crypto/padlock-aes.c
@@ -266,6 +266,8 @@ static inline void padlock_xcrypt_ecb(co
 		return;
 	}
 
+	count -= initial;
+
 	if (initial)
 		asm volatile (".byte 0xf3,0x0f,0xa7,0xc8"	/* rep xcryptecb */
 			      : "+S"(input), "+D"(output)
@@ -273,7 +275,7 @@ static inline void padlock_xcrypt_ecb(co
 
 	asm volatile (".byte 0xf3,0x0f,0xa7,0xc8"	/* rep xcryptecb */
 		      : "+S"(input), "+D"(output)
-		      : "d"(control_word), "b"(key), "c"(count - initial));
+		      : "d"(control_word), "b"(key), "c"(count));
 }
 
 static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
@@ -284,6 +286,8 @@ static inline u8 *padlock_xcrypt_cbc(con
 	if (count < cbc_fetch_blocks)
 		return cbc_crypt(input, output, key, iv, control_word, count);
 
+	count -= initial;
+
 	if (initial)
 		asm volatile (".byte 0xf3,0x0f,0xa7,0xd0"	/* rep xcryptcbc */
 			      : "+S" (input), "+D" (output), "+a" (iv)
@@ -291,7 +295,7 @@ static inline u8 *padlock_xcrypt_cbc(con
 
 	asm volatile (".byte 0xf3,0x0f,0xa7,0xd0"	/* rep xcryptcbc */
 		      : "+S" (input), "+D" (output), "+a" (iv)
-		      : "d" (control_word), "b" (key), "c" (count-initial));
+		      : "d" (control_word), "b" (key), "c" (count));
 	return iv;
 }
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boris Brezillon boris.brezillon@bootlin.com
commit a6a00918d4ad8718c3ccde38c02cec17f116b2fd upstream.
This is needed to ensure ->is_unity is correct when the plane was previously configured to output a multi-planar format with scaling enabled, and is then being reconfigured to output a uniplanar format.
Fixes: fc04023fafec ("drm/vc4: Add support for YUV planes.")
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon boris.brezillon@bootlin.com
Reviewed-by: Eric Anholt eric@anholt.net
Link: https://patchwork.freedesktop.org/patch/msgid/20180724133601.32114-1-boris.b...
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/gpu/drm/vc4/vc4_plane.c | 3 +++
 1 file changed, 3 insertions(+)
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -350,6 +350,9 @@ static int vc4_plane_setup_clipping_and_
 			vc4_state->x_scaling[0] = VC4_SCALING_TPZ;
 		if (vc4_state->y_scaling[0] == VC4_SCALING_NONE)
 			vc4_state->y_scaling[0] = VC4_SCALING_TPZ;
+	} else {
+		vc4_state->x_scaling[1] = VC4_SCALING_NONE;
+		vc4_state->y_scaling[1] = VC4_SCALING_NONE;
 	}
 
 	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
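For context, the computation this feeds (as I read vc4_plane.c; shown
for illustration) takes all four scaling fields into account, which is
why a stale ->{x,y}_scaling[1] left over from a YUV configuration kept
an unscaled uniplanar plane from being treated as unity:

	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
			       vc4_state->y_scaling[0] == VC4_SCALING_NONE &&
			       vc4_state->x_scaling[1] == VC4_SCALING_NONE &&
			       vc4_state->y_scaling[1] == VC4_SCALING_NONE);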
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tony Battersby tonyb@cybernetics.com
commit c170e5a8d222537e98aa8d4fddb667ff7a2ee114 upstream.
Fix a minor memory leak when there is an error opening a /dev/sg device.
Fixes: cc833acbee9d ("sg: O_EXCL and other lock handling")
Cc: stable@vger.kernel.org
Reviewed-by: Ewan D. Milne emilne@redhat.com
Signed-off-by: Tony Battersby tonyb@cybernetics.com
Reviewed-by: Bart Van Assche bart.vanassche@wdc.com
Signed-off-by: Martin K. Petersen martin.petersen@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/scsi/sg.c | 1 +
 1 file changed, 1 insertion(+)
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -2185,6 +2185,7 @@ sg_add_sfp(Sg_device * sdp)
 	write_lock_irqsave(&sdp->sfd_lock, iflags);
 	if (atomic_read(&sdp->detaching)) {
 		write_unlock_irqrestore(&sdp->sfd_lock, iflags);
+		kfree(sfp);
 		return ERR_PTR(-ENODEV);
 	}
 	list_add_tail(&sfp->sfd_siblings, &sdp->sfds);
On Sat, Aug 04, 2018 at 11:00:50AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.9.118 release.
> There are 32 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Mon Aug 6 08:26:35 UTC 2018.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
>     https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.118-rc1...
> or in the git tree and branch at:
>     git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
Merged, compiled with -Werror, and installed onto my OnePlus 6.
No issues noticed in dmesg or general usage.
Thanks!
Nathan
On Sat, Aug 04, 2018 at 02:30:00AM -0700, Nathan Chancellor wrote:
> On Sat, Aug 04, 2018 at 11:00:50AM +0200, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.9.118 release.
> > There are 32 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Mon Aug 6 08:26:35 UTC 2018.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> >     https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.118-rc1...
> > or in the git tree and branch at:
> >     git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
>
> Merged, compiled with -Werror, and installed onto my OnePlus 6.
>
> No issues noticed in dmesg or general usage.
Wonderful, thank you for testing two of these and letting me know.
greg k-h
On 08/04/2018 02:00 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.9.118 release.
> There are 32 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Mon Aug 6 08:26:35 UTC 2018.
> Anything received after that time might be too late.
Build results:
	total: 148 pass: 148 fail: 0
Qemu test results:
	total: 250 pass: 250 fail: 0
Details are available at http://kerneltests.org/builders/.
Guenter
On 4 August 2018 at 14:30, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
> This is the start of the stable review cycle for the 4.9.118 release.
> There are 32 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Mon Aug 6 08:26:35 UTC 2018.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
>     https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.118-rc1...
> or in the git tree and branch at:
>     git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm and x86_64.
Summary
------------------------------------------------------------------------

kernel: 4.9.118-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.9.y
git commit: 54552119e851c10bcc103f36c19901d2239b751c
git describe: v4.9.117-33-g54552119e851
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.9-oe/build/v4.9.117-33-...
No regressions (compared to build v4.9.117-18-g17b0ebc4dc40)
Ran 16223 total tests in the following environments and test suites.
Environments
--------------
- dragonboard-410c - arm64
- hi6220-hikey - arm64
- juno-r2 - arm64
- qemu_arm
- qemu_arm64
- qemu_x86_64
- x15 - arm
- x86_64
Test Suites
-----------
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-containers-tests
* ltp-cve-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-timers-tests
* ltp-open-posix-tests
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-none