The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 80b3ffce0196ea50068885d085ff981e4b8396f4 Mon Sep 17 00:00:00 2001
From: "Maciej W. Rozycki" <macro(a)mips.com>
Date: Mon, 11 Dec 2017 22:53:14 +0000
Subject: [PATCH] MIPS: Consistently handle buffer counter with
PTRACE_SETREGSET
Fix a bug in commit d614fd58a283 ("mips/ptrace: Preserve previous
registers for short regset write") and consistently consume all data
supplied to `fpr_set_msa' with the ptrace(2) PTRACE_SETREGSET request,
such that a zero data buffer counter is returned where insufficient
data has been given to fill a whole number of FP general registers.
In reality this is not going to happen, as the caller is supposed to
supply only data covering a whole number of registers; this is verified
in `ptrace_regset' and asserted again in `fpr_set'. However, structuring
the code such that the presence of trailing partial FP general register
data causes `fpr_set_msa' to return with a non-zero data buffer counter
makes it appear that this trailing data will be used if there are
subsequent writes made to FP registers, which is going to be the case
with the FCSR once the missing write to that register has been fixed.
Fixes: d614fd58a283 ("mips/ptrace: Preserve previous registers for short regset write")
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: James Hogan <james.hogan@mips.com>
Cc: Paul Burton <Paul.Burton@mips.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # v4.11+
Patchwork: https://patchwork.linux-mips.org/patch/17927/
Signed-off-by: Ralf Baechle <ralf(a)linux-mips.org>
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index 7fcadaaf330f..47a01d5f26ea 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -504,7 +504,7 @@ static int fpr_set_msa(struct task_struct *target,
int err;
BUILD_BUG_ON(sizeof(fpr_val) != sizeof(elf_fpreg_t));
- for (i = 0; i < NUM_FPU_REGS && *count >= sizeof(elf_fpreg_t); i++) {
+ for (i = 0; i < NUM_FPU_REGS && *count > 0; i++) {
err = user_regset_copyin(pos, count, kbuf, ubuf,
&fpr_val, i * sizeof(elf_fpreg_t),
(i + 1) * sizeof(elf_fpreg_t));
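For illustration, here is a minimal userspace model of the counter
bookkeeping (a sketch only: `copyin' is a stand-in I made up for the
way user_regset_copyin() consumes *count, assuming 32 registers of
8 bytes as on MIPS64):

	#include <stdio.h>

	#define NUM_FPU_REGS	32
	#define REG_SIZE	8	/* sizeof(elf_fpreg_t) */

	/* Consume up to (end - start) bytes from the supply counter,
	 * as user_regset_copyin() does for one register's slot. */
	static void copyin(unsigned int *pos, unsigned int *count,
			   unsigned int start, unsigned int end)
	{
		unsigned int n = end - start;

		if (n > *count)
			n = *count;
		*pos += n;
		*count -= n;
	}

	int main(void)
	{
		unsigned int pos = 0, count = 20;	/* 2.5 registers */
		int i;

		/* The old condition `count >= REG_SIZE' stops with
		 * count == 4; the fixed condition `count > 0' consumes
		 * the trailing partial register as well, so a zero
		 * counter is returned. */
		for (i = 0; i < NUM_FPU_REGS && count > 0; i++)
			copyin(&pos, &count, i * REG_SIZE,
			       (i + 1) * REG_SIZE);

		printf("remaining count = %u\n", count);	/* 0 */
		return 0;
	}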
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From dc24d0edf33c3e15099688b6bbdf7bdc24bf6e91 Mon Sep 17 00:00:00 2001
From: "Maciej W. Rozycki" <macro(a)mips.com>
Date: Mon, 11 Dec 2017 22:52:15 +0000
Subject: [PATCH] MIPS: Guard against any partial write attempt with
PTRACE_SETREGSET
Complement commit d614fd58a283 ("mips/ptrace: Preserve previous
registers for short regset write") and ensure that no partial register
write attempt is made with PTRACE_SETREGSET, as we do not preinitialize
any temporaries used to hold incoming register data and consequently
random data could be written.
It is the responsibility of the caller, such as `ptrace_regset', to
arrange for writes to span whole registers only, so here we only assert
that it has indeed happened.
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Fixes: 72b22bbad1e7 ("MIPS: Don't assume 64-bit FP registers for FP regset")
Cc: James Hogan <james.hogan@mips.com>
Cc: Paul Burton <Paul.Burton@mips.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # v3.15+
Patchwork: https://patchwork.linux-mips.org/patch/17926/
Signed-off-by: Ralf Baechle <ralf(a)linux-mips.org>
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index 62e8ffd9370a..7fcadaaf330f 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -516,7 +516,15 @@ static int fpr_set_msa(struct task_struct *target,
return 0;
}
-/* Copy the supplied NT_PRFPREG buffer to the floating-point context. */
+/*
+ * Copy the supplied NT_PRFPREG buffer to the floating-point context.
+ *
+ * We optimize for the case where `count % sizeof(elf_fpreg_t) == 0',
+ * which is supposed to have been guaranteed by the kernel before
+ * calling us, e.g. in `ptrace_regset'. We enforce that requirement,
+ * so that we can safely avoid preinitializing temporaries for
+ * partial register writes.
+ */
static int fpr_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
@@ -524,6 +532,8 @@ static int fpr_set(struct task_struct *target,
{
int err;
+ BUG_ON(count % sizeof(elf_fpreg_t));
+
/* XXX fcr31 */
init_fp_ctx(target);
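For reference, the caller-side contract that the new BUG_ON asserts
looks roughly like this from userspace (a hedged sketch, assuming the
glibc ptrace(2) wrapper and an NT_PRFPREG regset of 32 64-bit FP
registers; error handling and attaching to the tracee omitted):

	#include <elf.h>
	#include <stdint.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>

	/* Write the FP general registers of a stopped tracee.  The
	 * iovec length is a whole multiple of the register size, which
	 * is what `ptrace_regset' checks and the BUG_ON asserts. */
	static long set_fp_regs(pid_t pid, const uint64_t regs[32])
	{
		struct iovec iov = {
			.iov_base = (void *)regs,
			.iov_len  = 32 * sizeof(uint64_t),
		};

		return ptrace(PTRACE_SETREGSET, pid, NT_PRFPREG, &iov);
	}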
On Tue, Jan 09, 2018 at 08:49:16PM +0000, Mathieu Desnoyers wrote:
> Hi Greg,
>
> commit e39d200fa "KVM: Fix stack-out-of-bounds read in write_mmio"
> is upstream in the latest 4.15-rc tags, is also in the long-term
> 3.16.52+ and 3.2.97+ stable kernels and in the latest Fedora kernels,
> but is missing from the 4.14, 4.9, 4.4, and 4.1 stable branches.
>
> I think the patch author missed the CC to stable@vger.kernel.org
>
> It might be good to consider picking it up into the stable kernels
> to ensure long term stable and distro kernels don't diverge too much
> from more recent stable kernel branches.
Thanks, now picked up.
I wonder how it got into the 3.16 and 3.2 kernels :(
Also, always cc: stable@vger.kernel.org so that all of the stable
maintainers can see it, as I am not in charge of all of them (e.g.
4.1).
thanks,
greg k-h
The bounce buffer is gone from the MMC core, and now we have found out
that there are some (crippled) i.MX boards out there that have broken
ADMA (cannot do scatter-gather) and broken PIO, so they must use
SDMA. Closer examination shows a less significant slowdown also on
SDMA-only capable laptop hosts.
SDMA limits the number of segments to one, so each segment gets turned
into a singular request that ping-pongs to the block layer before the
next request/segment is issued.
Apparently it happens a lot that the block layer sends requests that
include a lot of physically discontiguous segments. My guess is that
this phenomenon is coming from the file system.
These devices that cannot handle scatterlists in hardware can see major
benefits from a DMA-contiguous bounce buffer.
This patch accumulates those fragmented scatterlists in a physically
contiguous bounce buffer so that we can issue bigger DMA data chunks
to/from the card.
When tested with this PCI-integrated host (1217:8221) that only
supports SDMA:
0b:00.0 SD Host controller: O2 Micro, Inc. OZ600FJ0/OZ900FJ0/OZ600FJS
SD/MMC Card Reader Controller (rev 05)
the patch gave ~1 Mbyte/s improved throughput on large reads and
writes in iozone testing, compared to without the patch.
On the i.MX SDHCI controllers on the crippled i.MX 25 and i.MX 35
the patch restores the performance to what it was before we removed
the bounce buffers, and then some: performance is better than ever
because we now allocate a bounce buffer the size of the maximum
single request the SDMA engine can handle. On the PCI laptop this
is 256K, whereas with the old bounce buffer code it was 64K max.
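The mechanism, boiled down to a runnable toy model (illustrative only:
memcpy stands in for sg_copy_to_buffer() and for the single DMA
transfer, and the 16 x 4 KiB figures are made-up example numbers):

	#include <stdio.h>
	#include <string.h>

	#define SEGS		16
	#define SEG_SIZE	4096

	int main(void)
	{
		static char seg[SEGS][SEG_SIZE];	/* scattered pages */
		static char bounce[SEGS * SEG_SIZE];	/* contiguous */
		size_t off = 0;
		int i;

		/* Pack the fragments, as sg_copy_to_buffer() does, so
		 * the SDMA engine sees one segment instead of SEGS
		 * ping-ponged requests. */
		for (i = 0; i < SEGS; i++) {
			memcpy(bounce + off, seg[i], SEG_SIZE);
			off += SEG_SIZE;
		}

		printf("one %zu-byte transfer instead of %d requests\n",
		       off, SEGS);
		return 0;
	}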
Cc: Pierre Ossman <pierre@ossman.eu>
Cc: Benoît Thébaudeau <benoit@wsystem.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: stable@vger.kernel.org
Fixes: de3ee99b097d ("mmc: Delete bounce buffer handling")
Tested-by: Benjamin Beckmeyer <beckmeyer.b@rittal.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v2->v3:
- Rewrite the commit message a bit
- Add Benjamin's Tested-by
- Add Fixes and stable tags
ChangeLog v1->v2:
- Skip the remapping and fiddling with the buffer, instead use
dma_alloc_coherent() and use a simple, coherent bounce buffer.
- Couple kernel messages to ->parent of the mmc_host as it relates
to the hardware characteristics.
---
drivers/mmc/host/sdhci.c | 94 +++++++++++++++++++++++++++++++++++++++++++-----
drivers/mmc/host/sdhci.h | 3 ++
2 files changed, 89 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index e9290a3439d5..97d4c6fc1159 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -502,8 +502,20 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
if (data->host_cookie == COOKIE_PRE_MAPPED)
return data->sg_count;
- sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
- mmc_get_dma_dir(data));
+ /* Bounce write requests to the bounce buffer */
+ if (host->bounce_buffer) {
+ if (mmc_get_dma_dir(data) == DMA_TO_DEVICE) {
+ /* Copy the data to the bounce buffer */
+ sg_copy_to_buffer(data->sg, data->sg_len,
+ host->bounce_buffer, host->bounce_buffer_size);
+ }
+ /* Just a dummy value */
+ sg_count = 1;
+ } else {
+ /* Just access the data directly from memory */
+ sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+ mmc_get_dma_dir(data));
+ }
if (sg_count == 0)
return -ENOSPC;
@@ -858,8 +870,13 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
SDHCI_ADMA_ADDRESS_HI);
} else {
WARN_ON(sg_cnt != 1);
- sdhci_writel(host, sg_dma_address(data->sg),
- SDHCI_DMA_ADDRESS);
+ /* Bounce buffer goes to work */
+ if (host->bounce_buffer)
+ sdhci_writel(host, host->bounce_addr,
+ SDHCI_DMA_ADDRESS);
+ else
+ sdhci_writel(host, sg_dma_address(data->sg),
+ SDHCI_DMA_ADDRESS);
}
}
@@ -2248,7 +2265,12 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
mrq->data->host_cookie = COOKIE_UNMAPPED;
- if (host->flags & SDHCI_REQ_USE_DMA)
+ /*
+ * No pre-mapping in the pre hook if we're using the bounce buffer,
+ * for that we would need two bounce buffers since one buffer is
+ * in flight when this is getting called.
+ */
+ if (host->flags & SDHCI_REQ_USE_DMA && !host->bounce_buffer)
sdhci_pre_dma_transfer(host, mrq->data, COOKIE_PRE_MAPPED);
}
@@ -2352,8 +2374,19 @@ static bool sdhci_request_done(struct sdhci_host *host)
struct mmc_data *data = mrq->data;
if (data && data->host_cookie == COOKIE_MAPPED) {
- dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
- mmc_get_dma_dir(data));
+ if (host->bounce_buffer) {
+ /* On reads, copy the bounced data into the sglist */
+ if (mmc_get_dma_dir(data) == DMA_FROM_DEVICE) {
+ sg_copy_from_buffer(data->sg, data->sg_len,
+ host->bounce_buffer,
+ host->bounce_buffer_size);
+ }
+ } else {
+ /* Unmap the raw data */
+ dma_unmap_sg(mmc_dev(host->mmc), data->sg,
+ data->sg_len,
+ mmc_get_dma_dir(data));
+ }
data->host_cookie = COOKIE_UNMAPPED;
}
}
@@ -2636,7 +2669,12 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
*/
if (intmask & SDHCI_INT_DMA_END) {
u32 dmastart, dmanow;
- dmastart = sg_dma_address(host->data->sg);
+
+ if (host->bounce_buffer)
+ dmastart = host->bounce_addr;
+ else
+ dmastart = sg_dma_address(host->data->sg);
+
dmanow = dmastart + host->data->bytes_xfered;
/*
* Force update to the next DMA block boundary.
@@ -3713,6 +3751,43 @@ int sdhci_setup_host(struct sdhci_host *host)
*/
mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
+ if (mmc->max_segs == 1) {
+ unsigned int max_blocks;
+ unsigned int max_seg_size;
+
+ max_seg_size = mmc->max_req_size;
+ max_blocks = max_seg_size / 512;
+ dev_info(mmc->parent, "host only supports SDMA, activate bounce buffer\n");
+
+ /*
+ * When we just support one segment, we can get significant speedups
+ * by the help of a bounce buffer to group scattered reads/writes
+ * together.
+ *
+ * TODO: is this too big? Stealing too much memory? The old bounce
+ * buffer is max 64K. This should be the 512K that SDMA can handle
+ * if I read the code above right. Anyways let's try this.
+ * FIXME: use devm_*
+ */
+ host->bounce_buffer = dma_alloc_coherent(mmc->parent, max_seg_size,
+ &host->bounce_addr, GFP_KERNEL);
+ if (!host->bounce_buffer) {
+ dev_err(mmc->parent,
+ "failed to allocate %u bytes for bounce buffer\n",
+ max_seg_size);
+ return -ENOMEM;
+ }
+ host->bounce_buffer_size = max_seg_size;
+
+ /* Lie about this since we're bouncing */
+ mmc->max_segs = max_blocks;
+ mmc->max_seg_size = max_seg_size;
+
+ dev_info(mmc->parent,
+ "bounce buffer: bounce up to %u segments into one, max segment size %u bytes\n",
+ max_blocks, max_seg_size);
+ }
+
return 0;
unreg:
@@ -3743,6 +3818,9 @@ void sdhci_cleanup_host(struct sdhci_host *host)
host->align_addr);
host->adma_table = NULL;
host->align_buffer = NULL;
+ if (host->bounce_buffer)
+ dma_free_coherent(mmc->parent, host->bounce_buffer_size,
+ host->bounce_buffer, host->bounce_addr);
}
EXPORT_SYMBOL_GPL(sdhci_cleanup_host);
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 54bc444c317f..865e09618d22 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -440,6 +440,9 @@ struct sdhci_host {
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
+ char *bounce_buffer; /* For packing SDMA reads/writes */
+ dma_addr_t bounce_addr;
+ size_t bounce_buffer_size;
const struct sdhci_ops *ops; /* Low level hw interface */
--
2.14.3
This is a note to let you know that I've just added the patch titled
mac80211: Add RX flag to indicate ICV stripped
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mac80211-add-rx-flag-to-indicate-icv-stripped.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From cef0acd4d7d4811d2d19cd0195031bf0dfe41249 Mon Sep 17 00:00:00 2001
From: David Spinadel <david.spinadel@intel.com>
Date: Mon, 21 Nov 2016 16:58:40 +0200
Subject: mac80211: Add RX flag to indicate ICV stripped
From: David Spinadel <david.spinadel@intel.com>
commit cef0acd4d7d4811d2d19cd0195031bf0dfe41249 upstream.
Add a flag that indicates that the WEP ICV was stripped from an
RX packet, allowing the device to avoid transferring it when it
has already been checked.
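A driver whose hardware has verified and removed the ICV would report
it roughly like this in its RX path (a sketch, not taken from any
specific driver):

	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

	/* The ICV was checked and stripped by the hardware, so
	 * mac80211 must not trim it again (see the wep.c/wpa.c
	 * hunks below). */
	status->flag |= RX_FLAG_ICV_STRIPPED;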
Signed-off-by: David Spinadel <david.spinadel@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Cc: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/net/mac80211.h | 5 ++++-
net/mac80211/wep.c | 3 ++-
net/mac80211/wpa.c | 3 ++-
3 files changed, 8 insertions(+), 3 deletions(-)
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -1007,7 +1007,7 @@ ieee80211_tx_info_clear_status(struct ie
* @RX_FLAG_DECRYPTED: This frame was decrypted in hardware.
* @RX_FLAG_MMIC_STRIPPED: the Michael MIC is stripped off this frame,
* verification has been done by the hardware.
- * @RX_FLAG_IV_STRIPPED: The IV/ICV are stripped from this frame.
+ * @RX_FLAG_IV_STRIPPED: The IV and ICV are stripped from this frame.
* If this flag is set, the stack cannot do any replay detection
* hence the driver or hardware will have to do that.
* @RX_FLAG_PN_VALIDATED: Currently only valid for CCMP/GCMP frames, this
@@ -1078,6 +1078,8 @@ ieee80211_tx_info_clear_status(struct ie
* @RX_FLAG_ALLOW_SAME_PN: Allow the same PN as same packet before.
* This is used for AMSDU subframes which can have the same PN as
* the first subframe.
+ * @RX_FLAG_ICV_STRIPPED: The ICV is stripped from this frame. CRC checking must
+ * be done in the hardware.
*/
enum mac80211_rx_flags {
RX_FLAG_MMIC_ERROR = BIT(0),
@@ -1113,6 +1115,7 @@ enum mac80211_rx_flags {
RX_FLAG_RADIOTAP_VENDOR_DATA = BIT(31),
RX_FLAG_MIC_STRIPPED = BIT_ULL(32),
RX_FLAG_ALLOW_SAME_PN = BIT_ULL(33),
+ RX_FLAG_ICV_STRIPPED = BIT_ULL(34),
};
#define RX_FLAG_STBC_SHIFT 26
--- a/net/mac80211/wep.c
+++ b/net/mac80211/wep.c
@@ -293,7 +293,8 @@ ieee80211_crypto_wep_decrypt(struct ieee
return RX_DROP_UNUSABLE;
ieee80211_wep_remove_iv(rx->local, rx->skb, rx->key);
/* remove ICV */
- if (pskb_trim(rx->skb, rx->skb->len - IEEE80211_WEP_ICV_LEN))
+ if (!(status->flag & RX_FLAG_ICV_STRIPPED) &&
+ pskb_trim(rx->skb, rx->skb->len - IEEE80211_WEP_ICV_LEN))
return RX_DROP_UNUSABLE;
}
--- a/net/mac80211/wpa.c
+++ b/net/mac80211/wpa.c
@@ -295,7 +295,8 @@ ieee80211_crypto_tkip_decrypt(struct iee
return RX_DROP_UNUSABLE;
/* Trim ICV */
- skb_trim(skb, skb->len - IEEE80211_TKIP_ICV_LEN);
+ if (!(status->flag & RX_FLAG_ICV_STRIPPED))
+ skb_trim(skb, skb->len - IEEE80211_TKIP_ICV_LEN);
/* Remove IV */
memmove(skb->data + IEEE80211_TKIP_IV_LEN, skb->data, hdrlen);
Patches currently in stable-queue which might be from david.spinadel@intel.com are
queue-4.9/mac80211-add-rx-flag-to-indicate-icv-stripped.patch
This is a note to let you know that I've just added the patch titled
dm bufio: fix shrinker scans when (nr_to_scan < retain_target)
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-bufio-fix-shrinker-scans-when-nr_to_scan-retain_target.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From fbc7c07ec23c040179384a1f16b62b6030eb6bdd Mon Sep 17 00:00:00 2001
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 6 Dec 2017 09:27:30 -0800
Subject: dm bufio: fix shrinker scans when (nr_to_scan < retain_target)
From: Suren Baghdasaryan <surenb@google.com>
commit fbc7c07ec23c040179384a1f16b62b6030eb6bdd upstream.
When the system is under memory pressure, it is observed that the dm
bufio shrinker often reclaims only one buffer per scan. This change
fixes the following two issues in the dm bufio shrinker that cause
this behavior:
1. The ((nr_to_scan - freed) <= retain_target) condition is used to
terminate the slab scan process. This assumes that nr_to_scan is equal
to the LRU size, which might not be correct because do_shrink_slab()
in vmscan.c calculates nr_to_scan using multiple inputs.
As a result, when nr_to_scan is less than retain_target (64) the scan
will terminate after the first iteration, effectively reclaiming one
buffer per scan and making scans very inefficient. This hurts vmscan
performance, especially because the mutex is acquired/released every
time dm_bufio_shrink_scan() is called.
The new implementation uses the ((LRU size - freed) <= retain_target)
condition for scan termination. The LRU size can be safely determined
inside __scan() because this function is called after dm_bufio_lock().
2. do_shrink_slab() uses the value returned by dm_bufio_shrink_count()
to determine the number of freeable objects in the slab. However,
dm_bufio always retains retain_target buffers in its LRU and will
terminate a scan when this mark is reached. Therefore, returning the
entire LRU size from dm_bufio_shrink_count() is misleading, because it
does not represent the number of freeable objects that the slab will
reclaim during a scan. Returning (LRU size - retain_target) better
represents the number of freeable objects in the slab. This way
do_shrink_slab() returns 0 when (LRU size < retain_target) and vmscan
will not try to scan this shrinker, avoiding scans that cannot reclaim
any memory (a worked example follows below).
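As a worked example (illustrative numbers, not from the test below):
with an LRU of 100 buffers, retain_target = 64 and nr_to_scan = 32,
the old termination test (nr_to_scan - freed) <= retain_target is
already satisfied once a single buffer has been freed (31 <= 64), so
each scan reclaims just one buffer. The new test
(LRU size - freed) <= retain_target lets the same scan free
100 - 64 = 36 buffers, and dm_bufio_shrink_count() correspondingly
reports 36 freeable objects instead of 100.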
Test: tested using an Android device running
<AOSP>/system/extras/alloc-stress, which generates memory pressure
and causes intensive shrinker scans.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/md/dm-bufio.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1554,7 +1554,8 @@ static unsigned long __scan(struct dm_bu
int l;
struct dm_buffer *b, *tmp;
unsigned long freed = 0;
- unsigned long count = nr_to_scan;
+ unsigned long count = c->n_buffers[LIST_CLEAN] +
+ c->n_buffers[LIST_DIRTY];
unsigned long retain_target = get_retain_buffers(c);
for (l = 0; l < LIST_SIZE; l++) {
@@ -1591,6 +1592,7 @@ dm_bufio_shrink_count(struct shrinker *s
{
struct dm_bufio_client *c;
unsigned long count;
+ unsigned long retain_target;
c = container_of(shrink, struct dm_bufio_client, shrinker);
if (sc->gfp_mask & __GFP_FS)
@@ -1599,8 +1601,9 @@ dm_bufio_shrink_count(struct shrinker *s
return 0;
count = c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY];
+ retain_target = get_retain_buffers(c);
dm_bufio_unlock(c);
- return count;
+ return (count < retain_target) ? 0 : (count - retain_target);
}
/*
Patches currently in stable-queue which might be from surenb@google.com are
queue-4.9/dm-bufio-fix-shrinker-scans-when-nr_to_scan-retain_target.patch