The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8adddf349fda0d3de2f6bb41ddf838cbf36a8ad2 Mon Sep 17 00:00:00 2001
From: Michael Ellerman <mpe(a)ellerman.id.au>
Date: Tue, 16 Apr 2019 23:59:02 +1000
Subject: [PATCH] powerpc/mm/radix: Make Radix require HUGETLB_PAGE
Joel reported weird crashes using skiroot_defconfig; in his case we
jumped into an NX page:
kernel tried to execute exec-protected page (c000000002bff4f0) - exploit attempt? (uid: 0)
BUG: Unable to handle kernel instruction fetch
Faulting instruction address: 0xc000000002bff4f0
Looking at the disassembly, we had simply branched to that address:
c000000000c001bc 49fff335 bl c000000002bff4f0
But that didn't match the original kernel image:
c000000000c001bc 4bfff335 bl c000000000bff4f0 <kobject_get+0x8>
When STRICT_KERNEL_RWX is enabled, and we're using the radix MMU, we
call radix__change_memory_range() late in boot to change page
protections. We do that both to mark rodata read only and also to mark
init text no-execute. That involves walking the kernel page tables,
and clearing _PAGE_WRITE or _PAGE_EXEC respectively.
With radix we may use hugepages for the linear mapping, so the code in
radix__change_memory_range() uses eg. pmd_huge() to test if it has
found a huge mapping, and if so it stops the page table walk and
changes the PMD permissions.
However if the kernel is built without HUGETLBFS support, pmd_huge()
is just a #define that always returns 0. That causes the code in
radix__change_memory_range() to incorrectly interpret the PMD value as
a pointer to a PTE page rather than as a PTE at the PMD level.
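To make that concrete, here is a rough sketch of the walk, not the
actual radix__change_memory_range() code; update_the_pmd() and
update_the_pte() are made-up helper names standing in for the real
permission-update logic:

  if (pmd_huge(*pmdp)) {
          /* intended path: *pmdp is a leaf PTE mapping a huge page,
           * so change the protection bits at the PMD level */
          update_the_pmd(pmdp);                   /* made-up helper */
  } else {
          /* with CONFIG_HUGETLB_PAGE=n, pmd_huge() is a #define
           * that is always 0, so we always end up here: *pmdp is
           * treated as the address of a PTE page, ptep points into
           * kernel text, and "clearing" _PAGE_WRITE/_PAGE_EXEC
           * rewrites instructions instead of PTEs */
          pte_t *ptep = pte_offset_kernel(pmdp, addr);
          update_the_pte(ptep);                   /* made-up helper */
  }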
We can see this using `dv` in xmon which also uses pmd_huge():
0:mon> dv c000000000000000
pgd @ 0xc000000001740000
pgdp @ 0xc000000001740000 = 0x80000000ffffb009
pudp @ 0xc0000000ffffb000 = 0x80000000ffffa009
pmdp @ 0xc0000000ffffa000 = 0xc00000000000018f <- this is a PTE
ptep @ 0xc000000000000100 = 0xa64bb17da64ab07d <- kernel text
The end result is we treat the value at 0xc000000000000100 as a PTE
and clear _PAGE_WRITE or _PAGE_EXEC, potentially corrupting the code
at that address.
In Joel's specific case we cleared the sign bit in the offset of the
branch, turning a backward branch into a forward branch and causing
us to branch into a non-executable page. However the exact
nature of the crash depends on kernel version, compiler version, and
other factors.
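For reference, decoding the two instruction words quoted above shows
that the single flipped bit is 0x02000000, the sign bit of the I-form
branch displacement (this breakdown is mine, not part of the original
report):

  0x4bfff335 & 0x03fffffc = 0x03fff334  -> sign bit set, disp = -0xccc
      0xc000000000c001bc - 0xccc     = 0xc000000000bff4f0  (original)
  0x49fff335 & 0x03fffffc = 0x01fff334  -> sign bit clear, disp = +0x1fff334
      0xc000000000c001bc + 0x1fff334 = 0xc000000002bff4f0  (corrupted)
  0x4bfff335 ^ 0x49fff335 = 0x02000000  (the cleared sign bit)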
We need to fix radix__change_memory_range() to not use accessors that
depend on HUGETLBFS, but we also have radix memory hotplug code that
uses pmd_huge() etc that will also need fixing. So for now just
disallow the broken combination of Radix with HUGETLBFS disabled.
The only defconfig we have that is affected is skiroot_defconfig, so
turn on HUGETLBFS there so that it still gets Radix.
Fixes: 566ca99af026 ("powerpc/mm/radix: Add dummy radix_enabled()")
Cc: stable(a)vger.kernel.org # v4.7+
Reported-by: Joel Stanley <joel(a)jms.id.au>
Signed-off-by: Michael Ellerman <mpe(a)ellerman.id.au>
diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
index 5ba131c30f6b..1bcd468ab422 100644
--- a/arch/powerpc/configs/skiroot_defconfig
+++ b/arch/powerpc/configs/skiroot_defconfig
@@ -266,6 +266,7 @@ CONFIG_UDF_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_PROC_KCORE=y
+CONFIG_HUGETLBFS=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=y
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 842b2c7e156a..50cd09b4e05d 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -324,7 +324,7 @@ config ARCH_ENABLE_SPLIT_PMD_PTLOCK
config PPC_RADIX_MMU
bool "Radix MMU Support"
- depends on PPC_BOOK3S_64
+ depends on PPC_BOOK3S_64 && HUGETLB_PAGE
select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
default y
help
On 4/28/19 4:39 PM, Sasha Levin wrote:
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: 70800c3c0cc5 locking/rwsem: Scan the wait_list for readers only once.
>
> The bot has tested the following trees: v5.0.10, v4.19.37, v4.14.114, v4.9.171.
>
> v5.0.10: Failed to apply! Possible dependencies:
> 412f34a82ccf ("locking/qspinlock_stat: Track the no MCS node available case")
> 46ad0840b158 ("locking/rwsem: Remove arch specific rwsem files")
> 579afe866f52 ("xtensa: use generic spinlock/rwlock implementation")
> a8654596f037 ("locking/rwsem: Enable lock event counting")
> ad53fa10fa9e ("locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs")
> c7580c1e8443 ("locking/rwsem: Move owner setting code from rwsem.c to rwsem.h")
> ce9084ba0d1d ("x86: Make ARCH_USE_MEMREMAP_PROT a generic Kconfig symbol")
> d682b596d993 ("locking/qspinlock: Handle > 4 slowpath nesting levels")
> eecec78f7777 ("locking/rwsem: Relocate rwsem_down_read_failed()")
> fb346fd9fc08 ("locking/lock_events: Make lock_events available for all archs & other locks")
>
> v4.19.37: Failed to apply! Possible dependencies:
> 0fa809ca7f81 ("locking/pvqspinlock: Extend node size when pvqspinlock is configured")
> 1222109a5363 ("locking/qspinlock_stat: Count instances of nested lock slowpaths")
> 412f34a82ccf ("locking/qspinlock_stat: Track the no MCS node available case")
> 46ad0840b158 ("locking/rwsem: Remove arch specific rwsem files")
> 4b486b535c33 ("locking/rwsem: Exit read lock slowpath if queue empty & no writer")
> 925b9cd1b89a ("locking/rwsem: Make owner store task pointer of last owning reader")
> a8654596f037 ("locking/rwsem: Enable lock event counting")
> ad53fa10fa9e ("locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs")
> c7580c1e8443 ("locking/rwsem: Move owner setting code from rwsem.c to rwsem.h")
> ce9084ba0d1d ("x86: Make ARCH_USE_MEMREMAP_PROT a generic Kconfig symbol")
> d682b596d993 ("locking/qspinlock: Handle > 4 slowpath nesting levels")
> eecec78f7777 ("locking/rwsem: Relocate rwsem_down_read_failed()")
> fb346fd9fc08 ("locking/lock_events: Make lock_events available for all archs & other locks")
>
> v4.14.114: Failed to apply! Possible dependencies:
> 0fa809ca7f81 ("locking/pvqspinlock: Extend node size when pvqspinlock is configured")
> 11752adb68a3 ("locking/pvqspinlock: Implement hybrid PV queued/unfair locks")
> 1222109a5363 ("locking/qspinlock_stat: Count instances of nested lock slowpaths")
> 1958b5fc4010 ("x86/boot: Add early boot support when running with SEV active")
> 1cd9c22fee3a ("x86/mm/encrypt: Move page table helpers into separate translation unit")
> 271ca788774a ("arch: enable relative relocations for arm64, power and x86")
> 412f34a82ccf ("locking/qspinlock_stat: Track the no MCS node available case")
> 81d3dc9a349b ("locking/qspinlock: Add stat tracking for pending vs. slowpath")
> 94d49eb30e85 ("x86/mm: Decouple dynamic __PHYSICAL_MASK from AMD SME")
> a8654596f037 ("locking/rwsem: Enable lock event counting")
> ad53fa10fa9e ("locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs")
> ce9084ba0d1d ("x86: Make ARCH_USE_MEMREMAP_PROT a generic Kconfig symbol")
> d682b596d993 ("locking/qspinlock: Handle > 4 slowpath nesting levels")
> d7b417fa08d1 ("x86/mm: Add DMA support for SEV memory encryption")
> d8aa7eea78a1 ("x86/mm: Add Secure Encrypted Virtualization (SEV) support")
> dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
> fb346fd9fc08 ("locking/lock_events: Make lock_events available for all archs & other locks")
>
> v4.9.171: Failed to apply! Possible dependencies:
> 1a8b6d76dc5b ("net:add one common config ARCH_WANT_RELAX_ORDER to support relax ordering")
> 271ca788774a ("arch: enable relative relocations for arm64, power and x86")
> 29dee3c03abc ("locking/refcounts: Out-of-line everything")
> 40565b5aedd6 ("sched/cputime, powerpc, s390: Make scaled cputime arch specific")
> 4ad8622dc548 ("powerpc/8xx: Implement hw_breakpoint")
> 51c9c0843993 ("powerpc/kprobes: Implement Optprobes")
> 5b9ff0278598 ("powerpc: Build-time sort the exception table")
> 65c059bcaa73 ("powerpc: Enable support for GCC plugins")
> 9fea59bd7ca5 ("powerpc/mm: Add support for runtime configuration of ASLR limits")
> a7d2475af7ae ("powerpc: Sort the selects under CONFIG_PPC")
> a8654596f037 ("locking/rwsem: Enable lock event counting")
> bd174169c7a1 ("locking/refcount: Add refcount_t API kernel-doc comments")
> ce9084ba0d1d ("x86: Make ARCH_USE_MEMREMAP_PROT a generic Kconfig symbol")
> d557d1b58b35 ("refcount: change EXPORT_SYMBOL markings")
> ebfa50df435e ("powerpc: Add helper to check if offset is within relative branch range")
> f405df5de317 ("refcount_t: Introduce a special purpose refcount type")
> fa769d3f58e6 ("powerpc/32: Enable HW_BREAKPOINT on BOOK3S")
> fb346fd9fc08 ("locking/lock_events: Make lock_events available for all archs & other locks")
> fd25d19f6b8d ("locking/refcount: Create unchecked atomic_t implementation")
>
>
> How should we proceed with this patch?
I will send out a version that will be easier to backport once this
patch lands in the mainline.
Cheers,
Longman
During my rwsem testing, it was found that after a down_read(), the
reader count may occasionally become 0 or even negative. Consequently,
a writer may steal the lock at that time and execute in parallel with
the reader, breaking the mutual exclusion guarantee of the write
lock. In other words, both readers and a writer can become rwsem
owners simultaneously.
The current reader wakeup code does it in one pass: it clears
waiter->task and puts the waiters into wake_q before fully
incrementing the reader count. Once waiter->task is cleared, the
corresponding reader may see it, finish its critical section and do
an unlock that decrements the count before the count has been
incremented. This is not a problem if there is only one reader to
wake up, as the count has been pre-incremented by 1. It is a problem
if there is more than one reader to be woken up, as the writer can
then steal the lock.
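Roughly, the window looks like this (an illustrative interleaving,
not a trace from the actual test run):

  waker (__rwsem_mark_wake)            to-be-woken reader
  -------------------------            ---------------------------------
  clears waiter->task                  sees waiter->task == NULL in
                                       rwsem_down_read_failed() and
                                       returns with the read lock "held"
                                       ... critical section ...
                                       up_read() decrements the count
  adds woken * RWSEM_ACTIVE_READ_BIAS
  to the count                         <- too late: with more than one
                                          reader woken, the count may
                                          already have hit 0 or below,
                                          letting a writer steal the lock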
The wakeup was actually done in two passes before the v4.9 commit
70800c3c0cc5 ("locking/rwsem: Scan the wait_list for readers only
once"). To fix this problem, the wakeup is now done in two passes
again. In the first pass, we collect the readers and count them. The
reader count is then fully incremented. In the second pass,
waiter->task is cleared and the waiters are put into wake_q to be
woken up later.
Fixes: 70800c3c0cc5 ("locking/rwsem: Scan the wait_list for readers only once")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Waiman Long <longman(a)redhat.com>
---
kernel/locking/rwsem-xadd.c | 45 +++++++++++++++++++++++++------------
1 file changed, 31 insertions(+), 14 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 6b3ee9948bf1..cab5b1f6b2de 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -130,6 +130,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
{
struct rwsem_waiter *waiter, *tmp;
long oldcount, woken = 0, adjustment = 0;
+ struct list_head wlist;
/*
* Take a peek at the queue head waiter such that we can determine
@@ -188,18 +189,44 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
* of the queue. We know that woken will be at least 1 as we accounted
* for above. Note we increment the 'active part' of the count by the
* number of readers before waking any processes up.
+ *
+ * We have to do wakeup in 2 passes to prevent the possibility that
+ * the reader count may be decremented before it is incremented. It
+ * is because the to-be-woken waiter may not have slept yet. So it
+ * may see waiter->task got cleared, finish its critical section and
+ * do an unlock before the reader count increment.
+ *
+ * 1) Collect the read-waiters in a separate list, count them and
+ * fully increment the reader count in rwsem.
+ * 2) For each waiter in the new list, clear waiter->task and
+ * put them into wake_q to be woken up later.
*/
+ INIT_LIST_HEAD(&wlist);
list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
- struct task_struct *tsk;
-
if (waiter->type == RWSEM_WAITING_FOR_WRITE)
break;
woken++;
- tsk = waiter->task;
+ list_move_tail(&waiter->list, &wlist);
+ }
+
+ adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
+ lockevent_cond_inc(rwsem_wake_reader, woken);
+ if (list_empty(&sem->wait_list)) {
+ /* hit end of list above */
+ adjustment -= RWSEM_WAITING_BIAS;
+ }
+
+ if (adjustment)
+ atomic_long_add(adjustment, &sem->count);
+
+ /* 2nd pass */
+ list_for_each_entry(waiter, &wlist, list) {
+ struct task_struct *tsk;
+ tsk = waiter->task;
get_task_struct(tsk);
- list_del(&waiter->list);
+
/*
* Ensure calling get_task_struct() before setting the reader
* waiter to nil such that rwsem_down_read_failed() cannot
@@ -213,16 +240,6 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
*/
wake_q_add_safe(wake_q, tsk);
}
-
- adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
- lockevent_cond_inc(rwsem_wake_reader, woken);
- if (list_empty(&sem->wait_list)) {
- /* hit end of list above */
- adjustment -= RWSEM_WAITING_BIAS;
- }
-
- if (adjustment)
- atomic_long_add(adjustment, &sem->count);
}
/*
--
2.18.1
Once blk_cleanup_queue() returns, tags shouldn't be used any more,
because blk_mq_free_tag_set() may be called. Commit 45a9c9d909b2
("blk-mq: Fix a use-after-free") fixes this issue exactly.
However, that commit introduces another issue. Before 45a9c9d909b2,
we were allowed to run the queue while it was being cleaned up as long
as the queue's kobj refcount was held. After that commit, the queue
can't be run during queue cleanup, otherwise an oops can be triggered
easily because some fields of hctx are freed by blk_mq_free_queue() in
blk_cleanup_queue().
We have devised ways of addressing this kind of issue before, such as:
8dc765d438f1 ("SCSI: fix queue cleanup race before queue initialization is done")
c2856ae2f315 ("blk-mq: quiesce queue before freeing queue")
But these still can't cover all cases; recently James reported another
issue of this kind:
https://marc.info/?l=linux-scsi&m=155389088124782&w=2
This issue is quite hard to address with the previous approaches,
given that scsi_run_queue() may run requeues for other LUNs.
Fix the above issue by freeing the hctx's resources in its release
handler. This is safe because the tags aren't needed for freeing those
hctx resources. This approach follows the typical design pattern for a
kobject's release handler.
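For readers less familiar with the pattern, the idea is roughly this
(a simplified sketch, not the exact call chain in this patch):

  /* whoever drops the last reference triggers the release handler */
  kobject_put(&hctx->kobj);
      -> kref_put(&kobj->kref, ...)
          -> blk_mq_hw_sysfs_release(kobj)
                 /* runs only once nothing can reference the hctx any
                  * more, so freeing hctx->fq, hctx->ctx_map etc. here
                  * (see the blk-mq-sysfs.c hunk below) cannot race
                  * with a late queue run during blk_cleanup_queue() */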
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: James Smart <james.smart(a)broadcom.com>
Cc: Bart Van Assche <bart.vanassche(a)wdc.com>
Cc: linux-scsi(a)vger.kernel.org
Cc: Martin K . Petersen <martin.petersen(a)oracle.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: James E . J . Bottomley <jejb(a)linux.vnet.ibm.com>
Reported-by: James Smart <james.smart(a)broadcom.com>
Fixes: 45a9c9d909b2 ("blk-mq: Fix a use-after-free")
Cc: stable(a)vger.kernel.org
Reviewed-by: Hannes Reinecke <hare(a)suse.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Tested-by: James Smart <james.smart(a)broadcom.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
---
block/blk-core.c | 2 +-
block/blk-mq-sysfs.c | 6 ++++++
block/blk-mq.c | 8 ++------
block/blk-mq.h | 2 +-
4 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 93dc588fabe2..2dd94b3e9ece 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -374,7 +374,7 @@ void blk_cleanup_queue(struct request_queue *q)
blk_exit_queue(q);
if (queue_is_mq(q))
- blk_mq_free_queue(q);
+ blk_mq_exit_queue(q);
percpu_ref_exit(&q->q_usage_counter);
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index 3f9c3f4ac44c..4040e62c3737 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -10,6 +10,7 @@
#include <linux/smp.h>
#include <linux/blk-mq.h>
+#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-tag.h"
@@ -33,6 +34,11 @@ static void blk_mq_hw_sysfs_release(struct kobject *kobj)
{
struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
kobj);
+
+ if (hctx->flags & BLK_MQ_F_BLOCKING)
+ cleanup_srcu_struct(hctx->srcu);
+ blk_free_flush_queue(hctx->fq);
+ sbitmap_free(&hctx->ctx_map);
free_cpumask_var(hctx->cpumask);
kfree(hctx->ctxs);
kfree(hctx);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 89781309a108..d98cb9614dfa 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2267,12 +2267,7 @@ static void blk_mq_exit_hctx(struct request_queue *q,
if (set->ops->exit_hctx)
set->ops->exit_hctx(hctx, hctx_idx);
- if (hctx->flags & BLK_MQ_F_BLOCKING)
- cleanup_srcu_struct(hctx->srcu);
-
blk_mq_remove_cpuhp(hctx);
- blk_free_flush_queue(hctx->fq);
- sbitmap_free(&hctx->ctx_map);
}
static void blk_mq_exit_hw_queues(struct request_queue *q,
@@ -2907,7 +2902,8 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
}
EXPORT_SYMBOL(blk_mq_init_allocated_queue);
-void blk_mq_free_queue(struct request_queue *q)
+/* tags can _not_ be used after returning from blk_mq_exit_queue */
+void blk_mq_exit_queue(struct request_queue *q)
{
struct blk_mq_tag_set *set = q->tag_set;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 423ea88ab6fb..633a5a77ee8b 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -37,7 +37,7 @@ struct blk_mq_ctx {
struct kobject kobj;
} ____cacheline_aligned_in_smp;
-void blk_mq_free_queue(struct request_queue *q);
+void blk_mq_exit_queue(struct request_queue *q);
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
void blk_mq_wake_waiters(struct request_queue *q);
bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool);
--
2.9.5
From: Philipp Puschmann <philipp.puschmann(a)emlix.com>
[ Upstream commit 82ad759143ed77673db0d93d53c1cde7b99917ee ]
This patch fixes a bug that prevents freeing the reset gpio on unloading
the module.
aic3x_i2c_probe is called when loading the module and it calls list_add
with a probably uninitialized list entry aic3x->list (next = prev = NULL).
So even if list_del is called it does nothing, and in the end the
gpio_reset is not freed. Repeated module probing then fails silently
because gpio_request fails.
When moving INIT_LIST_HEAD to aic3x_i2c_probe we also have to move
list_del to aic3x_i2c_remove, because aic3x_remove may be called
multiple times without aic3x_i2c_remove being called, which would lead
to a NULL pointer dereference.
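Condensed, the resulting pairing looks like this (just a summary of
the hunks below, with unrelated code elided):

  aic3x_i2c_probe():
          ...
          INIT_LIST_HEAD(&aic3x->list);
          list_add(&aic3x->list, &reset_list);

  aic3x_i2c_remove():
          list_del(&aic3x->list);
          ...

  /* component-level aic3x_probe()/aic3x_remove() no longer touch
   * aic3x->list, so being called more than once is harmless */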
Signed-off-by: Philipp Puschmann <philipp.puschmann(a)emlix.com>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
sound/soc/codecs/tlv320aic3x.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c
index 6aa0edf8c5ef..cea3ebecdb12 100644
--- a/sound/soc/codecs/tlv320aic3x.c
+++ b/sound/soc/codecs/tlv320aic3x.c
@@ -1609,7 +1609,6 @@ static int aic3x_probe(struct snd_soc_component *component)
struct aic3x_priv *aic3x = snd_soc_component_get_drvdata(component);
int ret, i;
- INIT_LIST_HEAD(&aic3x->list);
aic3x->component = component;
for (i = 0; i < ARRAY_SIZE(aic3x->supplies); i++) {
@@ -1692,7 +1691,6 @@ static void aic3x_remove(struct snd_soc_component *component)
struct aic3x_priv *aic3x = snd_soc_component_get_drvdata(component);
int i;
- list_del(&aic3x->list);
for (i = 0; i < ARRAY_SIZE(aic3x->supplies); i++)
regulator_unregister_notifier(aic3x->supplies[i].consumer,
&aic3x->disable_nb[i].nb);
@@ -1890,6 +1888,7 @@ static int aic3x_i2c_probe(struct i2c_client *i2c,
if (ret != 0)
goto err_gpio;
+ INIT_LIST_HEAD(&aic3x->list);
list_add(&aic3x->list, &reset_list);
return 0;
@@ -1906,6 +1905,8 @@ static int aic3x_i2c_remove(struct i2c_client *client)
{
struct aic3x_priv *aic3x = i2c_get_clientdata(client);
+ list_del(&aic3x->list);
+
if (gpio_is_valid(aic3x->gpio_reset) &&
!aic3x_is_shared_reset(aic3x)) {
gpio_set_value(aic3x->gpio_reset, 0);
--
2.19.1
This is a backport of a 5.1rc patchset:
https://patchwork.ozlabs.org/cover/1029418/
Which was backported into 4.19:
https://patchwork.ozlabs.org/cover/1081619/
I had to backport two additional patches into 4.14 to make it work.
John Masinter (captwiggum), could you please confirm that this
patchset fixes the TAHI tests? (I'm reasonably certain that it does, as
I ran the ip_defrag selftest, but given the amount of change here,
another set of completed tests would be nice to have.)
Eric Dumazet (1):
ipv6: frags: fix a lockdep false positive
Florian Westphal (1):
ipv6: remove dependency of nf_defrag_ipv6 on ipv6 module
Peter Oskolkov (3):
net: IP defrag: encapsulate rbtree defrag code into callable functions
net: IP6 defrag: use rbtrees for IPv6 defrag
net: IP6 defrag: use rbtrees in nf_conntrack_reasm.c
include/net/inet_frag.h | 16 +-
include/net/ipv6.h | 29 --
include/net/ipv6_frag.h | 111 +++++++
net/ieee802154/6lowpan/reassembly.c | 2 +-
net/ipv4/inet_fragment.c | 293 ++++++++++++++++++
net/ipv4/ip_fragment.c | 290 ++----------------
net/ipv6/netfilter/nf_conntrack_reasm.c | 279 +++++------------
net/ipv6/netfilter/nf_defrag_ipv6_hooks.c | 3 +-
net/ipv6/reassembly.c | 357 +++++-----------------
net/openvswitch/conntrack.c | 1 +
10 files changed, 616 insertions(+), 765 deletions(-)
create mode 100644 include/net/ipv6_frag.h
--
2.21.0.593.g511ec345e18-goog
Hi,
On 23-04-19 00:29, Sasha Levin wrote:
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a -stable tag.
> The stable tag indicates that it's relevant for the following trees: all
>
> The bot has tested the following trees: v5.0.9, v4.19.36, v4.14.113, v4.9.170, v4.4.178, v3.18.138.
>
> v5.0.9: Build OK!
> v4.19.36: Build OK!
> v4.14.113: Failed to apply! Possible dependencies:
> b60c75b6a502 ("power: supply: axp288_fuel_gauge: Do not register our psy on (some) HDMI sticks")
>
> v4.9.170: Failed to apply! Possible dependencies:
> b60c75b6a502 ("power: supply: axp288_fuel_gauge: Do not register our psy on (some) HDMI sticks")
>
> v4.4.178: Failed to apply! Possible dependencies:
> b60c75b6a502 ("power: supply: axp288_fuel_gauge: Do not register our psy on (some) HDMI sticks")
>
> v3.18.138: Failed to apply! Possible dependencies:
> 5a5bf49088f4 ("X-Power AXP288 PMIC Fuel Gauge Driver")
> b60c75b6a502 ("power: supply: axp288_fuel_gauge: Do not register our psy on (some) HDMI sticks")
> c1a281e34dae ("power: Add support for DA9150 Charger")
>
>
> How should we proceed with this patch?
Just applying it to 4.19.x and 5.0.x is fine.
Regards,
Hans