This is a follow-up to the discussion in [1], [2].
IOMMUs using the ARMv7 short-descriptor format require page tables (level 1 and 2) to be allocated within the first 4GB of RAM, even on 64-bit systems, since the table descriptors only hold 32-bit physical addresses.
For L1 tables that are bigger than a page, we can just use __get_free_pages with GFP_DMA32 (on arm64 systems only, arm would still use GFP_DMA).
For L2 tables that only take 1KB, it would be a waste to allocate a full page, so we considered 3 approaches:
 1. This series, adding support for GFP_DMA32 slab caches.
 2. genalloc, which requires pre-allocating the maximum number of L2 page tables (4096, so 4MB of memory).
 3. page_frag, which is not very memory-efficient as it is unable to reuse freed fragments until the whole page is freed. [3]
This series is the most memory-efficient approach.
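For reference, the sizes involved (ARMv7 short-descriptor geometry): an L1 table holds 4096 32-bit entries, i.e. 16KB, hence the multi-page allocation; an L2 table holds 256 32-bit entries, i.e. 1KB, so allocating a full 4KB page per L2 table would waste 75% of it, and the worst case of one L2 table per L1 entry is where genalloc's 4096 x 1KB = 4MB pre-allocation comes from.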
stable@ note: We confirmed that this is a regression, and IOMMU errors happen on 4.19 and linux-next/master on MT8173 (elm, Acer Chromebook R13). The issue most likely starts from commit ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32"), i.e. 4.15, and presumably breaks a number of Mediatek platforms (and maybe others?).
[1] https://lists.linuxfoundation.org/pipermail/iommu/2018-November/030876.html
[2] https://lists.linuxfoundation.org/pipermail/iommu/2018-December/031696.html
[3] https://patchwork.codeaurora.org/patch/671639/
Changes since v1:
 - Add support for SLAB_CACHE_DMA32 in slab and slub (patches 1/2)
 - iommu/io-pgtable-arm-v7s (patch 3):
   - Changed approach to use SLAB_CACHE_DMA32 added by the previous commit.
   - Use DMA or DMA32 depending on the architecture (DMA for arm, DMA32 for arm64).

Changes since v2:
 - Reworded and expanded commit messages
 - Added cache_dma32 documentation in PATCH 2/3.

v3 used the page_frag approach, see [3].

Changes since v4:
 - Dropped change that removed GFP_DMA32 from GFP_SLAB_BUG_MASK: instead we can just call kmem_cache_*alloc without GFP_DMA32 parameter. This also means that we can drop PATCH v4 1/3, as we do not make any changes in GFP flag verification.
 - Dropped hunks that added cache_dma32 sysfs file, and moved the hunks to PATCH v5 3/3, so that maintainer can decide whether to pick the change independently.

Changes since v5:
 - Rename ARM_V7S_TABLE_SLAB_CACHE to ARM_V7S_TABLE_SLAB_FLAGS.
 - Add stable@ to cc.
Nicolas Boichat (3):
  mm: Add support for kmem caches in DMA32 zone
  iommu/io-pgtable-arm-v7s: Request DMA32 memory, and improve debugging
  mm: Add /sys/kernel/slab/cache/cache_dma32
 Documentation/ABI/testing/sysfs-kernel-slab |  9 +++++++++
 drivers/iommu/io-pgtable-arm-v7s.c          | 19 +++++++++++++++----
 include/linux/slab.h                        |  2 ++
 mm/slab.c                                   |  2 ++
 mm/slab.h                                   |  3 ++-
 mm/slab_common.c                            |  2 +-
 mm/slub.c                                   | 16 ++++++++++++++++
 tools/vm/slabinfo.c                         |  7 ++++++-
 8 files changed, 53 insertions(+), 7 deletions(-)
IOMMUs using ARMv7 short-descriptor format require page tables to be allocated within the first 4GB of RAM, even on 64-bit systems. On arm64, this is done by passing the GFP_DMA32 flag to memory allocation functions.
For IOMMU L2 tables that only take 1KB, it would be a waste to allocate a full page using __get_free_pages, so we considered 3 approaches:
 1. This patch, adding support for GFP_DMA32 slab caches.
 2. genalloc, which requires pre-allocating the maximum number of L2 page tables (4096, so 4MB of memory).
 3. page_frag, which is not very memory-efficient as it is unable to reuse freed fragments until the whole page is freed.
This change makes it possible to create a custom cache in the DMA32 zone using kmem_cache_create, then allocate memory using kmem_cache_alloc.
We do not create a DMA32 kmalloc cache array, as there are currently no users of kmalloc(..., GFP_DMA32). These calls will continue to trigger a warning, as we keep GFP_DMA32 in GFP_SLAB_BUG_MASK.
This implies that calls to kmem_cache_*alloc on a SLAB_CACHE_DMA32 kmem_cache must _not_ pass GFP_DMA32 (it would be redundant anyway).
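As a usage sketch (not part of this patch; the cache name and object size below are hypothetical), a driver would create and use such a cache like this:

	#include <linux/slab.h>

	static struct kmem_cache *my_cache;	/* hypothetical cache */

	static int my_init(void)
	{
		/* Objects come from ZONE_DMA32 thanks to SLAB_CACHE_DMA32. */
		my_cache = kmem_cache_create("my_dma32_objs", 1024, 1024,
					     SLAB_CACHE_DMA32, NULL);
		return my_cache ? 0 : -ENOMEM;
	}

	static void *my_zalloc(gfp_t gfp)
	{
		/* Do not or-in GFP_DMA32 here: the cache's allocflags already
		 * carry it, and GFP_DMA32 stays in GFP_SLAB_BUG_MASK, so
		 * passing it explicitly would trigger a warning. */
		return kmem_cache_zalloc(my_cache, gfp);
	}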
Cc: stable@vger.kernel.org
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes since v2:
 - Clarified commit message
 - Add entry in sysfs-kernel-slab to document the new sysfs file
(v3 used the page_frag approach)
Changes since v4:
 - Added details to commit message
 - Dropped change that removed GFP_DMA32 from GFP_SLAB_BUG_MASK: instead we can just call kmem_cache_*alloc without GFP_DMA32 parameter. This also means that we can drop PATCH 1/3, as we do not make any changes in GFP flag verification.
 - Dropped hunks that added cache_dma32 sysfs file, and moved the hunks to PATCH 3/3, so that maintainer can decide whether to pick the change independently.
(no change since v5)
 include/linux/slab.h | 2 ++
 mm/slab.c            | 2 ++
 mm/slab.h            | 3 ++-
 mm/slab_common.c     | 2 +-
 mm/slub.c            | 5 +++++
 5 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae4057c..9449b19c5f107a 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -32,6 +32,8 @@
 #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
 /* Use GFP_DMA memory */
 #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
+/* Use GFP_DMA32 memory */
+#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
 /* DEBUG: Store the last owner for bug hunting */
 #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
 /* Panic if kmem_cache_create() fails */
diff --git a/mm/slab.c b/mm/slab.c
index 73fe23e649c91a..124f8c556d27fb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2109,6 +2109,8 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	cachep->allocflags = __GFP_COMP;
 	if (flags & SLAB_CACHE_DMA)
 		cachep->allocflags |= GFP_DMA;
+	if (flags & SLAB_CACHE_DMA32)
+		cachep->allocflags |= GFP_DMA32;
 	if (flags & SLAB_RECLAIM_ACCOUNT)
 		cachep->allocflags |= __GFP_RECLAIMABLE;
 	cachep->size = size;
diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9df..fcf717e12f0a86 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,

 /* Legal flag mask for kmem_cache_create(), for various configurations */
-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
+#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
+			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )

 #if defined(CONFIG_DEBUG_SLAB)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 70b0cc85db67f8..18b7b809c8d064 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		SLAB_FAILSLAB | SLAB_KASAN)

 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
-			 SLAB_ACCOUNT)
+			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)

 /*
  * Merge control. If this is set then no merging of slab caches will occur.
diff --git a/mm/slub.c b/mm/slub.c
index c229a9b7dd5448..4caadb926838ef 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3583,6 +3583,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	if (s->flags & SLAB_CACHE_DMA)
 		s->allocflags |= GFP_DMA;

+	if (s->flags & SLAB_CACHE_DMA32)
+		s->allocflags |= GFP_DMA32;
+
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		s->allocflags |= __GFP_RECLAIMABLE;

@@ -5671,6 +5674,8 @@ static char *create_unique_id(struct kmem_cache *s)
 	 */
 	if (s->flags & SLAB_CACHE_DMA)
 		*p++ = 'd';
+	if (s->flags & SLAB_CACHE_DMA32)
+		*p++ = 'D';
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		*p++ = 'a';
 	if (s->flags & SLAB_CONSISTENCY_CHECKS)
IOMMUs using ARMv7 short-descriptor format require page tables (level 1 and 2) to be allocated within the first 4GB of RAM, even on 64-bit systems.
For level 1/2 pages, ensure GFP_DMA32 is used if CONFIG_ZONE_DMA32 is defined (e.g. on arm64 platforms).
For level 2 pages, allocate a slab cache in SLAB_CACHE_DMA32. Note that we do not explicitly pass GFP_DMA[32] to kmem_cache_zalloc, as this is not strictly necessary, and would cause a warning in mm/sl*b.c, as we did not update GFP_SLAB_BUG_MASK.
Also, print an error when the physical address does not fit in 32-bit, to make debugging easier in the future.
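For reference, a minimal sketch of the check that triggers the new message (the above-4GB address in the comment is hypothetical):

	phys_addr_t phys = virt_to_phys(table);

	/* arm_v7s_iopte is u32, so a physical address above 4GB, e.g.
	 * 0x123456000, truncates to 0x23456000 in the cast and the
	 * comparison below fails. */
	if (phys != (arm_v7s_iopte)phys) {
		/* Doesn't fit in PTE */
		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
		goto out_free;
	}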
Cc: stable@vger.kernel.org
Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
Changes since v2:
 - Reworded commit message
(v3 used the page_frag approach)
Changes since v4:
 - Do not pass ARM_V7S_TABLE_GFP_DMA to kmem_cache_zalloc, as this is unnecessary, and would trigger a warning.
Changes since v5:
 - Rename ARM_V7S_TABLE_SLAB_CACHE to ARM_V7S_TABLE_SLAB_FLAGS.
 drivers/iommu/io-pgtable-arm-v7s.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 445c3bde04800c..d2fdb192f7610f 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -161,6 +161,14 @@

 #define ARM_V7S_TCR_PD1			BIT(5)

+#ifdef CONFIG_ZONE_DMA32
+#define ARM_V7S_TABLE_GFP_DMA GFP_DMA32
+#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA32
+#else
+#define ARM_V7S_TABLE_GFP_DMA GFP_DMA
+#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA
+#endif
+
 typedef u32 arm_v7s_iopte;

 static bool selftest_running;
@@ -198,13 +206,16 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
 	void *table = NULL;

 	if (lvl == 1)
-		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+		table = (void *)__get_free_pages(
+				__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
 	else if (lvl == 2)
-		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
+		table = kmem_cache_zalloc(data->l2_tables, gfp);
 	phys = virt_to_phys(table);
-	if (phys != (arm_v7s_iopte)phys)
+	if (phys != (arm_v7s_iopte)phys) {
+		/* Doesn't fit in PTE */
+		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
 		goto out_free;
+	}
 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
 		if (dma_mapping_error(dev, dma))
@@ -737,7 +748,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
 	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
 					    ARM_V7S_TABLE_SIZE(2),
 					    ARM_V7S_TABLE_SIZE(2),
-					    SLAB_CACHE_DMA, NULL);
+					    ARM_V7S_TABLE_SLAB_FLAGS, NULL);
 	if (!data->l2_tables)
 		goto out_free_data;
On Mon, Dec 10, 2018 at 09:15:03AM +0800, Nicolas Boichat wrote:
> IOMMUs using ARMv7 short-descriptor format require page tables (level 1 and 2) to be allocated within the first 4GB of RAM, even on 64-bit systems.
>
> For level 1/2 pages, ensure GFP_DMA32 is used if CONFIG_ZONE_DMA32 is defined (e.g. on arm64 platforms).
>
> For level 2 pages, allocate a slab cache in SLAB_CACHE_DMA32. Note that we do not explicitly pass GFP_DMA[32] to kmem_cache_zalloc, as this is not strictly necessary, and would cause a warning in mm/sl*b.c, as we did not update GFP_SLAB_BUG_MASK.
>
> Also, print an error when the physical address does not fit in 32-bit, to make debugging easier in the future.
>
> Cc: stable@vger.kernel.org
> Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
> Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Assuming you're routing all of this via akpm:
Acked-by: Will Deacon <will.deacon@arm.com>
Will
A previous patch in this series adds support for SLAB_CACHE_DMA32 kmem caches. This patch adds the corresponding /sys/kernel/slab/cache/cache_dma32 entries, and fixes the slabinfo tool.
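As a usage illustration (hypothetical, and assuming SLUB does not merge or rename the cache): on an arm64 system, reading /sys/kernel/slab/io-pgtable_armv7s_l2/cache_dma32 would return 1, and slabinfo would flag the cache with 'D'.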
Cc: stable@vger.kernel.org
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
There were different opinions on whether this sysfs entry should be added, so I'll leave it up to the mm/slub maintainers to decide whether they want to pick this up, or drop it.
No change since v5.
 Documentation/ABI/testing/sysfs-kernel-slab |  9 +++++++++
 mm/slub.c                                   | 11 +++++++++++
 tools/vm/slabinfo.c                         |  7 ++++++-
 3 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/Documentation/ABI/testing/sysfs-kernel-slab b/Documentation/ABI/testing/sysfs-kernel-slab
index 29601d93a1c2ea..d742c6cfdffbe9 100644
--- a/Documentation/ABI/testing/sysfs-kernel-slab
+++ b/Documentation/ABI/testing/sysfs-kernel-slab
@@ -106,6 +106,15 @@ Description:
 		are from ZONE_DMA.
 		Available when CONFIG_ZONE_DMA is enabled.

+What:		/sys/kernel/slab/cache/cache_dma32
+Date:		December 2018
+KernelVersion:	4.21
+Contact:	Nicolas Boichat <drinkcat@chromium.org>
+Description:
+		The cache_dma32 file is read-only and specifies whether objects
+		are from ZONE_DMA32.
+		Available when CONFIG_ZONE_DMA32 is enabled.
+
 What:		/sys/kernel/slab/cache/cpu_slabs
 Date:		May 2007
 KernelVersion:	2.6.22
diff --git a/mm/slub.c b/mm/slub.c
index 4caadb926838ef..840f3719d9d543 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5104,6 +5104,14 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
 SLAB_ATTR_RO(cache_dma);
 #endif

+#ifdef CONFIG_ZONE_DMA32
+static ssize_t cache_dma32_show(struct kmem_cache *s, char *buf)
+{
+	return sprintf(buf, "%d\n", !!(s->flags & SLAB_CACHE_DMA32));
+}
+SLAB_ATTR_RO(cache_dma32);
+#endif
+
 static ssize_t usersize_show(struct kmem_cache *s, char *buf)
 {
 	return sprintf(buf, "%u\n", s->usersize);
@@ -5444,6 +5452,9 @@ static struct attribute *slab_attrs[] = {
 #ifdef CONFIG_ZONE_DMA
 	&cache_dma_attr.attr,
 #endif
+#ifdef CONFIG_ZONE_DMA32
+	&cache_dma32_attr.attr,
+#endif
 #ifdef CONFIG_NUMA
 	&remote_node_defrag_ratio_attr.attr,
 #endif
diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index 334b16db0ebbe9..4ee1bf6e498dfa 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm/slabinfo.c
@@ -29,7 +29,7 @@ struct slabinfo {
 	char *name;
 	int alias;
 	int refs;
-	int aliases, align, cache_dma, cpu_slabs, destroy_by_rcu;
+	int aliases, align, cache_dma, cache_dma32, cpu_slabs, destroy_by_rcu;
 	unsigned int hwcache_align, object_size, objs_per_slab;
 	unsigned int sanity_checks, slab_size, store_user, trace;
 	int order, poison, reclaim_account, red_zone;
@@ -531,6 +531,8 @@ static void report(struct slabinfo *s)
 		printf("** Hardware cacheline aligned\n");
 	if (s->cache_dma)
 		printf("** Memory is allocated in a special DMA zone\n");
+	if (s->cache_dma32)
+		printf("** Memory is allocated in a special DMA32 zone\n");
 	if (s->destroy_by_rcu)
 		printf("** Slabs are destroyed via RCU\n");
 	if (s->reclaim_account)
@@ -599,6 +601,8 @@ static void slabcache(struct slabinfo *s)
 		*p++ = '*';
 	if (s->cache_dma)
 		*p++ = 'd';
+	if (s->cache_dma32)
+		*p++ = 'D';
 	if (s->hwcache_align)
 		*p++ = 'A';
 	if (s->poison)
@@ -1205,6 +1209,7 @@ static void read_slab_dir(void)
 			slab->aliases = get_obj("aliases");
 			slab->align = get_obj("align");
 			slab->cache_dma = get_obj("cache_dma");
+			slab->cache_dma32 = get_obj("cache_dma32");
 			slab->cpu_slabs = get_obj("cpu_slabs");
 			slab->destroy_by_rcu = get_obj("destroy_by_rcu");
 			slab->hwcache_align = get_obj("hwcache_align");
Hi all,
Does anyone have any further comment on this series? If not, which maintainer is going to pick this up? I assume Andrew Morton?
Thanks,
On Wed, Jan 02, 2019 at 01:51:45PM +0800, Nicolas Boichat wrote:
> Does anyone have any further comment on this series? If not, which maintainer is going to pick this up? I assume Andrew Morton?
Probably, yes. I don't like to carry the mm-changes in iommu-tree, so this should go through mm.
Regards,
Joerg
Hi Andrew,
On Fri, Jan 11, 2019 at 6:21 PM Joerg Roedel <joro@8bytes.org> wrote:
> On Wed, Jan 02, 2019 at 01:51:45PM +0800, Nicolas Boichat wrote:
> > Does anyone have any further comment on this series? If not, which maintainer is going to pick this up? I assume Andrew Morton?
>
> Probably, yes. I don't like to carry the mm-changes in iommu-tree, so this should go through mm.
Gentle ping on this series; it seems like it's better if it goes through your tree.
Series still applies cleanly on linux-next, but I'm happy to resend if that helps.
Thanks!
On 1/22/19 11:51 PM, Nicolas Boichat wrote:
> Gentle ping on this series; it seems like it's better if it goes through your tree.
>
> Series still applies cleanly on linux-next, but I'm happy to resend if that helps.
Ping, Andrew?
Thanks!
Regards,
Joerg
On Thu, Feb 14, 2019 at 1:12 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> Ping, Andrew?
Another gentle ping, I still don't see these patches in mmot[ms]. Thanks.
On Mon, Feb 25, 2019 at 8:23 AM Nicolas Boichat <drinkcat@chromium.org> wrote:
> Another gentle ping, I still don't see these patches in mmot[ms]. Thanks.
Andrew: AFAICT this still applies cleanly on linux-next/master, so I don't plan to resend... are there any other issues with this series?
This is a regression, so it'd be nice to have it fixed in mainline, eventually.
Thanks,
On Tue, 19 Mar 2019 15:41:43 +0800 Nicolas Boichat <drinkcat@chromium.org> wrote:
> Andrew: AFAICT this still applies cleanly on linux-next/master, so I don't plan to resend... are there any other issues with this series?
>
> This is a regression, so it'd be nice to have it fixed in mainline, eventually.
Sorry, seeing "iommu" and "arm" made these escape my gimlet eye.
I'm only seeing acks on [1/3]. What's the review status of [2/3] and [3/3]?
On Wed, Mar 20, 2019 at 1:56 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> Sorry, seeing "iommu" and "arm" made these escape my gimlet eye.
Thanks for picking them up!
> I'm only seeing acks on [1/3]. What's the review status of [2/3] and [3/3]?
Replied on the notification: [2/3] had an Ack, [3/3] is somewhat controversial.