Hi Sergey,
On Fri, Apr 10, 2020 at 11:38:45AM +0900, Sergey Senozhatsky wrote:
On (20/04/09 10:08), Minchan Kim wrote:
Even though I don't know how many use cases we have for zsmalloc as a module (I have heard of only one, and for a dubious reason), it could affect existing users. So please include a concrete explanation in the patch to justify the change when complaints come in.
The justification is 'we can unexport functions that have no sane reason to be exported in the first place'.
The Changelog pretty much says that.
Okay, I hope no users are affected by this patch. If there are any, they will need to provide a sane reason why they want to have zsmalloc as a module.
I'm one of those who use zsmalloc as a module - mainly because I use zram as a compressing general-purpose block device, not as a swap device. I create zram0, mkfs, mount, check out and compile code, and once done - umount, rmmod. This reduces the number of writes to the SSD. Some people use tmpfs, but zram device(-s) can be much larger in size. That's a niche use case and I'm not against the patch.
It doesn't mean we can no longer use zsmalloc as a module. It means we can't use zsmalloc as a module with pgtable mapping, which was a little faster in microbenchmarks on some architectures (though I have often been tempted to remove it, since it had several problems). We can still use zsmalloc as a module using the copy path instead of pgtable mapping. Thus, if someone really wants to roll this back, they should give a reasonable explanation of why the copy path doesn't work for them. "A little faster" wouldn't be enough to export deep internals to a module.
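To make the distinction concrete, the copy path is roughly the following - a simplified sketch, with names paraphrased from mm/zsmalloc.c rather than quoted exactly. An object that straddles two pages is copied into a per-cpu bounce buffer with kmap_atomic() + memcpy(), so nothing from the vmalloc internals needs to be exported:

static void *zs_copy_map(struct mapping_area *area,
			 struct page *pages[2], int off, int size)
{
	size_t first = PAGE_SIZE - off;	/* bytes of the object in page 0 */
	char *buf = area->vm_buf;	/* per-cpu bounce buffer */
	void *addr;

	/* copy the head of the object from the first page */
	addr = kmap_atomic(pages[0]);
	memcpy(buf, addr + off, first);
	kunmap_atomic(addr);

	/* copy the tail of the object from the second page */
	addr = kmap_atomic(pages[1]);
	memcpy(buf + first, addr, size - first);
	kunmap_atomic(addr);

	return buf;
}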
Thanks.
Hi Minchan,
On Fri, Apr 10, 2020 at 04:11:36PM -0700, Minchan Kim wrote:
It doesn't mean we can no longer use zsmalloc as a module. It means we can't use zsmalloc as a module with pgtable mapping, which was a little faster in microbenchmarks on some architectures (though I have often been tempted to remove it, since it had several problems). We can still use zsmalloc as a module using the copy path instead of pgtable mapping. Thus, if someone really wants to roll this back, they should give a reasonable explanation of why the copy path doesn't work for them. "A little faster" wouldn't be enough to export deep internals to a module.
Do you have any data on how much faster it is on arm (and does that include arm64 as well)? Besides the exports, which were my prime concern, zsmalloc with pgtable mappings is also the only user of map_kernel_range outside of vmalloc.c. If it really is just another code path for a tiny improvement, we could mark map_kernel_range static, or in fact remove it entirely and open code it in the remaining callers.
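For reference, the pgtable-mapping variant boils down to something like the sketch below (simplified, and the helper name here is made up; the in-tree code differs in detail). The two backing pages are installed into a per-cpu two-page vm area and a pointer into that mapping is returned, which is exactly the call that drags map_kernel_range into a potential module:

static void *zs_pgtable_map(struct mapping_area *area,
			    struct page *pages[2], int off)
{
	unsigned long addr = (unsigned long)area->vm->addr;

	/* install PTEs for the two pages backing the object */
	if (map_kernel_range(addr, 2 * PAGE_SIZE, PAGE_KERNEL, pages) < 0)
		return NULL;

	area->vm_addr = area->vm->addr;
	return area->vm_addr + off;
}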
(unmap_kernel_range is a different story, it has a bunch of callers, and most look odd)
Hi Christoph,
Sorry for the late reply.
On Sat, Apr 11, 2020 at 09:20:52AM +0200, Christoph Hellwig wrote:
Hi Minchan,
On Fri, Apr 10, 2020 at 04:11:36PM -0700, Minchan Kim wrote:
It doesn't mean we can no longer use zsmalloc as a module. It means we can't use zsmalloc as a module with pgtable mapping, which was a little faster in microbenchmarks on some architectures (though I have often been tempted to remove it, since it had several problems). We can still use zsmalloc as a module using the copy path instead of pgtable mapping. Thus, if someone really wants to roll this back, they should give a reasonable explanation of why the copy path doesn't work for them. "A little faster" wouldn't be enough to export deep internals to a module.
Do you have any data on how much faster it is on arm (and does that include arm64 as well)? Besides the exports, which were my prime concern,
https://github.com/sjenning/zsmapbench
I had to refresh my memory. IIRC, it was almost 30% faster on ARM at the time, so it was not a trivial difference. However, that was several years ago.
zsmalloc with pgtable mappings is also the only user of map_kernel_range outside of vmalloc.c. If it really is just another code path for a tiny improvement, we could mark map_kernel_range static, or in fact remove it entirely and open code it in the remaining callers.
I have also been tempted to remove it. Let me take this chance to revisit it.
Thanks.