On 2024/3/26 8:28, Mina Almasry wrote:
On Tue, Mar 5, 2024 at 11:38 AM Mina Almasry <almasrymina@google.com> wrote:
On Tue, Mar 5, 2024 at 4:54 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
On 2024/3/5 10:01, Mina Almasry wrote:
...
Perf - page-pool benchmark:
bench_page_pool_simple.ko tests with and without these changes: https://pastebin.com/raw/ncHDwAbn
AFAIK the number that really matters in the perf tests is 'tasklet_page_pool01_fast_path Per elem'. It measures at about 8 cycles without the changes, though there is about 1 cycle of noise in some results.
With the patches this regresses to 9 cycles, again with an occasional 1 cycle of noise when running the test repeatedly.
Lastly, I tried disabling the static_branch_unlikely() check in netmem_is_net_iov(). To my surprise, disabling the static_branch_unlikely() check brings the fast path back to 8 cycles, but the 1 cycle of noise remains.
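(For reference, the fast path this test exercises is roughly the following alloc/recycle loop. This is a sketch from memory of what bench_page_pool_simple does rather than the exact module source, so the pool parameters and the "loops" count are placeholders:)

	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= 1024,	/* placeholder value */
	};
	struct page_pool *pool = page_pool_create(&pp_params);
	struct page *page;
	int i;

	/* Allocate and immediately recycle, so every iteration takes the
	 * in-softirq "direct" recycle path, i.e. the fastest path the
	 * page_pool has.
	 */
	for (i = 0; i < loops; i++) {
		page = page_pool_alloc_pages(pool, GFP_ATOMIC);
		page_pool_recycle_direct(pool, page);
	}

	page_pool_destroy(pool);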
The last sentence seems to be suggesting that the above ~1 ns regression is caused by the static_branch_unlikely() checking?
Note it's not a 1 ns regression; it looks like maybe a 1 cycle regression (slightly less than 1 ns, if I'm reading the output of the test correctly):
# clean net-next
time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc) 2.993 ns (step:0)

# with patches
time_bench: Type:tasklet_page_pool01_fast_path Per elem: 9 cycles(tsc) 3.679 ns (step:0)

# with patches and with diff that disables static branching
time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc) 3.248 ns (step:0)
I do see noise in the test results from run to run, and any regression is slightly obscured by that noise, so it's a bit hard to make confident statements. So far it looks like a ~0.25 ns regression without the static branch and ~0.65 ns with the static branch.
Honestly, when I saw all 3 results were within the noise I did not investigate further, but if this looks concerning to you I can dig deeper. I would likely need to gather a few test runs to filter out the noise and maybe inspect the assembly my compiler is generating to narrow down what actually changes.
I did some more investigation here to gather more data to filter out the noise, and recorded the summary here:
https://pastebin.com/raw/v5dYRg8L
Long story short, the page_pool benchmark results are consistent, apart from a few outlier noisy runs that I'm discounting here. Currently the page_pool fast path is at 8 cycles:
[ 2115.724510] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc) 3.187 ns (step:0) - (measurement period time:0.031870585 sec time_interval:31870585) - (invoke count:10000000 tsc_interval:86043192)
and with this patch series it degrades to 10 cycles, roughly a 0.7 ns degradation:
Even if the absolute value of the overhead is small, we seem to have a degradation of about 20% for the tasklet_page_pool01_fast_path testcase, which seems scary.
I am assuming that every page is recyclable in the tasklet_page_pool01_fast_path testcase; since that code path matters for page_pool, it would be good to remove any additional checking from it.
Also, we already have the pool->has_init_callback check when we have to use a new page; it may make sense to refactor that so the provider shares the same check, to avoid the overhead as much as possible.
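Something like the following is roughly what I have in mind (just a sketch; the flag and helper names below are made up for illustration, they are not from the patches):

	/* At page_pool_init() time, fold the "provider present" state into a
	 * pool-local flag, the same way has_init_callback is set once, so the
	 * fast path tests one pool field instead of a global static branch
	 * per netmem.
	 */
	pool->has_extra_handling = pool->has_init_callback ||
				   page_pool_has_provider(pool);	/* made-up helper */

	/* recycle/alloc fast path */
	if (unlikely(pool->has_extra_handling))
		return page_pool_handle_extra(pool, netmem);	/* made-up helper */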
Also, I am not sure if that alone really matters that much: with netmem_is_net_iov() checks spreading through the networking code, the overhead might add up in other cases too.
[ 498.226127] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 10 cycles(tsc) 3.944 ns (step:0) - (measurement period time:0.039442539 sec time_interval:39442539) - (invoke count:10000000 tsc_interval:106485268)
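(To unpack these time_bench lines: the "Per elem" ns value is the measurement period divided by the invoke count, and the cycle figure comes from the tsc interval. For the two runs above:

	clean:        31870585 ns / 10,000,000 = 3.187 ns,   86043192 tsc / 10,000,000 ≈  8.6 cycles
	with patches: 39442539 ns / 10,000,000 = 3.944 ns,  106485268 tsc / 10,000,000 ≈ 10.6 cycles

so the delta is ~0.76 ns per element, which is where the ~0.7 ns / roughly 20-25% degradation figure comes from.)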
I took the time to dig into where the degradation comes from, and to my surprise we can shave off 1 cycle in perf by removing the static_branch_unlikely check in netmem_is_net_iov() like so:
diff --git a/include/net/netmem.h b/include/net/netmem.h
index fe354d11a421..2b4310ac1115 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -122,8 +122,7 @@ typedef unsigned long __bitwise netmem_ref;
 static inline bool netmem_is_net_iov(const netmem_ref netmem)
 {
 #ifdef CONFIG_PAGE_POOL
-	return static_branch_unlikely(&page_pool_mem_providers) &&
-	       (__force unsigned long)netmem & NET_IOV;
+	return (__force unsigned long)netmem & NET_IOV;
 #else
 	return false;
 #endif
With this change, the fast path is 9 cycles, only a 1 cycle (~0.35ns) regression:
[ 199.184429] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 9 cycles(tsc) 3.552 ns (step:0) - (measurement period time:0.035524013 sec time_interval:35524013) - (invoke count:10000000 tsc_interval:95907775)
I did some digging with YiFei on why the static_branch_unlikely appears to cause a 1 cycle regression, but could not arrive at an answer that makes sense. The number of instructions in page_pool_return_page() with and without the static_branch_unlikely is about the same in the compiled .o file, and my understanding is that static_branch causes runtime code rewriting anyway, so looking at the compiled code may not be representative.
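For context, the static key in question is wired up roughly like this (a simplified sketch of the mechanism, not the exact patch):

	/* Key defaults to false. static_branch_unlikely() compiles to a no-op
	 * jump in the straight-line code and is live-patched into a real
	 * branch when the key is enabled, which is also why the .o
	 * disassembly is not necessarily what executes at runtime.
	 */
	DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);

	static inline bool netmem_is_net_iov(const netmem_ref netmem)
	{
		return static_branch_unlikely(&page_pool_mem_providers) &&
		       (__force unsigned long)netmem & NET_IOV;
	}

	/* enabled when a memory provider is installed, disabled on teardown */
	static_branch_inc(&page_pool_mem_providers);
	static_branch_dec(&page_pool_mem_providers);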
Worthy of note is that I get ~95% line rate with devmem TCP with or without the static_branch_unlikely(), so the impact of the static_branch is not large enough to be measurable end-to-end. I'm thinking I want to drop the static_branch_unlikely() in the next RFC, since it doesn't improve the end-to-end throughput number and dropping it results in a measurable improvement in the page pool benchmark.