Hello James,
From a statistical point of view, I can say the following:
1. madvise results: All madvise test scenarios show a performance
degradation of 4.4%-6.6% compared to the baseline (v6.9). This is clear
from the numbers and the chart (see
aws-graviton3-2024-06-17/stressng/stressng.html). Given how consistent the
regression is across the different 'madvise' settings (1-128 threads), it
points to a potentially major issue introduced in Linux Kernel v6.10-rc3
that affects the 'madvise' system call. The observation is also supported
by the individual thread-count cases (madvise-1, .., -128) tracked across
previous versions starting from v6.8
(aws-graviton3-2024-06-17/stressng/madvise-1/comparison.html, for example).
Moreover, the git log shows that active work was done in the v6.8-v6.9,
v6.9-v6.10-rc1, etc. windows, which could be a reason for such
degradations. Once a bisection procedure is in place, it will be possible
to tell which commits contributed to these degradations; a sketch of such
a check follows below.
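As a concrete illustration, here is a minimal sketch of a check script
that 'git bisect run' could drive. Everything in it is an assumption for
illustration: it presumes stress-ng is installed on the target, that the
candidate kernel has already been built and booted there (the script does
not do that itself), and the --metrics-brief column layout and the 84
ops/s threshold (roughly the madvise-64 baseline) are only examples, not
values taken from our tooling.

#!/usr/bin/env python3
# Hypothetical "git bisect run" helper: run the stress-ng madvise stressor
# and report good/bad based on an illustrative throughput threshold.
import re
import subprocess
import sys

THRESHOLD_OPS = 84.0  # illustrative good/bad cut-off (approx. madvise-64 baseline)

def madvise_ops_per_sec(workers: int = 64, seconds: int = 60) -> float:
    """Run the stress-ng madvise stressor and parse bogo ops/s."""
    result = subprocess.run(
        ["stress-ng", "--madvise", str(workers),
         "--timeout", f"{seconds}s", "--metrics-brief"],
        capture_output=True, text=True, check=True,
    )
    # Assumed --metrics-brief summary columns after the stressor name:
    # bogo ops, real time, usr time, sys time, bogo ops/s.
    for line in (result.stdout + result.stderr).splitlines():
        m = re.search(r"\bmadvise\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)", line)
        if m:
            try:
                return float(m.group(5))
            except ValueError:
                continue  # not the metrics line (e.g. an info message)
    raise RuntimeError("madvise metrics line not found in stress-ng output")

if __name__ == "__main__":
    ops = madvise_ops_per_sec()
    print(f"madvise: {ops:.2f} bogo ops/s")
    sys.exit(0 if ops >= THRESHOLD_OPS else 1)  # 0 = good, 1 = bad for git bisect

With a kernel build-and-boot step wrapped around it, running 'git bisect
run' with such a check between the last known good and first known bad
tags would narrow the regression down to individual commits.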
2. fork results: The 'fork' tests with 32, 64 and 128 processes show
performance regressions ranging from 3.7% to 4.85%. While not as severe
as the madvise results, these still indicate a measurable degradation in
fork system call performance.
The madvise results are the more significant for the following reasons:
- The regressions are larger in magnitude (up to 6.6% vs 4.85% for fork;
the arithmetic is spelled out in the sketch after this list).
- The regressions are consistent across all tested 'madvise' settings and
versions, while 'fork' only shows notable regressions at higher process
counts.
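For reference, the sketch below shows how the 'Diff, %' figures can be
recomputed from the medians in the table, and how a confidence interval
could be derived from the raw samples. The 95% t-interval is my assumption
for illustration; the report generator may compute its interval
differently.

# Illustrative recomputation of the "Diff, %" column; the t-interval below
# is an assumed method, not necessarily the one used by the report.
from math import sqrt
from statistics import mean, stdev

T_95_DF20 = 2.086  # t critical value for 95% confidence with 21 samples (df = 20)

def regression_pct(baseline_median: float, current_median: float) -> float:
    """Relative drop of the current median versus the baseline median, in percent."""
    return (baseline_median - current_median) / baseline_median * 100.0

def t_interval_95(samples: list[float]) -> tuple[float, float]:
    """Approximate 95% confidence interval for the mean of 21 raw samples."""
    centre = mean(samples)
    half_width = T_95_DF20 * stdev(samples) / sqrt(len(samples))
    return (centre - half_width, centre + half_width)

# Medians taken from the quoted table below:
print(regression_pct(85.005, 79.375))      # madvise-128 -> ~6.62 %
print(regression_pct(6516.635, 6200.470))  # fork-64     -> ~4.85 %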
In summary, based on this data, the 'madvise' performance issues
introduced in Linux Kernel v6.10-rc3 appear to be the more significant of
the two and warrant further investigation and optimization effort.
Best regards,
Konstantin.
On Fri, 14 Jun 2024 at 22:16, James Greenhalgh <James.Greenhalgh(a)arm.com>
wrote:
> Should I read this as identifying new issues with madvise and fork
> performance, with the madvise result likely to be more significant?
>
> Thanks for the effort to make the report public – it is now much easier to
> access and click through to see the detailed stats.
>
> Thanks,
>
> James
>
> From: konstantin.belov(a)linaro.org <konstantin.belov(a)linaro.org>
> Date: Friday, June 14, 2024 at 15:03
> To: graviton3-test-report(a)lists.linaro.org <graviton3-test-report(a)lists.linaro.org>
> Subject: [Graviton3-test-report] Linux Kernel v6.10-rc3 vs v6.9
> Performance Testing Results
>
> Hello,
> Performance evaluation of Linux Kernel v6.10-rc3 is finished.
> Comparison is done to baseline Linux Kernel version: v6.9.
> The following configuration was used for all targets:
> - Instance CPU: graviton3
> - Number of cores: 2
> - RAM size, GB: 7.6
> - OS: Debian Linux 12 (bookworm)
>
> Regressions by test
>
> Benchmark/Test/Metric, Units | Baseline Median | Current Median | Diff, % | SD | Samples | Confidence interval
> ---------------------------------------------------------------------------------------------------------------
> aim9/brk_test/System Memory Allocations, OPS | 2024133.335 | 2013262.470 | 0.537 | 16053.459 | 60 | 2024467.853 / 2032591.875
> aim9/exec_test/Program Loads, OPS | 2019.665 | 2005.835 | 0.685 | 17.517 | 60 | 2015.828 / 2024.693
> aim9/page_test/System Allocations & Pages, OPS | 570520.000 | 550091.667 | 3.581 | 8356.271 | 60 | 568085.990 / 572314.769
> stressng/context/Context Switches, switches/s
> - context-1 | 2413.745 | 2376.445 | 1.545 | 5.545 | 21 | 2411.760 / 2416.503
> - context-2 | 4841.600 | 4769.580 | 1.488 | 8.553 | 21 | 4836.387 / 4843.704
> - context-4 | 4752.505 | 4712.600 | 0.840 | 56.213 | 21 | 4716.312 / 4764.397
> - context-8 | 4753.285 | 4724.605 | 0.603 | 42.188 | 21 | 4728.812 / 4764.899
> - context-16 | 4753.605 | 4726.140 | 0.578 | 26.756 | 21 | 4740.173 / 4763.060
> - context-128 | 4814.425 | 4774.555 | 0.828 | 43.430 | 21 | 4786.041 / 4823.191
> stressng/fork/Forks, OPS
> - fork-32 | 6659.030 | 6412.310 | 3.705 | 174.515 | 21 | 6588.293 / 6737.573
> - fork-64 | 6516.635 | 6200.470 | 4.852 | 266.061 | 21 | 6394.023 / 6621.612
> - fork-128 | 6128.850 | 5843.435 | 4.657 | 353.797 | 21 | 6005.178 / 6307.815
> stressng/get/Read Throughput, MB/s
> - get-1 | 2835.125 | 2785.405 | 1.754 | 25.632 | 21 | 2823.417 / 2845.342
> - get-2 | 3295.495 | 3250.850 | 1.355 | 50.308 | 21 | 3271.035 / 3314.069
> - get-16 | 3097.905 | 3062.850 | 1.132 | 36.459 | 21 | 3081.725 / 3112.912
> stressng/madvise/Access calls, OPS
> - madvise-1 | 39.845 | 37.240 | 6.538 | 0.864 | 21 | 39.425 / 40.164
> - madvise-2 | 76.950 | 72.480 | 5.809 | 2.384 | 21 | 76.016 / 78.055
> - madvise-4 | 86.900 | 83.080 | 4.396 | 1.283 | 21 | 86.317 / 87.414
> - madvise-8 | 87.615 | 82.920 | 5.359 | 1.634 | 21 | 86.943 / 88.341
> - madvise-16 | 87.230 | 82.345 | 5.600 | 1.879 | 21 | 86.472 / 88.079
> - madvise-32 | 85.895 | 80.780 | 5.955 | 1.543 | 21 | 85.294 / 86.614
> - madvise-64 | 85.955 | 80.625 | 6.201 | 2.257 | 21 | 84.615 / 86.546
> - madvise-128 | 85.005 | 79.375 | 6.623 | 1.706 | 21 | 84.143 / 85.603
> stressng/vm-splice/Transfer Rate, MB/s
> - vm-splice-1 | 393825.950 | 384976.075 | 2.247 | 1326.151 | 21 | 393146.946 / 394281.333
> - vm-splice-2 | 985022.850 | 970009.675 | 1.524 | 3305.843 | 21 | 983522.945 / 986350.758
> - vm-splice-4 | 1111278.105 | 1098156.625 | 1.181 | 5058.131 | 21 | 1108947.236 / 1113273.953
> - vm-splice-8 | 1195749.630 | 1183949.325 | 0.987 | 5994.316 | 21 | 1192658.405 / 1197785.934
> - vm-splice-16 | 1238939.755 | 1226909.580 | 0.971 | 7273.147 | 21 | 1235968.212 / 1242189.651
> - vm-splice-32 | 1241984.390 | 1227662.560 | 1.153 | 5226.860 | 21 | 1239895.031 / 1244366.079
> - vm-splice-64 | 1241623.855 | 1227060.990 | 1.173 | 5099.827 | 21 | 1239863.809 / 1244226.193
> - vm-splice-128 | 1248609.720 | 1233938.390 | 1.175 | 7083.372 | 21 | 1245315.449 / 1251374.554
>
> * The index after a test name indicates the number of threads or
> processes used for the test.
>
> Detailed test results and raw data can be found here:
> https://artifacts.codelinaro.org/artifactory/linaro-373-reports/aws-gravito…
>
> Best regards.
>