On Tue, 12 Feb 2013 19:07:20 +0200 Imre Deak <imre.deak@intel.com> wrote:
So, exactly how big is this thing, and how do we know it's better this way than if we were to uninline some/all of the helpers?
I admit I had only hoped that compiler optimization would keep the inlined parts to a minimum, but now I actually checked (on an Intel CPU). I applied the patchset from [1] and uninlined sg_page_iter_start, as it's not significant for speed:
size drivers/gpu/drm/i915/i915.ko
   text    data     bss     dec     hex filename
 514855   15996     272  531123   81ab3 drivers/gpu/drm/i915/i915.ko
Then I uninlined all the helpers:

size drivers/gpu/drm/i915/i915.ko
   text    data     bss     dec     hex filename
 513447   15996     272  529715   81533 drivers/gpu/drm/i915/i915.ko
Since there are 8 invocations of the macro, the overhead of a single invocation is about (531123 - 529715) / 8 = 176 bytes.
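(To spell out what uninlining means here: a helper that was static inline in include/linux/scatterlist.h becomes a bare declaration there, and its body moves to lib/scatterlist.c with an EXPORT_SYMBOL so modules like i915.ko can still call it. A minimal sketch below; sg_page_iter_page() and the iter->sg / iter->sg_pgoffset fields are illustrative and not necessarily what the patchset in [1] uses.)

  /* before: defined in include/linux/scatterlist.h, expanded into
   * every caller */
  static inline struct page *sg_page_iter_page(struct sg_page_iter *iter)
  {
          return nth_page(sg_page(iter->sg), iter->sg_pgoffset);
  }

  /* after: only the declaration stays in the header ... */
  struct page *sg_page_iter_page(struct sg_page_iter *iter);

  /* ... and the body moves to lib/scatterlist.c */
  struct page *sg_page_iter_page(struct sg_page_iter *iter)
  {
          return nth_page(sg_page(iter->sg), iter->sg_pgoffset);
  }
  EXPORT_SYMBOL(sg_page_iter_page);

The ~176 bytes per invocation is then roughly the difference between the inline expansions and plain calls, summed over the helpers used at each macro site.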
For speed, I benchmarked a simple loop which was basically:
  page = vmalloc(sizeof(*page) * 1000);
  for_each_sg_page(sglist, iter, 0)
          *page++ = iter.page;
where each entry on the sglist contained 16 consecutive pages. The uninlined version takes ~10% more time to run. This is a rather artificial test, and I couldn't come up with anything more realistic, using only the i915 driver's ioctl interface, that would show a significant change in speed.
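(For reference, the shape of the measurement, written out as the body of a throwaway test function: only the inner loop is the actual test from above, the allocation and ktime scaffolding here is illustrative, and the for_each_sg_page()/iter.page usage assumes the page iterator from [1].)

  struct sg_page_iter iter;
  struct page **pages, **page;
  ktime_t start, end;

  /* assumes the sglist yields at most 1000 pages, as in the test above */
  pages = vmalloc(sizeof(*pages) * 1000);
  if (!pages)
          return;

  page = pages;
  start = ktime_get();
  for_each_sg_page(sglist, iter, 0)
          *page++ = iter.page;
  end = ktime_get();

  pr_info("sg page walk took %lld ns for %ld pages\n",
          ktime_to_ns(ktime_sub(end, start)), (long)(page - pages));
  vfree(pages);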
10% for the function call overhead sounds reasonable. Of course, that test is biased in one direction. A test which was biased in the other direction would exercise all eight of the macro's callsites and would investigate the performance impact of a 1kbyte increase in L1 cache utilisation.
And I must say, it would need to be a pretty damn carefully crafted test case to be able to trigger enough cache thrashing to cause a 10% hit.
So at least for now I'm ok with just uninlining all the helpers.
OK.