Dear Andy Green,
In message <CAAfg0W7p6rvvaMqsKsC09yQnfPod08YDTbeh_MQrz6n+B4iyYA@mail.gmail.com> you wrote:
> > Instead of making assumptions on the performance of memcpy() and
> As I wrote, I measured the performance and got a very big gain; it's 3x faster on my setup to use memcpy() than the default memmove().
Yes, in your single test case of copying a Linux kernel image, i. e. a multi-megabyte file.
> By calling that an "assumption" you're saying that there exist platforms where 32-bit linear memmove() is slower than doing it with 8-bit actions?
No.  I said you should not assume that memcpy() is always faster than memmove(); a system may use optimized versions of either.
> > adding the overhead of an additional function call (which can be expensive especially for short copy operations) it would make more
> I am not sure U-Boot is really in the business of doing small memmoves, but okay...
It's easy to avoid this overhead, and also to get rid of the restrictions you built into it (optimizing only the non-overlapping case), so if we touch that code, we should do it right.
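Just to illustrate the idea, something along these lines would do -- an untested sketch only, names made up, not meant as the actual patch:

	#include <stddef.h>
	#include <stdint.h>

	/*
	 * Sketch: memmove() that copies a word at a time in both the
	 * forward (non-overlapping) and backward (overlapping) case,
	 * without calling out to memcpy().
	 */
	void *memmove_sketch(void *dest, const void *src, size_t count)
	{
		unsigned char *d = dest;
		const unsigned char *s = src;

		if (d <= s || d >= s + count) {
			/* forward copy, word-wide while both pointers are aligned */
			if (((uintptr_t)d % sizeof(unsigned long)) == 0 &&
			    ((uintptr_t)s % sizeof(unsigned long)) == 0) {
				while (count >= sizeof(unsigned long)) {
					*(unsigned long *)d = *(const unsigned long *)s;
					d += sizeof(unsigned long);
					s += sizeof(unsigned long);
					count -= sizeof(unsigned long);
				}
			}
			while (count--)			/* remaining bytes */
				*d++ = *s++;
		} else {
			/* overlapping, dest above src: copy backwards */
			d += count;
			s += count;
			if (((uintptr_t)d % sizeof(unsigned long)) == 0 &&
			    ((uintptr_t)s % sizeof(unsigned long)) == 0) {
				while (count >= sizeof(unsigned long)) {
					d -= sizeof(unsigned long);
					s -= sizeof(unsigned long);
					*(unsigned long *)d = *(const unsigned long *)s;
					count -= sizeof(unsigned long);
				}
			}
			while (count--)			/* remaining bytes */
				*--d = *--s;
		}
		return dest;
	}

The word-wide loops run only when both pointers happen to be word aligned; everything else falls back to the byte loops, so the overlapping case costs nothing extra and there is no additional function call.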
> > sense to pull the "copy a word at a time" code from memcpy() into memmove(), too.
> >
> > On the other hand - if you really care about performance, then why do
> I spent several hours figuring out why our NOR boot performance was terrible.... it's because this default memmove code is gloriously inefficient for all cases.
>
> If you like it like that, no worries.
Don't twist my words.  I asked for a different, better implementation; that's all.
Best regards,
Wolfgang Denk