On Thursday 21 June 2012, Andrew Bradford wrote:
On Wed, Jun 20, 2012, at 03:06 PM, Arnd Bergmann wrote:
On Wednesday 20 June 2012, Andrew Bradford wrote:
I'm having trouble reproducing yesterday's tests. For example, here is the same test as above at a 23MiB offset, but now without the fast-slow-fast-slow lines:
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=4 --offset=$[23*1024*1204]
8MiB    8.75M/s
4MiB    8.24M/s
2MiB    6.82M/s
1MiB    4.61M/s
512KiB  4.83M/s
256KiB  5.29M/s
128KiB  4.71M/s
64KiB   3.46M/s
32KiB   2.79M/s
16KiB   1.07M/s
I can consistently reproduce this type of performance, but I can't get back to the fast-slow-fast-slow behavior, which makes me question the validity of all my other tests.
Is there any reason why I'd see so much variability running the same test on the same machine from one day to the next? I even rebooted this morning, but still can't get back to the fast-slow-fast-slow lines. I don't remember seeing this much day-to-day difference with the SD cards I've tested; they seemed pretty consistent.
I don't feel confident comparing one day's measurements to another's and drawing valid conclusions if I can't reproduce the previous results.
I've seen a bunch of devices that try to guess the type of I/O that is being done on a given erase block and then optimize later accesses for that pattern. It seems your device belongs in that category. This is a good thing in theory, but it makes it much harder to find out what the device is doing.
The last time I had one of these, I could usually reset the behavior for each erase block by doing long linear writes across those blocks, e.g. doing "dd if=/dev/zero of=/dev/sdc bs=8M count=100".
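The reset-then-retest cycle described above might look like the following sketch. The scratch-file default, variable name, and the commented-out flashbench re-run are my additions; the dd parameters are the ones from the message, and writing to the real device is destructive:

```shell
# WARNING: pointed at a real device this overwrites its first 800 MiB.
# DEV defaults to a scratch file so the sketch can be dry-run safely;
# set DEV=/dev/sdc (as root) to actually reset the device's block state.
DEV=${DEV:-/tmp/flash-reset-scratch.img}

# Step 1: a long linear write across the erase blocks under test,
# intended to clear any per-block access-pattern mode the controller
# may have learned.  100 x 8 MiB = 800 MiB total.
dd if=/dev/zero of="$DEV" bs=8M count=100 conv=fsync

# Step 2: re-run the benchmark of interest on the freshly reset region, e.g.:
#   sudo ./flashbench "$DEV" --open-au --erasesize=$[8*1024*1024] \
#        --blocksize=$[16*1024] --open-au-nr=4 --offset=$[24*1024*1024]
```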
Another way to get around it is to frequently change the --offset value. In case the device remembers the last 10 blocks that had random I/O patterns in the past, you could cycle through 24MB, 48MB, 72MB, 96MB, ...
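The offset-cycling idea could be wrapped in a small loop like the one below. The 24MB step and the flashbench flags are taken from earlier in the thread; the loop structure and the number of iterations are just an illustration:

```shell
# Cycle the test region by 24 MiB per run so the controller never sees
# the same erase blocks twice in a row.  The flashbench invocation
# (commented out) reuses the flags from the earlier test.
for i in 1 2 3 4; do
    offset=$(( i * 24 * 1024 * 1024 ))
    echo "run $i: offset $offset bytes ($(( offset / 1024 / 1024 )) MiB)"
    # sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] \
    #      --blocksize=$[16*1024] --open-au-nr=4 --offset=$offset
done
```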
Arnd