On Mon, Mar 14, 2011 at 04:56:22PM +0100, Arnd Bergmann wrote:
> Unfortunately, not much of a result again. The only obvious result is the bump between 16 and 20 MB, which indicates an open allocation unit that is slightly slower to read than the others. I take this as a hint that the erase block size is actually 4 MB, not 8 MB, and that the erase blocks start at full multiples of their size (as is normally the case).
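For anyone following along, one way to probe that boundary directly is to time small direct reads at each suspected 4 MB boundary. This is only a sketch, not from Arnd's mail; it assumes GNU dd (for iflag=direct) and that /dev/sdb is the card under test:

# time a 4 MB direct read starting at each suspected 4 MB erase block
# boundary (skip is in 64 KiB blocks, hence "MB * 16"); the open
# allocation unit between 16 and 20 MB should stand out as slower
for mb in 0 4 8 12 16 20 24 28; do
    printf '%3u MB: ' "$mb"
    dd if=/dev/sdb of=/dev/null bs=64K count=64 skip=$((mb * 16)) \
        iflag=direct 2>&1 | grep copied
done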
> To verify this, you could write to a single block in another erase block, e.g. using dd if=/dev/zero of=/dev/sdb bs=64K seek=128 count=1, and
Do you mean:
tmp179:~ # time dd if=/dev/zero of=/dev/sdb bs=64K seek=128 count=1
1+0 records in
1+0 records out
65536 bytes (66 kB) copied, 0.0155327 s, 4.2 MB/s

real    0m0.017s
user    0m0.002s
sys     0m0.000s
tmp179:~ #
> see how the picture changes. To speed up the process, you can lower scatter-span to 1 (only do one block at a time) and scatter-order to 12 (do only the first 32 MB, i.e. 8 KiB * 2^12, instead of the first 128 MB of the drive). You can probably also use a much lower repeat count, e.g. 10, since the effect appears to be quite visible.
tmp179:~ # ./flashbench -s --scatter-span=1 --scatter-order=12 --blocksize=4096 --count=10 /dev/sdb -o /tmp/output2.plot
sched_setscheduler: Operation not permitted
tmp179:~ #
The result is attached.
> About 92% of the requests took between 4.21 and 4.23 ms to complete, which is fairly uniform; the other 8% were significantly faster, at around 3.99 ms. I'm sure there is a reason for this, but I don't understand it.
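If it helps to quantify that split on the new run, here is a rough sketch for bucketing the per-request times from the plot file. It assumes each line of /tmp/output2.plot is a whitespace-separated "offset time" pair with the time in seconds, which may not match every flashbench version, so check the file first:

# count requests per 0.01 ms latency bin; the two bands mentioned
# above (around 3.99 ms and 4.21-4.23 ms) should show up as clusters
awk '{ printf "%.2f ms\n", $2 * 1000 }' /tmp/output2.plot | sort -n | uniq -c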
You can suggest more tests now that flashbench no longer terminates abnormally.
Philippe