Excerpts from Sascha Silbe's message of Fri Apr 08 00:34:15 +0200 2011:
Excerpts from Arnd Bergmann's message of Thu Apr 07 16:23:05 +0200 2011:
- It is possible that this card has a fully log-structured approach,
which would mean that there actually is no cut-off, but that it simply gets gradually slower with more open AUs. I have no programmatic way to detect this yet.
A quick test with a logarithmic scale suggests this might indeed be the case:
[...]
A more extensive set of tests (50 samples each for 1-16 open AUs, only EBS-sized blocks), post-processed and plotted to show mean and standard deviation, paints a different picture. For 1-3 open AUs there's "high" speed with high deviation; for 4-7 open AUs the mean speed is a bit lower, with almost no deviation. There's a clear cut-off after 10 open AUs (the change in mean value is greater than the standard deviation).
Not sure why there's so much deviation for 1-3 open AUs; maybe doing a complete erase + fill cycle would give better numbers. However, I'd prefer to avoid that, because a) it would wear the card significantly (I don't expect it to survive more than 1k cycles) and b) the test would take two full days (for 10 samples each at 1-16 open AUs), during which I can use neither the card nor the card slot for anything else.
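
For reference, the post-processing boils down to computing mean and standard deviation per open-AU count and looking for the first step where the mean drops by more than the previous deviation. The sketch below is a minimal, hypothetical reconstruction in Python; it is not result2plot.py, and the function names and example numbers are made up:

#!/usr/bin/env python
# Hypothetical sketch (not result2plot.py): given per-run throughputs
# grouped by open-AU count, compute mean and standard deviation and
# flag the first cut-off where the drop in mean exceeds the previous
# group's deviation. No cut-off found would suggest a gradual,
# log-structured slowdown instead.
import math

def stats(samples):
    """Return (mean, standard deviation) of a list of throughputs."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, math.sqrt(var)

def find_cutoff(groups):
    """groups: dict mapping open-AU count -> list of throughput samples.
    Returns the first AU count whose mean drops by more than the
    previous count's standard deviation, or None (gradual slowdown)."""
    counts = sorted(groups)
    prev_mean, prev_dev = stats(groups[counts[0]])
    for n in counts[1:]:
        mean, dev = stats(groups[n])
        if prev_mean - mean > prev_dev:
            return n
        prev_mean, prev_dev = mean, dev
    return None

# Example with made-up numbers: a step between 10 and 11 open AUs.
groups = dict((n, [20.0, 20.5, 19.8]) for n in range(1, 11))
groups.update((n, [4.1, 4.0, 4.2]) for n in range(11, 17))
print(find_cutoff(groups))  # -> 11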
I've attached the raw data, the post-processing script and the plot. The invocations were:
for NUMAU in $(seq 1 16) ; do
    echo $NUMAU open AUs:
    for numtry in $(seq 1 50) ; do
        ~/flashbench/flashbench --open-au --open-au-nr=$NUMAU \
            --erasesize=1572864 --blocksize=1572864 --random /dev/mmcblk[0-9]
    done
done | tee ~/sandisk_openau_random_1.5M_1.5M_1-16_2.result
./result2plot.py ~/sandisk_openau_random_1.5M_1.5M_1-16_2.result
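
For anyone who wants to redo the plot, splitting the tee'd file back into per-AU-count groups could look like the sketch below. The "N open AUs:" headers come from the echo in the loop above; the regex for the throughput token is only an assumption about flashbench's output format and may need adjusting to the actual lines:

# Hypothetical parser sketch for the .result file; the SPEED regex is
# an assumed match for tokens like "6.43M/s" in flashbench's output.
import re
from collections import defaultdict

HEADER = re.compile(r'^(\d+) open AUs:')
SPEED = re.compile(r'([\d.]+)M/s')

def parse(path):
    groups = defaultdict(list)  # open-AU count -> throughput samples
    current = None
    with open(path) as f:
        for line in f:
            m = HEADER.match(line)
            if m:
                current = int(m.group(1))
                continue
            if current is None:
                continue
            for s in SPEED.findall(line):
                groups[current].append(float(s))
    return groups

The resulting dict can be fed straight into a mean/deviation computation like the one sketched earlier.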
Sascha