hi
how stable is this card? is it a scsi harddisk? ;) ok, random is not that good
******************************************************
**** Apacer microSDHC mobile series class 2, 16GB ****
******************************************************

==> /sys/block/mmcblk0/device/cid <==
03534453553136478070387aa500a200

==> /sys/block/mmcblk0/device/csd <==
400e00325b59000076b27f800a404000

==> /sys/block/mmcblk0/device/scr <==
0235800000000000

==> /sys/block/mmcblk0/device/fwrev <==
0x0

==> /sys/block/mmcblk0/device/hwrev <==
0x8

==> /sys/block/mmcblk0/device/cid <==
03534453553136478070387aa500a200

==> /sys/block/mmcblk0/device/manfid <==
0x000003

==> /sys/block/mmcblk0/device/oemid <==
0x5344

==> /sys/block/mmcblk0/device/serial <==
0x70387aa5

==> /sys/block/mmcblk0/device/erase_size <==
512

==> /sys/block/mmcblk0/device/preferred_erase_size <==
4194304

==> /sys/block/mmcblk0/device/name <==
SU16G

==> /sys/block/mmcblk0/device/date <==
02/2010

clock:          33000000 Hz
vdd:            20 (3.2 ~ 3.3 V)
bus mode:       2 (push-pull)
chip select:    0 (don't care)
power mode:     2 (on)
bus width:      2 (4 bits)
timing spec:    2 (sd high-speed)
$ fdisk -lu /dev/sda

Disk /dev/sda: 15.9 GB, 15931539456 bytes
128 heads, 32 sectors/track, 7596 cylinders, total 31116288 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      614399      307199+  83  Linux
/dev/sda2          614400      819199      102400   83  Linux
/dev/sda3          819200     1851391      516096   82  Linux swap / Solaris
/dev/sda4         1851392     7364607     2756608   83  Linux
$ ./flashbench -a /dev/sda3 --count=100 --blocksize=2048
align 134217728  pre 740µs  on 740µs  post 741µs  diff -643ns
align 67108864   pre 736µs  on 737µs  post 736µs  diff 1.03µs
align 33554432   pre 738µs  on 742µs  post 740µs  diff 3.61µs
align 16777216   pre 827µs  on 816µs  post 738µs  diff 33.7µs
align 8388608    pre 789µs  on 810µs  post 775µs  diff 28.5µs
align 4194304    pre 737µs  on 738µs  post 738µs  diff 24ns
align 2097152    pre 736µs  on 737µs  post 739µs  diff -234ns
align 1048576    pre 736µs  on 734µs  post 734µs  diff -1177ns
align 524288     pre 735µs  on 735µs  post 737µs  diff -1214ns
align 262144     pre 739µs  on 735µs  post 731µs  diff -224ns
align 131072     pre 737µs  on 737µs  post 738µs  diff -650ns
align 65536      pre 738µs  on 736µs  post 736µs  diff -689ns
align 32768      pre 735µs  on 738µs  post 732µs  diff 4.52µs
align 16384      pre 737µs  on 738µs  post 737µs  diff 534ns
align 8192       pre 736µs  on 737µs  post 736µs  diff 1.38µs
align 4096       pre 738µs  on 732µs  post 735µs  diff -4358ns
-> aeh?
$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[8*1024*1024]
8MiB    1.48M/s
4MiB    3.23M/s
2MiB    3.63M/s
1MiB    3.69M/s
512KiB  3.65M/s
256KiB  3.62M/s
128KiB  3.5M/s
64KiB   3.87M/s
32KiB   3.44M/s
16KiB   3.1M/s
8KiB    2.15M/s
4KiB    1.4M/s
2KiB    813K/s

$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[8*1024*1024]
8MiB    3.65M/s
4MiB    3.63M/s
2MiB    3.67M/s
1MiB    3.7M/s
512KiB  3.69M/s
256KiB  3.64M/s
128KiB  3.51M/s
64KiB   3.88M/s
32KiB   3.45M/s
16KiB   3.12M/s
8KiB    2.17M/s
4KiB    1.41M/s
2KiB    805K/s

$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[8*1024*1024]
8MiB    3.73M/s
4MiB    3.67M/s
2MiB    3.73M/s
1MiB    3.76M/s
512KiB  3.73M/s
256KiB  3.73M/s
128KiB  3.56M/s
64KiB   3.97M/s
32KiB   3.54M/s
16KiB   3.16M/s
8KiB    2.19M/s
4KiB    1.4M/s
2KiB    811K/s
$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[16*1024*1024]
16MiB   4.75M/s
8MiB    5.48M/s
4MiB    5.44M/s
2MiB    5.38M/s
1MiB    5.48M/s
512KiB  5.38M/s
256KiB  5.36M/s
128KiB  5.02M/s
64KiB   5.97M/s
32KiB   4.93M/s
16KiB   4.32M/s
8KiB    2.6M/s
4KiB    1.62M/s
2KiB    905K/s

$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[16*1024*1024] --offset=$[2*1024*1024]
16MiB   6.04M/s
8MiB    4.57M/s
4MiB    5.31M/s
2MiB    5.3M/s
1MiB    5.38M/s
512KiB  5.32M/s
256KiB  5.24M/s
128KiB  4.94M/s
64KiB   5.85M/s
32KiB   4.86M/s
16KiB   4.29M/s
8KiB    2.57M/s
4KiB    1.62M/s
2KiB    912K/s

./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[16*1024*1024] --offset=$[512]
16MiB   5.13M/s
8MiB    5.4M/s
4MiB    5.41M/s
2MiB    5.33M/s
1MiB    5.47M/s
512KiB  5.39M/s
256KiB  5.28M/s
128KiB  4.83M/s
64KiB   6.39M/s
32KiB   4.44M/s
16KiB   3.45M/s
8KiB    1.96M/s
4KiB    1.35M/s
2KiB    823K/s
$ ./flashbench /dev/sda3 --open-au-nr=2 -O --blocksize=2048 --erasesize=$[16*1024*1024]
16MiB   5.69M/s
8MiB    5.66M/s
4MiB    5.57M/s
2MiB    5.23M/s
1MiB    5.56M/s
512KiB  5.47M/s
256KiB  5.42M/s
128KiB  5.1M/s
64KiB   5.52M/s
32KiB   5.02M/s
16KiB   4.31M/s
8KiB    2.51M/s
4KiB    1.58M/s
2KiB    908K/s

./flashbench /dev/sda3 --open-au-nr=5 -O --blocksize=2048 --erasesize=$[16*1024*1024]
16MiB   5.94M/s
8MiB    4.92M/s
4MiB    4.33M/s
2MiB    4.54M/s
1MiB    5.98M/s
512KiB  5.95M/s
256KiB  5.94M/s
128KiB  5.53M/s
64KiB   5.92M/s
32KiB   5.42M/s
16KiB   4.65M/s
8KiB    2.67M/s
4KiB    1.67M/s
2KiB    917K/s
$ ./flashbench /dev/sda3 --open-au-nr=10 -O --blocksize=2048 --erasesize=$[16*1024*1024]
16MiB   5.79M/s
8MiB    5.28M/s
4MiB    4.25M/s
2MiB    3.05M/s
1MiB    2.04M/s
512KiB  1.02M/s
256KiB  515K/s
freezes

./flashbench /dev/sda3 --open-au-nr=6 -O --blocksize=2048 --erasesize=$[16*1024*1024]
16MiB   5.5M/s
8MiB    5.06M/s
4MiB    4.29M/s
2MiB    3.1M/s
1MiB    2.08M/s
512KiB  1.03M/s
256KiB  518K/s
^C
-> at least 5 blocks
./flashbench /dev/sda3 --open-au-nr=5 -O --blocksize=2048 --erasesize=$[16*1024*1024] --random
16MiB   5.22M/s
8MiB    5.04M/s
4MiB    4.14M/s
2MiB    3.09M/s
1MiB    2.19M/s
512KiB  1.13M/s
256KiB  605K/s
128KiB  300K/s
^C

./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[16*1024*1024] --random
16MiB   4.65M/s
8MiB    4.69M/s
4MiB    4.65M/s
2MiB    4.71M/s
1MiB    3.66M/s
512KiB  1.75M/s
256KiB  1.28M/s
128KiB  819K/s
64KiB   542K/s
32KiB   262K/s
16KiB   127K/s
8KiB    64.4K/s
-> quite slow on small blocks
-> assuming 8MiB erase-size (?)
$ ./flashbench /dev/sda3 -f --erasesize=$[8*1024*1024] --blocksize=2048
8MiB    5.81M/s 6.2M/s  6.64M/s 6.8M/s  6.18M/s 4.83M/s
4MiB    5.77M/s 6.49M/s 6.39M/s 6.69M/s 6.51M/s 4.68M/s
2MiB    5.68M/s 6.45M/s 6.53M/s 6.52M/s 6.48M/s 4.8M/s
1MiB    5.9M/s  6.55M/s 6.61M/s 6.74M/s 6.6M/s  4.85M/s
512KiB  5.77M/s 6.55M/s 6.52M/s 6.58M/s 6.6M/s  4.81M/s
256KiB  5.72M/s 6.49M/s 6.46M/s 6.45M/s 6.52M/s 4.77M/s
128KiB  5.3M/s  6.04M/s 6M/s    5.97M/s 6M/s    4.51M/s
64KiB   6.51M/s 7.3M/s  7.37M/s 7.5M/s  7.37M/s 5.2M/s
32KiB   5.32M/s 5.98M/s 5.92M/s 5.9M/s  6M/s    4.45M/s
16KiB   4.54M/s 5.1M/s  5.04M/s 4.94M/s 5M/s    3.84M/s
8KiB    2.67M/s 2.8M/s  2.83M/s 2.84M/s 2.88M/s 2.48M/s
4KiB    1.67M/s 1.74M/s 1.73M/s 1.73M/s 1.74M/s 1.57M/s
2KiB    934K/s  956K/s  958K/s  953K/s  954K/s  884K/s
peter
On Wednesday 13 July 2011, Peter Warasin wrote:
==> /sys/block/mmcblk0/device/manfid <== 0x000003
==> /sys/block/mmcblk0/device/oemid <== 0x5344
This is a Sandisk controller.
$ fdisk -lu /dev/sda

Disk /dev/sda: 15.9 GB, 15931539456 bytes
[..]
Another oddity that occasionally comes up:
Note how the size is a multiple of 1.5 MB but not 2 MB!
$ factor 15931539456
15931539456: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 7 1447

$ echo $[15931539456 / 7 / 1447]
1572864
This card must have 1.5 MB erase blocks! On all SD cards, the total size is a multiple of the erase block size, so yet another way of guessing the correct size is to look at the prime factors of the total size.
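The prime-factor trick above can be sketched in plain shell (the 15931539456 figure is this card's byte size from the fdisk output; the assumption, as stated above, is that the erase block is a power of two or three times a power of two):

```shell
# Divide out the powers of two from the total card size; whatever
# remains tells us about the erase block size. A leftover factor of 3
# suggests a 1.5 MB (3 * 512 KiB) erase block; larger odd factors
# like 7 and 1447 are just part of the block count.
size=15931539456          # total size in bytes, from fdisk
pow2=1
rest=$size
while [ $((rest % 2)) -eq 0 ]; do
    rest=$((rest / 2))
    pow2=$((pow2 * 2))
done
echo "power-of-two part: $pow2, odd remainder: $rest"
echo "candidate erase block: $((pow2 * 3)) bytes"
```

For this card the odd remainder is 3 * 7 * 1447, and pow2 * 3 gives 1572864 bytes, i.e. the 1.5 MB from the factor output above.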
$ ./flashbench -a /dev/sda3 --count=100 --blocksize=2048
[..]
-> aeh?
If the erase block size is a multiple of three, the test does not work any more. It could work with
./flashbench -a /dev/sda3 --count=100 --blocksize=1536
./flashbench /dev/sda3 --open-au-nr=5 -O --blocksize=2048 --erasesize=$[16*1024*1024]
[..]
$ ./flashbench /dev/sda3 --open-au-nr=10 -O --blocksize=2048 --erasesize=$[16*1024*1024]
[..] freezes
./flashbench /dev/sda3 --open-au-nr=6 -O --blocksize=2048 --erasesize=$[16*1024*1024]
[..] ^C
-> at least 5 blocks
Probably more, but you will have to pass the right size. I never implemented easy support for this in flashbench, but you can work around it by passing an odd blocksize along with the erasesize:
./flashbench /dev/sda3 --open-au-nr=4 -O --blocksize=1536 --erasesize=$[1536*1024]
./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=2048 --erasesize=$[16*1024*1024] --random
[..]
-> quite slow on small blocks
Yes, what I guess this means is that you exceed the number of 1.5MB erase blocks inside the 16MB region you test. Consequently, the actual number is probably less than 10.
Arnd
hi
-> this card sucks, i think :) fast with wrong erase block and slow with correct(?) one.. ?-)
On 13/07/11 18:30, Arnd Bergmann wrote:
Another oddity that occasionally comes up:
[..]
This card must have 1.5 MB erase blocks! On all SD cards, the total size is a multiple of the erase block size, so yet another way of guessing the correct size is looking at the prime factors of the size.
ah yeah, clear. cool, good to know
If the erase block size is a multiple of three, the test does not work any more. It could work with ./flashbench -a /dev/sda3 --count=100 --blocksize=1536
it doesn't, i get:

time_read: Invalid argument
$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512]
1.5MiB  4.84M/s
768KiB  928K/s
384KiB  1.31M/s
192KiB  2.37M/s
96KiB   1.34M/s
48KiB   2.28M/s
24KiB   1.27M/s
12KiB   1.45M/s
6KiB    817K/s
3KiB    765K/s
1.5KiB  419K/s

$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512]
1.5MiB  6.91M/s
768KiB  1.97M/s
384KiB  2.36M/s
192KiB  1.34M/s
96KiB   2.4M/s
48KiB   1.3M/s
24KiB   2.18M/s
12KiB   980K/s
6KiB    1.13M/s
3KiB    607K/s
1.5KiB  496K/s

$ ./flashbench /dev/sda3 --open-au-nr=5 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512]
1.5MiB  1.7M/s
768KiB  1.14M/s
384KiB  884K/s
192KiB  864K/s
96KiB   885K/s
48KiB   877K/s
24KiB   793K/s
12KiB   761K/s
6KiB    673K/s
3KiB    611K/s
^C

$ ./flashbench /dev/sda3 --open-au-nr=4 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512]
1.5MiB  1.41M/s
768KiB  1.25M/s
384KiB  1.07M/s
192KiB  1.04M/s
96KiB   1.07M/s
48KiB   1.04M/s
24KiB   911K/s
12KiB   873K/s
6KiB    761K/s
^C

$ ./flashbench /dev/sda3 --open-au-nr=3 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512]
1.5MiB  1.81M/s
768KiB  1.53M/s
384KiB  1.56M/s
192KiB  1.57M/s
96KiB   1.59M/s
48KiB   1.56M/s
24KiB   1.44M/s
12KiB   1.21M/s
6KiB    999K/s
3KiB    695K/s
1.5KiB  513K/s

$ ./flashbench /dev/sda3 --open-au-nr=2 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512]
1.5MiB  2.3M/s
768KiB  1.3M/s
384KiB  1.69M/s
192KiB  1.64M/s
96KiB   1.69M/s
48KiB   1.62M/s
24KiB   1.53M/s
12KiB   1.29M/s
6KiB    1.04M/s
3KiB    722K/s
1.5KiB  411K/s
-> only 1 is really good. 3 is acceptable if closing both eyes
-> how come that with 4MiB 4 blocks was that good?
-> erase-block must be a multiple of 1.5MiB, so makes no sense to try with 3.5MiB, right?
$ ./flashbench /dev/sda3 --open-au-nr=1 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512] --random
1.5MiB  1.3M/s
768KiB  2.37M/s
384KiB  2.29M/s
192KiB  2.31M/s
96KiB   2.39M/s
48KiB   1.28M/s
24KiB   1.15M/s
12KiB   715K/s
6KiB    396K/s
3KiB    245K/s
1.5KiB  116K/s

$ ./flashbench /dev/sda3 --open-au-nr=3 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512] --random
1.5MiB  1.76M/s
768KiB  1.16M/s
384KiB  1.08M/s
192KiB  624K/s
96KiB   342K/s
48KiB   225K/s
24KiB   130K/s
12KiB   68.4K/s
6KiB    33.4K/s
^C

$ ./flashbench /dev/sda3 --open-au-nr=2 -O --blocksize=$[1024+512] --erasesize=$[1024*1024 + 1024*512] --random
1.5MiB  7.85M/s
768KiB  1.08M/s
384KiB  2.3M/s
192KiB  1.02M/s
96KiB   1.05M/s
48KiB   613K/s
24KiB   328K/s
12KiB   206K/s
6KiB    120K/s
^C
-> so random pretty sucks also here
$ ./flashbench /dev/sda3 -f --erasesize=$[1024*1024+1024*512] --blocksize=$[1024+512] --random
1.5MiB  6.53M/s 6.81M/s 7.01M/s 6.79M/s 7.16M/s 6.74M/s
768KiB  1.12M/s 2.23M/s 2.34M/s 2.15M/s 2.34M/s 904K/s
384KiB  2.27M/s 2.1M/s  1.3M/s  7.94M/s 1.3M/s  2.34M/s
192KiB  2.38M/s 1.3M/s  2.35M/s 1.32M/s 2.36M/s 1.31M/s
96KiB   2.38M/s 1.3M/s  2.4M/s  1.29M/s 2.38M/s 1.29M/s
48KiB   2.32M/s 2.26M/s 2.32M/s 1.3M/s  2.26M/s 906K/s
24KiB   1.13M/s 1.18M/s 1.16M/s 1.17M/s 1.17M/s 839K/s
12KiB   721K/s  570K/s  711K/s  570K/s  713K/s  932K/s
6KiB    402K/s  399K/s  404K/s  389K/s  400K/s  394K/s
3KiB    214K/s  225K/s  213K/s  204K/s  209K/s  222K/s
1.5KiB  108K/s  106K/s  109K/s  110K/s  109K/s  115K/s
peter
On Wednesday 13 July 2011 19:16:09 Peter Warasin wrote:
hi
-> this card sucks, i think :) fast with wrong erase block and slow with correct(?) one.. ?-)
It's more likely that your partition is now misaligned, because the start is not a multiple of 1.5MB.
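The suspected misalignment can be checked with simple shell arithmetic (a sketch; the 819200 start sector is /dev/sda3 from the fdisk output earlier in the thread, and the 1.5 MiB erase block is the guess from the prime factors):

```shell
# Check whether a partition start is aligned to a 1.5 MiB erase block.
start=819200                      # /dev/sda3 start sector (512-byte units)
eb=$((1024*1024 + 1024*512))      # 1.5 MiB erase block
bytes=$((start * 512))
if [ $((bytes % eb)) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by $((bytes % eb)) bytes"
fi
```

Here 819200 * 512 = 419430400 bytes, which is 1 MiB past a 1.5 MiB boundary, so every test block straddles two real erase blocks.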
If the erase block size is a multiple of three, the test does not work any more. It could work with ./flashbench -a /dev/sda3 --count=100 --blocksize=1536
it doesn't, i get: time_read: Invalid argument
Sorry, my fault. It needs to be --blocksize=3072 or a multiple of that. The reason for this is that O_DIRECT accesses need to be 512-byte aligned in Linux, and the test reads are moved to half of --blocksize.
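The constraint can be sketched numerically, assuming (per the explanation above) that the -a test issues its O_DIRECT reads at half of --blocksize:

```shell
# O_DIRECT offsets must be 512-byte aligned on Linux; flashbench -a
# places reads at half of --blocksize, so that half must divide
# evenly by 512 or the read fails with EINVAL.
check() {
    half=$(($1 / 2))
    if [ $((half % 512)) -eq 0 ]; then
        echo "--blocksize=$1 ok (reads at offset multiples of $half)"
    else
        echo "--blocksize=$1 fails with EINVAL (offset $half not 512-aligned)"
    fi
}
check 1536
check 3072
```

1536/2 = 768 is not a multiple of 512, which matches the time_read: Invalid argument error above; 3072/2 = 1536 is, so 3072 works.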
-> so random pretty sucks also here
$ ./flashbench /dev/sda3 -f --erasesize=$[1024*1024+1024*512] --blocksize=$[1024+512] --random
[..]
I also forgot to mention that the --findfat test case is pointless on a partition. What it does is write to the first few erase blocks in order to find the FAT-optimized regions. If they exist, they are always at the beginning of the card, so never at the beginning of a later partition.
Arnd
hi
On 13/07/11 20:05, Arnd Bergmann wrote:
On Wednesday 13 July 2011 19:16:09 Peter Warasin wrote:
fast with wrong erase block and slow with correct(?) one.. ?-)
It's more likely that your partition is now misaligned, because the start is not a multiple of 1.5MB.
ah yeah, sure, very reasonable indeed :)
$ fdisk -ul /dev/sda

Disk /dev/sda: 15.9 GB, 15931539456 bytes
64 heads, 32 sectors/track, 15193 cylinders, total 31116288 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            3072     1022975      509952   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2         1022976    31116287    15046656   83  Linux
Sorry, my fault. It needs to be --blocksize=3072 or a multiple of that. The reason for this is that O_DIRECT accesses need to be 512-byte aligned in Linux, and the test reads are moved to half of --blocksize.
ah, ok. good, it's working now.
$ ./flashbench -a /dev/sda1 --count=100 --blocksize=3072
align 100663296  pre 959µs   on 961µs   post 974µs   diff -5051ns
align 50331648   pre 941µs   on 935µs   post 925µs   diff 1.49µs
align 25165824   pre 813µs   on 834µs   post 833µs   diff 11.7µs
align 12582912   pre 947µs   on 979µs   post 996µs   diff 7.99µs
align 6291456    pre 998µs   on 1ms     post 1.02ms  diff -5067ns
align 3145728    pre 1ms     on 994µs   post 1.02ms  diff -13866ns
align 1572864    pre 1ms     on 1.07ms  post 1ms     diff 70.2µs
align 786432     pre 1ms     on 1ms     post 1.01ms  diff -6167ns
align 393216     pre 1ms     on 1ms     post 1.02ms  diff -8731ns
align 196608     pre 1ms     on 1ms     post 1.02ms  diff -6397ns
align 98304      pre 1ms     on 998µs   post 1.01ms  diff -9612ns
align 49152      pre 998µs   on 1ms     post 1.02ms  diff -9015ns
align 24576      pre 1.02ms  on 1.02ms  post 1ms     diff 9.32µs
align 12288      pre 999µs   on 999µs   post 989µs   diff 4.77µs
align 6144       pre 1ms     on 990µs   post 1ms     diff -13454ns
still quite strange values :/
$ ./flashbench /dev/sda1 -O --open-au-nr=1 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072
6MiB    3.99M/s
3MiB    3.06M/s
1.5MiB  4.01M/s
768KiB  4.01M/s
384KiB  3.91M/s
192KiB  3.92M/s
96KiB   4.3M/s
48KiB   4.15M/s
24KiB   2.87M/s
12KiB   2.2M/s
6KiB    1.62M/s
3KiB    1.07M/s

$ ./flashbench /dev/sda1 -O --open-au-nr=2 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072
6MiB    3.4M/s
3MiB    4.14M/s
1.5MiB  4.02M/s
768KiB  4.01M/s
384KiB  3.92M/s
192KiB  3.81M/s
96KiB   4.31M/s
48KiB   4.16M/s
24KiB   2.54M/s
12KiB   2.19M/s
6KiB    1.64M/s
3KiB    1.03M/s

$ ./flashbench /dev/sda1 -O --open-au-nr=5 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072
6MiB    4.28M/s
3MiB    3.11M/s
1.5MiB  4.06M/s
768KiB  4.03M/s
384KiB  3.93M/s
192KiB  3.95M/s
96KiB   4.35M/s
48KiB   4.19M/s
24KiB   2.76M/s
12KiB   2.24M/s
6KiB    1.65M/s
3KiB    1.06M/s

$ ./flashbench /dev/sda1 -O --open-au-nr=6 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072
6MiB    4.12M/s
3MiB    3.07M/s
1.5MiB  3.06M/s
768KiB  1.56M/s
384KiB  752K/s
192KiB  378K/s
96KiB   197K/s
48KiB   98.1K/s
^C
-> ok, definitely 5 blocks i guess
$ ./flashbench /dev/sda1 -O --open-au-nr=5 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072 --random
6MiB    3.4M/s
3MiB    3.03M/s
1.5MiB  4.07M/s
768KiB  2.11M/s
384KiB  949K/s
192KiB  545K/s
96KiB   293K/s
48KiB   149K/s
24KiB   73.4K/s
^C

$ ./flashbench /dev/sda1 -O --open-au-nr=1 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072 --random
6MiB    3.09M/s
3MiB    3.11M/s
1.5MiB  4.15M/s
768KiB  3.17M/s
384KiB  2.05M/s
192KiB  1.94M/s
96KiB   1.48M/s
48KiB   1.9M/s
24KiB   1.06M/s
12KiB   715K/s
6KiB    391K/s
3KiB    212K/s

$ ./flashbench /dev/sda1 -O --open-au-nr=4 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072 --random
6MiB    3.46M/s
3MiB    3.15M/s
1.5MiB  4.19M/s
768KiB  2.17M/s
384KiB  978K/s
192KiB  551K/s
96KiB   299K/s
48KiB   156K/s
^C
-> well, looks not that good
I also forgot to mention that the --findfat test case is pointless on a partition. What it does is write to the first few erase blocks in order to find the FAT-optimized regions. If they exist, they are always at the beginning of the card, so never at the beginning of a later partition.
ah, ok. so i will redo them
./flashbench /dev/sda -f --erasesize=$[1024*1024+1024*512] --blocksize=$[1024+512] --random
1.5MiB  2.08M/s 6.07M/s 2.25M/s 6.71M/s 3.87M/s 6.07M/s
768KiB  2.18M/s 2.27M/s 5.04M/s 2.22M/s 2.14M/s 1.28M/s
384KiB  2.3M/s  2.32M/s 2.22M/s 2.27M/s 2.32M/s 2.32M/s
192KiB  1.31M/s 2.34M/s 2.38M/s 1.3M/s  1.3M/s  2.33M/s
96KiB   1.31M/s 2.39M/s 2.38M/s 1.32M/s 1.3M/s  1.33M/s
48KiB   1.29M/s 2.31M/s 2.26M/s 1.3M/s  2.31M/s 1.29M/s
24KiB   1.17M/s 1.16M/s 1.17M/s 1.16M/s 1.17M/s 830K/s
12KiB   714K/s  714K/s  712K/s  708K/s  716K/s  710K/s
6KiB    398K/s  399K/s  399K/s  397K/s  397K/s  348K/s
3KiB    210K/s  212K/s  212K/s  211K/s  214K/s  200K/s
1.5KiB  111K/s  107K/s  112K/s  117K/s  114K/s  109K/s
peter
On Monday 18 July 2011, Peter Warasin wrote:
$ ./flashbench -a /dev/sda1 --count=100 --blocksize=3072
[..]
still quite strange values :/
Yes, that happens sometimes. On some cards, the values just don't correlate with the erase blocks here.
$ ./flashbench /dev/sda1 -O --open-au-nr=5 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072
[..]
$ ./flashbench /dev/sda1 -O --open-au-nr=6 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072
[..] ^C
-> ok, definitely 5 blocks i guess
Yes.
$ ./flashbench /dev/sda1 -O --open-au-nr=5 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072 --random
[..]
$ ./flashbench /dev/sda1 -O --open-au-nr=1 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072 --random
[..]
$ ./flashbench /dev/sda1 -O --open-au-nr=4 --erasesize=$[4*(1024*1024+1024*512)] --blocksize=3072 --random
[..] ^C
-> well, looks not that good
I think what you see here is again an artifact of passing the wrong erasesize. Your test is with 6MB, while the card probably uses 1.5MB erase blocks. If it can have 5 1.5MB erase blocks open, doing random access within a 6MB region already consumes four of the five erase blocks, and trying 4*6MB would need 16 erase blocks, which is way beyond the five it can handle.
This should look much better using
./flashbench /dev/sda1 -O --open-au-nr=4 --erasesize=$[(1024*1024+1024*512)] --blocksize=3072 --random
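The arithmetic behind this can be sketched as follows (the 1.5 MiB figure is the erase block size guessed earlier, the five-open-AU limit comes from the earlier --open-au-nr tests):

```shell
# With the wrong --erasesize, each "open AU" in the test spans several
# real erase blocks, so the card exhausts its open allocation units.
real_eb=$((1024*1024 + 1024*512))   # 1.5 MiB, the presumed real erase block
test_eb=$((4 * real_eb))            # 6 MiB, the size passed to flashbench
open_au=4                           # --open-au-nr used in the test
needed=$((open_au * test_eb / real_eb))
echo "real erase blocks kept open: $needed (the card handles only 5)"
```

With the correct 1.5 MiB erasesize, --open-au-nr=4 keeps only four real erase blocks open, which stays within the limit.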
I also forgot to mention that the --findfat test case is pointless on a partition. What it does is write to the first few erase blocks in order to find the FAT-optimized regions. If they exist, they are always at the beginning of the card, so never at the beginning of a later partition.
ah, ok. so i will redo them
./flashbench /dev/sda -f --erasesize=$[1024*1024+1024*512] --blocksize=$[1024+512] --random 1.5MiB 2.08M/s 6.07M/s 2.25M/s 6.71M/s 3.87M/s 6.07M/s 768KiB 2.18M/s 2.27M/s 5.04M/s 2.22M/s 2.14M/s 1.28M/s 384KiB 2.3M/s 2.32M/s 2.22M/s 2.27M/s 2.32M/s 2.32M/s 192KiB 1.31M/s 2.34M/s 2.38M/s 1.3M/s 1.3M/s 2.33M/s 96KiB 1.31M/s 2.39M/s 2.38M/s 1.32M/s 1.3M/s 1.33M/s 48KiB 1.29M/s 2.31M/s 2.26M/s 1.3M/s 2.31M/s 1.29M/s 24KiB 1.17M/s 1.16M/s 1.17M/s 1.16M/s 1.17M/s 830K/s 12KiB 714K/s 714K/s 712K/s 708K/s 716K/s 710K/s 6KiB 398K/s 399K/s 399K/s 397K/s 397K/s 348K/s 3KiB 210K/s 212K/s 212K/s 211K/s 214K/s 200K/s 1.5KiB 111K/s 107K/s 112K/s 117K/s 114K/s 109K/s
Right, there is something to be seen here, but it's not clear to me what ;-) The numbers are fluctuating a lot. It would be good to run this repeatedly to see if anything changes, and with a '--fat-nr=12' argument to see more columns. The first six 1.5 MB erase blocks are only 9 MB in total, so if the FAT area spans the first 16 MB, all of this will be the FAT.
It seems that the sixth column in your output is different from the first ones, which indicates that the card actually has a 7.5 MB (1.5*5) FAT area.
Arnd
On 18/07/11 17:03, Arnd Bergmann wrote:
I think what you see here is again an artifact of passing the wrong erasesize. Your test is with 6MB, while the card probably uses 1.5MB erase blocks. If it can have 5 1.5MB erase blocks open, doing random access within a 6MB region already consumes four of the five erase blocks, and trying 4*6MB would need 16 erase blocks, which is way beyond the five it can handle.
ah ok. i thought it's only the beginning of the blocksize you pass. so it affects the whole test if starting with a too big one (?)
./flashbench /dev/sda1 -O --open-au-nr=5 --erasesize=$[1024*1024+1024*512] --blocksize=3072 --random
1.5MiB  2.11M/s
768KiB  1.66M/s
384KiB  2.29M/s
192KiB  1.34M/s
96KiB   661K/s
48KiB   335K/s
24KiB   145K/s
12KiB   76.1K/s
^C

./flashbench /dev/sda1 -O --open-au-nr=1 --erasesize=$[1024*1024+1024*512] --blocksize=3072 --random
1.5MiB  1.26M/s
768KiB  2.2M/s
384KiB  2.93M/s
192KiB  1.26M/s
96KiB   2.27M/s
48KiB   1.23M/s
24KiB   1.15M/s
12KiB   701K/s
6KiB    396K/s
3KiB    211K/s

./flashbench /dev/sda1 -O --open-au-nr=4 --erasesize=$[1024*1024+1024*512] --blocksize=3072 --random
1.5MiB  3.25M/s
768KiB  1.83M/s
384KiB  1.74M/s
192KiB  1.62M/s
96KiB   828K/s
48KiB   479K/s
24KiB   217K/s
12KiB   118K/s
6KiB    57.9K/s
3KiB    29.9K/s
-> ok, still not really better, but 1 block is ok i would say
Right, there is something to be seen here, but it's not clear to me what ;-) The numbers are fluctuating a lot. It would be good to run this repeatedly to see if anything changes, and with a '--fat-nr=12' argument to see more columns. The first six 1.5 MB erase blocks are only 9 MB in total, so if the FAT area spans the first 16 MB, all of this will be the FAT.
It seems that the sixth column in your output is different from the first ones, which indicates that the card actually has a 7.5 MB (1.5*5) FAT area.
Arnd
./flashbench /dev/sda -f --erasesize=$[1024*1024+1024*512] --blocksize=$[1024+512] --random --fat-nr=12
1.5MiB  6.46M/s 6.15M/s 2.11M/s 6.82M/s 2.29M/s 6.73M/s 6.5M/s  6.27M/s 6.45M/s 6.29M/s 6.47M/s 6.28M/s
768KiB  1.27M/s 2.31M/s 2.15M/s 2.3M/s  4.99M/s 2.25M/s 2.2M/s  2.3M/s  1.28M/s 2.3M/s  1.29M/s 2.23M/s
384KiB  2.32M/s 2.32M/s 2.31M/s 2.32M/s 1.27M/s 2.3M/s  2.26M/s 2.33M/s 1.31M/s 2.33M/s 1.3M/s  2.32M/s
192KiB  1.29M/s 2.32M/s 1.32M/s 2.33M/s 2.37M/s 1.3M/s  1.29M/s 2.37M/s 1.28M/s 2.37M/s 1.29M/s 2.37M/s
96KiB   1.32M/s 2.37M/s 1.29M/s 2.4M/s  2.37M/s 1.32M/s 1.31M/s 2.36M/s 1.3M/s  2.36M/s 1.31M/s 2.34M/s
48KiB   1.3M/s  2.26M/s 1.29M/s 2.32M/s 1.28M/s 1.3M/s  1.25M/s 2.35M/s 1.26M/s 2.35M/s 1.27M/s 2.3M/s
24KiB   838K/s  1.19M/s 1.16M/s 1.18M/s 1.2M/s  1.18M/s 1.18M/s 1.2M/s  1.18M/s 1.19M/s 1.18M/s 1.19M/s
12KiB   726K/s  577K/s  720K/s  722K/s  729K/s  735K/s  722K/s  725K/s  729K/s  692K/s  722K/s  724K/s
6KiB    360K/s  412K/s  414K/s  412K/s  407K/s  350K/s  360K/s  411K/s  360K/s  406K/s  361K/s  359K/s
3KiB    219K/s  218K/s  219K/s  221K/s  220K/s  219K/s  221K/s  218K/s  221K/s  204K/s  222K/s  220K/s
1.5KiB  115K/s  111K/s  115K/s  115K/s  114K/s  109K/s  108K/s  113K/s  109K/s  112K/s  111K/s  106K/s
-> nothing? seems still quite stable
On Monday 18 July 2011 19:44:18 Peter Warasin wrote:
On 18/07/11 17:03, Arnd Bergmann wrote:
I think what you see here is again an artifact of passing the wrong erasesize. Your test is with 6MB, while the card probably uses 1.5MB erase blocks. If it can have 5 1.5MB erase blocks open, doing random access within a 6MB region already consumes four of the five erase blocks, and trying 4*6MB would need 16 erase blocks, which is way beyond the five it can handle.
ah ok. i thought it's only the beginning of the blocksize you pass. so it affects the whole test if starting with a too big one (?)
./flashbench /dev/sda1 -O --open-au-nr=5 --erasesize=$[1024*1024+1024*512] --blocksize=3072 --random
[..]
./flashbench /dev/sda1 -O --open-au-nr=1 --erasesize=$[1024*1024+1024*512] --blocksize=3072 --random
[..]
./flashbench /dev/sda1 -O --open-au-nr=4 --erasesize=$[1024*1024+1024*512] --blocksize=3072 --random
[..]
-> ok, still not really better, but 1 block is ok i would say
Yes, this is strange. I would have expected at least 3.
./flashbench /dev/sda -f --erasesize=$[1024*1024+1024*512] --blocksize=$[1024+512] --random --fat-nr=12
[..]
-> nothing? seems still quite stable
Yes, I agree. There are still significant differences between the tests of each erase block, but it's probably not worth spending more time on getting to the bottom of this. The characteristics of the card seem ok, with a small erase block size, but it's rather slow to start with. As a TLC flash, it also won't last all that long, even if you treat it really well.
Arnd
flashbench-results@lists.linaro.org