On Wednesday 13 June 2012, Andrew Bradford wrote:
Transcend JetFlash 700 32GB TS32GJF700 USB3 USB drive, testing in USB2 port.
Had some issues at first, but after wiping the partition table those seem to have gone away. They may be caused partly by my host and not be entirely the USB drive's fault, but I've not investigated further. I've included the problem output here; if anyone has any suggestions as to the root cause, please let me know.
[andrew@mythdvr flashbench]$ sudo sfdisk -uS -l /dev/sdb
Disk /dev/sdb: 30147 cylinders, 64 heads, 32 sectors/track
Units = sectors of 512 bytes, counting from 0
   Device Boot      Start        End   #sectors  Id  System
/dev/sdb1 ?      778135908 1919645538 1141509631  72  Unknown
		start: (c,h,s) expected (1023,63,32) found (357,116,40)
		end: (c,h,s) expected (1023,63,32) found (357,32,45)
/dev/sdb2 ?      168689522 2104717761 1936028240  65  Novell Netware 386
		start: (c,h,s) expected (1023,63,32) found (288,115,43)
		end: (c,h,s) expected (1023,63,32) found (367,114,50)
/dev/sdb3 ?     1869881465 3805909656 1936028192  79  Unknown
		start: (c,h,s) expected (1023,63,32) found (366,32,33)
		end: (c,h,s) expected (1023,63,32) found (357,32,43)
/dev/sdb4 ?     2885681152 2885736650      55499   d  Unknown
		start: (c,h,s) expected (1023,63,32) found (372,97,50)
		end: (c,h,s) expected (1023,63,32) found (0,10,0)
# That's a very odd table.
# 64 heads / 32 sectors should indicate erase block size is a power of 2 if Transcend is doing that on purpose.
No, I think it's just what the SCSI host reports for any sd* device these days; the geometry does not come from Transcend.
[andrew@mythdvr flashbench]$ sudo fdisk -l /dev/sdb | grep Disk
Disk /dev/sdb: 31.6 GB, 31611420672 bytes
Disk identifier: 0x6f20736b
[andrew@mythdvr flashbench]$ factor 31611420672
31611420672: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 13 773
This suggests that we have 10049 sections of 3 MB each, so my guess is that the erase block size is actually 3 MB based on that.
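For reference, the prime factors group as 2^20 * 3 * 13 * 773, i.e. 10049 * 3 MiB. A quick shell check of that arithmetic (purely illustrative):

[andrew@mythdvr flashbench]$ echo $(( 31611420672 / (3 * 1024 * 1024) ))  # 13 * 773 sections of 3 MiB
10049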
[andrew@mythdvr flashbench]$ dmesg  # with slight editing to remove useless lines
sd 8:0:0:0: [sdb] Unhandled error code
sd 8:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
end_request: I/O error, dev sdb, sector 33562624
sd 8:0:0:0: [sdb] Unhandled error code
sd 8:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
end_request: I/O error, dev sdb, sector 50339840
sd 8:0:0:0: [sdb] Unhandled error code
sd 8:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
end_request: I/O error, dev sdb, sector 5376000
sd 8:0:0:0: [sdb] Unhandled error code
sd 8:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
end_request: I/O error, dev sdb, sector 22153216
sd 8:0:0:0: [sdb] Unhandled error code
sd 8:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
end_request: I/O error, dev sdb, sector 38930432
scsi 8:0:0:0: [sdb] Unhandled error code
scsi 8:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
end_request: I/O error, dev sdb, sector 8184
# Repartitioned to see if that helps
Repartitioning should really have no impact on this. I don't understand what's going on with the read errors.
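If they come back, one thing that might help narrow it down is re-reading one of the reported sectors directly and seeing whether the error is reproducible (just an illustration, sector number taken from the dmesg output above):

[andrew@mythdvr flashbench]$ sudo dd if=/dev/sdb of=/dev/null bs=512 skip=33562624 count=8  # re-read a failing sector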
[andrew@mythdvr flashbench]$ sudo ./flashbench -a /dev/sdb --blocksize=1024
align 8589934592	pre 1.02ms	on 1.51ms	post 1.12ms	diff 440µs
align 4294967296	pre 1.05ms	on 1.6ms	post 1.12ms	diff 515µs
align 2147483648	pre 1.05ms	on 1.6ms	post 1.12ms	diff 519µs
align 1073741824	pre 1.05ms	on 1.39ms	post 1.12ms	diff 305µs
align 536870912	pre 1.05ms	on 1.39ms	post 1.12ms	diff 306µs
align 268435456	pre 1.05ms	on 1.39ms	post 1.12ms	diff 305µs
align 134217728	pre 1.05ms	on 1.39ms	post 1.12ms	diff 310µs
align 67108864	pre 1.04ms	on 1.39ms	post 1.12ms	diff 308µs
align 33554432	pre 1.05ms	on 1.38ms	post 1.12ms	diff 298µs
align 16777216	pre 1.05ms	on 1.39ms	post 1.12ms	diff 308µs
align 8388608	pre 1.05ms	on 1.38ms	post 1.12ms	diff 298µs
align 4194304	pre 1.11ms	on 1.24ms	post 1.12ms	diff 125µs
align 2097152	pre 1.11ms	on 1.23ms	post 1.12ms	diff 117µs
align 1048576	pre 1.12ms	on 1.25ms	post 1.12ms	diff 127µs
align 524288	pre 1.12ms	on 1.25ms	post 1.12ms	diff 128µs
align 262144	pre 1.12ms	on 1.25ms	post 1.12ms	diff 126µs
align 131072	pre 1.12ms	on 1.25ms	post 1.12ms	diff 125µs
align 65536	pre 1.09ms	on 1.12ms	post 1.11ms	diff 17.2µs
align 32768	pre 1.12ms	on 1.25ms	post 1.12ms	diff 128µs
align 16384	pre 1.12ms	on 1.25ms	post 1.12ms	diff 128µs
align 8192	pre 1.12ms	on 1.41ms	post 1.12ms	diff 286µs
align 4096	pre 1.11ms	on 1.23ms	post 1.12ms	diff 111µs
align 2048	pre 1.11ms	on 1.12ms	post 1.12ms	diff 2.8µs
# Looks like 8 MiB erase block
# Page size is hard to tell
Agreed, these numbers definitely suggest 8MB erase blocks.
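If you want an extra cross-check of that boundary, rerunning the same align test with a coarser --blocksize might make the jump at 8 MiB stand out more clearly above the sub-64KiB noise (just a suggestion, the 64 KiB value is arbitrary):

[andrew@mythdvr flashbench]$ sudo ./flashbench -a /dev/sdb --blocksize=$[64*1024]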
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=1
8MiB    33.5M/s
4MiB    32.8M/s
2MiB    30M/s
1MiB    33.2M/s
512KiB  32.4M/s
256KiB  30.8M/s
128KiB  29.5M/s
64KiB   30.4M/s
32KiB   26M/s
16KiB   18.6M/s
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=2
8MiB    33.2M/s
4MiB    32.6M/s
2MiB    32.1M/s
1MiB    31.3M/s
512KiB  28.5M/s
256KiB  24.5M/s
128KiB  21.8M/s
64KiB   17.2M/s
32KiB   11.1M/s
16KiB   2.26M/s
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=6
8MiB    32.6M/s
4MiB    33.2M/s
2MiB    32.4M/s
1MiB    18M/s
512KiB  18.3M/s
256KiB  21.3M/s
128KiB  23.6M/s
64KiB   12.4M/s
32KiB   6.67M/s
16KiB   1.21M/s
# Would that be indicative of 128KiB as a minimum write size / page size? It's almost exactly 50% fall off after that.
Agreed, but this is different for 1 and 2 erase blocks, so there is something going on between the two.
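To get a better handle on the page size, one possible follow-up would be to rerun the single-AU test with a smaller --blocksize so the curve extends below 16 KiB (the 4 KiB value here is just an example):

[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[4*1024] --open-au-nr=1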
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=7
8MiB    33M/s
4MiB    31.1M/s
2MiB    11M/s
1MiB    9.81M/s
512KiB  8.64M/s
256KiB  9.72M/s
128KiB  6.91M/s
64KiB   7.51M/s
32KiB   4.71M/s
16KiB   1.03M/s
# Looks like 6 open erase blocks, but even with 7, performance isn't horrible, just much worse. Just to see, try 12 open erase blocks:
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=12
8MiB    32.8M/s
4MiB    20.3M/s
2MiB    9.95M/s
1MiB    6.75M/s
512KiB  4.58M/s
256KiB  5.4M/s
128KiB  5M/s
64KiB   4.05M/s
32KiB   2.09M/s
16KiB   677K/s
# Definitely worse, but still not really horrible like some SD cards get.
Right.
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=1
8MiB    32.5M/s
4MiB    33.1M/s
2MiB    33.3M/s
1MiB    33.5M/s
512KiB  32.6M/s
256KiB  31M/s
128KiB  11.4M/s
64KiB   6.1M/s
32KiB   12.5M/s
16KiB   2.48M/s
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=2
8MiB    32.5M/s
4MiB    33.3M/s
2MiB    32.8M/s
1MiB    10.8M/s
512KiB  6.71M/s
256KiB  7.05M/s
128KiB  3.73M/s
64KiB   4.41M/s
32KiB   3.36M/s
16KiB   1.7M/s
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=6
8MiB    32.3M/s
4MiB    22.5M/s
2MiB    11.6M/s
1MiB    6.96M/s
512KiB  4.93M/s
256KiB  5.28M/s
128KiB  3.38M/s
64KiB   2.85M/s
32KiB   2.03M/s
16KiB   936K/s
# So random isn't a strong point compared to linear access.
I would interpret this as having 1 erase block open in random mode, but something between 2 and 6 in linear mode.
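To narrow that down in linear mode, intermediate --open-au-nr values (3, 4, 5) could be tried with the same parameters, e.g. (purely as an illustration):

[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[8*1024*1024] --blocksize=$[16*1024] --open-au-nr=4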
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --findfat --erasesize=$[8*1024*1024] --blocksize=4096
8MiB    33.2M/s  32.7M/s  33.1M/s  32.7M/s  33.1M/s  33.2M/s
4MiB    33M/s    32.9M/s  32.8M/s  32.6M/s  32.8M/s  32.7M/s
2MiB    32.7M/s  32.5M/s  32.7M/s  32.6M/s  32.8M/s  32.6M/s
1MiB    33.2M/s  33.1M/s  33.1M/s  33.2M/s  33.2M/s  33.2M/s
512KiB  32.2M/s  32.2M/s  32.4M/s  32.1M/s  32.3M/s  32.3M/s
256KiB  30.8M/s  30.8M/s  30.8M/s  30.8M/s  30.7M/s  30.9M/s
128KiB  30.1M/s  30.3M/s  30.1M/s  30M/s    30.3M/s  30.3M/s
64KiB   32M/s    32M/s    32.1M/s  32M/s    32.1M/s  32M/s
32KiB   28.2M/s  28.2M/s  28.3M/s  28.3M/s  28.2M/s  28.6M/s
16KiB   21.4M/s  7.8M/s   6.07M/s  11.9M/s  8.91M/s  5.33M/s
8KiB    5.11M/s  6.85M/s  6.19M/s  5.73M/s  5.77M/s  6.38M/s
4KiB    4.97M/s  4.98M/s  5.06M/s  5.02M/s  4.96M/s  5.01M/s
[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --findfat --erasesize=$[4*1024*1024] --blocksize=4096 --fat-nr=8
4MiB    32.4M/s  32.6M/s  31.6M/s  32.7M/s  31.7M/s  32.7M/s  31.6M/s  32.8M/s
2MiB    31.5M/s  32.5M/s  31.5M/s  32.6M/s  31.4M/s  32.6M/s  31.5M/s  32.7M/s
1MiB    32M/s    33.1M/s  31.8M/s  33.2M/s  32M/s    33.2M/s  31.9M/s  33.1M/s
512KiB  31M/s    31.9M/s  30.7M/s  32M/s    30.9M/s  32.1M/s  31.1M/s  32.1M/s
256KiB  29.5M/s  30.4M/s  29.3M/s  30.4M/s  29.5M/s  30.4M/s  29.5M/s  30.4M/s
128KiB  28.2M/s  29.2M/s  28.3M/s  29.2M/s  18.7M/s  29.6M/s  28.6M/s  29.2M/s
64KiB   30.1M/s  30.9M/s  30M/s    31.1M/s  30M/s    30.9M/s  29.9M/s  30.9M/s
32KiB   25.7M/s  26.3M/s  27.8M/s  29.1M/s  25.9M/s  27.6M/s  28.4M/s  29.1M/s
16KiB   18.6M/s  19.2M/s  19.4M/s  18.8M/s  19M/s    19.8M/s  7.53M/s  7.39M/s
8KiB    6.86M/s  6.81M/s  4.37M/s  9.62M/s  5.33M/s  7.48M/s  5.01M/s  7.49M/s
4KiB    4.15M/s  5.67M/s  4.14M/s  5.68M/s  4.18M/s  5.67M/s  4.25M/s  5.73M/s
These results might be clearer in --random mode, but 24 MB of special area seems reasonable for a large USB device, especially if it was formatted with a small cluster size.
Given that the size of the drive is a multiple of 3 MB, I would suggest you do another test with --erasesize=$[3*1024*1024] --blocksize=$[48*1024] to see if that gives clearer results.
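Spelled out, that would be something along the lines of the --open-au runs above with the sizes swapped in (the --open-au-nr value here is only an example):

[andrew@mythdvr flashbench]$ sudo ./flashbench /dev/sdb --open-au --erasesize=$[3*1024*1024] --blocksize=$[48*1024] --open-au-nr=2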
Thanks for sharing the results so far!
Arnd