On Thu, Apr 18, 2019 at 11:36:03AM +0200, Paolo Bonzini wrote:
On 18/04/19 11:29, Ming Lei wrote:
On Thu, Apr 18, 2019 at 10:42:21AM +0200, Paolo Bonzini wrote:
On 18/04/19 04:19, Ming Lei wrote:
Hi Paolo,
On Wed, Apr 17, 2019 at 01:52:07PM +0200, Paolo Bonzini wrote:
Because bio_kmalloc uses inline iovecs, the limit on the number of entries is not BIO_MAX_PAGES but rather UIO_MAXIOV, which indeed is already checked in bio_kmalloc. This could cause SG_IO requests to be truncated and the HBA to report a DMA overrun.
BIO_MAX_PAGES only limits a single bio's maximum vector count; if one bio can't hold the whole user-space request, a new bio is allocated and appended to the passthrough request, as long as the queue limits aren't exceeded.
Stupid question: where? I don't see any place, starting at blk_rq_map_user_iov (and then __blk_rq_map_user_iov->bio_map_user_iov), that would allocate a second bio. The only bio_kmalloc in that path is the one I'm patching.
Each bio is created inside __blk_rq_map_user_iov(), which runs inside a loop, and each created bio is added to the request via blk_rq_append_bio(). See the following code:
Uff, I can't read apparently. :( This is the commit that introduced it:
commit 4d6af73d9e43f78651a43ee4c5ad221107ac8365
Author: Christoph Hellwig <hch@lst.de>
Date:   Wed Mar 2 18:07:14 2016 +0100

    block: support large requests in blk_rq_map_user_iov
Exactly, the above commit is what started building multiple bios for a single request.

Then I guess your issue is triggered on a kernel without that commit.
Thanks,
Ming