Because bio_kmalloc uses inline iovecs, the limit on the number of entries is not BIO_MAX_PAGES but rather UIO_MAXIOV, a limit that bio_kmalloc already enforces. Capping iov_iter_npages at BIO_MAX_PAGES could therefore cause SG_IO requests to be silently truncated, with the HBA reporting a DMA overrun.
Note that if the argument to iov_iter_npages were changed to UIO_MAXIOV, we would still silently truncate SG_IO requests larger than UIO_MAXIOV pages. Changing it to UIO_MAXIOV + 1 instead ensures that bio_kmalloc notices that the request is too big and rejects it.
Cc: stable@vger.kernel.org
Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: b282cc766958 ("bio_map_user_iov(): get rid of the iov_for_each()", 2017-10-11)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
v1->v2: now with "git commit"
---
 block/bio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index 4db1008309ed..0914ae4adae9 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1299,7 +1299,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	if (!iov_iter_count(iter))
 		return ERR_PTR(-EINVAL);
 
-	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, BIO_MAX_PAGES));
+	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, UIO_MAXIOV + 1));
 	if (!bio)
 		return ERR_PTR(-ENOMEM);