On Wed, May 07, 2025 at 03:49:38PM -0600, Uday Shankar wrote:
We currently have a helper ublk_queue_alloc_sqes which the ublk targets use to allocate SQEs for their own operations. However, as we move towards decoupled ublk_queues and ublk server threads, this helper does not make sense anymore. SQEs are allocated from rings, and we will have one ring per thread to avoid locking.

Change the SQE allocation helper to ublk_io_alloc_sqes. Currently this still allocates SQEs from the io's queue's ring, but when we fully decouple threads and queues, it will allocate from the io's thread's ring instead.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
 tools/testing/selftests/ublk/fault_inject.c |  2 +-
 tools/testing/selftests/ublk/file_backed.c  |  6 +++---
 tools/testing/selftests/ublk/kublk.c        |  3 ++-
 tools/testing/selftests/ublk/kublk.h        | 11 +++++++----
 tools/testing/selftests/ublk/null.c         |  2 +-
 tools/testing/selftests/ublk/stripe.c       |  4 ++--
 6 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/ublk/fault_inject.c b/tools/testing/selftests/ublk/fault_inject.c
index 6bc8ee519b483ba6a365dccb03ad389425eefd3b..101c6dad6cf1f6dd45bbc46baa793493b97646bf 100644
--- a/tools/testing/selftests/ublk/fault_inject.c
+++ b/tools/testing/selftests/ublk/fault_inject.c
@@ -41,7 +41,7 @@ static int ublk_fault_inject_queue_io(struct ublk_queue *q, int tag)
 		.tv_nsec = (long long)q->dev->private_data,
 	};
 
-	ublk_queue_alloc_sqes(q, &sqe, 1);
+	ublk_io_alloc_sqes(ublk_get_io(q, tag), &sqe, 1);
 	io_uring_prep_timeout(sqe, &ts, 1, 0);
 	sqe->user_data = build_user_data(tag, ublksrv_get_op(iod), 0, q->q_id, 1);
diff --git a/tools/testing/selftests/ublk/file_backed.c b/tools/testing/selftests/ublk/file_backed.c
index 69991ac7a0a947acba7b23ac89348936a3fcef75..563f11a21604bbf5b9531f69f806d09cdd785960 100644
--- a/tools/testing/selftests/ublk/file_backed.c
+++ b/tools/testing/selftests/ublk/file_backed.c
@@ -18,7 +18,7 @@ static int loop_queue_flush_io(struct ublk_queue *q, const struct ublksrv_io_des
 	unsigned ublk_op = ublksrv_get_op(iod);
 	struct io_uring_sqe *sqe[1];
 
-	ublk_queue_alloc_sqes(q, sqe, 1);
+	ublk_io_alloc_sqes(ublk_get_io(q, tag), sqe, 1);
 	io_uring_prep_fsync(sqe[0], 1 /*fds[1]*/, IORING_FSYNC_DATASYNC);
 	io_uring_sqe_set_flags(sqe[0], IOSQE_FIXED_FILE);
 	/* bit63 marks us as tgt io */
@@ -34,7 +34,7 @@ static int loop_queue_tgt_rw_io(struct ublk_queue *q, const struct ublksrv_io_de
 	struct io_uring_sqe *sqe[3];
 
 	if (!zc) {
-		ublk_queue_alloc_sqes(q, sqe, 1);
+		ublk_io_alloc_sqes(ublk_get_io(q, tag), sqe, 1);
 		if (!sqe[0])
 			return -ENOMEM;
@@ -48,7 +48,7 @@ static int loop_queue_tgt_rw_io(struct ublk_queue *q, const struct ublksrv_io_de
 		return 1;
 	}
 
-	ublk_queue_alloc_sqes(q, sqe, 3);
+	ublk_io_alloc_sqes(ublk_get_io(q, tag), sqe, 3);
 
 	io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, tag);
 	sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK;
 
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index d0eaf06fadbbb00c0549bba0a08f1be23baa2359..7b3af98546803134dd7f959c40408cefda7cd45c 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -439,6 +439,7 @@ static int ublk_queue_init(struct ublk_queue *q)
 	for (i = 0; i < q->q_depth; i++) {
 		q->ios[i].buf_addr = NULL;
 		q->ios[i].flags = UBLKSRV_NEED_FETCH_RQ | UBLKSRV_IO_FREE;
+		q->ios[i].q = q;
 
 		if (q->state & UBLKSRV_NO_BUF)
 			continue;
@@ -554,7 +555,7 @@ int ublk_queue_io_cmd(struct ublk_queue *q, struct ublk_io *io, unsigned tag)
 	if (io_uring_sq_space_left(&q->ring) < 1)
 		io_uring_submit(&q->ring);
 
-	ublk_queue_alloc_sqes(q, sqe, 1);
+	ublk_io_alloc_sqes(ublk_get_io(q, tag), sqe, 1);
 	if (!sqe[0]) {
 		ublk_err("%s: run out of sqe %d, tag %d\n",
 				__func__, q->q_id, tag);
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index 34f92bb2c64d0ddc7690b2654613e0c77b2b8121..7c912116606429215af7dbc2a8ce6b40ef89bfbd 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -119,6 +119,8 @@ struct ublk_io {
 	unsigned short flags;
 	unsigned short refs;	/* used by target code only */
 
+	struct ublk_queue *q;
+
 	int result;
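
(Aside, since the helper body itself is not in the quoted hunks: a minimal sketch of what ublk_io_alloc_sqes presumably looks like at this point, still drawing SQEs from the io's queue's ring via the new io->q back-pointer; the exact body in kublk.h may differ:)

static inline int ublk_io_alloc_sqes(struct ublk_io *io,
		struct io_uring_sqe *sqes[], int nr_sqes)
{
	/* for now the io's queue's ring; later the io's thread's ring */
	struct io_uring *ring = &io->q->ring;
	unsigned left = io_uring_sq_space_left(ring);
	int i;

	if (left < nr_sqes)
		io_uring_submit(ring);

	for (i = 0; i < nr_sqes; i++) {
		sqes[i] = io_uring_get_sqe(ring);
		if (!sqes[i])
			return i;
	}
	return nr_sqes;
}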
In the following patch you add 'tag' to 'struct ublk_io', so the 8-byte 'struct ublk_queue *q' needn't be added here, because the queue can be figured out via container_of():
- queue->ios can be calculated from the 'io' address and its tag.

- please add one helper ublk_io_to_queue(io) for it; a rough sketch follows below.
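
Something like this sketch would do (assuming 'ios' stays an array member of struct ublk_queue, the 'tag' field from your following patch is available in struct ublk_io, and a userspace container_of() is in scope; the exact form is up to you):

static inline struct ublk_queue *ublk_io_to_queue(struct ublk_io *io)
{
	/* io points at q->ios[io->tag]; step back to ios[0], then
	 * recover the enclosing ublk_queue */
	return container_of(io - io->tag, struct ublk_queue, ios[0]);
}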
Also please try to avoid adding a hole in 'ublk_io'.
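
To illustrate the hole concern (layout only, 64-bit, reduced to the fields quoted above; whichever fields end up staying): an 8-byte member placed right after the two shorts forces 4 bytes of padding, while grouping the 4-byte 'result' with the shorts avoids it:

struct ublk_queue;

struct ublk_io_layout_example {
	unsigned short flags;	/* offset 0 */
	unsigned short refs;	/* offset 2 */
	int result;		/* offset 4 */
	struct ublk_queue *q;	/* offset 8, no hole */
	/* ... remaining ublk_io fields ... */
};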
Otherwise, this patch looks fine.
Thanks,
Ming