On Tue, Feb 18, 2025 at 11:29:59AM -0400, Jason Gunthorpe wrote:
On Fri, Jan 24, 2025 at 04:30:34PM -0800, Nicolin Chen wrote:
	list_add_tail(&vevent->node, &eventq->deliver);
	vevent->on_list = true;
	vevent->header.sequence = atomic_read(&veventq->sequence);
	if (atomic_read(&veventq->sequence) == INT_MAX)
		atomic_set(&veventq->sequence, 0);
	else
		atomic_inc(&veventq->sequence);
	spin_unlock(&eventq->lock);
This is all locked, we don't need veventq->sequence to be an atomic?
The bounding can be done with some simple math:

	veventq->sequence = (veventq->sequence + 1) & INT_MAX;
Ack. Perhaps we can reuse eventq->lock to fence @num_events too.
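As a minimal sketch of that (assuming veventq->sequence and num_events become plain integers that are only touched under eventq->lock; this only illustrates the suggestion, it is not the actual patch):

	/* Caller already holds eventq->lock around the list manipulation,
	 * so a plain int sequence is fully serialized; the same lock can
	 * cover num_events wherever it is updated. */
	vevent->header.sequence = veventq->sequence;
	veventq->sequence = (veventq->sequence + 1) & INT_MAX;
	spin_unlock(&eventq->lock);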
+static struct iommufd_vevent *
+iommufd_veventq_deliver_fetch(struct iommufd_veventq *veventq)
+{
+	struct iommufd_eventq *eventq = &veventq->common;
+	struct list_head *list = &eventq->deliver;
+	struct iommufd_vevent *vevent = NULL;
+
+	spin_lock(&eventq->lock);
+	if (!list_empty(list)) {
+		vevent = list_first_entry(list, struct iommufd_vevent, node);
+		list_del(&vevent->node);
+		vevent->on_list = false;
+	}
+	/* Make a copy of the overflow node for copy_to_user */
+	if (vevent == &veventq->overflow) {
+		vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
+		if (vevent)
+			memcpy(vevent, &veventq->overflow, sizeof(*vevent));
+	}
This error handling is wonky: if we can't allocate, then we shouldn't have done the list_del. Just return NULL, which will cause iommufd_veventq_fops_read() to exit, and userspace will try again.
OK.
We have two cases to support here:
1) Normal vevent node -- list_del and return the node.
2) Overflow node -- list_del and return a copy.
I think we can do:

	if (!list_empty(list)) {
		struct iommufd_vevent *next;

		next = list_first_entry(list, struct iommufd_vevent, node);
		if (next == &veventq->overflow) {
			/* Make a copy of the overflow node for copy_to_user */
			vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
			if (!vevent)
				goto out_unlock;
		}
		list_del(&next->node);
		if (vevent)
			memcpy(vevent, next, sizeof(*vevent));
		else
			vevent = next;
	}
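Folding that in, the whole fetch helper would look roughly like the sketch below (assuming the lock is still dropped and the node returned right after this block, as in the posted patch, with out_unlock being the label the goto above lands on; the on_list update from the original version is left out here, matching the snippet above):

	static struct iommufd_vevent *
	iommufd_veventq_deliver_fetch(struct iommufd_veventq *veventq)
	{
		struct iommufd_eventq *eventq = &veventq->common;
		struct list_head *list = &eventq->deliver;
		struct iommufd_vevent *vevent = NULL;

		spin_lock(&eventq->lock);
		if (!list_empty(list)) {
			struct iommufd_vevent *next;

			next = list_first_entry(list, struct iommufd_vevent, node);
			if (next == &veventq->overflow) {
				/* Make a copy of the overflow node for copy_to_user */
				vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
				if (!vevent)
					goto out_unlock;
			}
			list_del(&next->node);
			if (vevent)
				memcpy(vevent, next, sizeof(*vevent));
			else
				vevent = next;
		}
	out_unlock:
		spin_unlock(&eventq->lock);
		return vevent;
	}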
@@ -403,6 +531,10 @@ static int iommufd_eventq_fops_release(struct inode *inode, struct file *filep)
 {
 	struct iommufd_eventq *eventq = filep->private_data;
 
+	if (eventq->obj.type == IOMMUFD_OBJ_VEVENTQ) {
+		atomic_set(&eventq_to_veventq(eventq)->sequence, 0);
+		atomic_set(&eventq_to_veventq(eventq)->num_events, 0);
+	}
Why? We are about to free the memory?
Ack. I was thinking about a re-entry via open(), but release() loses the event fd completely, so user space wouldn't be able to open the same fd again.
+int iommufd_veventq_alloc(struct iommufd_ucmd *ucmd)
+{
+	struct iommu_veventq_alloc *cmd = ucmd->cmd;
+	struct iommufd_veventq *veventq;
+	struct iommufd_viommu *viommu;
+	int fdno;
+	int rc;
+
+	if (cmd->flags || cmd->type == IOMMU_VEVENTQ_TYPE_DEFAULT)
+		return -EOPNOTSUPP;
+	if (!cmd->veventq_depth)
+		return -EINVAL;
Check __reserved for 0 too
Kevin is suggesting a 32-bit flags field, so I think we can drop __reserved in that case.
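For illustration only (the field names are assumptions about the uAPI struct, not a final layout), the two options boil down to:

	/* Option 1 (sketch): keep __reserved and require it to be zero. */
	if (cmd->flags || cmd->__reserved)
		return -EOPNOTSUPP;

	/* Option 2 (sketch): __reserved is replaced by a 32-bit flags
	 * field, so rejecting any unknown flag is enough by itself. */
	if (cmd->flags)
		return -EOPNOTSUPP;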
Thanks
Nicolin