On 29-09-23, 07:46, Juergen Gross wrote:
> On 29.08.23 14:29, Viresh Kumar wrote:
>> +static irqreturn_t ioeventfd_interrupt(int irq, void *dev_id)
>> +{
>> +        struct ioreq_port *port = dev_id;
>> +        struct privcmd_kernel_ioreq *kioreq = port->kioreq;
>> +        struct ioreq *ioreq = &kioreq->ioreq[port->vcpu];
>> +        struct privcmd_kernel_ioeventfd *kioeventfd;
>> +        unsigned int state = STATE_IOREQ_READY;
>> +
>> +        if (ioreq->state != STATE_IOREQ_READY ||
>> +            ioreq->type != IOREQ_TYPE_COPY || ioreq->dir != IOREQ_WRITE)
>> +                return IRQ_NONE;
>> +
>> +        smp_mb();
>> +
>> +        ioreq->state = STATE_IOREQ_INPROCESS;
>> +
>> +        mutex_lock(&kioreq->lock);
>> +        list_for_each_entry(kioeventfd, &kioreq->ioeventfds, list) {
>> +                if (ioreq->addr == kioeventfd->addr + VIRTIO_MMIO_QUEUE_NOTIFY &&
>> +                    ioreq->size == kioeventfd->addr_len &&
>> +                    (ioreq->data & QUEUE_NOTIFY_VQ_MASK) == kioeventfd->vq) {
>> +                        eventfd_signal(kioeventfd->eventfd, 1);
>> +                        state = STATE_IORESP_READY;
>> +                        break;
>> +                }
>> +        }
>> +        mutex_unlock(&kioreq->lock);
>> +
>> +        smp_mb();
> Is this really needed after calling mutex_unlock()? I think you are trying to avoid any accesses going past the ioreq->state modification. If so, add a comment (either explaining why you need the barrier, or noting that you don't need it due to the unlock).
Right, I want all the writes to finish before updating the state.
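Something like this is what I had in mind for the code that follows the quoted hunk (a sketch only; the exact comment wording may change in the next version):

        /*
         * Make sure the reads of the ioreq fields and the eventfd_signal()
         * above have completed before the emulator can observe the state
         * change below.
         */
        smp_mb();

        ioreq->state = state;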
> In general, shouldn't the state be checked and modified in the locked area?
The handler runs separately for each vcpu and shouldn't run in parallel for the same vcpu, so only one thread should ever be accessing a given vcpu's ioreq port structure.

The lock is there to protect the ioeventfds list (as mentioned in the struct declaration) against parallel access, since threads for different vcpus may end up accessing it simultaneously.
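For reference, the relevant part of the structure looks roughly like this (simplified; only the fields discussed above are shown, see the patch for the full declaration):

        struct privcmd_kernel_ioreq {
                struct ioreq *ioreq;            /* Per-vcpu ioreq slots, indexed by vcpu */
                struct mutex lock;              /* Protects the ioeventfds list */
                struct list_head ioeventfds;
                /* other fields omitted */
        };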