On Wed, Aug 4, 2021 at 9:05 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
On 03-08-21, 17:01, Arnd Bergmann wrote:
+static void virtio_gpio_irq_unmask(struct irq_data *d)
+{
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+	struct virtio_gpio *vgpio = gpiochip_get_data(gc);
+	struct vgpio_irq_line *irq_line = &vgpio->irq_lines[d->hwirq];
+
+	irq_line->masked = false;
+	irq_line->update_pending = true;
+}
Same here. unmask is probably less important, but it's the same operation: if you want interrupts to become active again when they are not, just queue the request.
We can't, because it's a slow bus? And unmask can be called from irq context. That's exactly why we had the irq_bus_lock/unlock discussion earlier.
I thought only 'mask' is slow, since that has to wait for the completion, but 'unmask' just involves sending the eventq request without having to wait for it.
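If that reading is right, the unmask side could in principle just hand the line's buffer back to the device and kick the queue, with nothing to wait for. A rough sketch of that idea, building on the patch's struct definitions (the per-line ires response buffer is assumed from the patch, and virtio_gpio_queue_event() is a made-up helper name):

/*
 * Sketch only, not the patch: re-arm one line by handing its buffer
 * back to the device on the eventq. There is no reply to wait for;
 * the "reply" is the next interrupt event itself.
 */
static void virtio_gpio_queue_event(struct virtio_gpio *vgpio,
				    struct vgpio_irq_line *irq_line)
{
	struct scatterlist sg;

	/* irq_line->ires: per-line response buffer, assumed from the patch */
	sg_init_one(&sg, &irq_line->ires, sizeof(irq_line->ires));
	virtqueue_add_inbuf(vgpio->event_vq, &sg, 1, irq_line, GFP_ATOMIC);

	/* Safe from atomic context: notify the device and return */
	virtqueue_kick(vgpio->event_vq);
}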
+static void vgpio_work_handler(struct work_struct *work)
+{
+	struct virtio_gpio *vgpio = container_of(work, struct virtio_gpio,
+						 work);
+	struct device *dev = &vgpio->vdev->dev;
+	struct vgpio_irq_line *irq_line;
+	int irq, gpio, ret;
+	unsigned int len;
+
+	mutex_lock(&vgpio->irq_lock);
+	while (true) {
+		irq_line = virtqueue_get_buf(vgpio->event_vq, &len);
+		if (!irq_line)
+			break;
Related to above, I think all the eventq handling should be moved into the virtio_gpio_event_vq() function directly.
You mean without scheduling a work item?
Yes.
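For reference, "directly" here would mean something along these lines (a rough sketch building on the patch's types; it deliberately skips the re-queuing and locking that the discussion below turns on):

/*
 * Sketch of handling the eventq in the virtqueue callback itself,
 * without a work item: pop the completed buffers and fire the
 * per-line handlers right there.
 */
static void virtio_gpio_event_vq(struct virtqueue *vq)
{
	struct virtio_gpio *vgpio = vq->vdev->priv;
	struct vgpio_irq_line *irq_line;
	unsigned int len;
	int irq;

	while ((irq_line = virtqueue_get_buf(vq, &len))) {
		/* hwirq is the line's index in the irq_lines array */
		irq = irq_find_mapping(vgpio->gc.irq.domain,
				       irq_line - vgpio->irq_lines);
		generic_handle_irq(irq);
	}
}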
+		/* The interrupt may have been disabled by now */
+		if (irq_line->update_pending && irq_line->masked)
+			update_irq_type(vgpio, gpio,
+					VIRTIO_GPIO_IRQ_TYPE_NONE);
This is a part I'm not sure about, and I suppose it's the same part that Marc was also confused by.
As far as I can tell, the update_irq_type() message would lead to the interrupt getting delivered when it was armed and is now getting disabled, but I don't see why we would call update_irq_type() as a result of the eventq notification.
Let me try to answer all the other questions together here.
The irq-related functions get called in two scenarios:

- request_irq() or irq_set_irq_type(), enable/disable_irq(), etc.:

  The call sequence here is like this:

  ->irq_bus_lock()
    ->spin-lock-irqsave
      ->irq_mask()/irq_unmask()/irq_set_type()..
    ->spin-unlock-irqsave
  ->irq_bus_unlock()
So the mask/unmask/set-type routines can't issue virtio requests themselves, and we need to do that from irq_bus_unlock(). That should answer your question about mask/unmask, right? Or maybe I misunderstood it?
I don't think it is correct that you cannot issue virtio requests from atomic context, only that you cannot wait for the reply.
For 'unmask', there is no waiting, since the reply is the actual IRQ event. For the others, the sequence makes sense.
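Putting scenario 1 together, the pattern being described maps onto the irq_chip bus-lock callbacks roughly like this (a sketch, not the actual patch; virtio_gpio_irq_update() is a made-up name for the blocking requestq round-trip, and irq_lock is the patch's mutex):

static void virtio_gpio_irq_bus_lock(struct irq_data *d)
{
	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
	struct virtio_gpio *vgpio = gpiochip_get_data(gc);

	mutex_lock(&vgpio->irq_lock);
}

static void virtio_gpio_irq_bus_sync_unlock(struct irq_data *d)
{
	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
	struct virtio_gpio *vgpio = gpiochip_get_data(gc);
	struct vgpio_irq_line *irq_line = &vgpio->irq_lines[d->hwirq];

	/* Deferred from irq_mask()/irq_unmask()/irq_set_type() */
	if (irq_line->update_pending) {
		irq_line->update_pending = false;
		/* May sleep: issues the requestq transfer and waits */
		virtio_gpio_irq_update(vgpio, d->hwirq, irq_line);
	}

	mutex_unlock(&vgpio->irq_lock);
}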
- Interrupt, i.e. a buffer sent back by the host over virtio:
virtio_gpio_event_vq() schedules a work item, which processes the items from the eventq virtqueue and eventually calls generic_handle_irq(). The irq core can issue calls to ->irq_mask/unmask() here without a prior call to irq_bus_lock/unlock(); normally they will balance out by the end, but I am not sure that is guaranteed. Moreover, the interrupt should be re-enabled only after unmask() is called (for ONESHOT) and not at EOI, right?
I chose not to queue the buffers back from eoi(), as it is possible that we won't want to queue them at all, e.g. if the interrupt needs to be disabled by the time generic_handle_irq() returns. So I did everything at the end of vgpio_work_handler()'s loop, i.e. either disable the interrupt with VIRTIO_GPIO_IRQ_TYPE_NONE or enable it again by re-queuing the buffer.
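Concretely, the end of that loop makes a per-line choice along these lines (a sketch again; vgpio_finish_event() is a made-up name, and virtio_gpio_queue_event() is the hypothetical re-arm helper from earlier):

/*
 * Sketch of the end-of-loop decision described above, called for
 * each event after generic_handle_irq() returns.
 */
static void vgpio_finish_event(struct virtio_gpio *vgpio,
			       struct vgpio_irq_line *irq_line, int gpio)
{
	if (irq_line->update_pending && irq_line->masked) {
		/* The handler disabled the line: tell the device */
		update_irq_type(vgpio, gpio, VIRTIO_GPIO_IRQ_TYPE_NONE);
	} else {
		/* Common case: re-queue the buffer to re-arm the line */
		virtio_gpio_queue_event(vgpio, irq_line);
	}
}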
Regarding irq handling via a work item: I had to move to that to take care of the locking around re-queuing a GPIO line's buffers from both the irq handler and the bus-unlock path. Nothing else seemed to work, though I am continuing to look for an alternative here.
I don't think it makes sense to optimize for the rare case that the irq handler disables the irq, when that makes the common case (irq remains unmasked and enabled) much slower.
Arnd