Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci. I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.
Thoughts?
Hi Alex,
Replies inline below.
On Wed, 15 Jul 2020, Alex Bennée wrote:
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
Handling virtual PCI in Xen (and I imagine other type-1s) should not be particularly problematic. That said, there are still valid reasons for virtio-mmio such as startup times, etc. as you mentioned. I don't know what's best (also see last part of the email).
With regard to virtual PCI, keep in mind that it is also necessary to get PCI passthrough working. You should know that Bertrand Marquis from ARM is actively working on that as we speak. The idea is to emulate the PCI config space in the hypervisor itself.
Reads/writes to specific PCI BDFs could also be forwarded to dom0 userspace so that an application (e.g. QEMU) could emulate a given PCI device. The infrastructure to do that is called "ioreq" in Xen. The ioreq infrastructure can also be used for non-PCI MMIO reads/writes.
As you have probably guessed, getting ioreq to work on ARM (today it is x86-only) is one of the very first things to do to get virtio working with Xen.
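To make the flow concrete, here is a rough sketch in C of the dispatch idea (purely illustrative: none of these names are real Xen symbols, and the real logic lives in Xen's vPCI and ioreq code):

/*
 * Illustrative sketch of the dispatch described above. All names here are
 * made up; this is not Xen code.
 */
#include <stdbool.h>
#include <stdint.h>

struct cfg_access {
	uint32_t sbdf;   /* segment/bus/device/function being accessed */
	uint32_t reg;    /* offset into the config space */
	uint32_t size;   /* 1, 2 or 4 bytes */
	bool write;
	uint32_t data;
};

/* Stub: does the hypervisor itself own this device's config space? */
static bool bdf_emulated_in_hypervisor(uint32_t sbdf)
{
	return sbdf == 0x0008;  /* e.g. a passthrough device at 00:01.0 */
}

/* Stub: complete the access inside the hypervisor (vPCI-style). */
static void emulate_in_hypervisor(struct cfg_access *acc)
{
	acc->data = 0;
}

/*
 * Stub: queue the access on a shared ring, pause the vCPU, and notify the
 * dom0 device model (e.g. QEMU) through an event channel. The same path
 * can carry plain MMIO reads/writes, which is what virtio-mmio needs.
 */
static void forward_to_ioreq_server(struct cfg_access *acc)
{
	(void)acc;
}

void handle_cfg_trap(struct cfg_access *acc)
{
	if (bdf_emulated_in_hypervisor(acc->sbdf))
		emulate_in_hypervisor(acc);
	else
		forward_to_ioreq_server(acc);
}

Either way the guest pays for the trap; forwarding adds a round trip to dom0 userspace on top.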
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci. I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.
Given that virtio support (PCI and/or MMIO) is currently missing in upstream Xen on ARM, before raising a card I suggest taking a look at the existing patches for Xen to get a clearer understanding. It could be that there are in fact some virtio-mmio or virtio-pci specific issues with type-1s that we don't yet know about, but they could become clearer by looking at that implementation.
Do you want me to put you in contact with the guys currently working on the virtio patches for Xen?
Stefano Stabellini stefano.stabellini@xilinx.com writes:
Hi Alex,
Replies inline below.
On Wed, 15 Jul 2020, Alex Bennée wrote:
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
Handling virtual PCI in Xen (and I imagine other type-1s) should not be particularly problematic. That said, there are still valid reasons for virtio-mmio such as startup times, etc. as you mentioned. I don't know what's best (also see last part of the email).
With regard to virtual PCI, keep in mind that it is also necessary to get PCI passthrough working. You should know that Bertrand Marquis from ARM is actively working on that as we speak. The idea is to emulate the PCI config space in the hypervisor itself.
Is this to avoid too many context switches between Dom0<-Xen->Guest on each PCI transaction?
Reads/writes to specific PCI BDFs could also be forwarded to dom0 userspace so that an application (e.g. QEMU) could emulate a given PCI device. The infrastructure to do that is called "ioreq" in Xen. The ioreq infrastructure can also be used for non-PCI MMIO reads/writes.
As you have probably guessed, getting ioreq to work on ARM (today it is x86-only) is one of the very first things to do to get virtio working with Xen.
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci. I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.
Given that virtio support (PCI and/or MMIO) is currently missing in upstream Xen on ARM, before raising a card I suggest taking a look at the existing patches for Xen to get a clearer understanding. It could be that there are in fact some virtio-mmio or virtio-pci specific issues with type-1s that we don't yet know about, but they could become clearer by looking at that implementation.
Do you want me to put you in contact with the guys currently working on the virtio patches for Xen?
Please do.
On Thu, 16 Jul 2020 at 09:55, Alex Bennée alex.bennee@linaro.org wrote:
Stefano Stabellini stefano.stabellini@xilinx.com writes:
Hi Alex,
Replies inline below.
On Wed, 15 Jul 2020, Alex Bennée wrote:
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
Handling virtual PCI in Xen (and I imagine other type-1s) should not be particularly problematic. That said, there are still valid reasons for virtio-mmio such as startup times, etc. as you mentioned. I don't know what's best (also see last part of the email).
With regard to virtual PCI, keep in mind that it is also necessary to get PCI passthrough working. You should know that Bertrand Marquis from ARM is actively working on that as we speak. The idea is to emulate the PCI config space in the hypervisor itself.
Is this to avoid too many context switches between Dom0<-Xen->Guest on each PCI transaction?
Is there a distinction between emulation and passthrough in that work? If you think of DPDK handling 100 Gbps cards, any hypervisor hop to handle doorbells that are implemented as PCI registers would be a killer.
Reads/writes to specific PCI BDFs could also be forwarded to dom0 userspace so that an application (e.g. QEMU) could emulate a given PCI device. The infrastructure to do that is called "ioreq" in Xen. The ioreq infrastructure can also be used for non-PCI MMIO reads/writes.
As you have probably guessed, getting ioreq to work on ARM (today it is x86-only) is one of the very first things to do to get virtio working with Xen.
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci. I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.
Given that virtio support (PCI and/or MMIO) is currently missing in upstream Xen on ARM, before raising a card I suggest taking a look at the existing patches for Xen to get a clearer understanding. It could be that there are in fact some virtio-mmio or virtio-pci specific issues with type-1s that we don't yet know about, but they could become clearer by looking at that implementation.
Do you want me to put you in contact with the guys currently working on the virtio patches for Xen?
Please do.
-- Alex Bennée
On Thu, 16 Jul 2020, François Ozog wrote:
On Thu, 16 Jul 2020 at 09:55, Alex Bennée alex.bennee@linaro.org wrote:
Stefano Stabellini stefano.stabellini@xilinx.com writes:
Hi Alex,
Replies inline below.
On Wed, 15 Jul 2020, Alex Bennée wrote:
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
Handling virtual PCI in Xen (and I imagine other type-1s) should not be particularly problematic. That said, there are still valid reasons for virtio-mmio such as startup times, etc. as you mentioned. I don't know what's best (also see last part of the email).
With regard to virtual PCI, keep in mind that it is also necessary to get PCI passthrough working. You should know that Bertrand Marquis from ARM is actively working on that as we speak. The idea is to emulate the PCI config space in the hypervisor itself.
Is this to avoid too many context switches between Dom0<-Xen->Guest on each PCI transaction?
Is there a distinction between emulation and passthrough in that work? If you think of DPDK handling 100 Gbps cards, any hypervisor hop to handle doorbells that are implemented as PCI registers would be a killer.
Yeah, emulation in dom0 userspace is slow. The other reason is that this way you can have multiple independent dom0 userspace applications to emulate different devices. Very useful in cases such as vGPUs.
Reads/writes to specific PCI BDFs could also be forwarded to dom0 userspace so that an application (e.g. QEMU) could emulate a given PCI device. The infrastructure to do that is called "ioreq" in Xen. The ioreq infrastructure can also be used for non-PCI MMIO reads/writes.
As you have probably guessed, getting ioreq to work on ARM (today it is x86-only) is one of the very first things to do to get virtio working with Xen.
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci. I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.
Given that virtio support (PCI and/or MMIO) is currently missing in upstream Xen on ARM, before raising a card I suggest taking a look at the existing patches for Xen to get a clearer understanding. It could be that there are in fact some virtio-mmio or virtio-pci specific issues with type-1s that we don't yet know about, but they could become clearer by looking at that implementation.
Do you want me to put you in contact with the guys currently working on the virtio patches for Xen?
Please do.
Done.
On Wed, Jul 15, 2020 at 06:32:28PM +0100, Alex Bennée wrote:
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
I think you'd only need to add an msi-parent property to the virtio-mmio node; the rest should be configurable through virtio config space.
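As a sketch only (the VIRTIO_MMIO_MSI_* register offsets below are placeholders I made up, not the ones from the actual proposal), the Linux front-end could then lean on the generic platform-MSI machinery, with a callback that tells the device where and what to write for a vector:

/*
 * Sketch for drivers/virtio/virtio_mmio.c; register offsets are placeholders.
 */
#include <linux/io.h>
#include <linux/msi.h>
#include <linux/platform_device.h>

#define VIRTIO_MMIO_MSI_ADDR_LOW   0x140  /* placeholder */
#define VIRTIO_MMIO_MSI_ADDR_HIGH  0x144  /* placeholder */
#define VIRTIO_MMIO_MSI_DATA       0x148  /* placeholder */

/* Called by the MSI core with the address/data the device should write. */
static void vm_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
	struct virtio_mmio_device *vm_dev =
		dev_get_drvdata(msi_desc_to_dev(desc));

	/* A real implementation would also select which vector this is for. */
	writel(msg->address_lo, vm_dev->base + VIRTIO_MMIO_MSI_ADDR_LOW);
	writel(msg->address_hi, vm_dev->base + VIRTIO_MMIO_MSI_ADDR_HIGH);
	writel(msg->data,       vm_dev->base + VIRTIO_MMIO_MSI_DATA);
}

static int vm_setup_msi(struct platform_device *pdev, unsigned int nvec)
{
	/*
	 * Only works when the DT node has an msi-parent pointing at an MSI
	 * controller (e.g. a GICv3 ITS); otherwise keep using the single
	 * wired interrupt as today.
	 */
	return platform_msi_domain_alloc_irqs(&pdev->dev, nvec,
					      vm_write_msi_msg);
}

The per-queue vector assignment and the fallback path would still need to be spelled out in the spec update.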
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci.
That was one concern with the latest proposal (Feb 2020): that adding features such as MSI to virtio-mmio could make it as complex as virtio-pci, without the advantage of sharing core code with other device drivers. https://lwn.net/ml/linux-kernel/20200211053953-mutt-send-email-mst@kernel.or...
Implementing even a basic conventional PCI device is more complicated than an MMIO device, but then adding features (MSI, hotplug, virtual functions, PASID, etc) becomes a lot simpler since they are already standardized and supported by guests.
Thanks, Jean
I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.
On Wed, Jul 15, 2020 at 06:32:28PM +0100, Alex Bennée wrote:
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is treated as the legacy interface, with most of the action to date being directed at virtio-pci, which has a number of advantages, including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative transports/devices such as IVSHMEMv2 being based around a PCI device model.
There are detractors, however; most notably, the new microvm machine types have chosen virtio-mmio's static assignment because it allows for a faster spin-up of VMs, as you don't need to deal with the settle times specified in the spec.
Azzedine mentioned in a previous call that the PCI device model adds complications based on assumptions about real HW which might not be relevant to virtual devices. Others have mentioned the increased complexity of the code base needed to handle a full PCI device.
I'm curious as to what the implications are of supporting PCI in type-1 hypervisors. Does the desire to avoid handling all the details of PCI within the small hypervisor code base, and to punt the details to Dom0, cause more vmexits?
As Srivatsa mentioned, there have been some attempts to update the virtio-mmio spec to support MSI IRQs, which I think are the best way to minimise unwanted context switches. So should we raise a card to help with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
I think you'd only need to add an msi-parent property to the virtio-mmio node; the rest should be configurable through virtio config space.
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
Of course one worry would be that by adding all this additional functionality to virtio-mmio you start to approach the complexity of virtio-pci.
That was one concern with the latest proposal (Feb 2020): that adding features such as MSI to virtio-mmio could make it as complex as virtio-pci, without the advantage of sharing core code with other device drivers. https://lwn.net/ml/linux-kernel/20200211053953-mutt-send-email-mst@kernel.or...
We have been using MMIO given its simplicity and also want the performance improvements offered by the proposed MSI extensions. I don't see the need for other fancy features like hotplug/error_reporting infrastructure that PCI provides, so I feel MMIO + MSI is still a useful and simpler option to provide for platform designers. We would support any efforts to make this possible upstream.
Implementing even a basic conventional PCI device is more complicated than an MMIO device, but then adding features (MSI, hotplug, virtual functions, PASID, etc) becomes a lot simpler since they are already standardized and supported by guests.
Thanks, Jean
I don't know if this is a real worry but I suspect people using virtio-mmio for speed won't want any optional enhancements to slow it down.