On Tue, Sep 7, 2021 at 4:55 AM AKASHI Takahiro takahiro.akashi@linaro.org wrote:
Hi,
I have not gone through all of your comments below yet, so just one comment:
On Mon, Sep 06, 2021 at 05:57:43PM -0700, Christopher Clark wrote:
On Thu, Sep 2, 2021 at 12:19 AM AKASHI Takahiro <takahiro.akashi@linaro.org> wrote:
(snip)
It appears that, on the FE side, at least three hypervisor calls (and data copying) need to be invoked for every request, right?
For a write, counting FE sendv ops:
1: the write data payload is sent via the "Argo ring for writes"
2: the descriptor is sent via a sync of the available/descriptor ring
-- is there a third one that I am missing?
In the picture, I can see:
a) Data transmitted by Argo sendv
b) Descriptor written after data sendv
c) VirtIO ring sync'd to back-end via separate sendv
Oops, (b) is not a hypervisor call, is it?
That's correct, it is not. The blue arrows in the diagram are not hypercalls; they are intended to show data movement or action in the flow of performing the operation, and (b) is a data write within the guest's address space into the descriptor ring.
(But I guess that you will have to make yet another call for the notification, since there is no QueueNotify config register?)
Reasoning about hypercalls necessary for data movement:
VirtIO transport drivers are responsible for instantiating virtqueues (setup_vq) and are able to populate the notify function pointer in the virtqueues that they supply. The virtio-argo transport driver can provide a suitable notify implementation that issues the Argo sendv hypercall(s) to transmit data from the guest frontend to the backend. By issuing the sendv at the time of the queue notification, rather than as each buffer is added to the virtqueue, the cost of the sendv hypercall can be amortized over multiple buffer additions to the virtqueue.
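To make that concrete, here is a rough sketch of how a virtio-argo transport might wire up its notify callback. Only the notify callback signature and the vring_create_virtqueue() call follow the existing Linux virtio transport pattern (as in virtio-mmio); the struct, the argo_sendv() wrapper, the message type and the payload handling are hypothetical and not taken from any existing driver.

  /*
   * Sketch only: a possible notify callback for a virtio-argo transport,
   * modeled on how virtio-mmio's vm_notify() handles QueueNotify.
   * argo_sendv() and struct virtio_argo_vq_info are hypothetical helpers;
   * the underlying operation would be the XEN_ARGO_OP_sendv hypercall.
   */
  #include <linux/types.h>
  #include <linux/virtio.h>
  #include <linux/virtio_ring.h>

  struct virtio_argo_vq_info {
          u16 backend_domid;   /* Argo address of the backend for this queue */
          u32 backend_aport;   /* (hypothetical per-virtqueue bookkeeping)   */
  };

  /* Hypothetical wrapper around the XEN_ARGO_OP_sendv hypercall. */
  int argo_sendv(u16 dst_domid, u32 dst_aport,
                 const void *buf, size_t len, u32 message_type);

  static bool virtio_argo_vq_notify(struct virtqueue *vq)
  {
          struct virtio_argo_vq_info *info = vq->priv;

          /*
           * Instead of writing a QueueNotify register, transmit the updated
           * descriptor/available ring state to the backend over Argo.  One
           * sendv here covers every buffer added since the last notification,
           * which is where the amortization comes from.
           */
          return argo_sendv(info->backend_domid, info->backend_aport,
                            NULL /* ring-sync payload */, 0,
                            0 /* message type, illustrative */) >= 0;
  }

  /*
   * In the transport's setup_vq() path the notify pointer is supplied when
   * the virtqueue is created, in the same way virtio-mmio does it, e.g.:
   *
   *   vq = vring_create_virtqueue(index, num, VIRTIO_ARGO_VRING_ALIGN, vdev,
   *                               true, true, ctx, virtio_argo_vq_notify,
   *                               callback, name);
   */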
I also understand that there has been some recent work in the Linaro Project Stratos on "Fat Virtqueues", where the data to be transmitted is included within an expanded virtqueue; this could further reduce the number of hypercalls required, since the data can be transmitted inline with the descriptors. References:
https://linaro.atlassian.net/wiki/spaces/STR/pages/25626313982/2021-01-21+Pr...
https://linaro.atlassian.net/browse/STR-25
As a result of the above, I think that a single hypercall could be sufficient for communicating data for multiple requests, and that a two-hypercall-per-request (worst case) upper bound could also be established.
Christopher
Thanks, -Takahiro Akashi
- Here are the design documents for building VirtIO-over-Argo, to support a hypervisor-agnostic frontend VirtIO transport driver using Argo.

The Development Plan to build VirtIO virtual device support over Argo transport:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Dev...
A design for using VirtIO over Argo, describing how VirtIO data structures and communication are handled over the Argo transport:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1348763698/VirtIO+Argo
Diagram (from the above document) showing how VirtIO rings are synchronized between domains without using shared memory:
https://openxt.atlassian.net/46e1c93b-2b87-4cb2-951e-abd4377a1194#media-blob...
Please note that the above design documents show that the existing VirtIO device drivers, and both the vring and virtqueue data structures, can be preserved, while interdomain communication is performed with no shared memory required for most drivers. (The exceptions, where further design is required, are those such as virtual framebuffer devices, where shared memory regions are intentionally added to the communication structure beyond the vrings and virtqueues.)
An analysis of VirtIO and Argo, informing the design:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1333428225/Analysis+of+Arg...
- Argo can be used as a communication path for configuration between the backend and the toolstack, avoiding the need for a dependency on XenStore, which is an advantage for any hypervisor-agnostic design. It is also amenable to a notification mechanism that is not based on Xen event channels.
- Argo does not use or require shared memory between VMs, and provides an alternative to the use of foreign shared memory mappings (see the sketch after this list for the guest's view of the interface). It avoids some of the complexities involved with using grants (e.g. XSA-300).
- Argo supports Mandatory Access Control by the hypervisor, satisfying a common certification requirement.
- The Argo headers are BSD-licensed, and the Xen hypervisor implementation is GPLv2 but accessible via the hypercall interface. The licensing should not present an obstacle to adoption of Argo in guest software or to implementation by other hypervisors.
- Since the interface that Argo presents to a guest VM is similar to DMA, a VirtIO-Argo frontend transport driver should be able to operate with a physical VirtIO-enabled smart-NIC, if the toolstack and an Argo-aware backend provide support.
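As a rough illustration of the interface referenced in the bullet on avoiding shared memory above: destination addressing is a (domain id, Argo port) pair, and the payload is described by an iov list that the hypervisor copies into the receiver's registered ring. The structure layouts below are paraphrased from memory of Xen's public argo.h, and the argo_sendv()/send_to_backend() wrappers are hypothetical; the authoritative ABI is the Xen header and the OpenXT linux-xen-argo driver.

  /*
   * Illustrative only: the approximate shape of an Argo sendv from a guest.
   * Field and constant names are paraphrased; consult Xen's public argo.h
   * and the OpenXT linux-xen-argo driver for the authoritative ABI.
   */
  #include <stdint.h>
  #include <stddef.h>

  typedef struct xen_argo_addr {
          uint32_t aport;      /* Argo port within the peer domain */
          uint16_t domain_id;  /* domid of the peer */
          uint16_t pad;
  } xen_argo_addr_t;

  typedef struct xen_argo_iov {
          uint64_t iov_hnd;    /* guest pointer to the buffer to transmit */
          uint32_t iov_len;
          uint32_t pad;
  } xen_argo_iov_t;

  typedef struct xen_argo_send_addr {
          xen_argo_addr_t src; /* the true source domid is enforced by Xen */
          xen_argo_addr_t dst;
  } xen_argo_send_addr_t;

  /*
   * Hypothetical wrapper: a frontend would issue HYPERVISOR_argo_op with
   * XEN_ARGO_OP_sendv, passing the (src, dst) addresses, an iov array, the
   * iov count and a message type.  The hypervisor copies the data into the
   * receiver's registered ring -- no memory is shared between the domains.
   */
  int argo_sendv(const xen_argo_send_addr_t *addrs,
                 const xen_argo_iov_t *iovs, unsigned int niov,
                 uint32_t message_type);

  /* Example use: push one buffer to a backend identified by (domid, aport). */
  static int send_to_backend(uint16_t backend_domid, uint32_t backend_aport,
                             const void *buf, uint32_t len)
  {
          xen_argo_send_addr_t addrs = {
                  .dst = { .aport = backend_aport, .domain_id = backend_domid },
                  /* .src left zeroed here; illustrative */
          };
          xen_argo_iov_t iov = {
                  .iov_hnd = (uint64_t)(uintptr_t)buf,
                  .iov_len = len,
          };
          return argo_sendv(&addrs, &iov, 1, 1 /* message type, illustrative */);
  }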
The next Xen Community Call is next week and I would be happy to answer questions about Argo and on this topic. I will also be following this thread.

Christopher
(Argo maintainer, Xen Community)
[1] An introduction to Argo:
https://static.sched.com/hosted_files/xensummit19/92/Argo%20and%20HMX%20-%20...
https://www.youtube.com/watch?v=cnC0Tg3jqJQ
Xen Wiki page for Argo:
https://wiki.xenproject.org/wiki/Argo:_Hypervisor-Mediated_Exchange_(HMX)_fo...
[2] OpenXT Linux Argo driver and userspace library:
https://github.com/openxt/linux-xen-argo
Windows V4V at OpenXT wiki:
https://openxt.atlassian.net/wiki/spaces/DC/pages/14844007/V4V
Windows v4v driver source:
https://github.com/OpenXT/xc-windows/tree/master/xenv4v
HP/Bromium uXen V4V driver:
https://github.com/uxen-virt/uxen/tree/ascara/windows/uxenv4vlib
[3] v2 of the Argo test unikernel for XTF:
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02234.html
[4] Argo HMX Transport for VirtIO meeting minutes:
https://lists.xenproject.org/archives/html/xen-devel/2021-02/msg01422.html
VirtIO-Argo Development wiki page:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Dev...