AKASHI Takahiro takahiro.akashi@linaro.org writes:
Alex,
On Wed, Sep 01, 2021 at 01:53:34PM +0100, Alex Bennée wrote:
Stefan Hajnoczi stefanha@redhat.com writes:
On Wed, Aug 04, 2021 at 12:20:01PM -0700, Stefano Stabellini wrote:
Could we consider the kernel internally converting IOREQ messages from the Xen hypervisor to eventfd events? Would this scale with other kernel hypercall interfaces?
So any thoughts on what directions are worth experimenting with?
One option we should consider is for each backend to connect to Xen via the IOREQ interface. We could generalize the IOREQ interface and make it hypervisor agnostic. The interface is really trivial and easy to add. The only Xen-specific part is the notification mechanism, which is an event channel. If we replaced the event channel with something else the interface would be generic. See: https://gitlab.com/xen-project/xen/-/blob/staging/xen/include/public/hvm/ior...
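For reference, the guest-visible part really is just one small structure, one slot per vCPU on the shared page. The following is a from-memory paraphrase of struct ioreq; field names and widths should be checked against the ioreq.h linked above rather than taken from here:

  #include <stdint.h>

  /* Rough paraphrase of struct ioreq from ioreq.h (check the header). */
  struct ioreq {
      uint64_t addr;            /* guest physical address of the access     */
      uint64_t data;            /* data to write, or where the read lands   */
      uint32_t count;           /* repeat count for rep/string accesses     */
      uint32_t size;            /* access width in bytes                    */
      uint32_t vp_eport;        /* event channel used for the notification  */
      uint16_t _pad0;
      uint8_t  state:4;         /* IOREQ_READY / INPROCESS / IORESP_READY   */
      uint8_t  data_is_ptr:1;   /* data is a guest paddr, not an immediate  */
      uint8_t  dir:1;           /* 1 = read, 0 = write                      */
      uint8_t  df:1;
      uint8_t  _pad1:1;
      uint8_t  type;            /* PIO, MMIO copy, etc.                     */
  };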
There have been experiments with something kind of similar in KVM recently (see struct ioregionfd_cmd): https://lore.kernel.org/kvm/dad3d025bcf15ece11d9df0ff685e8ab0a4f2edd.1613828...
Reading the cover letter was very useful in showing how this provides a separate channel for signalling I/O events to userspace instead of going through the normal type-2 vmexit event. I wonder how deeply tied the userspace-facing side of this is to KVM? Could it provide a common FD-type interface to IOREQ?
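(For anyone who hasn't read that series: the userspace-facing side is a plain file descriptor; KVM writes a small command record per access and the emulator writes a response record back for reads. From memory of the RFC, and so only an approximation of what the final ABI would look like, the command record is something like:

  #include <linux/types.h>

  /* Approximate wire format from the ioregionfd RFC; may have changed since. */
  struct ioregionfd_cmd {
      __u32 info;        /* packs the command (read/write), access size
                            and a "response required" flag                  */
      __u32 padding;
      __u64 user_data;   /* opaque cookie registered with the region        */
      __u64 offset;      /* offset of the access within the region          */
      __u64 data;        /* value being written, for write commands         */
  };

Reads are answered by writing a separate small response record back to the same fd.)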
Why do you stick to a "FD" type interface?
I mean most user space interfaces on POSIX start with a file descriptor and the usual read/write semantics or a series of ioctls.
As I understand IOREQ, this is currently direct communication between userspace and the hypervisor using the existing Xen message bus.
With an IOREQ server, the occurrence of an I/O event is notified to the BE via Xen's event channel, while the actual context of the event (see struct ioreq in ioreq.h) is placed in a queue on a single shared memory page, which has to be mapped beforehand with the xenforeignmemory_map_resource call.
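(To make that concrete, a BE today does roughly the following with libxenforeignmemory/libxenevtchn; headers and prototypes are quoted from memory and error handling and memory barriers are omitted, so treat this as a sketch rather than copy-paste code.)

  #include <sys/mman.h>
  #include <xenforeignmemory.h>
  #include <xenevtchn.h>
  #include <xen/memory.h>
  #include <xen/hvm/ioreq.h>

  /* Map the IOREQ server's shared page and service events for vCPU 0. */
  static void serve_ioreqs(domid_t domid, ioservid_t srv_id, evtchn_port_t rport)
  {
      xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
      xenevtchn_handle *xce = xenevtchn_open(NULL, 0);
      void *page = NULL;

      /* The shared page holding the struct ioreq slots, one per vCPU. */
      xenforeignmemory_resource_handle *fres =
          xenforeignmemory_map_resource(fmem, domid,
                                        XENMEM_resource_ioreq_server, srv_id,
                                        0 /* frame */, 1 /* nr_frames */,
                                        &page, PROT_READ | PROT_WRITE, 0);
      (void)fres;

      struct shared_iopage *iopage = page;
      evtchn_port_t port = xenevtchn_bind_interdomain(xce, domid, rport);

      for (;;) {
          /* Block until the hypervisor signals a pending I/O request. */
          evtchn_port_t p = xenevtchn_pending(xce);
          xenevtchn_unmask(xce, p);

          struct ioreq *req = &iopage->vcpu_ioreq[0];
          if (req->state != STATE_IOREQ_READY)
              continue;

          /* ... decode req->addr/size/dir and emulate the access ... */

          req->state = STATE_IORESP_READY;
          xenevtchn_notify(xce, port);   /* tell Xen the response is ready */
      }
  }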
If we abstracted IOREQ behind a kernel interface you would probably just want to put the ioreq structure on a queue rather than expose the shared page to userspace.
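Something like the following is what I imagine that would look like from userspace; the /dev/ioreq node, the IOREQ_ATTACH ioctl and the attach structure are all invented here purely for illustration, not an existing interface:

  #include <stdint.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <xen/hvm/ioreq.h>     /* for struct ioreq */

  /* Hypothetical: ask the kernel to attach us to a guest's IOREQ stream. */
  struct ioreq_attach { uint32_t domid; uint32_t flags; };
  #define IOREQ_ATTACH _IOW('I', 0, struct ioreq_attach)

  int main(void)
  {
      int fd = open("/dev/ioreq", O_RDWR);        /* hypothetical device    */
      struct ioreq_attach att = { .domid = 1 };
      ioctl(fd, IOREQ_ATTACH, &att);

      struct ioreq req;
      while (read(fd, &req, sizeof(req)) == sizeof(req)) {
          /* ... emulate the access described by req ... */
          write(fd, &req, sizeof(req));           /* post the completion    */
      }
      return 0;
  }

That would keep the normal read/write semantics and leave the hypervisor-specific plumbing entirely inside the kernel.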
My worry would be that by adding knowledge of what the underlying hypervisor is we'd end up with excess complexity in the kernel. For one thing, we certainly wouldn't want the kernel to carry an API version dependency just to understand which version of the Xen hypervisor it was running on.
That's exactly what virtio-proxy in my proposal[1] does; all the hypervisor-specific details of I/O event handling are contained in virtio-proxy, and the virtio BE will communicate with virtio-proxy through a virtqueue (yes, virtio-proxy is seen as yet another virtio device on the BE) and will get I/O event-related *RPC* callbacks, either MMIO read or write, from virtio-proxy.
See pages 8 (protocol flow) and 10 (interfaces) in [1].
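(Just to illustrate the shape of those callbacks as I read [1]; the message layout and names below are mine, invented for this sketch, not what the proposal actually specifies.)

  #include <stdint.h>

  /* Hypothetical layout of an I/O event RPC carried on the proxy virtqueue. */
  enum proxy_rpc_op {
      PROXY_RPC_MMIO_READ,
      PROXY_RPC_MMIO_WRITE,
  };

  struct proxy_rpc_req {
      uint32_t op;        /* one of enum proxy_rpc_op                        */
      uint32_t size;      /* access width in bytes                           */
      uint64_t addr;      /* guest physical address of the access            */
      uint64_t data;      /* value for writes; unused for reads              */
  };

  struct proxy_rpc_resp {
      uint32_t status;    /* 0 on success                                    */
      uint32_t pad;
      uint64_t data;      /* value returned for reads                        */
  };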
There are two areas of concern with the proxy approach at the moment. The first is how the bootstrap of the virtio-proxy channel happens, and the second is how many context switches are involved in a transaction. Of course, as with all things, there is a trade-off. Things needing the very tightest latency would probably opt for a bare-metal backend, which I think would imply hypervisor knowledge in the backend binary.
If KVM's ioregionfd can fit into this protocol, virtio-proxy for KVM will hopefully be implemented using ioregionfd.
-Takahiro Akashi
[1] https://op-lists.linaro.org/pipermail/stratos-dev/2021-August/000548.html