Hi,
As we consider the next cycle for Project Stratos I would like to make some more progress on hypervisor agnosticism for our virtio backends. While we have implemented a number of virtio vhost-user backends in C, we've rapidly switched to using rust-vmm based ones for virtio-i2c, virtio-rng and virtio-gpio. Given the interest in Rust for implementing backends, does it make sense to do some enabling work in rust-vmm to support Xen?
There are two chunks of work I can think of:
1. Enough of libxl/hypervisor interface to implement an IOREQ end point.
This would mean implementing enough of the hypervisor interface to support an IOREQ server. We would also need to think about how we would map the IOREQ view of the world onto the existing vhost-user interface so we can re-use the current vhost-user backend code base. The two approaches I can think of are:
a) implement a vhost-user master that speaks IOREQ to the hypervisor and vhost-user to the vhost-user slave. In this case the bridge would be standing in for something like QEMU.
b) implement some variants of the vhost-user slave traits that can talk directly to the hypervisor to get/send the equivalent kick/notify events. This might prove too complex, as the impedance mismatch between the two interfaces could be too great.
This assumes most of the setup is done by the existing toolstack, so the existing libxl tools are used to create, connect and configure the domains before the backend is launched.
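Option (b) essentially amounts to abstracting the kick/notify path behind a trait, so the shared device code doesn't care whether events arrive via a vhost-user eventfd or an IOREQ completion. As a purely illustrative sketch (all names here are invented for the example, not existing rust-vmm APIs; the real transports are stubbed out with an in-memory mock):

```rust
use std::collections::VecDeque;

// Hypothetical abstraction over the "doorbell" path: the device model
// only sees kick/notify, not the transport behind it.
trait Doorbell {
    /// Wait for the guest to kick a queue; returns the queue index,
    /// or None when there is nothing pending.
    fn wait_kick(&mut self) -> Option<u16>;
    /// Tell the guest the queue has been serviced.
    fn notify(&mut self, queue: u16);
}

// A vhost-user flavoured implementation would wrap an eventfd; an
// IOREQ flavoured one would block on the ioreq ring instead. For
// illustration we fake both with in-memory queues.
struct MockDoorbell {
    pending: VecDeque<u16>,
    notified: Vec<u16>,
}

impl Doorbell for MockDoorbell {
    fn wait_kick(&mut self) -> Option<u16> {
        self.pending.pop_front()
    }
    fn notify(&mut self, queue: u16) {
        self.notified.push(queue);
    }
}

// Generic service loop: this is the part that could be shared between
// the vhost-user and Xen flavours of a backend.
fn service<D: Doorbell>(db: &mut D) -> usize {
    let mut handled = 0;
    while let Some(q) = db.wait_kick() {
        // ... process the virtqueue for `q` here ...
        db.notify(q);
        handled += 1;
    }
    handled
}

fn main() {
    let mut db = MockDoorbell {
        pending: VecDeque::from(vec![0u16, 1, 0]),
        notified: Vec::new(),
    };
    let n = service(&mut db);
    println!("handled {} kicks, notified {:?}", n, db.notified);
}
```

The point of the sketch is only that the impedance matching lives in the trait implementation, while the service loop stays transport-agnostic.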
which leads to:
2. The rest of the libxl/hypervisor interface.
This would be the rest of the interface, allowing rust-vmm tools to be written that could create, configure and manage Xen domains in pure Rust. My main concern is how rust-vmm's current model (which is very much KVM influenced) will handle the differences of a type-1 hypervisor. Wei has pointed me to the Linux support that was added to expose a Hyper-V control interface via the Linux kernel. While I can see support has been merged in other Rust-based projects, I think the rust-vmm crate is still outstanding:
https://github.com/rust-vmm/community/issues/50
and I guess this would need revisiting for Xen to see if the proposed abstraction would scale across other hypervisors.
Finally there is the question of how/if any of this would relate to the concept of bare-metal Rust backends. We've talked about bare-metal backends before, but I wonder if the programming model for them is going to be outside the scope of rust-vmm. Would the program just be hardwired to IRQs and be presented a doorbell port to kick, or would we want at least some of the higher-level rust-vmm abstractions for navigating the virtqueues and responding to and filling in data?
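To make concrete what "navigating the virtqueues" means even in a minimal bare-metal setting, here is a toy walk of a split-virtqueue descriptor chain. The structures are deliberately simplified stand-ins for the VirtIO spec layout, not the rust-vmm virtio-queue API:

```rust
// Simplified split-virtqueue descriptor, following the VirtIO spec's
// split-ring layout (fields reduced for illustration).
const VIRTQ_DESC_F_NEXT: u16 = 1;

#[derive(Clone, Copy)]
struct Desc {
    addr: u64, // guest physical address of the buffer
    len: u32,  // buffer length in bytes
    flags: u16,
    next: u16, // index of the next descriptor, valid only with F_NEXT
}

// Follow a descriptor chain from `head`, collecting (addr, len) pairs.
// A real backend would bounds-check `next` against the queue size and
// guard against malicious loops; here a simple hop limit suffices.
fn walk_chain(table: &[Desc], head: u16) -> Vec<(u64, u32)> {
    let mut out = Vec::new();
    let mut idx = head as usize;
    for _ in 0..table.len() {
        let d = table[idx];
        out.push((d.addr, d.len));
        if d.flags & VIRTQ_DESC_F_NEXT == 0 {
            break;
        }
        idx = d.next as usize;
    }
    out
}

fn main() {
    let table = [
        Desc { addr: 0x1000, len: 16, flags: VIRTQ_DESC_F_NEXT, next: 1 },
        Desc { addr: 0x2000, len: 64, flags: 0, next: 0 },
    ];
    println!("{:?}", walk_chain(&table, 0));
}
```

Even a hardwired IRQ-plus-doorbell program needs this much logic, which is an argument for sharing at least the queue-walking abstractions.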
Thoughts?
On 13/09/2021 13:44, Alex Bennée wrote:
There are two chunks of work I can think of:
- Enough of libxl/hypervisor interface to implement an IOREQ end point.
No libxl here at all.
As of Xen 4.15, there are enough stable interfaces to implement simple IOREQ servers.
https://github.com/xapi-project/varstored/commit/fde707c59f7a189e1d4e97c1a4e... was the commit where I removed the final unstable interface from varstored (terrible name), which is a dom0 backend for UEFI secure variable handling. As such, it also serves as a (not totally simple) reference for an IOREQ server.
There are a few bits and pieces of rust going on within Xen, and a whole load of plans. Also, there is a lot of interest from other downstreams in being able to write Rust backends.
We've got placeholder xen and xen-sys crates, and placeholder work for supporting cross-compilation as x86 PV and PVH stubdomains.
The desire to have a simple IOREQ server compiled either as a dom0 backend or as a PV or PVH stubdomain influences some of the design decisions early on, but they're all no-brainers for the longevity of the work.
I started work on trying to reimplement varstored entirely in Rust as a hackathon project, although I ran out of time trying to make hypercall buffers work (there is a bug with Box and non-global allocators causing rustc to hit an assert()). In the short term, we'll have to implement hypercall buffers in a slightly more irritating way.
Furthermore, stick to the stable hypercalls only. Xen's C libraries are a disaster for cross-version compatibility, and you absolutely do not want to recompile your Rust program just to run it against a different version of the hypervisor. The plan is to start with simple IOREQ servers, which are on fully stable interfaces, then stabilise further hypercalls as necessary to expand functionality.
It's high time the Xen Rust working group (which has been talked about for several years now) actually forms...
~Andrew
Andrew Cooper andrew.cooper3@citrix.com writes:
On 13/09/2021 13:44, Alex Bennée wrote:
There are two chunks of work I can think of:
- Enough of libxl/hypervisor interface to implement an IOREQ end point.
No libxl here at all.
As of Xen 4.15, there are enough stable interfaces to implement simple IOREQ servers.
https://github.com/xapi-project/varstored/commit/fde707c59f7a189e1d4e97c1a4e... was the commit where I removed the final unstable interface from varstored (terrible name), which is a dom0 backend for UEFI secure variable handling. As such, it also serves as a (not totally simple) reference for an IOREQ server.
There are a few bits and pieces of rust going on within Xen, and a whole load of plans. Also, there is a lot of interest from other downstreams in being able to write Rust backends.
We've got a placeholder xen and xen-sys crates, and placeholder work for supporting cross-compile as x86 PV and PVH stubdomains.
Are these in the rust-vmm project or elsewhere?
The desire to have a simple IOREQ server compiled either as a dom0 backend or as a PV or PVH stubdomain influences some of the design decisions early on, but they're all no-brainers for the longevity of the work.
Just to clarify nomenclature: is a PVH stubdomain what I'm referring to as a bare-metal backend, i.e. a unikernel or RTOS image that implements the backend without having to transition between some sort of userspace and its supporting kernel?
I started work on trying to reimplement varstored entirely in Rust as a hackathon project, although I ran out of time trying to make hypercall buffers work (there is a bug with Box and non-global allocators causing rustc to hit an assert()). In the short term, we'll have to implement hypercall buffers in a slightly more irritating way.
Furthermore, stick to the stable hypercalls only. Xen's C libraries are a disaster for cross-version compatibility, and you absolutely do not want to recompile your Rust program just to run it against a different version of the hypervisor. The plan is to start with simple IOREQ servers, which are on fully stable interfaces, then stabilise further hypercalls as necessary to expand functionality.
Are the hypercalls mediated by a kernel layer or are you making direct HVC calls (on ARM) to the hypervisor from userspace?
Where would I look in the Xen code to find the hypercalls that are considered stable and won't change between versions?
It's high time the Xen Rust working group (which has been talked about for several years now) actually forms...
Indeed part of the purpose of this email was to smoke out those who are interested in the intersection of Xen, Rust and VirtIO ;-)
On 14/09/2021 15:44, Alex Bennée wrote:
Andrew Cooper andrew.cooper3@citrix.com writes:
On 13/09/2021 13:44, Alex Bennée wrote:
There are two chunks of work I can think of:
- Enough of libxl/hypervisor interface to implement an IOREQ end point.
No libxl here at all.
As of Xen 4.15, there are enough stable interfaces to implement simple IOREQ servers.
https://github.com/xapi-project/varstored/commit/fde707c59f7a189e1d4e97c1a4e... was the commit where I removed the final unstable interface from varstored (terrible name), which is a dom0 backend for UEFI secure variable handling. As such, it also serves as a (not totally simple) reference for an IOREQ server.
There are a few bits and pieces of rust going on within Xen, and a whole load of plans. Also, there is a lot of interest from other downstreams in being able to write Rust backends.
We've got a placeholder xen and xen-sys crates, and placeholder work for supporting cross-compile as x86 PV and PVH stubdomains.
Are these in the rust-vmm project or elsewhere?
https://crates.io/crates/xen-sys
When I say placeholder, I really do mean placeholder.
To start this work meaningfully, we'd want to make a repo (or several) in the xen-project organisation on github or gitlab (we have both, for reasons), and set these as the upstream of the xen and xen-sys crates.
The desire to have a simple IOREQ server compiled either as a dom0 backend or as a PV or PVH stubdomain influences some of the design decisions early on, but they're all no-brainers for the longevity of the work.
Just to clarify nomenclature: is a PVH stubdomain what I'm referring to as a bare-metal backend, i.e. a unikernel or RTOS image that implements the backend without having to transition between some sort of userspace and its supporting kernel?
I think so, yes, although calling it "bare metal" seems misleading for something which is a VM targeted at a specific hypervisor...
I started work on trying to reimplement varstored entirely in Rust as a hackathon project, although I ran out of time trying to make hypercall buffers work (there is a bug with Box and non-global allocators causing rustc to hit an assert()). In the short term, we'll have to implement hypercall buffers in a slightly more irritating way.
Furthermore, stick to the stable hypercalls only. Xen's C libraries are a disaster for cross-version compatibility, and you absolutely do not want to recompile your Rust program just to run it against a different version of the hypervisor. The plan is to start with simple IOREQ servers, which are on fully stable interfaces, then stabilise further hypercalls as necessary to expand functionality.
Are the hypercalls mediated by a kernel layer or are you making direct HVC calls (on ARM) to the hypervisor from userspace?
For dom0 backends, irrespective of architecture, you need to issue ioctl()s on the appropriate kernel device.
For a PV/PVH stubdom, you should make a call into the hypercall_page https://xenbits.xen.org/docs/latest/guest-guide/x86/hypercall-abi.html because Intel and AMD used different instructions for their equivalent of HVC.
ARM doesn't have the hypercall page ABI, so I'd expect the hypercall implementation to expand to HVC directly.
In terms of Rust APIs, we'd want a crate which has target-specific implementations so the caller need not worry about the implementation details in the common case.
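One way such a crate could be shaped is a single hypercall trait whose implementation is selected per target: an ioctl on the privcmd device for a dom0 process, the hypercall page in an x86 stubdomain, a direct HVC on Arm. The following is only a sketch of that shape with invented names; the real transport arms are replaced by a mock so the structure is demonstrable off-Xen:

```rust
// Hypothetical portable hypercall interface: callers program against
// one trait, each target supplies its own implementation.
trait Hypercall {
    /// Issue hypercall `op` with up to five arguments, returning the
    /// raw hypervisor return code (negative on error, Xen-style).
    fn call(&self, op: u64, args: &[u64; 5]) -> i64;
}

// A dom0 implementation would issue an ioctl on the kernel's privcmd
// device; a PVH stubdomain would jump through the hypercall page; an
// Arm build would expand to HVC. Here we stand in a mock so the shape
// is testable anywhere.
struct MockHypercall;

impl Hypercall for MockHypercall {
    fn call(&self, op: u64, args: &[u64; 5]) -> i64 {
        // Pretend every hypercall succeeds and echoes its first arg.
        let _ = op;
        args[0] as i64
    }
}

// Backend code written against the trait compiles unchanged whichever
// transport the target selects.
fn probe<H: Hypercall>(h: &H) -> bool {
    h.call(0, &[0; 5]) >= 0
}

fn main() {
    let h = MockHypercall;
    println!("probe ok: {}", probe(&h));
}
```

In a real crate the per-target implementations would sit behind `#[cfg(...)]` gates rather than runtime dispatch, but the caller-facing surface would be the same.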
Where would I look in the Xen code to find the hypercalls that are considered stable and won't change between versions?
I'm afraid that's mostly in developers' heads right now.
For a first pass, you can look for __XEN_TOOLS__ (This is mis-named, and ought to be called __XEN_UNSTABLE_INTERFACE__, because...) but be aware that some things currently tagged __XEN_TOOLS__ are incorrect and are in fact stable.
In practice, assume everything is unstable. The things contained within libxendevicemodel and libxenforeignmemory are stable, and were made so specifically to get simple IOREQ server functionality done and stable.
Almost everything else, particularly concerning the toolstack operations, is unstable. There is 15 years of organic growth and dubious decisions here, and they need unpicking carefully. We've got some hypercalls which look like they're unstable, but are actually stable (as they were exposed to guests), and therefore have ridiculous interfaces.
The "ABI v2" work is massive and complicated, and picking at some of the corners based on "what is needed to make new $FOO work" is a good way to make some inroads.
It's high time the Xen Rust working group (which has been talked about for several years now) actually forms...
Indeed part of the purpose of this email was to smoke out those who are interested in the intersection of Xen, Rust and VirtIO ;-)
The conversation has come up quite a few times in the past, but mostly by people who are also busy with other things.
~Andrew
On Tue, 14 Sep 2021, Andrew Cooper wrote:
On 14/09/2021 15:44, Alex Bennée wrote:
Andrew Cooper andrew.cooper3@citrix.com writes:
On 13/09/2021 13:44, Alex Bennée wrote:
There are two chunks of work I can think of:
- Enough of libxl/hypervisor interface to implement an IOREQ end point.
No libxl here at all.
As of Xen 4.15, there are enough stable interfaces to implement simple IOREQ servers.
https://github.com/xapi-project/varstored/commit/fde707c59f7a189e1d4e97c1a4e... was the commit where I removed the final unstable interface from varstored (terrible name), which is a dom0 backend for UEFI secure variable handling. As such, it also serves as a (not totally simple) reference for an IOREQ server.
There are a few bits and pieces of rust going on within Xen, and a whole load of plans. Also, there is a lot of interest from other downstreams in being able to write Rust backends.
We've got a placeholder xen and xen-sys crates, and placeholder work for supporting cross-compile as x86 PV and PVH stubdomains.
Are these in the rust-vmm project or elsewhere?
https://crates.io/crates/xen-sys
When I say placeholder, I really do mean placeholder.
To start this work meaningfully, we'd want to make a repo (or several) in the xen-project organisation on github or gitlab (we have both, for reasons), and set these as the upstream of the xen and xen-sys crates.
The desire to have a simple IOREQ server compiled either as a dom0 backend or as a PV or PVH stubdomain influences some of the design decisions early on, but they're all no-brainers for the longevity of the work.
Just to clarify nomenclature: is a PVH stubdomain what I'm referring to as a bare-metal backend, i.e. a unikernel or RTOS image that implements the backend without having to transition between some sort of userspace and its supporting kernel?
I think so, yes, although calling it "bare metal" seems misleading for something which is a VM targeted at a specific hypervisor...
I started work on trying to reimplement varstored entirely in Rust as a hackathon project, although I ran out of time trying to make hypercall buffers work (there is a bug with Box and non-global allocators causing rustc to hit an assert()). In the short term, we'll have to implement hypercall buffers in a slightly more irritating way.
Furthermore, stick to the stable hypercalls only. Xen's C libraries are a disaster for cross-version compatibility, and you absolutely do not want to recompile your Rust program just to run it against a different version of the hypervisor. The plan is to start with simple IOREQ servers, which are on fully stable interfaces, then stabilise further hypercalls as necessary to expand functionality.
Are the hypercalls mediated by a kernel layer or are you making direct HVC calls (on ARM) to the hypervisor from userspace?
For dom0 backends, irrespective of architecture, you need to issue ioctl()s on the appropriate kernel device.
For a PV/PVH stubdom, you should make a call into the hypercall_page https://xenbits.xen.org/docs/latest/guest-guide/x86/hypercall-abi.html because Intel and AMD used different instructions for their equivalent of HVC.
ARM doesn't have the hypercall page ABI, so I'd expect the hypercall implementation to expand to HVC directly.
See for example arch/arm64/xen/hypercall.S in Linux
On Mon, 2021-09-13 at 13:44 +0100, Alex Bennée wrote:
I like this idea.
Somewhat separately, Alex Agache has already started some preliminary hacking on supporting Xen guests within rust-vmm (on top of Linux/KVM): https://github.com/alexandruag/vmm-reference/commits/xen
Being able to run on *actual* Xen would be good too. And we should also aspire to do guest-transparent live migration between the two hosting environments.
Where relevant, it would be great to be able to share components (like emulation of the Xen PCI platform device, a completely single-tenant XenStore implementation dedicated to a single guest, perhaps PV netback/blkback and other things).
David Woodhouse dwmw2@infradead.org writes:
On Mon, 2021-09-13 at 13:44 +0100, Alex Bennée wrote:
I like this idea.
Somewhat separately, Alex Agache has already started some preliminary hacking on supporting Xen guests within rust-vmm (on top of Linux/KVM): https://github.com/alexandruag/vmm-reference/commits/xen
I'll be sending along a more detailed post once I've finished my work breakdown, but I'm currently envisioning two parts. The first is a xen-sys crate for the low-level access that supports both ioctl and hypercall calls. This would be useful for other projects such as stubdomains (think a "bare-metal" RTOS with some sort of backend, unikernel style). It would also be the lowest layer that rust-vmm can use to interact with the hypervisor.
I'm aware the Hyper-V solution is to present a KVM-like ioctl interface via the host kernel. However, if we want generality across type-1 hypervisors we can't assume all of them will get suitable translation layers in the kernel.
Fortunately for the time being our focus is on virtio backends so we don't need to get directly involved in the hypervisor run loop... for now.
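To illustrate how one xen-sys style crate could serve both a dom0 userspace process and a stubdomain, the transport split could be hidden behind a small selector. This is an invented sketch, not a real crate design; the two real transport arms are deliberately left unimplemented here:

```rust
// Hypothetical transport selector for a xen-sys style crate: a dom0
// process would take the Privcmd path (ioctls on a kernel device),
// a stubdomain the Direct path (hypercall page or HVC), and tests
// off-Xen can use a canned Mock result.
enum Transport {
    Privcmd,   // mediated by the kernel, e.g. a privcmd-style device
    Direct,    // real hypercall instructions, only valid in a stubdom
    Mock(i64), // canned return code for testing
}

impl Transport {
    fn hypercall(&self, _op: u64, _args: &[u64; 5]) -> Result<i64, &'static str> {
        match self {
            // The real arms are elided in this sketch: they would
            // issue an ioctl or a hypercall instruction respectively.
            Transport::Privcmd => Err("privcmd transport not wired up in this sketch"),
            Transport::Direct => Err("direct transport not wired up in this sketch"),
            Transport::Mock(rc) => Ok(*rc),
        }
    }
}

fn main() {
    let t = Transport::Mock(0);
    println!("{:?}", t.hypercall(0, &[0; 5]));
}
```

Whether the selection happens at runtime like this or at compile time via features is an open design question; the point is that everything above this layer would be transport-agnostic.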
Being able to run on *actual* Xen would be good too. And we should also aspire to do guest-transparent live migration between the two hosting environments.
Where relevant, it would be great to be able to share components (like emulation of the Xen PCI platform device, a completely single-tenant XenStore implementation dedicated to a single guest, perhaps PV netback/blkback and other things).
For Stratos portable virtio backends is one of our project goals.