Hi,
As part of looking at implementing vhost-user daemons which don't have complete access to a guest's address space, we noticed the vhost-user spec defines VHOST_USER_SLAVE_IOTLB_MSG messages. Specifically, the VHOST_IOTLB_ACCESS_FAIL message looks like it could be used to delegate the mapping of memory to the master/frontend so the backend can access it.
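For reference, the payload of these IOTLB messages is the same struct vhost_iotlb_msg used by the kernel vhost API (the spec refers to the Linux uapi definition); quoting the relevant part of <linux/vhost_types.h>:

  struct vhost_iotlb_msg {
          __u64 iova;
          __u64 size;
          __u64 uaddr;
  #define VHOST_ACCESS_RO      0x1
  #define VHOST_ACCESS_WO      0x2
  #define VHOST_ACCESS_RW      0x3
          __u8 perm;
  #define VHOST_IOTLB_MISS           1
  #define VHOST_IOTLB_UPDATE         2
  #define VHOST_IOTLB_INVALIDATE     3
  #define VHOST_IOTLB_ACCESS_FAIL    4
          __u8 type;
  };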
As far as I can see, no backends currently use this message, and QEMU's handling of VHOST_IOTLB_ACCESS_FAIL does little more than report an error.
VHOST_IOTLB_MISS does have some handling, but the commentary seems to imply it is needed for in-kernel vhost support (perhaps when real hardware is filling in a buffer being forwarded to a VirtIO device?).
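To check my understanding of the intended flow, here is a sketch in C of what a backend doing dynamic translation might look like. Everything except the VHOST_IOTLB_MISS constant is hypothetical: struct backend, iotlb_lookup(), send_slave_iotlb_msg() and wait_for_iotlb_update() are made-up names, not taken from any existing backend.

  #include <stdbool.h>
  #include <stdint.h>

  struct backend;   /* hypothetical backend state */

  /* Hypothetical helpers -- not from any real backend: */
  bool iotlb_lookup(struct backend *b, uint64_t iova, uint8_t perm,
                    uint64_t *uaddr);
  void send_slave_iotlb_msg(struct backend *b, int type, uint64_t iova,
                            uint8_t perm);
  void wait_for_iotlb_update(struct backend *b);

  #define VHOST_IOTLB_MISS 1   /* from <linux/vhost_types.h> */

  /* Translate a guest IOVA to a backend-usable address before touching
   * the buffer it points at. */
  static uint64_t translate(struct backend *b, uint64_t iova, uint8_t perm)
  {
          uint64_t uaddr;

          while (!iotlb_lookup(b, iova, perm, &uaddr)) {
                  /* Ask the frontend for the missing mapping on the
                   * slave channel... */
                  send_slave_iotlb_msg(b, VHOST_IOTLB_MISS, iova, perm);
                  /* ...and block until the resulting VHOST_IOTLB_UPDATE
                   * has been inserted into our cache. */
                  wait_for_iotlb_update(b);
          }
          return uaddr;
  }

Presumably VHOST_IOTLB_ACCESS_FAIL could drive a similar loop when a previously cached mapping stops being accessible, which is what makes it look attractive for our use case.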
Can anyone point to any backends that implement these messages?
We have implemented a Xen Vhost User Frontend:
https://github.com/vireshk/xen-vhost-frontend
which currently uses a lightly hacked Xen privcmd device to map all of the guest's memory. We want to investigate using the stricter gntdev device, where the buffers for individual transactions can be mapped into the backend domain and then released at the end of the transaction. We want to keep the hypervisor-specific code in the frontend so the backend can stay portable across hypervisors.
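As a rough sketch of the per-transaction mapping we have in mind, the following uses the real gntdev ioctls and structs from the Linux <xen/gntdev.h> uapi header, but the domid/grant-ref plumbing is invented and error handling is minimal:

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <xen/gntdev.h>

  #define PAGE_SIZE 4096   /* grants are page sized */

  /* Map a single page granted by the guest, process it, then tear the
   * mapping down again at the end of the transaction. In a real
   * frontend, domid/ref would come from the transaction being handled. */
  static int with_granted_page(uint32_t domid, uint32_t ref)
  {
          struct ioctl_gntdev_map_grant_ref map = {
                  .count = 1,
                  .refs[0] = { .domid = domid, .ref = ref },
          };
          struct ioctl_gntdev_unmap_grant_ref unmap;
          void *page;
          int fd;

          fd = open("/dev/xen/gntdev", O_RDWR);
          if (fd < 0)
                  return -1;
          if (ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &map) < 0) {
                  close(fd);
                  return -1;
          }

          /* The mmap offset is the index the map ioctl handed back. */
          page = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, map.index);
          if (page != MAP_FAILED) {
                  /* ... process this transaction's buffer ... */
                  munmap(page, PAGE_SIZE);
          }

          unmap.index = map.index;
          unmap.count = 1;
          ioctl(fd, IOCTL_GNTDEV_UNMAP_GRANT_REF, &unmap);
          close(fd);
          return 0;
  }

The idea would be for the frontend to do this sort of map/unmap around each transaction so the backend itself never needs any Xen-specific code.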
Thanks,