I am finally getting around to Xen grants, though I haven't got a
running setup yet. There are a few questions I have at the moment:
- Xen's libxl_arm.c creates the iommu nodes only if the backend isn't in
Dom0. Why are we forcing it this way?
I am not running my backend in a separate domain as of now, as it
needs to share a Unix socket with Dom0 (with vhost-user-frontend, our
virtio-disk counterpart) for the vhost-user protocol, and I am not sure
how to set that up. Maybe I need to use a "channel"? Or something else?
- I tried to hack it up to keep the backend in Dom0 only and create the
iommu nodes unconditionally, but then the guest kernel crashes in
iommu_dev = ops->probe_device(dev);
since grant_dma_iommu_ops has all of its fields set to NULL.
- Anything else you might want to share?
As part of looking at implementing vhost-user daemons which don't have
complete access to a guest's address space, we noticed the vhost-user
spec has a definition for VHOST_USER_SLAVE_IOTLB_MSGs. Specifically,
the VHOST_IOTLB_ACCESS_FAIL message looks like it could be used to
delegate the mapping of memory to the master/frontend so the backend
can access it.
As far as I can see no backends currently use this message and the
specific handling of VHOST_IOTLB_ACCESS_FAIL in QEMU doesn't do much
more than report an error.
The VHOST_IOTLB_MISS does have some handling, but the commentary seems
to imply this is needed for the in-kernel vhost support (perhaps when
real hardware is filling in a buffer being forwarded to a VirtIO
device).
Can anyone point to any backends that implement these messages?
We have implemented a Xen Vhost User Frontend, which currently uses a
lightly hacked Xen privcmd device to map all of the guest's memory. We
want to investigate using the stricter gntdev device, where buffers for
individual transactions can be mapped into the backend domain and then
released at the end of the transaction. We want to keep the
hypervisor-specific code in the frontend so the backend can stay
portable between different hypervisors.
Virtualisation Tech Lead @ Linaro