On 24/08/2022 10:19, Viresh Kumar wrote:
On 24-03-22, 06:12, Juergen Gross wrote:
For a rather long time we were using "normal" user pages for this purpose, which were just locked into memory for doing the hypercall.
Unfortunately there have been very rare problems with that approach: the Linux kernel can mark the PTE of a user page invalid for short periods of time, which leads to an EFAULT in the hypervisor when it tries to access the hypercall data.
In Linux this can be avoided only by using kernel memory, which is why the hypercall buffers are allocated and mmap()-ed through the privcmd driver.
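Concretely, getting one of those kernel-backed buffers from userspace comes down to an mmap() of the hypercall buffer device, roughly as in the sketch below (the device path is the one mentioned later in this thread; the helper name is made up for illustration, and error handling plus the fallback for kernels without the device are left out):

/*
 * Sketch: allocate hypercall-safe, kernel-backed buffer memory from
 * userspace by mmap()-ing the hypercall buffer device.  These pages
 * stay resident, so they cannot hit the transient-invalid-PTE problem
 * described above.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void *alloc_hypercall_buf(size_t len)   /* len: multiple of page size */
{
    int fd = open("/dev/xen/hypercall", O_RDWR | O_CLOEXEC);
    void *buf;

    if (fd < 0)
        return NULL;

    /* The device only supports shared mappings; the backing pages are
     * allocated by the kernel and stay mapped for the lifetime of the
     * mapping, even after the fd is closed. */
    buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);

    return buf == MAP_FAILED ? NULL : buf;
}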
Hi Juergen,
I understand why we moved from user pages to kernel pages, but I don't fully understand why we need to make two separate calls to map the guest memory, i.e. mmap() followed by ioctl(IOCTL_PRIVCMD_MMAPBATCH).
Why aren't we doing all of it from mmap() itself? I hacked it up to check, and it works fine if everything is done from mmap().
Aren't we abusing the Linux userspace ABI here? Standard userspace code would expect mmap() alone to be enough to map the memory. Yes, the current user, Xen itself, is adapted to make the two calls, but this breaks as soon as we want to use something that relies on the standard Linux userspace ABI.
For instance, in our case, where we are looking to create hypervisor-agnostic virtio backends, the rust-vmm library [1] issues only mmap() and expects it to work. It doesn't know it is running on a Xen system, and it shouldn't have to know that either.
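For reference, the two-step pattern being questioned looks roughly like the sketch below. It assumes 4 KiB pages and the privcmd uapi definitions (struct privcmd_mmapbatch, IOCTL_PRIVCMD_MMAPBATCH) being reachable as <xen/privcmd.h> (the Xen tools ship an equivalent copy as xen/sys/privcmd.h); the helper name is made up for illustration and error handling is trimmed:

/*
 * Sketch of the two-step foreign-memory mapping discussed above.
 * Step 1 (mmap) only reserves a VA range from /dev/xen/privcmd;
 * step 2 (the Xen-specific ioctl) is what actually maps the guest's
 * frames into that range.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xen/privcmd.h>        /* struct privcmd_mmapbatch, IOCTL_PRIVCMD_MMAPBATCH */

#define XEN_PAGE_SIZE 4096UL    /* assumption: 4 KiB granularity */

static void *map_foreign_pages(int privcmd_fd, uint16_t domid,
                               xen_pfn_t *gfns, unsigned int nr)
{
    struct privcmd_mmapbatch batch;
    size_t len = nr * XEN_PAGE_SIZE;
    void *addr;

    /* Step 1: reserve the VA range.  Nothing of the guest is mapped yet. */
    addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, privcmd_fd, 0);
    if (addr == MAP_FAILED)
        return NULL;

    /* Step 2: populate it with the foreign domain's frames.  Per-frame
     * failures are reported by OR-ing error bits into gfns[]. */
    batch.num  = nr;
    batch.dom  = domid;
    batch.addr = (uint64_t)(uintptr_t)addr;
    batch.arr  = gfns;

    if (ioctl(privcmd_fd, IOCTL_PRIVCMD_MMAPBATCH, &batch) < 0) {
        munmap(addr, len);
        return NULL;
    }

    return addr;
}

A generic VMM that stops after step 1, as rust-vmm does, is left with a reservation that has no guest memory behind it, which is exactly the mismatch Viresh describes.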
Use /dev/xen/hypercall which has a sane ABI for getting "safe" memory. privcmd is very much not sane.
In practice you'll need both for now: /dev/xen/hypercall for getting "safe" memory, and /dev/xen/privcmd for issuing hypercalls.
~Andrew
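Putting that suggestion together, a minimal sketch of using both devices might look like the following. The constants and struct layout mirror Xen's public headers (xen/xen.h, xen/version.h) and the privcmd uapi from memory, so they should be verified against the real headers; the header may also live at xen/sys/privcmd.h depending on the setup, and error handling is minimal:

/*
 * Sketch of using both devices as suggested: a "safe" buffer from
 * /dev/xen/hypercall, and /dev/xen/privcmd to issue the hypercall.
 * The example asks Xen for its extraversion string.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xen/privcmd.h>        /* struct privcmd_hypercall, IOCTL_PRIVCMD_HYPERCALL */

#define HYPERVISOR_xen_version  17   /* __HYPERVISOR_xen_version, xen/xen.h */
#define XENVER_EXTRAVERSION      1   /* XENVER_extraversion, xen/version.h; fills char[16] */

int main(void)
{
    int buf_fd  = open("/dev/xen/hypercall", O_RDWR | O_CLOEXEC);
    int priv_fd = open("/dev/xen/privcmd", O_RDWR | O_CLOEXEC);
    struct privcmd_hypercall call = { 0 };
    char *buf;
    long ret;

    if (buf_fd < 0 || priv_fd < 0)
        return 1;

    /* Kernel-backed buffer that the hypervisor can safely write into. */
    buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, buf_fd, 0);
    if (buf == MAP_FAILED)
        return 1;

    call.op     = HYPERVISOR_xen_version;
    call.arg[0] = XENVER_EXTRAVERSION;
    call.arg[1] = (uint64_t)(uintptr_t)buf;

    ret = ioctl(priv_fd, IOCTL_PRIVCMD_HYPERCALL, &call);
    if (ret >= 0)
        printf("Xen extraversion: %.16s\n", buf);

    munmap(buf, 4096);
    close(buf_fd);
    close(priv_fd);
    return ret < 0;
}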