On 24.08.22 13:47, Alex Bennée wrote:
Juergen Gross <jgross@suse.com> writes:
On 24.08.22 11:19, Viresh Kumar wrote:
On 24-03-22, 06:12, Juergen Gross wrote:
For a rather long time we were using "normal" user pages for this purpose, which were just locked into memory for doing the hypercall.
Unfortunately there have been very rare problems with that approach, as the Linux kernel can transiently invalidate a PTE of a user page, which led to EFAULT in the hypervisor when it tried to access the hypercall data.
In Linux this can be avoided only by using kernel memory, which is why the hypercall buffers are allocated and mmap()-ed through the privcmd driver.
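For illustration, a minimal sketch of how user space can obtain such a kernel-backed buffer. This assumes the privcmd-buf device at /dev/xen/hypercall (the path libxencall uses); the helper name is made up:

    /* Sketch: allocate a hypercall-safe buffer via the privcmd-buf
     * device.  The pages behind the mapping are kernel memory, so
     * their PTEs are not transiently invalidated the way locked user
     * pages can be.  'len' should be a multiple of the page size. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *alloc_hypercall_buf(size_t len)
    {
        int fd = open("/dev/xen/hypercall", O_RDWR | O_CLOEXEC);
        void *buf;

        if (fd < 0)
            return NULL;
        buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);   /* the mapping stays valid after close() */
        return buf == MAP_FAILED ? NULL : buf;
    }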
Hi Juergen, I understand why we moved from user pages to kernel pages, but I don't fully understand why we need to make two separate calls to map the guest memory, i.e. mmap() followed by ioctl(IOCTL_PRIVCMD_MMAPBATCH). Why aren't we doing all of it from mmap() itself? I hacked it up to check, and it works fine if we do it all from mmap() itself.
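For reference, the two-step flow looks roughly like this. This is a sketch assuming the privcmd_mmapbatch_v2 layout from Linux's xen/privcmd.h; the map_foreign_pages() helper is hypothetical, and the real code lives in libxenforeignmemory:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xen/privcmd.h>   /* struct privcmd_mmapbatch_v2 */

    /* Hypothetical helper: map 'num' frames of guest 'dom'. */
    void *map_foreign_pages(domid_t dom, const xen_pfn_t *gfns, int num)
    {
        size_t len = (size_t)num * 4096;
        int fd = open("/dev/xen/privcmd", O_RDWR | O_CLOEXEC);

        if (fd < 0)
            return NULL;

        /* Step 1: mmap() only reserves the VMA; privcmd leaves it
         * unpopulated until the ioctl() below. */
        void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (addr != MAP_FAILED) {
            int err[num];
            struct privcmd_mmapbatch_v2 batch = {
                .num  = num,
                .dom  = dom,
                .addr = (uint64_t)(uintptr_t)addr,
                .arr  = gfns,
                .err  = err,
            };
            /* Step 2: populate the VMA with the foreign frames. */
            if (ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &batch) < 0) {
                munmap(addr, len);
                addr = MAP_FAILED;
            }
        }
        close(fd);
        return addr == MAP_FAILED ? NULL : addr;
    }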
Hypercall buffers are needed for more than just the "MMAPBATCH" hypercall. Or are you suggesting one device per possible hypercall?
Aren't we abusing the Linux userspace ABI here? Standard userspace code would expect mmap() alone to be enough to map the memory. Yes, the current user, Xen itself, is adapted to make the two calls, but this breaks as soon as we want to use something that relies on the standard Linux userspace ABI.
I think you are still mixing up the hypercall buffers with the memory you want to map via the hypercall. At least the reference to kernel memory above suggests that.
Aren't the hypercall buffers all internal to the kernel/hypervisor interface or are you talking about the ioctl contents?
The hypercall buffers are filled by the Xen libraries in user mode. The ioctl() is really only a passthrough mechanism for doing hypercalls, as hypercalls are allowed only from the kernel. To avoid having to adapt the kernel driver for each new hypercall, all parameters of the hypercall, including the in-memory ones, are prepared by the Xen libraries and then handed to the hypervisor via the ioctl(). This allows existing kernels to be used with new Xen versions.
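A rough sketch of that passthrough, assuming the struct privcmd_hypercall layout from Linux's xen/privcmd.h; the do_xen_hypercall() wrapper is hypothetical:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <xen/privcmd.h>   /* struct privcmd_hypercall */

    /* The driver just forwards whatever the Xen libraries prepared,
     * so new hypercalls need no driver changes. */
    long do_xen_hypercall(int privcmd_fd, uint64_t op,
                          uint64_t a1, uint64_t a2, uint64_t a3)
    {
        struct privcmd_hypercall call = {
            .op  = op,               /* hypercall number */
            .arg = { a1, a2, a3 },   /* up to 5 args; memory arguments
                                      * must point at hypercall-safe
                                      * buffers like the one above */
        };
        return ioctl(privcmd_fd, IOCTL_PRIVCMD_HYPERCALL, &call);
    }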
Your decision to ignore the Xen libraries might backfire if a dom0-only hypercall changes in a new Xen version or even in a Xen update: as the Xen tools and the hypervisor are coupled, the updated Xen libraries will keep working with the new hypervisor, while your VMM will probably break unless you build it for each Xen version.
Juergen