Julien Grall julien@xen.org writes:
Hi Alex,
On 22/10/2020 10:46, Alex Bennée wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Julien,
On Thu, 22 Oct 2020 at 17:11, Julien Grall julien@xen.org wrote:
On 22/10/2020 08:09, Masami Hiramatsu wrote:
Hi,
Hi,
Now I'm trying to use vhost-user-gpu.
I built QEMU and Xen and installed them under /usr/local/. Xen installed its QEMU in /usr/local/lib/xen/bin and my QEMU was installed in /usr/local/bin, so they do not conflict.
When I added a device_model_args= option to the domU config file, it didn't work (it failed to exec QEMU).
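For reference, a sketch of the relevant domU config additions (the chardev/device arguments mirror the QEMU command line further down; the socket path is from my setup):

```
# Sketch of the xl guest config lines used to pass extra QEMU arguments:
device_model_args = [
    "-chardev", "socket,path=/opt/agl/vgpu.sock,id=vgpu",
    "-device",  "vhost-user-gpu-pci,chardev=vgpu",
]
```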
What is the path used for QEMU?
It should be /usr/local/lib/xen/bin/qemu-system-i386, because I replaced it with a wrapper shell script to record the parameters :)
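In case it is useful to others, the wrapper trick can be sketched like this. It is not my exact script: /bin/true stands in for the renamed qemu-system-i386.bin so the sketch runs anywhere, and the log paths under /tmp are arbitrary:

```shell
# Sketch of the wrapper trick: rename the real device model to
# qemu-system-i386.bin and install a script like this in its place.
# /bin/true is a stand-in for the real binary so this can be run anywhere.
rm -f /tmp/qemu-dm-args.log
cat > /tmp/qemu-system-i386 <<'EOF'
#!/bin/sh
printf '%s\n' "$*" >> /tmp/qemu-dm-args.log      # record libxl's arguments
exec /bin/true "$@" 2>> /tmp/qemu-dm-stderr.log  # run real binary, keep stderr
EOF
chmod +x /tmp/qemu-system-i386

# Exercise the wrapper the way libxl would invoke the device model:
/tmp/qemu-system-i386 -xen-domid 11 -nographic
```

In the real setup the script replaces /usr/local/lib/xen/bin/qemu-system-i386 and execs the renamed qemu-system-i386.bin; the stderr redirect is also one way to capture QEMU's error output.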
With device_model_override, I used the locally installed /usr/local/bin/qemu-system-i386.
I also tried adding device_model_override=, but that didn't change anything either. In both cases, though, vhost-user-gpu did get a connection from QEMU.
To see exactly which arguments were passed to QEMU, I hacked /usr/local/lib/xen/bin/qemu-system-i386 and found that the arguments given via device_model_args=[] were simply appended to the original arguments.
OK, so now I'm testing qemu-system-i386 itself directly, as below:
mhiramat@develbox:/opt/agl$ sudo /usr/local/lib/xen/bin/qemu-system-i386.bin \
  -xen-domid 11 -no-shutdown \
  -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait \
  -mon chardev=libxl-cmd,mode=control \
  -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait \
  -mon chardev=libxenstat-cmd,mode=control \
  -nodefaults -no-user-config -xen-attach -name agldemo \
  -vnc none -display none -nographic \
  -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu \
  -machine xenpv -m 513
I am a bit confused about what you are trying to do. QEMU has to be started after the domain is created by the toolstack (aka xl).
Can you describe a bit more the steps you are using?
This command was executed on its own, without xl, because I wanted to see what error happens. With xl, I just got the errors below.
libxl: error: libxl_exec.c:434:spawn_middle_death: domain 14 device model [20073]: unexpectedly exited with exit status 0, when we were waiting for it to confirm startup
libxl: error: libxl_dm.c:3102:device_model_spawn_outcome: Domain 14:domain 14 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:3322:device_model_postconfig_done: Domain 14:Post DM startup configs failed, rc=-3
libxl: error: libxl_create.c:1836:domcreate_devmodel_started: Domain 14:device model did not start: -3
libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain 14:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain 14:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain 14:Destruction of domain failed
Is there any way to get QEMU's error output?
qemu-system-i386.bin: -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on: -mem-path not supported with Xen
If I use Xen's QEMU, it doesn't support the -mem-path option. Hmm, is memory backend support not enabled?
This is expected. QEMU only provides emulated/PV devices for Xen guests; the memory management is done by Xen itself. When QEMU needs to access the guest memory, it has to issue a hypercall.
I got it; that is the biggest difference from KVM. (I'm not sure whether Xen can overcommit memory for its domains.)
Alex, this might be a barrier for vhost-user-gpu. Can I drop this memory-backend part?
Any virtio implementation currently requires the backend to have full access to the guest's memory space, because the memory referenced in the queues could be anywhere in the guest's address space. Passing QEMU's view of that memory to the external vhost-user device all happens in the Dom0 domain, using the standard Linux/POSIX memory-sharing APIs. I would hope that once Xen has granted access to Dom0, passing that around shouldn't be a problem.
Dom0 already has full permission to map the memory of a given guest. However, the pages are not mapped by default.
I don't know much about vhost-user and its interaction with QEMU. Could you give a bit more detail?
The part I am most interested in is how the I/O is handled. Is it received by QEMU and then forwarded to the vhost-user daemon?
It can be - in a pure TCG setup, MMIO accesses in QEMU trigger an eventfd signal that is sent to the vhost-user process. In the Xen case, if accesses trap and result in an MMIO access being processed in QEMU, it can forward them the same way.
In the KVM case, the doorbell write is picked up in the host kernel and forwarded directly to the eventfd the vhost-user daemon is listening on.
[...]
Hmm, Alex, have you ever tried to boot up a DomU with virtio-pci?
No - I think Stefano said PCI support requires some additional work.
That's correct. You would need to implement a hostbridge.
If you are using only one device model to emulate all the devices, you could emulate the hostbridge in QEMU. Otherwise, you would need to emulate it in Xen.
However, there are non-PCI equivalent MMIO devices (vhost-user-gpu and virtio-gpu-device) which sit on the virtio bus. The details of their mappings are passed to the guest via the FDT.
I would suggest using the MMIO one for now. That should reduce the amount of work needed to get it working.
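Presumably the QEMU side for the MMIO route would then use the transport-agnostic device rather than the -pci variant, something like the following fragment (device name as in upstream QEMU; socket path reused from the command above, the rest of the command line elided):

```
-chardev socket,path=/opt/agl/vgpu.sock,id=vgpu \
-device vhost-user-gpu,chardev=vgpu
```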
Cheers,