Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Julien,
On Thu, 22 Oct 2020 at 17:11, Julien Grall julien@xen.org wrote:
On 22/10/2020 08:09, Masami Hiramatsu wrote:
Hi,
Hi,
Now I'm trying to use vhost-user-gpu.
I built qemu and xen and installed them under /usr/local/. Xen installed its qemu in /usr/local/lib/xen/bin and qemu was installed in /usr/local/bin, so they do not conflict.
When I added the device_model_args= option to the domU.conf file, it didn't work (it failed to exec qemu).
What is the path used for QEMU?
It should be /usr/local/lib/xen/bin/qemu-system-i386, because I replaced it with a wrapper shell script to record the parameters :)
With device_model_override=, I used the locally installed /usr/local/bin/qemu-system-i386.
I also added device_model_override=, but that didn't change anything either. In both cases, though, it seems vhost-user-gpu was accessed by qemu.
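For reference, the relevant fragment of my domU.conf is roughly the following (the override path and the vgpu socket match the qemu command lines below):

device_model_override = "/usr/local/bin/qemu-system-i386"
device_model_args = [
    "-object", "memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on",
    "-numa", "node,memdev=mem",
    "-chardev", "socket,path=/opt/agl/vgpu.sock,id=vgpu",
    "-device", "vhost-user-gpu-pci,chardev=vgpu"
]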
To clarify what arguments were passed to qemu, I hacked /usr/local/lib/xen/bin/qemu-system-i386 and found that the arguments passed via device_model_args=[] were simply appended to the original arguments.
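The hack is just a trivial wrapper: I renamed the real binary to qemu-system-i386.bin and put a script like this in its place (the log path is arbitrary):

#!/bin/sh
# Record how libxl invoked the device model, then run the real binary.
echo "$0 $*" >> /tmp/qemu-dm-args.log
exec /usr/local/lib/xen/bin/qemu-system-i386.bin "$@"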
OK, so now I'm trying to test qemu-system-i386 itself, as below:
mhiramat@develbox:/opt/agl$ sudo /usr/local/lib/xen/bin/qemu-system-i386.bin \
    -xen-domid 11 -no-shutdown \
    -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait \
    -mon chardev=libxl-cmd,mode=control \
    -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait \
    -mon chardev=libxenstat-cmd,mode=control \
    -nodefaults -no-user-config -xen-attach -name agldemo \
    -vnc none -display none -nographic \
    -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu \
    -device vhost-user-gpu-pci,chardev=vgpu \
    -machine xenpv -m 513
I am a bit confused about what you are trying to do. QEMU has to be started after the domain is created by the toolstack (aka xl).
Can you describe a bit more the steps you are using?
This command is executed on its own, without xl, because I would like to check what error happens. With xl, I just got the errors below.
libxl: error: libxl_exec.c:434:spawn_middle_death: domain 14 device model [20073]: unexpectedly exited with exit status 0, when we were waiting for it to confirm startup
libxl: error: libxl_dm.c:3102:device_model_spawn_outcome: Domain 14:domain 14 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:3322:device_model_postconfig_done: Domain 14:Post DM startup configs failed, rc=-3
libxl: error: libxl_create.c:1836:domcreate_devmodel_started: Domain 14:device model did not start: -3
libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain 14:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain 14:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain 14:Destruction of domain failed
Is there any way to get qemu's error output?
qemu-system-i386.bin: -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on: -mem-path not supported with Xen
If I use the qemu installed by xen, it doesn't support the -mem-path option. Hmm, is memory backend support not enabled?
This is expected. QEMU only provides emulated/PV devices for a Xen guest; the memory management is done by Xen itself. When QEMU needs to access the guest memory, it has to issue a hypercall.
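For illustration, a dom0 process maps a guest frame roughly like this, using libxenforeignmemory (a minimal sketch, not QEMU's actual code; error handling omitted and the gfn is illustrative):

#include <stdint.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>

/* Map one frame of the guest read/write into this process's address
 * space (a hypercall via privcmd under the hood), instead of
 * mmap()ing a -mem-path file as on KVM. */
void *map_guest_frame(uint32_t domid, xen_pfn_t gfn)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    return xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                1 /* nr of frames */, &gfn, NULL);
}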
I got it. That is the biggest difference from KVM. (I'm not sure whether Xen can overcommit memory for the domains.)
Alex, this might be a barrier for vhost-user-gpu; can I remove this memory backend part?
Any virtio implementation currently requires the backend to have full access to the guest's memory space, as the memory referenced in the queue could live anywhere in the guest's address space. Passing the view of that memory from QEMU itself to the external vhost-user device all happens within the Dom0 domain, using the standard Linux/POSIX memory-sharing APIs. I would hope that once Xen has granted Dom0 access, passing it around shouldn't be a problem.
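Concretely, that sharing boils down to handing a memory fd across the vhost-user control socket, roughly like this (an illustrative sketch, not QEMU's code; error handling omitted):

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Create an fd-backed RAM region and send the fd to the backend over
 * a Unix socket as SCM_RIGHTS ancillary data; the backend can then
 * mmap() the very same memory. */
int share_region_fd(int sock, size_t size)
{
    int fd = memfd_create("guest-ram", 0);
    ftruncate(fd, size);

    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))] = { 0 };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));

    return sendmsg(sock, &msg, 0);
}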
However, we could use the inbuilt virtio-gpu-device, although AFAIK this can't do any of the virgl offloading. But let's start with small steps ;-)
mhiramat@develbox:/opt/agl$ sudo /usr/local/bin/qemu-system-i386 \
    -xen-domid 11 -no-shutdown \
    -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait \
    -mon chardev=libxl-cmd,mode=control \
    -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait \
    -mon chardev=libxenstat-cmd,mode=control \
    -nodefaults -no-user-config -xen-attach -name agldemo \
    -vnc none -display none -nographic \
    -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu \
    -device vhost-user-gpu-pci,chardev=vgpu \
    -machine xenpv -m 513
qemu-system-i386: -xen-domid 11: Option not supported for this target
The qemu which I built doesn't seem to support xen... but I'm sure the qemu was configured to support xen, as below
Your command line seems to contain conflicting information. AFAIK, vhost-user-gpu-pci is based on Virtio PCI, but you are specifying the machine 'xenpv', which will only provide Xen PV backend support.
Virtio PCI is not supported out of the box on Xen on Arm. There are modifications required in both Xen and QEMU.
Hmm, Alex, have you ever tried to boot up a DomU with virtio-pci?
No - I think Stefano said PCI support requires some additional work. However, there are non-PCI equivalent MMIO devices (vhost-user-gpu and virtio-gpu-device) which sit on the virtio bus. The details of their mappings are passed to the guest via FDT.
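(The guest-visible node for such a transport follows the standard "virtio,mmio" binding, something like the below; the address and interrupt are purely illustrative:)

virtio@2000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x02000000 0x0 0x200>;
        interrupts = <0 42 4>;
};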
I am aware of a series to provide the infrastructure in Xen for Virtio MMIO but not Virtio PCI. I am not aware of any patch series for QEMU.
Could you provide more information on your setup? Are you using upstream or modified Xen/QEMU?
I'm using the Xen and Qemu master branches, with just a fix added for an uninstall error on Xen:
diff --git a/tools/vchan/Makefile b/tools/vchan/Makefile
index a731e0e073..4a42514c45 100644
--- a/tools/vchan/Makefile
+++ b/tools/vchan/Makefile
@@ -28,6 +28,10 @@ install: all
 	$(INSTALL_DIR) $(DESTDIR)$(bindir)
 	$(INSTALL_PROG) vchan-socket-proxy $(DESTDIR)$(bindir)
 
+.PHONY: uninstall
+uninstall:
+	$(RM) $(DESTDIR)$(bindir)/vchan-socket-proxy
+
 .PHONY: clean
 clean:
 	$(RM) -f *.o vchan-node1 vchan-node2 $(DEPS_RM)
Thank you,