On Thu, 22 Oct 2020, Julien Grall via Stratos-dev wrote:
On 22/10/2020 09:55, Masami Hiramatsu wrote:
2020年10月22日(木) 17:11 Julien Grall julien@xen.org:
On 22/10/2020 08:09, Masami Hiramatsu wrote:
Hi, now I'm trying to use vhost-user-gpu.
I built qemu and xen and installed them under /usr/local/. Xen installed its qemu in /usr/local/lib/xen/bin and Qemu was installed in /usr/local/bin, so the two do not conflict.
When I added the device_model_args= option to the domU.conf file, it didn't work (it failed to exec qemu).
What is the path used for QEMU?
It should be /usr/local/lib/xen/bin/qemu-system-i386, because I replaced it with a wrapper shell script to record the parameters :)
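Roughly, the wrapper looks like this (the log path is arbitrary, it just has to be somewhere I can read back later; the real binary was renamed to qemu-system-i386.bin):

    #!/bin/sh
    # Wrapper installed in place of /usr/local/lib/xen/bin/qemu-system-i386.
    # Record the arguments, then exec the renamed real binary.
    echo "$0 $*" >> /tmp/qemu-wrapper-args.log
    exec /usr/local/lib/xen/bin/qemu-system-i386.bin "$@"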
With the device_model_override, I used locally installed /usr/local/bin/qemu-system-i386.
I also added device_model_override= but that didn't change anything either. In both cases, though, it seems vhost-user-gpu was accessed by qemu.
To clarify what arguments were passed to qemu, I hacked /usr/local/lib/xen/bin/qemu-system-i386 and found that the arguments given via device_model_args=[] were simply appended to the original arguments.
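For reference, the relevant part of my domU.conf is along these lines (the extra arguments are the same vhost-user-gpu ones that show up in the command line quoted below; the socket path is from my local setup):

    device_model_override = "/usr/local/bin/qemu-system-i386"
    device_model_args = [
        "-object", "memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on",
        "-numa", "node,memdev=mem",
        "-chardev", "socket,path=/opt/agl/vgpu.sock,id=vgpu",
        "-device", "vhost-user-gpu-pci,chardev=vgpu"
    ]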
OK, now I'm trying to test the qemu-system-i386 itself as below
mhiramat@develbox:/opt/agl$ sudo /usr/local/lib/xen/bin/qemu-system-i386.bin -xen-domid 11 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -xen-attach -name agldemo -vnc none -display none -nographic -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on -numa node,memdev=mem -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu -device vhost-user-gpu-pci,chardev=vgpu -machine xenpv -m 513
I am a bit confused what you are trying to do. QEMU has to be started after the domain is created by the toolstack (aka xl).
Can you describe a bit more the steps you are using?
This command is executed standalone, without xl, because I would like to see what error happens. With xl, I just got the errors below.
libxl: error: libxl_exec.c:434:spawn_middle_death: domain 14 device model [20073]: unexpectedly exited with exit status 0, when we were waiting for it to confirm startup
libxl: error: libxl_dm.c:3102:device_model_spawn_outcome: Domain 14:domain 14 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:3322:device_model_postconfig_done: Domain 14:Post DM startup configs failed, rc=-3
libxl: error: libxl_create.c:1836:domcreate_devmodel_started: Domain 14:device model did not start: -3
libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain 14:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain 14:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain 14:Destruction of domain failed
Is there any way to get the qemu's error output?
IIRC QEMU's logs should be in /var/log/xen/*. As you have a wrapper shell script, you may be able to redirect them somewhere else if you can't find them.
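For example, your wrapper could do something like this so that QEMU's own stdout/stderr get captured as well (the log path is just an example):

    exec /usr/local/lib/xen/bin/qemu-system-i386.bin "$@" >> /tmp/qemu-output.log 2>&1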
Replying to Masami. Before adding any additional parameters like "-device vhost-user-gpu-pci,chardev=vgpu", have you double-checked that you can start regular DomUs successfully? I.e. domU without any special device_model_args?
qemu-system-i386.bin: -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on: -mem-path not supported with Xen
If I use the qemu shipped with Xen, it doesn't support the -mem-path option. Hmm, is memory backend support not enabled?
This is expected. QEMU only provides emulated/PV devices for Xen guests. The memory management is done by Xen itself. When QEMU needs to access guest memory, it has to issue a hypercall.
I got it. That is the biggest difference from KVM. (I'm not sure whether Xen can overcommit memory for the domains.)
Xen doesn't support overcommitting memory.
[...]
I'm using the Xen and Qemu master branches, with just a fix added for an uninstall error in Xen.
I am afraid this is not going to be enough to test virtio on Xen on Arm. For Xen, you would at least need [1].
[1] IOREQ feature (+ virtio-mmio) on Arm
Yes, exactly. Before you can get something like vhost-user-gpu-pci to work you need the following things:
1) ioreq infrastructure in Xen
2) enable ioreq infrastructure in QEMU (it is already there, but unused on ARM)
3) add vhost-user-gpu-pci to QEMU
1) is covered by "IOREQ feature (+ virtio-mmio) on Arm", see https://marc.info/?l=xen-devel&m=160278030131796
2) no patch series has been posted to the list for this yet, but EPAM mentioned they have a series to enable ioreqs in QEMU for ARM, which they used for testing. As part of this, you might also have to change the machine type to something other than -M xenpv, because xenpv doesn't come with ioreqs or any virtio devices. (More on this below.)
3) I don't know if you can find a way to add the vhost-user-gpu-pci emulator to qemu-system-i386 (--target-list=i386-softmmu), and certainly you cannot easily add it to the xenpv machine because it doesn't come with a virtio transport. But this is something EPAM might have already done as part of their unpublished QEMU series.
For QEMU, I don't think you would be able to use qemu i386 because it is going to expose x86 devices.
You are most likely going to need to enable Xen for qemu arm and create a new machine that matches the Xen layout.
Just as a clarification, the issue is not much that qemu i386 is "x86", because there is no x86 emulation being done. The issue is that QEMU comes with two machines we can use: "xenpv" and "xenfv".
xenpv is not tied to x86 in any way but also doesn't come with any emulated devices, only Xen PV devices. So unless you add a virtio bus transport to xenpv, it wouldn't work for vhost-user-gpu-pci.
xenfv is tied to x86 because it comes with a PIIX emulator. It would be easy to add Virtio devices to it, but we can't really use it on ARM.
Either way, something needs to change. It is probably not a good idea to extend xenpv, because it is good as it is now, without any emulators. That means we need to introduce something like xenfv for ARM, maybe based on the aarch64 'virt' generic virtual platform. In that case, we would also need to use a different --target-list for QEMU; so far we have only used --target-list=i386-softmmu on both ARM and x86.
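To make the difference concrete, the build/run side would change roughly as follows (the machine name "xen-virt" is made up and none of this exists today; it is only a sketch of where we would need to get to):

    # what we build and run today, on both ARM and x86:
    ./configure --target-list=i386-softmmu
    qemu-system-i386 -machine xenpv ...

    # what a virtio-capable setup on ARM might look like:
    ./configure --target-list=aarch64-softmmu
    qemu-system-aarch64 -machine xen-virt ...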