On Thu, 22 Oct 2020, Julien Grall via Stratos-dev wrote:
On 22/10/2020 09:55, Masami Hiramatsu wrote:
On Thu, 22 Oct 2020 at 17:11, Julien Grall julien@xen.org wrote:
On 22/10/2020 08:09, Masami Hiramatsu wrote:
Hi, now I'm trying to use vhost-user-gpu.
I built QEMU and Xen and installed them under /usr/local/. Xen installed its QEMU in /usr/local/lib/xen/bin and QEMU itself was installed in /usr/local/bin, so the two do not conflict.
If I added the device_model_args= option to the domU.conf file, it didn't work (it failed to exec QEMU).
What is the path used for QEMU?
It should be /usr/local/lib/xen/bin/qemu-system-i386, because I replaced it with a wrapper shell script to record the parameters :)
With device_model_override, I used the locally installed /usr/local/bin/qemu-system-i386.
I also added device_model_override=, but that didn't change anything either. In both cases, though, it seems vhost-user-gpu was accessed by QEMU.
To clarify what arguments were passed to QEMU, I hacked /usr/local/lib/xen/bin/qemu-system-i386 and found that the arguments passed via device_model_args=[] were simply appended to the original arguments.
OK, now I'm trying to test qemu-system-i386 itself as below:
mhiramat@develbox:/opt/agl$ sudo /usr/local/lib/xen/bin/qemu-system-i386.bin -xen-domid 11 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -xen-attach -name agldemo -vnc none -display none -nographic -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on -numa node,memdev=mem -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu -device vhost-user-gpu-pci,chardev=vgpu -machine xenpv -m 513
I am a bit confused what you are trying to do. QEMU has to be started after the domain is created by the toolstack (aka xl).
Can you describe a bit more the steps you are using?
This command was executed on its own, without xl, because I wanted to check what error happens. With xl, I just got the errors below.
libxl: error: libxl_exec.c:434:spawn_middle_death: domain 14 device model [20073]: unexpectedly exited with exit status 0, when we were waiting for it to confirm startup
libxl: error: libxl_dm.c:3102:device_model_spawn_outcome: Domain 14:domain 14 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:3322:device_model_postconfig_done: Domain 14:Post DM startup configs failed, rc=-3
libxl: error: libxl_create.c:1836:domcreate_devmodel_started: Domain 14:device model did not start: -3
libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain 14:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain 14:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain 14:Destruction of domain failed
Is there any way to get QEMU's error output?
IIRC QEMU's logs should be in /var/log/xen/*. As you have a wrapper shell script, you may be able to redirect them somewhere else if you can't find them.
Replying to Masami. Before adding any additional parameters like "-device vhost-user-gpu-pci,chardev=vgpu", have you double-checked that you can start regular DomUs successfully? I.e. domU without any special device_model_args?
qemu-system-i386.bin: -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on: -mem-path not supported with Xen
If I use the QEMU shipped with Xen, it doesn't support the -mem-path option. Hmm, is memory backend support not enabled?
This is expected. QEMU only provides emulated/PV devices for a Xen guest. The memory management is done by Xen itself. When QEMU needs to access guest memory, it has to issue a hypercall.
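For a concrete picture, mapping guest memory from a dom0 process goes through libxenforeignmemory (which is what QEMU uses on Xen). A minimal, untested sketch; the domid and frame number are made-up example values:

    /* Sketch: map one page of guest memory from userspace via
     * libxenforeignmemory. Each mapping is backed by a hypercall,
     * not by a plain file like -mem-path would be. */
    #include <stdint.h>
    #include <sys/mman.h>          /* PROT_READ, PROT_WRITE */
    #include <xenctrl.h>           /* xen_pfn_t */
    #include <xenforeignmemory.h>

    int main(void)
    {
        uint32_t domid = 11;       /* example guest domain id */
        xen_pfn_t gfn = 0x80000;   /* example guest frame number */
        int err = 0;

        xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
        if (!fmem)
            return 1;

        void *p = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                       1 /* nr pages */, &gfn, &err);
        if (!p) {
            xenforeignmemory_close(fmem);
            return 1;
        }

        /* ... access the guest page through p ... */

        xenforeignmemory_unmap(fmem, p, 1);
        xenforeignmemory_close(fmem);
        return 0;
    }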
I see. That is the biggest difference from KVM. (I'm not sure whether Xen can overcommit memory for domains.)
Xen doesn't support overcommitting memory.
[...]
I'm using the Xen and QEMU master branches, with just a fix added for an uninstall error in Xen.
I am afraid this is not going to be enough to test virtio on Xen on Arm. For Xen, you would at least need [1].
[1] IOREQ feature (+ virtio-mmio) on Arm
Yes, exactly. Before you can get something like vhost-user-gpu-pci to work you need the following things:
1) ioreq infrastructure in Xen
2) enable ioreq infrastructure in QEMU (is already there, but unused on ARM)
3) add vhost-user-gpu-pci to QEMU
1) is covered by "IOREQ feature (+ virtio-mmio) on Arm", see https://marc.info/?l=xen-devel&m=160278030131796
2) no patch series has been posted to the list for this, but EPAM mentioned they have a series to enable ioreqs in QEMU for ARM, which they used for testing. As part of this, you might also have to change the machine type to something other than -M xenpv, because xenpv doesn't come with ioreqs or any virtio devices. (More on this below.)
3) I don't know if you can find a way to add the vhost-user-gpu-pci emulator to qemu-system-i386 (--target-list=i386-softmmu), and certainly you cannot easily add it to the xenpv machine because it doesn't come with a virtio transport. But this is something EPAM might have already done as part of their unpublished QEMU series.
For QEMU, I don't think you would be able to use qemu i386 because it is going to expose x86 devices.
You are most likely going to need to enable Xen for qemu arm and create a new machine that would match the Xen layout.
Just as a clarification, the issue is not so much that qemu i386 is "x86", because there is no x86 emulation being done. The issue is that QEMU comes with two machines we can use: "xenpv" and "xenfv".
xenpv is not tied to x86 in any way but also doesn't come with any emulated devices, only Xen PV devices. So unless you add a virtio bus transport to xenpv, it wouldn't work for vhost-user-gpu-pci.
xenfv is tied to x86 because it comes with a PIIX emulator. It would be easy to add Virtio devices to it, but we can't really use it on ARM.
Either way something needs to be changed. It is probably not a good idea to extend xenpv, because it is good as it is now, without any emulators. Which means that we need to introduce something like xenfv for ARM, maybe based on the aarch64 'virt' generic virtual platform. In which case, it also means that we need to use a different --target-list for QEMU. So far we only used --target-list=i386-softmmu on ARM and x86.
Hi,
On Fri, 23 Oct 2020 at 5:57, Stefano Stabellini stefano.stabellini@xilinx.com wrote:
Replying to Masami. Before adding any additional parameters like "-device vhost-user-gpu-pci,chardev=vgpu", have you double-checked that you can start regular DomUs successfully? I.e. domU without any special device_model_args?
Yes, I have checked that.
Just as a clarification, the issue is not so much that qemu i386 is "x86", because there is no x86 emulation being done. The issue is that QEMU comes with two machines we can use: "xenpv" and "xenfv".
So it's just for emulating the devices, i.e. something like trapping the I/O accesses or doing the Xen PV backend processing.
xenpv is not tied to x86 in any way but also doesn't come with any emulated devices, only Xen PV devices. So unless you add a virtio bus transport to xenpv, it wouldn't work for vhost-user-gpu-pci.
In that case, can the xenpv device model be ported to Arm? And do you mean we need to add a new virtio layer on top of the Xen PV devices?
xenfv is tied to x86 because it comes with a PIIX emulator. It would be easy to add Virtio devices to it, but we can't really use it on ARM.
Hm, so if qemu-arm had a standard DMAC and PIC for PCIe, we might be able to enable it on Arm too?
Either way something needs to be changed. It is probably not a good idea to extend xenpv, because it is good as it is now, without any emulators. Which means that we need to introduce something like xenfv for ARM, maybe based on the aarch64 'virt' generic virtual platform. In which case, it also means that we need to use a different --target-list for QEMU. So far we only used --target-list=i386-softmmu on ARM and x86.
OK, what features would xenfv need?
Thank you,
On Fri, 23 Oct 2020, Masami Hiramatsu wrote:
So it's just for emulating the devices, i.e. something like trapping the I/O accesses or doing the Xen PV backend processing.
Yes, xenpv and xenfv are both for devices only.
xenpv is not tied to x86 in any way but also doesn't come with any emulated devices, only Xen PV devices. So unless you add a virtio bus transport to xenpv, it wouldn't work for vhost-user-gpu-pci.
In that case, can the xenpv device model be ported to Arm?
Yes, xenpv can be ported to arm, and in fact it already works today. For example, at Xilinx we are using it regularly to provide the 9pfs Xen PV backend. See hw/xenpv/xen_machine_pv.c.
And do you mean we need to add a new virtio layer on top of the Xen PV devices?
Not exactly. If you look at hw/xenpv/xen_machine_pv.c, you'll see that the xenpv machine doesn't come with any devices other than Xen PV backends. There is no virtio, no PCI, no PIIX3, nothing. We would have to add virtio-mmio or virtio-pci emulation to the xenpv machine.
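To give an idea of the scale of that change, instantiating a virtio-mmio transport in a machine's init function looks roughly like this (a sketch modelled on hw/arm/virt.c; the base address and the way the IRQ is obtained are placeholders, not a real xenpv layout):

    /* Sketch: adding one virtio-mmio transport to a machine init.
     * Virtio devices can then plug into its virtio bus. */
    #include "qemu/osdep.h"
    #include "hw/irq.h"
    #include "hw/sysbus.h"
    #include "hw/virtio/virtio-mmio.h"

    #define EXAMPLE_VIRTIO_MMIO_BASE 0x02000000ULL  /* hypothetical base */

    static void example_create_virtio_mmio(qemu_irq irq)
    {
        sysbus_create_simple(TYPE_VIRTIO_MMIO, EXAMPLE_VIRTIO_MMIO_BASE, irq);
    }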
The xenpv machine is close to what we need, but unfortunately it is tied to the x86 build today, so it can only be enabled as part of the qemu-system-i386/x86_64 build. Ideally xenpv would be arch-neutral, or at least could be enabled for aarch64-softmmu.
xenfv is tied to x86 because it comes with a PIIX emulator. It would be easy to add Virtio devices to it, but we can't really use it on ARM.
Hm, so if qemu-arm had a standard DMAC and PIC for PCIe, we might be able to enable it on Arm too?
There are two ways QEMU can get I/O requests for emulation:
1. via the Xen PV interfaces (xenstore, Xen shared rings, etc.)
2. via ioreqs, which are based on MMIO traps; this is what virtio needs.
The xenpv machine only uses 1 today. The xenfv machine has both. When a guest traps on a given MMIO address, Xen forwards the I/O request as an ioreq to QEMU, which emulates the access.
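For reference, the QEMU-side registration for path 2 goes through libxendevicemodel; a rough, untested sketch (error handling trimmed, and the MMIO range is a made-up placeholder):

    /* Sketch: create an ioreq server and claim an MMIO range so that
     * guest accesses to it are forwarded to this device model. */
    #include <stdint.h>
    #include <xendevicemodel.h>

    static int example_register_ioreq_server(uint32_t domid)
    {
        ioservid_t id;
        xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
        if (!dmod)
            return -1;

        /* Ask Xen to create an ioreq server for this domain
         * (0 = no buffered ioreq page). */
        xendevicemodel_create_ioreq_server(dmod, domid, 0, &id);

        /* Tell Xen which guest-physical range should trap to us. */
        xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
                                                    1 /* MMIO */,
                                                    0x02000000, 0x02000fff);

        /* Start receiving requests. */
        xendevicemodel_set_ioreq_server_state(dmod, domid, id, 1);
        return 0;
    }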
One key thing about this is: not everything enabled in the QEMU build is necessarily used. If QEMU is built with a DMAC emulator, unless Xen forwards corresponding I/O requests to QEMU, QEMU will never run anything related to the DMAC.
So technically, we could use qemu-system-i386 because the x86 part is never used. PIIX emulation should never come into play. But it is a bad idea to have such a mismatch between QEMU's view of the machine and Xen's view of the machine. Ideally they should be aligned. And it becomes more problematic for things like interrupt-injection. QEMU's view of the virtual interrupt controller should be the same as Xen's.
The guest traps on a given memory access, Xen realizes it is virtio-foobar, and sends the related ioreq to QEMU, which has the virtio-foobar emulator. If QEMU also has another emulator for a device which is not actually used (e.g. a DMAC), it is not the end of the world, but we should try to avoid it.
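Schematically, handling one forwarded request in the device model looks like this (the ioreq fields come from Xen's public ioreq.h; the virtio_foobar_* helpers are hypothetical stand-ins for the actual emulator):

    /* Sketch: decode one forwarded request and run the matching emulator. */
    #include <stdint.h>
    #include <xen/hvm/ioreq.h>

    uint64_t virtio_foobar_read(uint64_t addr, uint32_t size);            /* hypothetical */
    void virtio_foobar_write(uint64_t addr, uint32_t size, uint64_t val); /* hypothetical */

    static void example_handle_ioreq(ioreq_t *req)
    {
        if (req->type != IOREQ_TYPE_COPY)    /* MMIO accesses */
            return;                          /* e.g. port I/O, only used on x86 */

        if (req->dir == IOREQ_READ) {
            /* The emulator supplies the value the guest will read. */
            req->data = virtio_foobar_read(req->addr, req->size);
        } else {
            /* IOREQ_WRITE: the emulator consumes the guest's value. */
            virtio_foobar_write(req->addr, req->size, req->data);
        }
        /* The device model then marks the request done and notifies Xen
         * through the event channel the request arrived on. */
    }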
Thus, I was suggesting to use the virt machine in QEMU because it is fairly limited in scope and made for ARM64. It should be a close match to Xen's view of the virtual machine.
Either way something needs to be changed. It is probably not a good idea to extend xenpv, because it is good as it is now, without any emulators. Which means that we need to introduce something like xenfv for ARM, maybe based on the aarch64 'virt' generic virtual platform. In which case, it also means that we need to use a different --target-list for QEMU. So far we only used --target-list=i386-softmmu on ARM and x86.
OK, what features would xenfv need?
As a start, it would only need virtio-mmio and virtio devices such as net, block, serial and gpu.
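In QEMU terms that would mean registering a new machine type, along these lines (a sketch only; the "xenvirt" name, description and limits are purely illustrative):

    /* Sketch: registering a hypothetical "xenvirt" machine for Arm. Its init
     * function would create the virtio-mmio transports and hook QEMU up as an
     * ioreq server, as sketched earlier in the thread. */
    #include "qemu/osdep.h"
    #include "hw/boards.h"

    static void xenvirt_init(MachineState *machine)
    {
        /* create virtio-mmio transports, register with Xen as an ioreq
         * server, instantiate the requested virtio devices ... */
    }

    static void xenvirt_machine_class_init(MachineClass *mc)
    {
        mc->desc = "Xen Arm device-model machine (illustrative)";
        mc->init = xenvirt_init;
        mc->max_cpus = 1;   /* vCPUs are managed by Xen, not by QEMU */
    }

    DEFINE_MACHINE("xenvirt", xenvirt_machine_class_init)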