On Fri, 16 Oct 2020, Alex Bennée via Stratos-dev wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Alex,
On Fri, 16 Oct 2020 at 2:01, Alex Bennée alex.bennee@linaro.org wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi,
I've succeeded in getting X.org running on Dom0. It seems that Xorg's nouveau driver caused a SIGBUS issue. A custom nouveau kernel driver + the Xorg fbdev driver seems stable. (Even if it stops working again, I'll try a USB-HDMI adaptor next time.)
So, I would like to test virtio-video as the next step. Alex, how can I help you to test it?
In one window you need the vhost-user gpu daemon:
./vhost-user-gpu --socket-path=vgpu.sock -v
Hmm, I couldn't find vhost-user-gpu (I've installed xen tools under /usr/local, but I can not find vhost* tools)
The vhost-user-gpu tool is part of the QEMU source tree (contrib/vhost-user-gpu).
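In case you haven't built it yet: it gets built alongside QEMU itself, so a rough sketch would be the following (the configure flags and output path are illustrative and may vary with your QEMU version; the GPU daemon also needs the virglrenderer and pixman development packages installed):

# from inside a QEMU source checkout - an illustrative sketch, not exact instructions
mkdir -p build && cd build
../configure --target-list=aarch64-softmmu
make -j$(nproc)
# if the dependencies were found, the daemon should appear in the build tree:
./contrib/vhost-user-gpu/vhost-user-gpu --socket-path=vgpu.sock -v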
and then on the QEMU command line you need the memory sharing and the socket connection as well as the device:
$QEMU $ARGS \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu
I'm using the xl command (xl create -dc CONFIGFILE) to boot up the guest domain. Can I boot it up via QEMU too?
Hmm, so this is where we might need some extra tooling. I'm not sure how QEMU gets invoked by the xl tooling, but QEMU for Xen is a fairly different beast from the normal hypervisor interaction: rather than handling vmexits from the hypervisor, it just gets commands via the Xen control interface to service emulation and I/O requests.
The above QEMU commands ensure that:
- the guest memory space is shared with the vhost-user-gpu daemon
- a control path is wired up so vhost-user messages can be sent during setup (initialise the device etc.)
- the same socket path is fed eventfd messages as the guest triggers virtio events. These can all come from QEMU if the kernel isn't translating MMIO accesses to the virtqueues into eventfd events.
Stefano, how does the xl tooling invoke QEMU, and can the command line be modified?
Yes, it can, in a couple of different ways. You can add arguments to the QEMU command line by specifying:
device_model_args=["arg1", "arg2", "etc."]
in the vm config file. You can also choose a different QEMU binary to run with:
device_model_override="/path/to/your/qemu"
You can use it to point at your own QEMU wrapper script that adds additional arguments before calling the actual QEMU binary (by default it is /usr/lib/xen/bin/qemu-system-i386, even on ARM).
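As a concrete (untested) sketch of that approach - the wrapper path is just a placeholder, and the appended options are the vhost-user-gpu ones from above:

#!/bin/sh
# /usr/local/bin/qemu-wrapper (hypothetical): pass through whatever xl gives us
# and append the vhost-user-gpu related arguments before execing the real QEMU.
exec /usr/lib/xen/bin/qemu-system-i386 "$@" \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,path=vgpu.sock,id=vgpu \
    -device vhost-user-gpu-pci,chardev=vgpu

and then in the VM config file:

device_model_override="/usr/local/bin/qemu-wrapper"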
Hi Alex and Stefano,
On Tue, 20 Oct 2020 at 2:14, Stefano Stabellini stefano.stabellini@xilinx.com wrote:
On Fri, 16 Oct 2020, Alex Bennée via Stratos-dev wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Alex,
On Fri, 16 Oct 2020 at 2:01, Alex Bennée alex.bennee@linaro.org wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi,
I've succeeded in getting X.org running on Dom0. It seems that Xorg's nouveau driver caused a SIGBUS issue. A custom nouveau kernel driver + the Xorg fbdev driver seems stable. (Even if it stops working again, I'll try a USB-HDMI adaptor next time.)
So, I would like to test virtio-video as the next step. Alex, how can I help you to test it?
In one window you need the vhost-user gpu daemon:
./vhost-user-gpu --socket-path=vgpu.sock -v
Hmm, I couldn't find vhost-user-gpu (I've installed xen tools under /usr/local, but I can not find vhost* tools)
The vhost-user-gpu tool is part of the QEMU source tree (contrib/vhost-user-gpu).
and then on the QEMU command line you need the memory sharing and the socket connection as well as the device:
$QEMU $ARGS \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu
BTW, what "$ARGS" do you usually pass? And should I use /dev/shm as a memory backend? (Is that mandatory for vhost-user-gpu?)
I'm using the xl command (xl create -dc CONFIGFILE) to boot up the guest domain. Can I boot it up via QEMU too?
Hmm, so this is where we might need some extra tooling. I'm not sure how QEMU gets invoked by the xl tooling, but QEMU for Xen is a fairly different beast from the normal hypervisor interaction: rather than handling vmexits from the hypervisor, it just gets commands via the Xen control interface to service emulation and I/O requests.
The above QEMU commands ensure that:
- the guest memory space is shared with the vhost-user-gpu daemon
- a control path is wired up so vhost-user messages can be sent during setup (initialise the device etc.)
- the same socket path is fed eventfd messages as the guest triggers virtio events. These can all come from QEMU if the kernel isn't translating MMIO accesses to the virtqueues into eventfd events.
Stefano, how does the xl tooling invoke QEMU, and can the command line be modified?
Yes, it can, in a couple of different ways. You can add arguments to the QEMU command line by specifying:
device_model_args=["arg1", "arg2", "etc."]
OK, so for the above command, we need:

device_model_args=[ ...,
  "-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on",
  "-numa node,memdev=mem",
  "-chardev socket,path=vgpu.sock,id=vgpu",
  "-device vhost-user-gpu-pci,chardev=vgpu" ]

(the ... is a placeholder for $ARGS)
in the vm config file. You can also choose a different QEMU binary to run with:
device_model_override="/path/to/your/qemu"
You can use it to point at your own QEMU wrapper script that adds additional arguments before calling the actual QEMU binary (by default it is /usr/lib/xen/bin/qemu-system-i386, even on ARM).
Should I use qemu-system-i386 even if I rebuild qemu outside Xen?
Thank you,
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Alex and Stefano,
On Tue, 20 Oct 2020 at 2:14, Stefano Stabellini stefano.stabellini@xilinx.com wrote:
On Fri, 16 Oct 2020, Alex Bennée via Stratos-dev wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Alex,
On Fri, 16 Oct 2020 at 2:01, Alex Bennée alex.bennee@linaro.org wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi,
I've succeeded in getting X.org running on Dom0. It seems that Xorg's nouveau driver caused a SIGBUS issue. A custom nouveau kernel driver + the Xorg fbdev driver seems stable. (Even if it stops working again, I'll try a USB-HDMI adaptor next time.)
So, I would like to test virtio-video as the next step. Alex, how can I help you to test it?
In one window you need the vhost-user gpu daemon:
./vhost-user-gpu --socket-path=vgpu.sock -v
Hmm, I couldn't find vhost-user-gpu (I've installed xen tools under /usr/local, but I can not find vhost* tools)
The vhost-user-gpu tool is part of the QEMU source tree (contrib/vhost-user-gpu).
and then on the QEMU command line you need the memory sharing and the socket connection as well as the device:
$QEMU $ARGS \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu
BTW, what "$ARGS" do you usually pass?
My current command line for the AGL demo is:
./aarch64-softmmu/qemu-system-aarch64 \
  -cpu cortex-a57 \  # for TCG, for real virt use -cpu host
  -machine type=virt,virtualization=on,gic-version=3 \  # again GIC for TCG only
  -serial mon:stdio \
  -netdev user,id=unet,hostfwd=tcp::2222-:22 \
  -device virtio-net-device,netdev=unet,id=virt-net \
  -drive id=disk0,file=$HOME/images/agl/agl-demo-platform-crosssdk-qemuarm64.ext4,if=none,format=raw \
  -device virtio-blk-device,drive=disk0 \
  -soundhw hda \
  -device VGA,edid=on,xmax=1024,ymax=768 \
  -device qemu-xhci \
  -device usb-tablet \
  -device usb-kbd \
  -object rng-random,filename=/dev/urandom,id=rng0 \
  -device virtio-rng-pci,rng=rng0 \
  -device virtio-serial-device \
  -chardev null,id=virtcon \
  -device virtconsole,chardev=virtcon \
  -m 4096 \
  -kernel ~/images/agl/Image \
  -append "console=ttyAMA0 root=/dev/vda" \
  -display gtk,show-cursor=on \
  -smp 4 \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu
My "normal" Debian test image is a bit simpler and uses virtio-scsi and PCI devices and stores the block device on an LVM volume group:
./aarch64-softmmu/qemu-system-aarch64 \
  -cpu max -machine type=virt,virtualization=on \
  -display none \
  -serial mon:stdio \
  -netdev user,id=unet,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=unet,id=virt-net,disable-legacy=on \
  -device virtio-scsi-pci,id=virt-scsi,disable-legacy=on \
  -blockdev driver=raw,node-name=hd,discard=unmap,file.driver=host_device,file.filename=/dev/zen-disk/debian-buster-arm64 \
  -device scsi-hd,drive=hd,id=virt-scsi-hd \
  -kernel ~/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image \
  -append "root=/dev/sda2 console=ttyAMA0" \
  -m 4096 -smp 4
And when I'm booting with the BIOS I have the following instead of the -kernel/-append lines:
-drive if=pflash,file=/usr/share/AAVMF/AAVMF_CODE.fd,format=raw,readonly \
-drive if=pflash,file=/home/alex/models/qemu-arm64-efivars,format=raw \
And should I use /dev/shm as a memory backend? (Is that mandatory for vhost-user-gpu?)
I think so but I can check up on that.
I'm using the xl command (xl create -dc CONFIGFILE) to boot up the guest domain. Can I boot it up via QEMU too?
Hmm, so this is where we might need some extra tooling. I'm not sure how QEMU gets invoked by the xl tooling, but QEMU for Xen is a fairly different beast from the normal hypervisor interaction: rather than handling vmexits from the hypervisor, it just gets commands via the Xen control interface to service emulation and I/O requests.
The above QEMU commands ensure that:
- the guest memory space is shared with the vhost-user-gpu daemon
- a control path is wired up so vhost-user messages can be sent during setup (initialise the device etc.)
- the same socket path is fed eventfd messages as the guest triggers virtio events. These can all come from QEMU if the kernel isn't translating MMIO accesses to the virtqueues into eventfd events.
Stefano, how does the xl tooling invoke QEMU, and can the command line be modified?
Yes, it can, in a couple of different ways. You can add arguments to the QEMU command line by specifying:
device_model_args=["arg1", "arg2", "etc."]
OK, so for the above command, we need:

device_model_args=[ ...,
  "-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on",
  "-numa node,memdev=mem",
  "-chardev socket,path=vgpu.sock,id=vgpu",
  "-device vhost-user-gpu-pci,chardev=vgpu" ]

(the ... is a placeholder for $ARGS)
in the vm config file. You can also choose a different QEMU binary to run with:
device_model_override="/path/to/your/qemu"
You can use it to point at your own QEMU wrapper script that adds additional arguments before calling the actual QEMU binary (by default it is /usr/lib/xen/bin/qemu-system-i386, even on ARM).
Should I use qemu-system-i386 even if I rebuild qemu outside Xen?
Yes - this is because Xen is slightly weird in that QEMU doesn't have anything to do with the CPU virtualisation; it only really deals with servicing the I/O requests that come through the Xen PV interface.
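For reference, pulling the pieces from this thread together, the guest config might end up looking roughly like the sketch below - this is untested, the name/paths/sizes are placeholders, and the wrapper is the hypothetical script sketched earlier:

# guest.cfg - illustrative sketch only
name   = "virtio-gpu-test"
kernel = "/path/to/guest/Image"
extra  = "console=hvc0 root=/dev/xvda"
memory = 4096
vcpus  = 4
disk   = [ "/path/to/rootfs.ext4,raw,xvda,rw" ]
# hand the extra vhost-user-gpu arguments to QEMU, either via a wrapper script...
device_model_override = "/usr/local/bin/qemu-wrapper"
# ...or directly with device_model_args=[ ... ] as discussed above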
On Tue, 20 Oct 2020, Alex Bennée wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Alex and Stefano,

On Tue, 20 Oct 2020 at 2:14, Stefano Stabellini stefano.stabellini@xilinx.com wrote:
On Fri, 16 Oct 2020, Alex Bennée via Stratos-dev wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi Alex,
On Fri, 16 Oct 2020 at 2:01, Alex Bennée alex.bennee@linaro.org wrote:
Masami Hiramatsu masami.hiramatsu@linaro.org writes:
Hi,

I've succeeded in getting X.org running on Dom0. It seems that Xorg's nouveau driver caused a SIGBUS issue. A custom nouveau kernel driver + the Xorg fbdev driver seems stable. (Even if it stops working again, I'll try a USB-HDMI adaptor next time.)

So, I would like to test virtio-video as the next step. Alex, how can I help you to test it?
In one window you need the vhost-user gpu daemon:
./vhost-user-gpu --socket-path=vgpu.sock -v
Hmm, I couldn't find vhost-user-gpu (I've installed xen tools under /usr/local, but I can not find vhost* tools)
The vhost-user-gpu tool is part of the QEMU source tree (contrib/vhost-user-gpu).
and then on the QEMU command line you need the memory sharing and the socket connection as well as the device:
$QEMU $ARGS \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu
BTW, what "$ARGS" do you usually pass?
My current command line for the AGL demo is:
./aarch64-softmmu/qemu-system-aarch64 \
  -cpu cortex-a57 \  # for TCG, for real virt use -cpu host
  -machine type=virt,virtualization=on,gic-version=3 \  # again GIC for TCG only
  -serial mon:stdio \
  -netdev user,id=unet,hostfwd=tcp::2222-:22 \
  -device virtio-net-device,netdev=unet,id=virt-net \
  -drive id=disk0,file=$HOME/images/agl/agl-demo-platform-crosssdk-qemuarm64.ext4,if=none,format=raw \
  -device virtio-blk-device,drive=disk0 \
  -soundhw hda \
  -device VGA,edid=on,xmax=1024,ymax=768 \
  -device qemu-xhci \
  -device usb-tablet \
  -device usb-kbd \
  -object rng-random,filename=/dev/urandom,id=rng0 \
  -device virtio-rng-pci,rng=rng0 \
  -device virtio-serial-device \
  -chardev null,id=virtcon \
  -device virtconsole,chardev=virtcon \
  -m 4096 \
  -kernel ~/images/agl/Image \
  -append "console=ttyAMA0 root=/dev/vda" \
  -display gtk,show-cursor=on \
  -smp 4 \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
  -numa node,memdev=mem \
  -chardev socket,path=vgpu.sock,id=vgpu \
  -device vhost-user-gpu-pci,chardev=vgpu
My "normal" Debian test image is a bit simpler and uses virtio-scsi and PCI devices and stores the block device on an LVM volume group:
./aarch64-softmmu/qemu-system-aarch64 \
  -cpu max -machine type=virt,virtualization=on \
  -display none \
  -serial mon:stdio \
  -netdev user,id=unet,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=unet,id=virt-net,disable-legacy=on \
  -device virtio-scsi-pci,id=virt-scsi,disable-legacy=on \
  -blockdev driver=raw,node-name=hd,discard=unmap,file.driver=host_device,file.filename=/dev/zen-disk/debian-buster-arm64 \
  -device scsi-hd,drive=hd,id=virt-scsi-hd \
  -kernel ~/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image \
  -append "root=/dev/sda2 console=ttyAMA0" \
  -m 4096 -smp 4
And when I'm booting with the BIOS I have the following instead of the -kernel/-append lines:
-drive if=pflash,file=/usr/share/AAVMF/AAVMF_CODE.fd,format=raw,readonly \
-drive if=pflash,file=/home/alex/models/qemu-arm64-efivars,format=raw \
And should I use /dev/shm as a memory backend? (Is that mandatory for vhost-user-gpu?)
I think so but I can check up on that.
I'm using the xl command (xl create -dc CONFIGFILE) to boot up the guest domain. Can I boot it up via QEMU too?
Hmm, so this is where we might need some extra tooling. I'm not sure how QEMU gets invoked by the xl tooling, but QEMU for Xen is a fairly different beast from the normal hypervisor interaction: rather than handling vmexits from the hypervisor, it just gets commands via the Xen control interface to service emulation and I/O requests.
The above QEMU commands ensure that:
- the guest memory space is shared with the vhost-user-gpu daemon
- a control path is wired up so vhost-user messages can be sent during setup (initialise the device etc.)
- the same socket path is fed eventfd messages as the guest triggers virtio events. These can all come from QEMU if the kernel isn't translating MMIO accesses to the virtqueues into eventfd events.
Stefano, how does the xl tooling invoke QEMU, and can the command line be modified?
Yes, it can, in a couple of different ways. You can add arguments to the QEMU command line by specifying:
device_model_args=["arg1", "arg2", "etc."]
OK, so for the above command, we need:

device_model_args=[ ...,
  "-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on",
  "-numa node,memdev=mem",
  "-chardev socket,path=vgpu.sock,id=vgpu",
  "-device vhost-user-gpu-pci,chardev=vgpu" ]

(the ... is a placeholder for $ARGS)
The vhost-user-gpu-pci device might need some more plumbing; more on this below.
in the vm config file. You can also choose a different QEMU binary to run with:
device_model_override="/path/to/your/qemu"
You can use it to point at your own QEMU wrapper script that adds additional arguments before calling the actual QEMU binary (by default it is /usr/lib/xen/bin/qemu-system-i386, even on ARM).
Should I use qemu-system-i386 even if I rebuild qemu outside Xen?
Yes - this is because Xen is slightly weird in that QEMU doesn't have anything to do with the CPU virtualisation; it only really deals with servicing the I/O requests that come through the Xen PV interface.
Yes, let me explain.
There are two modes of execution for QEMU in regards to Xen: PV and HVM. QEMU in HVM mode is similar to QEMU for KVM: it emulates devices for the virtual machine and treats Xen as the "accelerator". (See hw/i386/pc_piix.c:xenfv) QEMU in HVM mode is only used on x86, and it doesn't emulate the CPU, only peripherals. Typically qemu-system-i386 is used even on x86_64 because again only peripherals are emulated. I/O requests are forwarded by Xen to QEMU in the form of "ioreqs" (see hw/i386/xen/xen-hvm.c:cpu_get_ioreq.)
Then there is PV QEMU: a QEMU instance that doesn't emulate any devices; it only provides PV backends, such as the PV disk and Xen 9pfs. (See hw/xenpv/xen_machine_pv.c.) PV QEMU is used on both x86 and ARM. PV QEMU cannot handle any I/O requests coming from Xen at all (hw/i386/xen/xen-hvm.c:cpu_get_ioreq doesn't get called). PV QEMU is not even architecture dependent; however, QEMU had to tie the PV infrastructure to a "platform", so qemu-system-i386 was chosen again. That is how we end up running qemu-system-i386 on a Xen on ARM system: not only does it not do any i386 emulation, it doesn't do any peripheral emulation either. It only serves PV backends. HVM QEMU can do PV backends too, but PV QEMU cannot emulate anything. Which leads me to vhost-user-gpu-pci.
To run a QEMU instance that does emulation for a Xen virtual machine (so an HVM QEMU) we need the ioreq infrastructure both in Xen, to forward I/O requests to dom0 userspace, and in QEMU, to handle the I/O requests coming from Xen. EPAM has patches for both, but they are not upstream yet; their Xen series is titled "IOREQ feature (+ virtio-mmio) on Arm".