Hi Alex, Arnd, Jean-Philippe, and all,
During the OpenAMP App-services call this week, Wind River gave a couple
of extremely interesting presentations; see the attached slides.
Dan Milea discussed the usage of virtio between heterogeneous clusters,
i.e. virtio frontends on the Cortex-R cluster and virtio backends on
the Cortex-A cluster. They used a setup based on pre-shared memory to
make it work, which I believe is similar to the swiotlb approach we
discussed in Stratos. They noted that the latest version of the virtio
spec on GitHub has something regarding virtio and pre-shared memory
regions that might help us cover this use case from a spec perspective.
Dan, would you be able to share a pointer to it for clarity? So far we
have acted on the assumption that the virtio spec doesn't allow for this
architecture today; it would be fantastic if it turns out that it
already does.
The other very interesting presentation by Joshua Pincus was about
virtio-mmio and MSIs. They made excellent measurements of virtio-mmio
performance and found that the single source of notifications (one
interrupt) is the bottleneck. Adding MSIs vastly
improved performance. You can see the detailed breakdown on slide #8 of
"OpenAMP Virt I/O MMIO w/ MSI". This analysis really points in the
direction of adding MSIs to virtio-mmio.
Cheers,
Stefano
---------- Forwarded message ----------
Date: Wed, 28 Oct 2020 18:41:09 +0000
From: Nathalie Chan King Choy via App-services
<app-services(a)lists.openampproject.org>
Reply-To: Nathalie Chan King Choy <nathalie(a)xilinx.com>
To: "app-services(a)lists.openampproject.org"
<app-services(a)lists.openampproject.org>
Subject: [App-services] 2020-10-27 OpenAMP App-services call recording, notes,
slides, and action items
Hi all,
The notes from yesterday’s OpenAMP App-services call can be found at:
https://github.com/OpenAMP/open-amp/wiki/OpenAMP-Application-Services-Subgr…
The link to the Webex recording is in the notes. I am not sure how long it will be before the recordings expire or I hit my storage limit, so if
you need to catch up by watching the recording, please download it in the next couple of weeks.
Please find attached the slides from Dan & Josh.
Action items:
* Dan & Josh to send slides (DONE)
* Stefano to start a thread w/ the folks who are working on shared memory & VirtIO
Best regards,
Nathalie C. Chan King Choy
Program Manager focused on Open Source and Community
--
App-services mailing list
App-services(a)lists.openampproject.org
https://lists.openampproject.org/mailman/listinfo/app-services
Hi,
I think I've mentioned these in other threads, but to bring it all into
one place. Using the images from:
https://download.automotivelinux.org/AGL/release/jellyfish/10.0.0/qemuarm64…
You just need the kernel image and the
agl-demo-platform-crosssdk-qemuarm64.ext4 filesystem. The initrd is a
slimmed-down rootfs in a RAM disk.
To run with a simple VGA-like frame buffer:
./aarch64-softmmu/qemu-system-aarch64 -cpu host -enable-kvm \
-smp 2 -machine type=virt -serial mon:stdio \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-device virtio-net-device,netdev=unet,id=virt-net \
-drive id=disk0,file=agl-demo-platform-crosssdk-qemuarm64.ext4,if=none,format=raw \
-device virtio-blk-device,drive=disk0 \
-soundhw hda \
-device qemu-xhci \
-device usb-tablet \
-device usb-kbd \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-pci,rng=rng0 \
-device virtio-serial-device \
-chardev null,id=virtcon \
-device virtconsole,chardev=virtcon -m 4096 \
-kernel ~/images/agl/Image \
-append "console=ttyAMA0 root=/dev/vda" \
-display gtk,show-cursor=on \
-device VGA,edid=on,xmax=1024,ymax=768
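With the user-mode netdev above forwarding host port 2222 to guest port
22, you can then log into the running guest over ssh (assuming the AGL
image runs sshd and permits root login; adjust the user as needed):
ssh -p 2222 root@localhost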
Running with virtio-gpu should give a similar 2D frame buffer experience,
making the last two lines:
-display gtk,show-cursor=on \
-device virtio-gpu-pci
And finally, enabling VirGL for accelerated pass-through, you get:
-display gtk,show-cursor=on,gl=on \
-device virtio-gpu-pci,virgl=on
although obviously if your host ends up sending it to softpipe in the
end the results won't be as impressive. I've also replicated the config
(you can find it in /boot) and rebuilt the current stable kernel with
it, and that worked fine.
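A quick way to check which renderer the guest actually ended up with (a
sketch, assuming mesa-utils is installed in the guest):
glxinfo -B | grep 'OpenGL renderer'
If that reports llvmpipe or softpipe rather than virgl, the accelerated
path isn't being used.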
--
Alex Bennée
Hi all,
We have only a small agenda so far [1], with some updates on the demo
progress and a question that has been posed. Do we have any other agenda
topics?
We are planning a Huawei presentation for the following call; Salil will
highlight their interest in the topics touched on by the Stratos project.
If you want to catch Salil, he is also presenting at KVM Forum this
week:
https://kvmforum2020.sched.com/event/eE4m/challenges-in-supporting-virtual-…
Mike
[1]
https://collaborate.linaro.org/display/STR/2020-10-29+Project+Stratos+Sync+…
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative, the rest follows"
Hi,
This is a refinement of the Xen loader I posted a few weeks ago.
Rather than complicate the generic loader with extra options, I went
for the more expedient approach of adding a completely new device
called the guest-loader. As it didn't need to deal with any of the
subtleties of the generic-loader, it also worked out somewhat simpler.
Instead of allowing the user to hand-hack FDT blobs, we simply add
syntactic sugar that understands the difference between a kernel and an
initrd. This could be expanded in the future if we want, although at the
moment I don't know what else you would add.
The syntax is now simpler:
-device guest-loader,addr=0x42000000,kernel=Image,bootargs="console=hvc0 earlyprintk=xen" \
-device guest-loader,addr=0x47000000,initrd=rootfs.cpio
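For context, a sketch of how this is meant to fit into a full command
line when booting Xen with a dom0 kernel and initrd (the Xen binary
path, memory size, and load addresses here are illustrative):
qemu-system-aarch64 -machine virt,virtualization=on -cpu cortex-a57 \
  -m 4096 -serial mon:stdio \
  -kernel xen \
  -append "dom0_mem=1G,max:1G loglvl=all" \
  -device guest-loader,addr=0x42000000,kernel=Image,bootargs="console=hvc0 earlyprintk=xen" \
  -device guest-loader,addr=0x47000000,initrd=rootfs.cpio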
So, any objections? It would be nice to get this in this cycle, but we
only have a week left until soft freeze.
Alex Bennée (4):
hw/board: promote fdt from ARM VirtMachineState to MachineState
hw/riscv: migrate fdt field to generic MachineState
device_tree: add qemu_fdt_setprop_string_array helper
hw/core: implement a guest-loader to support static hypervisor guests
hw/core/guest-loader.h | 34 ++++
include/hw/arm/virt.h | 1 -
include/hw/boards.h | 1 +
include/hw/riscv/virt.h | 1 -
include/sysemu/device_tree.h | 17 ++
hw/arm/virt.c | 322 ++++++++++++++++++-----------------
hw/core/guest-loader.c | 140 +++++++++++++++
hw/riscv/virt.c | 18 +-
softmmu/device_tree.c | 26 +++
hw/core/meson.build | 2 +
10 files changed, 398 insertions(+), 164 deletions(-)
create mode 100644 hw/core/guest-loader.h
create mode 100644 hw/core/guest-loader.c
--
2.20.1
On Thu, 22 Oct 2020, Julien Grall via Stratos-dev wrote:
> On 22/10/2020 09:55, Masami Hiramatsu wrote:
> > On Thu, 22 Oct 2020 at 17:11, Julien Grall <julien(a)xen.org> wrote:
> > > On 22/10/2020 08:09, Masami Hiramatsu wrote:
> > > > Hi,
> > > > Now I'm trying to use vhost-user-gpu.
> > > >
> > > > I built qemu and xen and installed them under /usr/local/.
> > > > Xen installed its qemu in /usr/local/lib/xen/bin and
> > > > qemu was installed in /usr/local/bin, so they do not conflict.
> > > >
> > > > If I added the device_model_args= option to the domU.conf file, it
> > > > didn't work (it failed to exec qemu).
> > >
> > > What is the path used for QEMU?
> >
> > It should be /usr/local/lib/xen/bin/qemu-system-i386, because I
> > replaced it with a wrapper shell script to record the parameters :)
> >
> > With device_model_override, I used the locally installed
> > /usr/local/bin/qemu-system-i386.
> >
> > >
> > > > I also added device_model_override= but that didn't change anything
> > > > either. In both cases, though, it seems vhost-user-gpu did get a
> > > > connection from qemu.
> > > >
> > > > To clarify what arguments were passed to qemu, I hacked
> > > > /usr/local/lib/xen/bin/qemu-system-i386, and found that the arguments
> > > > passed via device_model_args=[] were just appended to the original
> > > > arguments.
> > > >
> > > > OK, now I'm trying to test qemu-system-i386 itself, as below:
> > > >
> > > > mhiramat@develbox:/opt/agl$ sudo
> > > > /usr/local/lib/xen/bin/qemu-system-i386.bin -xen-domid 11 -no-shutdown
> > > > -chardev
> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait
> > > > -mon chardev=libxl-cmd,mode=control -chardev
> > > > socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait
> > > > -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config
> > > > -xen-attach -name agldemo -vnc none -display none -nographic -object
> > > > memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on -numa
> > > > node,memdev=mem -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu
> > > > -device vhost-user-gpu-pci,chardev=vgpu -machine xenpv -m 513
> > >
> > > I am a bit confused about what you are trying to do. QEMU has to be started
> > > after the domain is created by the toolstack (aka xl).
> > >
> > > Can you describe a bit more the steps you are using?
> >
> > This command is executed on its own, without xl, because I would like to
> > see what error happens. With xl, I just got the errors below.
> >
> > libxl: error: libxl_exec.c:434:spawn_middle_death: domain 14 device
> > model [20073]: unexpectedly exited with exit status 0, when we were
> > waiting for it to confirm startup
> > libxl: error: libxl_dm.c:3102:device_model_spawn_outcome: Domain
> > 14:domain 14 device model: spawn failed (rc=-3)
> > libxl: error: libxl_dm.c:3322:device_model_postconfig_done: Domain
> > 14:Post DM startup configs failed, rc=-3
> > libxl: error: libxl_create.c:1836:domcreate_devmodel_started: Domain
> > 14:device model did not start: -3
> > libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model
> > already exited
> > libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
> > 14:Non-existant domain
> > libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
> > 14:Unable to destroy guest
> > libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
> > 14:Destruction of domain failed
> >
> > Is there any way to get qemu's error output?
>
> IIRC qemu's logs should be in /var/log/xen/*. As you have a wrapper shell
> script, you may be able to redirect them somewhere else if you can't
> find them.
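To expand on that: since the wrapper shell script is already in place, a
minimal sketch that both records the arguments and captures qemu's
stderr could look like this (the log paths are illustrative):
#!/bin/sh
# Log the arguments xl passes, then exec the real binary with stderr captured.
echo "$@" >> /var/log/xen/qemu-wrapper-args.log
exec /usr/local/lib/xen/bin/qemu-system-i386.bin "$@" \
    2>> /var/log/xen/qemu-wrapper-stderr.log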
Replying to Masami: before adding any additional parameters like
"-device vhost-user-gpu-pci,chardev=vgpu", have you double-checked that
you can start regular domUs successfully, i.e. a domU without any
special device_model_args?
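For reference, a minimal domU config sketch to sanity-check with first
(the name and paths are illustrative, not from this thread):
name = "test"
kernel = "/opt/agl/Image"
extra = "console=hvc0 root=/dev/xvda rw"
memory = 512
vcpus = 2
disk = [ 'format=raw, vdev=xvda, access=rw, target=/opt/agl/rootfs.ext4' ]
If "xl create" succeeds with something like this, the problem is
isolated to the extra device model arguments.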
> > > > qemu-system-i386.bin: -object
> > > > memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on:
> > > > -mem-path not supported with Xen
> > > >
> > > > If I use the qemu that comes with Xen, it doesn't support the -mem-path
> > > > option. Hmm, is memory backend support not enabled?
> > >
> > > This is expected. QEMU only provides emulated/PV devices for Xen
> > > guests. The memory management is done by Xen itself. When QEMU needs
> > > to access guest memory, it has to issue a hypercall.
> >
> > I got it. That is the biggest difference from KVM.
> > (I'm not sure whether Xen can overcommit memory for its domains.)
>
> Xen doesn't support overcommitting memory.
>
> [...]
>
> > I'm using the Xen and QEMU master branches, with just a fix added for
> > an uninstall error in Xen.
>
> I am afraid this is not going to be enough to test virtio on Xen on Arm. For
> Xen, you would at least need [1].
>
> [1] IOREQ feature (+ virtio-mmio) on Arm
Yes, exactly. Before you can get something like vhost-user-gpu-pci to
work, you need the following things:
1) ioreq infrastructure in Xen
2) enable the ioreq infrastructure in QEMU (it is already there, but unused on ARM)
3) add vhost-user-gpu-pci to QEMU
1) is covered by "IOREQ feature (+ virtio-mmio) on Arm", see
https://marc.info/?l=xen-devel&m=160278030131796
2) no patch series has been posted to the list for this, but EPAM
mentioned they have a series to enable ioreqs in QEMU for ARM, which
they used for testing. As part of this, you might also have to change the
machine type to something other than -M xenpv, because xenpv doesn't
come with ioreqs or any virtio devices. (More on this below.)
3) I don't know if you can find a way to add the vhost-user-gpu-pci
emulator to qemu-system-i386 (--target-list=i386-softmmu), and certainly
you cannot easily add it to the xenpv machine because it doesn't come
with a virtio transport. But this is something EPAM might have already
done as part of their unpublished QEMU series.
> For QEMU, I don't think you would be able to use qemu i386 because it is
> going to expose x86 devices.
>
> You are most likely going to need to enable Xen for qemu arm and create
> a new machine that would match the Xen layout.
Just as a clarification, the issue is not so much that qemu i386 is "x86",
because there is no x86 emulation being done. The issue is that QEMU
comes with two machines we can use: "xenpv" and "xenfv".
xenpv is not tied to x86 in any way but also doesn't come with any
emulated devices, only Xen PV devices. So unless you add a virtio bus
transport to xenpv, it wouldn't work for vhost-user-gpu-pci.
xenfv is tied to x86 because it comes with a PIIX emulator. It would be
easy to add Virtio devices to it, but we can't really use it on ARM.
Either way, something needs to change. It is probably not a good idea
to extend xenpv, because it is good as it is now, without any emulators.
That means we need to introduce something like xenfv for ARM, maybe
based on the aarch64 'virt' generic virtual platform. In that case, we
also need to use a different --target-list for QEMU; so far we have only
used --target-list=i386-softmmu, on both ARM and x86.
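To make the difference concrete, a sketch of the two build
configurations (using QEMU's standard configure flags; the aarch64
variant is what an ARM xenfv-style machine would imply):
./configure --enable-xen --target-list=i386-softmmu     # what we build today
./configure --enable-xen --target-list=aarch64-softmmu  # for an ARM machine model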
* Arnd Bergmann via Stratos-dev <stratos-dev(a)op-lists.linaro.org> [2020-10-12 15:56:01]:
> Yes, my rough idea would be to stay as close as possible to the existing
> virtio/virtqueue design, but replace the existing descriptors pointing to guest
> physical memory with TLV headers describing the data in the ring buffer
> itself. This might not actually be possible, but my hope is that we can get
> away with making the ring buffer large enough for all data that needs to
> be in flight at any time, require all data to be processed in order,
I had the impression that virtio allows out-of-order processing of
requests, i.e. requests can be completed out of strict FIFO order. I
think your proposed virtqueue design should still allow that?
Also, I think reserving 512kB of memory per virtqueue may not be acceptable
in all cases. Some device virtqueues could be busier than others at a given
time: a busy virtqueue fills up fast and stalls waiting for more memory to
become available, while free memory sits idle in other devices' virtqueues.
A global (swiotlb) pool shared among multiple devices would allow better
sharing of the available memory in that sense. For memory-constrained
configurations, a global shared pool may be the more desirable solution
(versus dedicating fixed-size per-virtqueue buffers).
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
Rob Bradford <robert.bradford(a)intel.com> writes:
> Reserve an ID for a watchdog device which may be used to ensure that the
> guest is responsive. This is the equivalent of a hardware watchdog device
> and will trigger a reboot of the guest if the host does not receive a
> periodic ping from the guest.
Out of interest, is your watchdog device also going to allow for straight
power/reset control?
The use case I'm thinking of is QEMU's -M virt machine, which is a purely
virtual machine targeted by a number of firmware/bootcode projects. QEMU
provides firmware emulation that will do the right thing if you issue
PSCI supervisor calls and have directly loaded a kernel. However, if you
are testing firmware - for example EDK2 or Arm Trusted Firmware - it is
expected to make the final hardware-appropriate twiddle to reboot the
machine, and of course that is undefined for -M virt.
> Signed-off-by: Rob Bradford <robert.bradford(a)intel.com>
Reviewed-by: Alex Bennée <alex.bennee(a)linaro.org>
--
Alex Bennée
Hi,
Now I'm trying to use vhost-user-gpu.
I built qemu and xen and installed them under /usr/local/.
Xen installed its qemu in /usr/local/lib/xen/bin and
qemu was installed in /usr/local/bin, so they do not conflict.
If I added the device_model_args= option to the domU.conf file, it didn't
work (it failed to exec qemu).
I also added device_model_override= but that didn't change anything either.
In both cases, though, it seems vhost-user-gpu did get a connection from qemu.
To clarify what arguments were passed to qemu, I hacked
/usr/local/lib/xen/bin/qemu-system-i386, and found that the arguments
passed via device_model_args=[] were just appended to the original
arguments.
OK, now I'm trying to test qemu-system-i386 itself, as below:
mhiramat@develbox:/opt/agl$ sudo
/usr/local/lib/xen/bin/qemu-system-i386.bin -xen-domid 11 -no-shutdown
-chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait
-mon chardev=libxl-cmd,mode=control -chardev
socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait
-mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config
-xen-attach -name agldemo -vnc none -display none -nographic -object
memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on -numa
node,memdev=mem -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu
-device vhost-user-gpu-pci,chardev=vgpu -machine xenpv -m 513
qemu-system-i386.bin: -object
memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on:
-mem-path not supported with Xen
If I use the qemu that comes with Xen, it doesn't support the -mem-path
option. Hmm, is memory backend support not enabled?
mhiramat@develbox:/opt/agl$ sudo /usr/local/bin/qemu-system-i386
-xen-domid 11 -no-shutdown -chardev
socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-11,server,nowait -mon
chardev=libxl-cmd,mode=control -chardev
socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-11,server,nowait
-mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config
-xen-attach -name agldemo -vnc none -display none -nographic -object
memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on -numa
node,memdev=mem -chardev socket,path=/opt/agl/vgpu.sock,id=vgpu
-device vhost-user-gpu-pci,chardev=vgpu -machine xenpv -m 513
qemu-system-i386: -xen-domid 11: Option not supported for this target
With the qemu that I built, it seems Xen is not supported... but I'm
sure qemu was configured to support Xen, as shown below:
mhiramat@develbox:~/ksrc/qemu$ grep XEN build/config-host.mak
CONFIG_XEN_BACKEND=y
CONFIG_XEN_CTRL_INTERFACE_VERSION=41500
XEN_CFLAGS=-I/usr/local/include
XEN_LIBS=-L/usr/local/lib -lxenctrl -lxenstore -lxenguest
-lxenforeignmemory -lxengnttab -lxenevtchn -lxendevicemodel
-lxentoolcore
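Note that config-host.mak only reflects host-level Xen settings; a quick
way to check whether the built binary itself has Xen support (a sketch
using QEMU's standard help options, assuming the default build directory):
./build/qemu-system-i386 -accel help                  # "xen" should be listed
./build/qemu-system-i386 -machine help | grep -i xen  # xenpv/xenfv should appear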
Thanks,
--
Masami Hiramatsu