Hi,
The following is a breakdown (as best I can figure) of the work needed
to demonstrate VirtIO backends in Rust on the Xen hypervisor. It
requires work across a number of projects, most notably core Rust and
VirtIO enabling in the Xen project (building on the work EPAM has
already done) and the start of enabling the rust-vmm crates to work
with Xen.
The first demo is a fairly simple toy to exercise the direct hypercall
approach for a unikernel backend. On its own it isn't super impressive
but hopefully serves as a proof of concept for the idea of having
backends running in a single exception level where latency will be
important.
The second is a much more ambitious bridge between Xen and vhost-user to
allow for re-use of the existing vhost-user backends with the bridge
acting as a proxy for what would usually be a full VMM in the type-2
hypervisor case. With that in mind the rust-vmm work is only aimed at
doing the device emulation and doesn't address the larger question of
how type-1 hypervisors can be integrated into the rust-vmm hypervisor
model.
A quick note about the estimates. They are exceedingly rough guesses
plucked out of the air and I would be grateful for feedback from the
appropriate domain experts on whether I'm being overly optimistic or
pessimistic.
The links to the Stratos JIRA should be at least read-accessible to
all, although they contain the same information as the attached
document (albeit with nicer PNG renderings of my ASCII art ;-). There is a
Stratos sync-up call next Thursday:
https://calendar.google.com/event?action=TEMPLATE&tmeid=MWpidm5lbzM5NjlydnA…
and I'm sure there will also be discussion in the various projects
(hence the wide CC list). The Stratos calls are open to anyone who wants
to attend and we welcome feedback from all who are interested.
So on with the work breakdown:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATOS PLANNING FOR 21 TO 22
Alex Bennée
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────
1. Xen Rust Bindings ([STR-51])
.. 1. Upstream an "official" rust crate for Xen ([STR-52])
.. 2. Basic Hypervisor Interactions hypercalls ([STR-53])
.. 3. [#10] Access to XenStore service ([STR-54])
.. 4. VirtIO support hypercalls ([STR-55])
2. Xen Hypervisor Support for Stratos ([STR-56])
.. 1. Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
.. 2. Tweaks to tooling to launch VirtIO guests
3. rust-vmm support for Xen VirtIO ([STR-59])
.. 1. Make vm-memory Xen aware ([STR-60])
.. 2. Xen IO notification and IRQ injections ([STR-61])
4. Stratos Demos
.. 1. Rust based stubdomain monitor ([STR-62])
.. 2. Xen aware vhost-user master ([STR-63])
1 Xen Rust Bindings ([STR-51])
══════════════════════════════
There exists a [placeholder repository] with the start of a set of
x86_64 bindings for Xen and a very basic hello world unikernel
example. This forms the basis of the initial Xen Rust work and will be
available as a [xen-sys crate] via cargo.
[STR-51] <https://linaro.atlassian.net/browse/STR-51>
[placeholder repository] <https://gitlab.com/cardoe/oxerun.git>
[xen-sys crate] <https://crates.io/crates/xen-sys>
1.1 Upstream an "official" rust crate for Xen ([STR-52])
────────────────────────────────────────────────────────
To start with we will want an upstream location for future work to be
based upon. The intention is that the crate is independent of the
version of Xen it runs on (above the baseline version chosen). This
will entail:
• ☐ agreeing with upstream the name/location for the source
• ☐ documenting the rules for the "stable" hypercall ABI
• ☐ establish an internal interface to switch between ioctl-mediated
and direct hypercalls
• ☐ ensure the crate is multi-arch and has feature parity for arm64
As such we expect the implementation to be standalone, i.e. not
wrapping the existing Xen libraries for mediation. There should be a
close (1-to-1) mapping between the interfaces in the crate and the
eventual hypercall made to the hypervisor.
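To make that split a little more concrete, here is a very rough sketch
of what such an internal interface could look like. Every name here is
made up and will no doubt change during the upstreaming discussion; the
point is simply that the public bindings are written once against a
trait and the build selects the transport.
#+begin_src rust
// Minimal sketch of the internal transport abstraction (names are
// hypothetical): public bindings call through the trait and a cargo
// feature selects the ioctl-mediated or direct implementation.

#[derive(Debug)]
pub struct XenError(pub i64);

/// Raw hypercall: operation number plus up to five register arguments.
pub struct HypercallArgs {
    pub op: u64,
    pub args: [u64; 5],
}

/// Every public binding in the crate goes through this trait.
pub trait HypercallTransport {
    fn call(&self, args: &HypercallArgs) -> Result<u64, XenError>;
}

/// dom0/Linux builds: route the call through /dev/xen/privcmd ioctls.
#[cfg(feature = "ioctl")]
pub struct PrivcmdTransport {
    privcmd: std::fs::File, // open handle on /dev/xen/privcmd
}

/// Unikernel builds: issue the hypercall instruction directly.
#[cfg(feature = "direct")]
pub struct DirectTransport;
#+end_src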
Estimate: 4w (elapsed likely longer due to discussion)
[STR-52] <https://linaro.atlassian.net/browse/STR-52>
1.2 Basic Hypervisor Interactions hypercalls ([STR-53])
───────────────────────────────────────────────────────
These are the bare minimum hypercalls implemented as both ioctl and
direct calls. These allow for a very basic binary to:
• ☐ console_io - output IO via the Xen console
• ☐ domctl stub - basic stub for domain control (different API?)
• ☐ sysctl stub - basic stub for system control (different API?)
The idea would be that this provides enough of a hypercall interface
to query the list of domains and output their status via the Xen
console. There is an open question about whether the domctl and sysctl
hypercalls are the way to go.
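As an illustration, a console_io wrapper over the transport sketched in
1.1 might look something like the following. The hypercall and sub-op
numbers come from xen/include/public/xen.h; everything else reuses the
hypothetical names from the earlier sketch.
#+begin_src rust
// Rough sketch of a console_io wrapper over the (hypothetical)
// transport from section 1.1; enough for a hello-world style tool.

pub const HYPERVISOR_CONSOLE_IO: u64 = 18; // __HYPERVISOR_console_io
pub const CONSOLEIO_WRITE: u64 = 0;        // CONSOLEIO_write sub-op

pub fn console_write<T: HypercallTransport>(xen: &T, msg: &str) -> Result<(), XenError> {
    let args = HypercallArgs {
        op: HYPERVISOR_CONSOLE_IO,
        // console_io(cmd, count, buffer)
        args: [CONSOLEIO_WRITE, msg.len() as u64, msg.as_ptr() as u64, 0, 0],
    };
    xen.call(&args).map(|_| ())
}
#+end_src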
Estimate: 6w
[STR-53] <https://linaro.atlassian.net/browse/STR-53>
1.3 [#10] Access to XenStore service ([STR-54])
───────────────────────────────────────────────
This is a shared configuration storage space accessed via either Unix
sockets (on dom0) or via the Xenbus. This is used to access
configuration information for the domain.
Is this needed for a backend though? Can everything just be passed
directly on the command line?
Estimate: 4w
[STR-54] <https://linaro.atlassian.net/browse/STR-54>
1.4 VirtIO support hypercalls ([STR-55])
────────────────────────────────────────
These are the hypercalls that need to be implemented to support a
VirtIO backend. This includes the ability to map another guest's
memory into the current domain's address space, register to receive
IOREQ events when the guest knocks at the doorbell and inject kicks
into the guest. The hypercalls we need to support would be:
• ☐ dmop - device model ops (*_ioreq_server, setirq, nr_vcpus)
• ☐ foreignmemory - map and unmap guest memory
The DMOP space is larger than what we need for an IOREQ backend so
I've based it just on what arch/arm/dm.c exports, which is the subset
introduced for EPAM's virtio work.
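To give a feel for the surface area, a sketch of the calls a backend
needs is below. The traits and signatures are entirely hypothetical,
but the operations mirror the dmop and foreignmemory C interfaces used
by existing IOREQ servers.
#+begin_src rust
// Sketch (hypothetical signatures) of the dmop/foreignmemory surface a
// VirtIO backend needs, mirroring what arch/arm/dm.c currently exposes.

pub type DomId = u16;
pub type IoservId = u16;

pub trait DeviceModel {
    /// XEN_DMOP_create_ioreq_server
    fn create_ioreq_server(&self, dom: DomId) -> Result<IoservId, XenError>;
    /// XEN_DMOP_map_io_range_to_ioreq_server: the virtio-mmio window
    fn map_io_range(&self, dom: DomId, id: IoservId,
                    start: u64, end: u64) -> Result<(), XenError>;
    /// XEN_DMOP_set_irq_level: inject the "used buffer" notification
    fn set_irq_level(&self, dom: DomId, irq: u32, level: u32) -> Result<(), XenError>;
    /// XEN_DMOP_nr_vcpus: how many vcpus/IOREQ slots to service
    fn nr_vcpus(&self, dom: DomId) -> Result<u32, XenError>;
}

pub trait ForeignMemory {
    /// map `count` frames of another guest's memory into our address space
    fn map(&self, dom: DomId, gfn: u64, count: usize) -> Result<*mut u8, XenError>;
    fn unmap(&self, ptr: *mut u8, count: usize) -> Result<(), XenError>;
}
#+end_src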
Estimate: 12w
[STR-55] <https://linaro.atlassian.net/browse/STR-55>
2 Xen Hypervisor Support for Stratos ([STR-56])
═══════════════════════════════════════════════
This covers the tasks needed to support the various deployments of
Stratos components on Xen.
[STR-56] <https://linaro.atlassian.net/browse/STR-56>
2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
───────────────────────────────────────────────────────────────
Currently the foreign memory mapping support only works for dom0 due
to reference counting issues. If we are to support backends running in
their own domains this will need to be fixed.
Estimate: 8w
[STR-57] <https://linaro.atlassian.net/browse/STR-57>
2.2 Tweaks to tooling to launch VirtIO guests
─────────────────────────────────────────────
There might not be too much to do here. The EPAM work already did
something similar for their PoC for virtio-block. Essentially we need
to ensure:
• ☐ DT bindings are passed to the guest for virtio-mmio device
discovery
• ☐ Our rust backend can be instantiated before the domU is launched
This currently assumes the tools and the backend are running in dom0.
Estimate: 4w
3 rust-vmm support for Xen VirtIO ([STR-59])
════════════════════════════════════════════
This encompasses the tasks required to get a vhost-user server up and
running while interfacing to the Xen hypervisor. This will require the
xen-sys crate for the actual interface to the hypervisor.
We need to work out how a Xen configuration option would be passed to
the various bits of rust-vmm when something is being built.
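One plausible shape for this (entirely hypothetical, the feature name
and types are made up) is a cargo feature forwarded through each
crate's Cargo.toml, with the Xen specific types swapped in by
conditional compilation:
#+begin_src rust
// Hypothetical sketch: a "xen" cargo feature, forwarded through the
// dependency chain, swaps the memory backend at compile time.

#[cfg(feature = "xen")]
pub struct XenForeignMemoryRegion {
    // backed by a foreignmemory mapping rather than an anonymous mmap
    host_addr: *mut u8,
    size: usize,
}

#[cfg(not(feature = "xen"))]
pub type GuestRegion = vm_memory::MmapRegion; // existing Unix/Windows mmap path

#[cfg(feature = "xen")]
pub type GuestRegion = XenForeignMemoryRegion;
#+end_src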
[STR-59] <https://linaro.atlassian.net/browse/STR-59>
3.1 Make vm-memory Xen aware ([STR-60])
───────────────────────────────────────
The vm-memory crate is the root crate for abstracting access to the
guest's memory. It currently has multiple configuration builds to
handle the differences between mmap on Windows and Unix. Although mmap
isn't directly exposed, the public interfaces support an mmap-like
interface. We would need to:
• ☐ work out how to expose foreign memory via the vm-memory mechanism
I'm not sure if this just means implementing the GuestMemory trait for
a GuestMemoryXen or if we need to present an mmap-like interface.
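A very rough skeleton of the GuestMemoryXen route might look like the
following. The structures are hypothetical and the vm-memory trait
details are deliberately elided; the only real type used is
GuestAddress.
#+begin_src rust
// Rough skeleton of a Xen aware memory collection; it would implement
// vm_memory::GuestMemory (num_regions, find_region, ...) over regions
// that wrap foreignmemory mappings instead of anonymous mmaps.

use vm_memory::GuestAddress;

/// One contiguous foreignmemory mapping of a chunk of guest RAM.
pub struct XenRegion {
    guest_base: GuestAddress, // guest physical address of the chunk
    host_addr: *mut u8,       // where foreignmemory mapped it for us
    size: u64,
}

/// Hypothetical replacement for GuestMemoryMmap in Xen builds.
pub struct GuestMemoryXen {
    regions: Vec<XenRegion>,
}

impl GuestMemoryXen {
    /// The core of find_region(): which mapping covers a guest address?
    fn region_for(&self, addr: GuestAddress) -> Option<&XenRegion> {
        self.regions
            .iter()
            .find(|r| addr.0 >= r.guest_base.0 && addr.0 < r.guest_base.0 + r.size)
    }
}
#+end_src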
Estimate: 8w
[STR-60] <https://linaro.atlassian.net/browse/STR-60>
3.2 Xen IO notification and IRQ injections ([STR-61])
─────────────────────────────────────────────────────
The KVM world provides for ioeventfd (notifications) and irqfd
(injection) to signal asynchronously between the guest and the
backend. As far as I can tell this is currently handled inside the
various VMMs, which assume a KVM backend.
While the vhost-user slave code doesn't see the
register_ioevent/register_irqfd events it does deal with EventFDs
throughout the code. Perhaps the best approach here would be to create
an IOREQ crate that can create EventFD descriptors which can then be
passed to the slaves to use for notification and injection.
Alternatively there might be an argument for a new crate that can
encapsulate this behaviour for both KVM/ioeventfd and Xen/IOREQ setups.
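A sketch of the EventFD bridging idea follows. Apart from the rust-vmm
EventFd type everything here is hypothetical, and the Xen plumbing is
reduced to comments.
#+begin_src rust
// Sketch of a possible IOREQ bridge: it owns the Xen specifics and
// hands plain EventFds to the vhost-user slave, which then behaves
// exactly as it would with KVM's ioeventfd/irqfd.

use vmm_sys_util::eventfd::EventFd;

pub struct IoreqBridge {
    kick: EventFd, // written by the bridge when the guest rings the doorbell
    call: EventFd, // written by the slave when it wants the guest IRQ injected
}

impl IoreqBridge {
    /// The descriptors passed to the vhost-user slave at start-up.
    pub fn slave_fds(&self) -> (&EventFd, &EventFd) {
        (&self.kick, &self.call)
    }

    /// Main loop: translate IOREQ events into kicks and call events
    /// into set_irq_level hypercalls (Xen plumbing elided).
    pub fn run(&self) -> std::io::Result<()> {
        loop {
            // wait_for_ioreq()?;       // block on the Xen event channel
            self.kick.write(1)?;        // forward the doorbell to the slave
            if self.call.read().is_ok() {
                // inject_guest_irq()?; // dmop set_irq_level into the guest
            }
        }
    }
}
#+end_src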
Estimate: 8w?
[STR-61] <https://linaro.atlassian.net/browse/STR-61>
4 Stratos Demos
═══════════════
These tasks cover the creation of demos that bring together all the
previous bits of work to demonstrate a new area of capability that has
been opened up by Stratos work.
4.1 Rust based stubdomain monitor ([STR-62])
────────────────────────────────────────────
This is a basic demo that is a proof of concept for a unikernel style
backend written in pure Rust. This work would be a useful precursor
for things such as the RTOS Dom0 on a safety island ([STR-11]) or as a
carrier for the virtio-scmi backend.
The monitor program will periodically poll the state of the other
domains and echo their status to the Xen console.
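In terms of the bindings from section 1 the core of the monitor is
little more than the loop below; list_domains and sleep_ms are
hypothetical helpers from the xen-sys crate and the unikernel runtime
respectively.
#+begin_src rust
// Sketch of the monitor's main loop using the (hypothetical) bindings
// from section 1: query the domain list via the sysctl stub and
// report each domain's state on the Xen console.

fn monitor_loop<T: HypercallTransport>(xen: &T) -> Result<(), XenError> {
    loop {
        for dom in list_domains(xen)? {          // sysctl getdomaininfolist stub
            let line = format!("dom{}: {:?}\n", dom.domid, dom.state);
            console_write(xen, &line)?;          // console_io from section 1.2
        }
        sleep_ms(1000);                          // unikernel friendly delay
    }
}
#+end_src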
Estimate: 4w
#+name: stub-domain-example
#+begin_src ditaa :cmdline -o :file stub_domain_example.png
Dom0 | DomU | DomStub
| |
: /-------------\ :
| |cPNK | |
| | | |
| | | |
/------------------------------------\ | | GuestOS | |
|cPNK | | | | |
EL0 | Dom0 Userspace (xl tools, QEMU) | | | | | /---------------\
| | | | | | |cYEL |
\------------------------------------/ | | | | | |
+------------------------------------+ | | | | | Rust Monitor |
EL1 |cA1B Dom0 Kernel | | | | | | |
+------------------------------------+ | \-------------/ | \---------------/
-------------------------------------------------------------------------------=------------------
+-------------------------------------------------------------------------------------+
EL2 |cC02 Xen Hypervisor |
+-------------------------------------------------------------------------------------+
#+end_src
[STR-62] <https://linaro.atlassian.net/browse/STR-62>
[STR-11] <https://linaro.atlassian.net/browse/STR-11>
4.2 Xen aware vhost-user master ([STR-63])
──────────────────────────────────────────
Usually the master side of a vhost-user system is embedded directly in
the VMM itself. However in a Xen deployment there is no overarching
VMM, just a series of utility programs that query the hypervisor
directly. The Xen tooling is also responsible for setting up any
support processes needed to emulate HW for the guest.
This task aims to bridge the gap between Xen's normal HW emulation
path (IOREQ) and VirtIO's userspace device emulation (vhost-user). The
process would be started with some information on where the
virtio-mmio address space is and what the slave binary will be. It
will then (see the sketch after the list):
• map the guest into Dom0 userspace and attach to a MemFD
• register the appropriate memory regions as IOREQ regions with Xen
• create EventFD channels for the virtio kick notifications (one each
way)
• spawn the vhost-user slave process and mediate the notifications and
kicks between the slave and Xen itself
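In pseudo-Rust the start-up sequence is roughly the following; every
helper here is hypothetical and simply strings the steps above
together.
#+begin_src rust
// Sketch of the bridge's start-up sequence; each helper corresponds to
// one of the bullet points above and is purely illustrative.

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cfg = parse_args()?;              // domid, virtio-mmio base/size, slave binary

    // map the guest into Dom0 userspace and expose it as a memfd for vhost-user
    let mem = map_guest_as_memfd(cfg.domid)?;

    // register the virtio-mmio window as an IOREQ region with Xen
    let ioreq = register_ioreq_region(cfg.domid, cfg.mmio_base, cfg.mmio_size)?;

    // one EventFd in each direction for kicks and interrupt requests
    let (kick, call) = create_event_fds()?;

    // spawn the existing vhost-user slave and hand it the memory + fds
    let slave = spawn_slave(&cfg.slave_binary, &mem, &kick, &call)?;

    // proxy IOREQ events and vhost-user notifications until the guest goes away
    run_bridge(ioreq, kick, call, slave)
}
#+end_src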
#+name: xen-vhost-user-master
#+begin_src ditaa :cmdline -o :file xen_vhost_user_master.png
Dom0 DomU
|
|
|
|
|
|
+-------------------+ +-------------------+ |
| |----------->| | |
| vhost-user | vhost-user | vhost-user | : /------------------------------------\
| slave | protocol | master | | | |
| (existing) |<-----------| (rust) | | | |
+-------------------+ +-------------------+ | | |
^ ^ | ^ | | Guest Userspace |
| | | | | | |
| | | IOREQ | | | |
| | | | | | |
v v V | | \------------------------------------/
+---------------------------------------------------+ | +------------------------------------+
| ^ ^ | ioctl ^ | | | |
| | iofd/irqfd eventFD | | | | | | Guest Kernel |
| +---------------------------+ | | | | | +-------------+ |
| | | | | | | virtio-dev | |
| Host Kernel V | | | | +-------------+ |
+---------------------------------------------------+ | +------------------------------------+
| ^ | | ^
| hyper | | |
----------------------=------------- | -=--- | ----=------ | -----=- | --------=------------------
| call | Trap | | IRQ
V | V |
+-------------------------------------------------------------------------------------+
| | ^ | ^ |
| | +-------------+ | |
EL2 | Xen Hypervisor | | |
| +-------------------------------+ |
| |
+-------------------------------------------------------------------------------------+
#+end_src
[STR-63] <https://linaro.atlassian.net/browse/STR-63>
--
Alex Bennée
Hello,
This patchset adds support for the vhost-user-gpio device in QEMU.
Support for the same has already been added to the virtio
specification and the Linux kernel.
A Rust based backend is also in progress and is tested against this patchset:
https://github.com/rust-vmm/vhost-device/pull/76
--
Viresh
Viresh Kumar (2):
hw/virtio: add boilerplate for vhost-user-gpio device
hw/virtio: add vhost-user-gpio-pci boilerplate
hw/virtio/Kconfig | 5 +
hw/virtio/meson.build | 2 +
hw/virtio/vhost-user-gpio-pci.c | 69 ++++++
hw/virtio/vhost-user-gpio.c | 343 ++++++++++++++++++++++++++++
include/hw/virtio/vhost-user-gpio.h | 35 +++
5 files changed, 454 insertions(+)
create mode 100644 hw/virtio/vhost-user-gpio-pci.c
create mode 100644 hw/virtio/vhost-user-gpio.c
create mode 100644 include/hw/virtio/vhost-user-gpio.h
--
2.31.1.272.g89b43f80a514
This series adds support for virtio-video decoder devices in Qemu
and also provides a vhost-user-video vmm implementation.
The vhost-user-video vmm currently parses the virtio-video v3 protocol
(as that is what the Linux frontend driver implements).
It then converts that to a v4l2 mem2mem stateful decoder device.
Currently this has been tested using the v4l2 vicodec test driver in
Linux [1] but it is intended to be used with Arm SoCs which often
implement v4l2 stateful decoder/encoder drivers for their video
accelerators.
The primary goal so far has been to allow continuing development of
the virtio-video Linux frontend driver and testing with Qemu. Using
vicodec on the host allows a purely virtual dev env, and allows for
CI integration in the future by kernelci etc.
This series also adds the virtio_video.h header and adds the
FWHT format which is used by the vicodec driver.
I have tested this VMM using v4l2-ctl from v4l2-utils in the guest
to do a video decode to a file, which can then be validated using
ffplay. The v4l2-compliance tool has also been run in the guest; it
stresses the interface and issues lots of syscall level tests.
See the README.md for example commands on how to configure the guest
kernel and do a video decode using Qemu and vicodec with this VMM.
Linux virtio-video frontend driver code:
https://github.com/petegriffin/linux/commits/v5.10-virtio-video-latest
Qemu vmm code:
https://github.com/petegriffin/qemu/tree/vhost-virtio-video-master-v1
This is part of a wider initiative by Linaro called
"project Stratos" for which you can find information here:
https://collaborate.linaro.org/display/STR/Stratos+Home
Applies cleanly to git://git.qemu.org/qemu.git master (a3607def89).
Thanks,
Peter.
[1] https://lwn.net/Articles/760650/
Peter Griffin (8):
vhost-user-video: Add a README.md with cheat sheet of commands
MAINTAINERS: Add virtio-video section
vhost-user-video: boiler plate code for vhost-user-video device
vhost-user-video: add meson subdir build logic
standard-headers: Add virtio_video.h
virtio_video: Add Fast Walsh-Hadamard Transform format
hw/display: add vhost-user-video-pci
tools/vhost-user-video: Add initial vhost-user-video vmm
MAINTAINERS | 8 +
hw/display/Kconfig | 5 +
hw/display/meson.build | 3 +
hw/display/vhost-user-video-pci.c | 82 +
hw/display/vhost-user-video.c | 386 ++++
include/hw/virtio/vhost-user-video.h | 41 +
include/standard-headers/linux/virtio_video.h | 484 +++++
tools/meson.build | 9 +
tools/vhost-user-video/50-qemu-rpmb.json.in | 5 +
tools/vhost-user-video/README.md | 98 +
tools/vhost-user-video/main.c | 1680 ++++++++++++++++
tools/vhost-user-video/meson.build | 10 +
tools/vhost-user-video/v4l2_backend.c | 1777 +++++++++++++++++
tools/vhost-user-video/v4l2_backend.h | 99 +
tools/vhost-user-video/virtio_video_helpers.c | 462 +++++
tools/vhost-user-video/virtio_video_helpers.h | 166 ++
tools/vhost-user-video/vuvideo.h | 43 +
17 files changed, 5358 insertions(+)
create mode 100644 hw/display/vhost-user-video-pci.c
create mode 100644 hw/display/vhost-user-video.c
create mode 100644 include/hw/virtio/vhost-user-video.h
create mode 100644 include/standard-headers/linux/virtio_video.h
create mode 100644 tools/vhost-user-video/50-qemu-rpmb.json.in
create mode 100644 tools/vhost-user-video/README.md
create mode 100644 tools/vhost-user-video/main.c
create mode 100644 tools/vhost-user-video/meson.build
create mode 100644 tools/vhost-user-video/v4l2_backend.c
create mode 100644 tools/vhost-user-video/v4l2_backend.h
create mode 100644 tools/vhost-user-video/virtio_video_helpers.c
create mode 100644 tools/vhost-user-video/virtio_video_helpers.h
create mode 100644 tools/vhost-user-video/vuvideo.h
--
2.25.1
Hi,
To start the new year I thought I would dump some of my thoughts on
zero-copy between VM domains. For project Stratos we've gamely avoided
thinking too hard about this while we've been concentrating on solving
more tractable problems. However we can't put it off forever so let's
work through the problem.
Memory Sharing
==============
For any zero-copy to work there has to be memory sharing between the
domains. For traditional KVM this isn't a problem as the host kernel
already has access to the whole address space of all its guests.
However type-1 setups (and now pKVM) are less promiscuous about sharing
their address space across the domains.
We've discussed options like dynamically sharing individual regions in
the past (maybe via iommu hooks). However given the performance
requirements I think that is ruled out in favour of sharing of
appropriately sized blocks of memory. Either one of the two domains has
to explicitly share a chunk of its memory with the other or the
hypervisor has to allocate the memory and make it visible to both. What
considerations do we have to take into account to do this?
* the actual HW device may only have the ability to DMA to certain
areas of the physical address space.
* there may be alignment requirements for HW to access structures (e.g.
GPU buffers/blocks)
Which domain should do the sharing? The hypervisor itself likely
doesn't have all the information to make the choice, but in a
distributed driver world it won't always be the Dom0/Host equivalent.
While the domain with the HW driver in it will know what the HW needs,
it might not know if the GPAs being used are actually visible to the
real PAs they are mapped to.
I think this means for useful memory sharing we need the allocation to
be done by the HW domain but with support from the hypervisor to
validate the region meets all the physical bus requirements.
Buffer Allocation
=================
Ultimately I think the majority of the work that will be needed comes
down to how buffer allocation is handled in the kernels. This is also
the area I'm least familiar with so I look forward to feedback from
those with deeper kernel knowledge.
For Linux there already exists the concept of DMA reachable regions that
take into account the potentially restricted set of addresses that HW
can DMA to. However we are now adding a second constraint which is where
the data is eventually going to end up.
For example the HW domain may be talking to a network device but the
packet data from that device might be going to two other domains. We
wouldn't want to share a region for received network packets between
both domains because that would leak information, so the network driver
needs knowledge of which shared region to allocate from and has to hope
the HW allows us to filter the packets appropriately (maybe via VLAN
tag). I
suspect the pure HW solution of just splitting into two HW virtual
functions directly into each domain is going to remain the preserve of
expensive enterprise kit for some time.
Should the work be divided up between sub-systems? Both the network and
block device sub-systems have their own allocation strategies and would
need some knowledge about the final destination for their data. What
other driver sub-systems are going to need support for this sort of
zero-copy forwarding? While it would be nice for every VM transaction to
be zero-copy we don't really need to solve it for low speed transports.
Transparent fallback and scaling
================================
As we know memory is always a precious resource that we never have
enough of. The more we start carving up memory regions for particular
tasks the less flexibility the system has as a whole to make efficient
use of it. We can almost guarantee whatever number we pick for a given
VM-to-VM conduit will be wrong. Any memory allocation system based on
regions will have to be able to fall back gracefully to using other
memory in the HW domain and rely on traditional bounce buffering
approaches while under heavy load. The VirtIO backends will then need
to understand when data destined for the FE domain requires this
bounce buffer treatment, which will involve tracking destination
domain metadata somewhere in the system so it can be queried quickly.
Is there a cross-over here with the kernel's existing support for NUMA
architectures? It seems to me there are similar questions about the
best place to put memory, so perhaps we can treat multi-VM domains as
different NUMA zones?
Finally there is the question of scaling. While mapping individual
transactions would be painfully slow we need to think about how dynamic
a modern system is. For example do you size your shared network region
to cope with a full HD video stream of data? Most of the time the
user won't be doing anything nearly as network intensive.
Of course the dynamic addition (and removal) of shared memory regions
brings in more potential synchronisation problems of ensuring shared
memory isn't accessed by either side when taken down. We would need
some sort of assurance that the sharee has finished with all the data
in a given region before the sharer brings the share down.
Conclusion
==========
This long text hasn't even attempted to come up with a zero-copy
architecture for Linux VMs. I'm hoping as we discuss this we can capture
all the various constraints any such system is going to need to deal
with. So my final questions are:
- what other constraints do we need to take into account?
- can we leverage existing sub-systems to build this support?
I look forward to your thoughts ;-)
--
Alex Bennée
+Bill Mills <bill.mills(a)linaro.org>
I confirm that the patch gets Ubuntu 21.10 "mostly" working (an SD card
access issue related to IRQs, I believe) as Dom0 for Xen 4.16 (booted
SystemReady-IR) on MacchiatoBin.
The patch allows the Linux ComPhy driver to make a SiP-specific SMC call
to initialize the ComPhy chip (SerDes in my mind).
Investigation shows that EDK2 initializes the ComPhy (PCI, SATA... lanes)
prior to booting Linux and thus does not need the patch (the SMC call is
done before Xen takes over).
Cheers
FF
Additional information:
Documentation: auto-init of ComPhy
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
Comphy configuration in Marvell.dec
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
MvComphyInit called in PlatInitDxe
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
MvComPhyInit
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
:
BoardDescProtocol->BoardDescComPhyGet (BoardDescProtocol,
&ComPhyBoardDesc); /* gets the configuration above in a C structure */
For each chip {
InitComPhyConfig (PtrChipCfg, LaneData, &ComPhyBoardDesc[Index]);
Configure each lane based on Pcd data
}
On Thu, 22 Oct 2020 at 23:46, Stefano Stabellini via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> On Thu, 22 Oct 2020, Alex Bennée wrote:
> > Stefano Stabellini <stefano.stabellini(a)xilinx.com> writes:
> > (XEN) Check compatible for node /a
> > (XEN) Loading d0 initrd from 00000000aefac000 to 0x0000000028200000-0x0000000029eedb66
> > (XEN) Loading d0 DTB to 0x0000000028000000-0x0000000028005ed9
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Scrubbing Free RAM in background
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) ***************************************************
> > (XEN) PLEASE SPECIFY dom0_mem PARAMETER - USING 512M FOR NOW
> > (XEN) ***************************************************
>
> This is a problem, especially if you are booting Debian: 512MB is not
> going to be enough. It might also be the reason for the hang below.
>
> Try to add something like: dom0_mem=2G to the Xen command line.
>
>
> > (XEN) 3... 2... 1...
> > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> > (XEN) Check compatible for node /chosen/module@b0c9b000
> > (XEN) cplen 17
> > (XEN) multiboot,module
> > (XEN) Check compatible for node /chosen/module@aefac000
> > (XEN) cplen 17
> > (XEN) multiboot,module
> > (XEN) Freed 340kB init memory.
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER24
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER28
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER32
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER36
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER40
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v2: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v3: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000001 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000002 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000004 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000008 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000010 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000020 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000040 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000080 to ICPENDR8
> > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
> > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000100 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000200 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000000400 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000800 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000001000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000002000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000004000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000008000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000010000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000020000 to ICPENDR8
> > (XEN) d0v2 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000040000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000080000 to ICPENDR8
> > (XEN) d0v2 Unhandled SMC/HVC: 0x82000002
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000100000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000200000 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000400000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000800000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000001000000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000002000000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000004000000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000008000000 to ICPENDR8
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v3 Unhandled SMC/HVC: 0x82000002
> > (XEN) d0v3: vGICD: unhandled word write 0x00000010000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000020000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000040000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000080000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000000001 to ICPENDR12
> > (XEN) d0v3 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000002
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000002 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000004 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000008 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000010 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000020 to ICPENDR12
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000002
>
> I suspect the ICPENDR problem is not an issue anymore. We are seeing
> another hang, maybe due to the lack of dom0_mem or something else.
>
> The "Unhandled SMC/HVC" messages are interesting: Xen blocks SMC calls
> by default. The dom0 kernel here is trying to make two SiP calls, Xen
> blocks them and returns "unimplemented". I don't know if it causes any
> issues to the kernel but I can imagine that the kernel driver might
> refuse to continue. It is typically a firmware driver
> (drivers/firmware).
>
> If the two calls are actually required to boot, then Xen should have a
> 'mediator' driver to filter the calls allowed from the ones that are not
> allowed. See for instance xen/arch/arm/platforms/xilinx-zynqmp-eemi.c.
> As a test, the appended patch allows all SMC calls for dom0:
>
>
> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> index a36db15fff..821c15852a 100644
> --- a/xen/arch/arm/vsmc.c
> +++ b/xen/arch/arm/vsmc.c
> @@ -286,10 +286,32 @@ static bool vsmccc_handle_call(struct cpu_user_regs *regs)
>
> if ( !handled )
> {
> - gprintk(XENLOG_INFO, "Unhandled SMC/HVC: %#x\n", funcid);
> + if ( is_hardware_domain(current->domain) )
> + {
> + struct arm_smccc_res res;
> +
> + arm_smccc_1_1_smc(get_user_reg(regs, 0),
> + get_user_reg(regs, 1),
> + get_user_reg(regs, 2),
> + get_user_reg(regs, 3),
> + get_user_reg(regs, 4),
> + get_user_reg(regs, 5),
> + get_user_reg(regs, 6),
> + get_user_reg(regs, 7),
> + &res);
> +
> + set_user_reg(regs, 0, res.a0);
> + set_user_reg(regs, 1, res.a1);
> + set_user_reg(regs, 2, res.a2);
> + set_user_reg(regs, 3, res.a3);
> + }
> + else
> + {
> + gprintk(XENLOG_INFO, "Unhandled SMC/HVC: %#x\n", funcid);
>
> - /* Inform caller that function is not supported. */
> - set_user_reg(regs, 0, ARM_SMCCC_ERR_UNKNOWN_FUNCTION);
> + /* Inform caller that function is not supported. */
> + set_user_reg(regs, 0, ARM_SMCCC_ERR_UNKNOWN_FUNCTION);
> + }
> }
>
> return true;
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
--
François-Frédéric Ozog | *Director Business Development*
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog