> On Jul 27, 2020, at 8:48 AM, Mike Holmes via Stratos-dev <stratos-dev(a)op-lists.linaro.org> wrote:
>
> All
>
> We have been using the 3pm GMT slot on Thursdays every other week for the
> sync call; unfortunately, this repeatedly clashes with an all-hands Linaro
> call, as it does this week.
>
> I propose we make it the 2.00pm GMT Thursday slot every other week. Any
> other suggestions?
Or perhaps flip it around. Does anyone disagree with moving it to 1400 GMT?
Certainly sounds good. :-)
> Mike
>
> --
> Mike Holmes | Director, Foundation Technologies, Linaro
> Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
> "Work should be fun and collaborative, the rest follows"
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
Hi,
Looking at the history of VirtIO, it seems very much like virtio-mmio is
treated as the legacy interface, with most of the action to date being
directed at virtio-pci, which has a number of advantages including:
- already specified behaviour and driver model
- enumeration support
- support for MSI signals
The benefits of this have even led to specs for alternative
transports/devices such as IVSHMEMv2 being based around a PCI device
model.
There are detractors, however; most notably, the new microvm machine types
have chosen virtio-mmio's static assignment because it allows VMs to
spin up faster, as you don't need to deal with the settle times the spec
requires.
Azzedine mentioned in a previous call that the PCI device model adds
complications based on assumptions about real HW which might not be
relevant to virtual devices. Others have mentioned the increased
complexity of the code base to handle a full PCI device.
I'm curious what the implications are for supporting PCI in type-1
hypervisors. Is the desire to avoid handling all the details of PCI
within the small hypervisor code base, and instead punt those details to
the Dom0, what causes the extra vmexits?
As Srivatsa mentioned, there have been some attempts to update the
virtio-mmio spec to support MSI IRQs, which I think are the best way to
minimise unwanted context switches. So should we raise a card to help
with that effort?
Work that would be involved:
- defining the out-of-band information (MSI irqs in device tree?)
- updating the spec
- supporting MSI in the virtio-mmio front-end
Anything else?
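To make the out-of-band question concrete, here is a purely illustrative
device-tree sketch of what an MSI-capable virtio-mmio node might look
like. To be clear: no such binding exists today; the use of `msi-parent`
and the ITS phandle are assumptions about one possible shape, not a
proposal that has been specified anywhere.

```dts
/* Hypothetical sketch only: the msi-parent property on a
 * virtio-mmio node is NOT an agreed binding. It illustrates
 * where the out-of-band MSI routing information could live. */
virtio@a000000 {
        compatible = "virtio,mmio";
        reg = <0x0a000000 0x200>;
        /* today: a single wired, level-triggered SPI */
        interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>;
        /* possible addition: which MSI controller to signal */
        msi-parent = <&its>;
};
```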
Of course, one worry is that by adding all this additional
functionality to virtio-mmio you start to approach the complexity of
virtio-pci. I don't know if this is a real concern, but I suspect people
using virtio-mmio for speed won't want any optional enhancements that
slow it down.
Thoughts?
--
Alex Bennée
Hi Stefano,
I thought I'd better commit the idea to email (and the list) so we can
work out what's feasible.
Concept:
Boot AGL Linux on two different boards/hypervisors
Detail:
We build AGL Linux with VirtIO block/network enabled in the kernel. We
then take the single image (blockdev with firmware boot or direct
kernel boot?) and demonstrate it booting on two different hardware
platforms with two different hypervisors.
System 1:
MACCHIATOBin (4xA72)
Debian Host OS + KVM
Boot with QEMU + KVM
System 2:
Ultra 96 (2xA53 + 2xR5s)
Xen Hypervisor boot
DomU Debian
We could use different backends for the block device on each setup, say
LVM and qcow2?
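For System 1, a first cut might look something like the command lines
below. Every path, image name, and size here is a placeholder, and
direct kernel boot via -kernel is just one of the two boot options
mentioned above; this is a sketch of the shape, not a tested recipe.

```shell
# Illustrative only: image/kernel paths and sizes are placeholders.
# Create the qcow2 backend for System 1 (System 2 could use an LVM
# volume instead, to demonstrate a second backend).
qemu-img create -f qcow2 agl.qcow2 8G

# Boot the shared AGL image on the MACCHIATOBin under KVM, with a
# virtio-blk disk and a virtio-net NIC.
qemu-system-aarch64 \
    -M virt -cpu host -enable-kvm \
    -smp 4 -m 2048 \
    -kernel Image \
    -append "console=ttyAMA0 root=/dev/vda" \
    -drive file=agl.qcow2,format=qcow2,if=none,id=hd0 \
    -device virtio-blk-device,drive=hd0 \
    -netdev user,id=net0 \
    -device virtio-net-device,netdev=net0 \
    -nographic
```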
Could we make it more engaging? How much effort would it be to get some
sort of graphics up and running?
Things to build:
- AGL Linux
- Demo Boxes
- Xen + virtio (out-of-tree)
Thoughts?
--
Alex Bennée