On 15-01-21, 14:16, Arnd Bergmann via Stratos-dev wrote:
> You need a driver in the guest though that understands the
> device specific signaling and additional data.
> Let's look at a moderately complex example I picked at random,
> drivers/leds/leds-lp8501.c:
>
> I assume the idea would be to not replace the entire driver
> with a greybus specific one, but to reuse as much as possible
> from the existing code. The driver has no interrupts but it needs
> to access a gpio line and some device specific configuration
> data, which it can get either from a platform_data or from DT
> properties.
>
> Passing such a device through greybus then requires at least
> these steps:
>
> * allocate a unique vendor:device ID pair
> * create a lp8501 specific manifest binding for that ID
> * for the host, create an lp8501 specific greybus host driver to
> - read the device tree in the host, convert into
> manifest format according to the binding
> - open the raw i2c device and gpio line from user space
> - create virtual devices for these two, describe them
> in the manifest
> * for the guest, create an lp8501 greybus device driver for the
> vendor:device ID pair, to
> - interpret the manifest, convert data into lp55xx_platform_data
> - instantiate a gpio controller with one gpio line,
> - allocate a gpio number for that controller, add it to the platform data
> - instantiate an i2c host
> - instantiate an i2c device on that host, using the platform_data
> and the "lp8501" i2c_device_id string.
>
> If a device has no DT properties or platform_data, and no gpio,
> reset, regulator, clock, or other dependencies, some of the
> steps can be skipped, but at the minimum you still need device
> specific code in the guest to map the vendor:device ID to
> an i2c_device_id.
Right, I misunderstood it earlier when I thought you were talking about
the controller.
Greybus takes care of how an i2c message gets transferred and translated
to the host's controller driver, but the device sits above
this layer.
The device (like an i2c memory or a touchscreen) has its own driver and
protocols to follow, which don't have much to do with i2c (it could just
as well be spi with almost exactly the same driver). i2c here is just
a bus, much like the amba buses on ARM platforms (where we can directly
access registers). And the end device's driver lives in the guest, not
in the host.
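To make the layering concrete, the existing greybus i2c protocol just
packs the i2c_msg segments into a small transfer request. The sketch
below is paraphrased from the in-kernel protocol definitions (treat the
exact field layout as approximate), and it is all the guest side ever
needs to know about the controller:

#include <linux/types.h>

/* Paraphrased from the in-kernel greybus i2c protocol definitions
 * (field layout approximate): one operation per i2c_msg segment,
 * with write payloads appended after the ops[] array. */
struct gb_i2c_transfer_op {
        __le16  addr;   /* 7-bit client address */
        __le16  flags;  /* read/write etc., mapped from I2C_M_* */
        __le16  size;   /* length of this segment's payload */
} __packed;

struct gb_i2c_transfer_request {
        __le16  op_count;
        struct gb_i2c_transfer_op ops[];        /* op_count entries */
} __packed;

Everything below that (clocking, muxing, the actual controller
programming) stays with the host's controller driver.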
I don't think copying their properties to the manifest adds any real value
here. We should do that part over DT itself, as the guest is going to
receive one from the host anyway. We just need to see how we divide
this information between DT and manifests, or maybe we can make
greybus work with DT as well (at the client side).
At this point I would also like to ask if we should get this
discussion going on the greybus list, especially with people like Greg KH
and Johan Hovold, as any changes to greybus would require their
approval anyway and they may be able to offer some ideas as well.
--
viresh
Hi All (inc upstream authors CC'ed),
There have been various discussions over the last few weeks about where
the development priorities for Stratos should be. I wanted to lay out a
summary of those discussions and where I think the focus is and what
open questions remain.
Multimedia
==========
With virtio-video approaching standardisation:
Subject: [RFC PATCH v5] virtio-video: Add virtio video device specification
Date: Wed, 20 Jan 2021 17:31:43 +0900
Message-Id: <20210120083143.766189-1-acourbot(a)chromium.org>
we think enabling this would be a good introduction to the challenges of
high bandwidth multimedia. We considered more advanced devices such as
cameras but thought that given the Linux kernel API is still evolving it
was too soon to try and stabilise a VirtIO specification - especially if
we want to avoid just making it ape the Linux API. virtio-gpu (including
virtio-wayland) already has a number of implementations across various
VMMs and hypervisors, so it doesn't make sense to add yet another one
to the mix. However virtio-video does share some similar problems,
including needing to solve the management of memory across virtual
domains where the final location and alignment of memory are important.
Peter Griffin is leading this work and will be creating some cards shortly.
Broadly this will cover:
- Helping get the Linux frontend driver (from Google's ChromeOS) upstreamed
- Implementing a standalone Backend (vhost-user, via QEMU)
- Architecture document for more complex deployments
The initial demo will involve terminating the backend on a KVM Host or
Xen Dom0. The architecture work will consider how the more complex
deployments would work (splitting domains, mapping to secure world etc)
and form the basis for future work.
Memory Isolation
================
We did a bunch of investigative work last cycle but generated rather
more questions than concrete answers. There are a number of avenues to
explore but currently there isn't a clear way forward for a general
purpose solution for the problem. There is ongoing work in the community
on solving the specific zero-copy problem for virtio-gpu and we hope to
learn more lessons with our virtio-video work. In the meantime there was
a potential copy-based solution proposed that works for low performance
interfaces. Currently described as "Fat VirtQueues" (name subject to
change), this embeds all data inside the virtqueues themselves. The major
limitation is that any data frames passed this way must be fully
self-contained and not reference memory outside the queue.
This makes the isolation problem more tractable as the queue itself will
be the only thing that needs to be shared between virtual domains.
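To give a flavour of the idea (purely illustrative, none of these names
come from a spec proposal), a fat virtqueue element would carry its
payload in-line in the queue memory rather than pointing out into guest
RAM:

#include <stdint.h>

/* Illustrative sketch only: a self-contained element whose payload
 * lives inside the shared queue itself, so the queue is the only
 * memory the two domains need to share. */
#define FAT_ELEM_DATA_MAX 252

struct fat_virtq_elem {
        uint16_t len;                     /* bytes of data[] in use */
        uint16_t flags;                   /* direction, continuation, ... */
        uint8_t  data[FAT_ELEM_DATA_MAX]; /* request/response payload */
};

The trade-off is the extra copy in and out of data[], which is why this
targets the low bandwidth device classes rather than multimedia.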
Arnd Bergmann will be leading this work which is currently captured in
the STR-25 card:
https://projects.linaro.org/browse/STR-25
Xen Work
========
We did a bit of work on Xen last cycle which was mostly housekeeping
work to fix regressions and issues booting up on ARM64 systems. We want
to continue the work here to make Xen our reference type-1 hypervisor
for VirtIO work. There is currently a patch series:
Subject: [PATCH V4 00/24] IOREQ feature (+ virtio-mmio) on Arm
Date: Tue, 12 Jan 2021 23:52:08 +0200
Message-Id: <1610488352-18494-1-git-send-email-olekstysh(a)gmail.com>
which we are helping review and test. It currently comes with its own
virtio-block device backend which can replace the Xen block device
approach. We plan to build on this work and enable QEMU as a generic
virtio backend for Xen ioreq devices as a general proving ground for
virtio backends. While it won't allow for the fastest virtio, it will
give access to a broad range of backends thanks to QEMU's general purpose
approach.
I'll be taking the lead on this work which is covered by STR-19:
https://projects.linaro.org/browse/STR-19
We are also looking at implementing a Xen mediator for the Arm Firmware
Framework. This is a general purpose framework where a hypervisor can
communicate with the system firmware through a common API. This avoids
the need for multiple firmware-aware implementations in the hypervisor
for accessing secure services. As long as the firmware provides the
interface, the hypervisor will be able to run on it.
Ruchika Gupta is leading this work under STR-23 which is part of the
broader trusted substrate initiative:
https://projects.linaro.org/browse/STR-23
SCMI Server
===========
The System Control and Management Interface (SCMI) provides a mechanism
for clients (e.g. kernels needing resources) to request hardware
resources from the system. The server usually sits in the secure
firmware layer and responds to secure calls from the kernel to turn
resources on and off. It is key to efficient power management as you
might for example want to turn clock sources off between decoded video
frames.
In a multi-domain system you have to mediate between a number of
potential users of these resources. For the non-primary domain you can
use a virtio-scmi device:
Subject: [PATCH v5] Add virtio SCMI device specification
Date: Wed, 27 May 2020 19:43:25 +0200
Message-ID: <20200527174325.9529-1-peter.hilber(a)opensynergy.com>
There is already a proposal for the kernel driver to go along with the
specification:
Subject: [RFC PATCH v2 00/10] firmware: arm_scmi: Add virtio transport
Date: Thu, 5 Nov 2020 22:21:06 +0100
Message-ID: <20201105212116.411422-1-peter.hilber(a)opensynergy.com>
So our work would be focused on helping those get upstream and working
on an open source reference implementation of the server in the backend.
The question of where the SCMI server should be implemented is an open
one.
The simplest would be a proof of concept user-space server which extends
the existing testing build. This would demonstrate the connection but
wouldn't be usable in production as there isn't currently a method for
user space to access the resource hierarchy maintained by the kernel.
Another option would be to terminate virtio-scmi inside the host kernel
where it could then be merged with the host's own requests. However this
does seem like a horrific hack that embeds policy decisions in the
kernel.
The other two options are to enable the virtio backend for OP-TEE (where
the SCMI server can live) or to enable the SCMI server in Zephyr RTOS,
which already has some experimental virtio support in preparation for a
Zephyr Dom0.
This work is being led by Vincent Guittot and can be followed from:
https://projects.linaro.org/browse/STR-4
VirtIO serial devices
=====================
There is a desire to implement more serial-like interfaces for virtio,
which are common for exposing hardware on embedded and mobile devices.
There are several options available, although currently only virtio-i2c
has a proposal for the standard:
Date: Fri, 8 Jan 2021 15:39:08 +0800
Message-Id: <dfb21780647c69519f01fb0afbbd18f780963af9.1610091344.git.jie.deng(a)intel.com>
Subject: [virtio-comment] [PATCH v7] virtio-i2c: add the device specification
however there have been a number of alternative proposals including
using virtio-greybus or virtio-rpmsg as general purpose multiplexer
transports for these sorts of low bandwidth datagram services. Having a
virtio-i2c implementation would be useful for testing the fat virtqueue
concept, although both the existing virtio-rpmb and proposed virtio-scmi
daemons could also be pressed into service for this.
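As a rough illustration of why i2c maps so naturally onto that model,
each transfer is just a small out-header, a bounded buffer and a
one-byte status written back by the device. The sketch below is
paraphrased from the proposal above; the exact field names and values
belong to the spec, not to this summary:

#include <linux/types.h>

/* Paraphrased sketch of a virtio-i2c transfer (layout approximate). */
struct virtio_i2c_out_hdr {
        __le16 addr;            /* target device address */
        __le16 padding;
        __le32 flags;           /* e.g. read vs write */
};

struct virtio_i2c_in_hdr {
        __u8 status;            /* OK or error, written by the device */
};

Since a typical transfer is only a handful of bytes, copying it into an
in-queue format costs very little.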
Currently we don't have anyone assigned to look at this, so it needs
someone to step forward with a proposed use case and take it up.
Housekeeping
============
I'm planning on closing out STR-7 (Create a common virtio library for
use by programs implementing a backend) as I'm not sure what it would
achieve. We have implemented one C based backend using the libvhost code
inside the QEMU repository. Although not totally separate from the rest
of the source tree it could be made so with minimal effort if needed. In
the meantime Takahiro has enabled VirtIO inside Zephyr by adapting the
current Linux code into it.
The main contender for a common library comes from the rust-vmm project:
https://github.com/rust-vmm
and specifically the vhost-user-backend crate:
https://github.com/rust-vmm/vhost-user-backend/
There are a number of backends that have been implemented with it but it
probably requires someone with a good Rust background to evaluate the
current state of the libraries. To my untrained eye there is still some
commonality in the handling that could be moved from the individual
daemons to make the core libraries easier to use. If we want to go
forward with Rust we should create a specific card that a Rust
expert could work on.
Summary
=======
Apologies for the long read and the delay getting this out but hopefully
that gives a good overview of the thinking for the next cycle. Do shout
if I missed anything out and please come with your questions and comments
on the list and at the Stratos sync tomorrow afternoon.
--
Alex Bennée
On Wed, Jan 13, 2021 at 12:21 AM Stefano Stabellini via Stratos-dev
<stratos-dev(a)op-lists.linaro.org> wrote:
> On Tue, 12 Jan 2021, Alex Bennée wrote:
> >
> > I wanted to bounce some ideas around about our aim of limited memory
> > sharing.
> >
> > At the end of last year Windriver presented their shared memory approach
> > for hypervisor-less virtio. We have also discussed QC's iotlb approach.
> > So far there is no proposed draft of the Virtio spec and there are
> > questions about how these shared memory approaches fit within the
> > existing Virtio memory model and how they would interact with a Linux
> > guest driver API to minimise the amount of copying needed as data moves
> > from a primary guest to a back-end.
> >
> > Given the performance requirements for high bandwidth multimedia devices
> > it feels like we need to get some working code published so we can
> > compare behaviour and implementation details. I think we are still a
> > fair way off from being able to propose any updates to the standard until
> > we can see the changes needed across guest APIs and get some measure of
> > performance and bottlenecks.
> >
> > However there are a range of devices we are interested in that are less
> > performance sensitive - e.g. SPI, I2C and other "serial" buses. They
> > would also benefit from having a minimal memory profile. Is it worth
> > considering addressing a separate simpler and less performance
> > orientated solution?
> >
> > Arnd suggested something that I'm going to call fat VirtQueues. The
> > idea being that both data and descriptors are stored in the same
> > VirtQueue structure. While it would necessitate copying data from guest
> > address space to the queue and back it could be kept to the lower levels
> > of the driver stack without the drivers themselves having to worry too
> > much about the details. With everything contained in the VirtQueue there
> > is only one bit of memory to co-ordinate between the primary guest and
> > service OS which makes isolation a lot easier.
> >
> > Of course this doesn't solve the problem for the more performance
> > sensitive applications but it would be a workable demonstration of
> > memory isolation across VMs and a useful suggestion in its own right.
> >
> > What do people think?
>
> I think it is a good idea: everyone will agree that the first step is to
> implement a solution that relies on memcpys. Anything smarter is best
> done as a second step and probably requires new hypervisor interfaces.
>
> From a performance perspective, whether we use a separate pre-shared
> buffer, or "fat VirtQueues" as Arnd suggested, the results should be
> very similar. So I think fat VirtQueues are a good way forward.
Ok, sounds good to me, too. I would also expect to see similar
performance between any of the approaches using memcpy(). Even in the
current case of communication between two guests using a copy
in the hypervisor, that is probably not much faster than doing a copy in the
guest. Among the various options for doing a copy in the guest, the
modified virtqueue should be conceptually cleaner than the others.
I'm still suspicious of the page-flipping approaches or anything
that relies on an IOMMU to avoid the memcpy() for general virtqueues,
as that sounds like it will be slower and less secure than the memcpy()
because of the added complexity. The best approach I can imagine
for the cases where the copy is too slow would be to extend
those devices to use "shared memory regions"[1] in addition to
virtqueues. This is already allowed in virtio-fs, virtio-video and virtio-gpu,
and may work for some other device types, but not for those (e.g.
virtio-net or virtio-block) that do not lean towards this model.
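For reference, the way a Linux guest driver picks up such a region
today looks roughly like this (sketch only, using the helpers the
virtio core already provides; region id 0 is just an example, the
device defines which ids it exposes):

#include <linux/virtio_config.h>
#include <linux/io.h>

static void __iomem *map_device_shm(struct virtio_device *vdev)
{
        struct virtio_shm_region shm;

        if (!virtio_get_shm_region(vdev, &shm, 0))
                return NULL;    /* device offers no such region */

        return ioremap(shm.addr, shm.len);
}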
Arnd
[1] https://github.com/oasis-tcs/virtio-spec/blob/master/shared-mem.tex
On 1/12/21 8:47 AM, Arnd Bergmann via Stratos-dev wrote:
> On Tue, Jan 12, 2021 at 12:24 PM Bill Mills <bill.mills(a)linaro.org> wrote:
>> On 1/12/21 5:57 AM, Viresh Kumar via Stratos-dev wrote:
>>> On 16-12-20, 14:54, Arnd Bergmann wrote:
I've been staying out of this for a bit but I'll offer a
few cents' worth now.
>>>> The problem we get into though is once we try to make this
>>>> work for arbitrary i2c or spi devices. The kernel has around
>>>> 1400 such drivers, and usually we use a device tree description
>>>> based on a unique string for every such device plus additional
>>>> properties for things like gpio lines or out-of-band interrupts.
I think what you're saying is that we already have DT-based
drivers for existing hardware, and to abstract them with
Greybus would likely require a special Greybus shim or something
for every one of those.
>>>> If we want to use greybus manifests in place of device trees,
>>>> that means either needing to find a way to map DT properties
>>>> into the manifest, or have the same data in another format
>>>> for each device we might want to use behind greybus, and
>>>> adapting the corresponding drivers to understand this additional
>>>> format.
Maybe, maybe not. It depends on what level of abstraction
the guest/client needs to represent the device (e.g. i2c or
spi).
Greybus assumes the device hardware is on a module and not
"directly" accessible by the AP. That is, if the AP wants
to send a byte over an I2C device on a module, the only way
to do that is by encapsulating that request in a Greybus
message. It can't use (say) a register interface to cause
the byte to be sent.
In a virtualized environment though, you might *want* to
expose a more "raw" interface to the hardware. What I
mean is you might want the host/server to grant exclusive
access to a guest/client to the register space that controls
the actual hardware, avoiding the need for a shim layer
(and most likely extra memory copies).
For low speed peripherals that probably isn't critical,
but I think it's worth considering whether you want to
use the latter approach (exclusively, or in addition to
something that abstracts hardware like Greybus does).
Note: these comments are about how the Greybus protocols
work; the "raw" approach is different from Greybus in
that respect. The *discovery* of devices available to
guests is a different issue though, and would be more
focused on Greybus *manifests* (or some other mechanism).
>>> I am a bit confused about this. I don't think we need to expose all
>>> that information over the manifest.
Viresh is saying that Greybus abstracts the hardware,
so there's no need to expose the details. The Greybus
driver on the host is the only one that needs to know
the DT-like details of specific implementations.
>>> The SPI controller will be accessible by the host OS (let's say both
>>> host and guest run Linux), the host will give/pass a manifest to the
>>> guest and the guest will send simple commands like read/write to the
>>> host. The Guest doesn't need minute details of how the controller is
>>> getting programmed, which is the responsibility of the host side
>>> controller driver which will have all this information from the DT
>>> passed from bootloader anyway. And so the manifest shouldn't be
>>> required to have equivalent of all DT properties.
>>>
>>> Isn't it ?
>>>
>>
>> Yes for the interrupt for the SPI or I2C controller.
>>
>> I presume Arnd is talking about side band signals from the devices.
>> I2C and SPI don't have a client side concept of interrupt and the
>> originator side (trying to not use the word master) would have to poll
>> otherwise. So many devices hook up a side band interrupt request to a
>> GPIO. Likewise an I2C EEPROM may have a write protect bit that is
>> controlled by a GPIO.
>>
>> Coordinating this between a virtual I2C and a virtual GPIO bank would be
>> complicated to do in the manifest if each is a separate device.
I might be misunderstanding here but I *think* in the Greybus
case, the details of how all signals (including interrupts)
are implemented are the host's responsibility, and do not need
to be visible to the guest.
>> However if we expand the definition of "I2C virtual device" to have an
>> interrupt request line and a couple outputs, the details are in fact on
>> the host side and the guest does not need to understand it all.
Yes, for the Greybus I2C protocol (for example) an
interrupt is represented as a message originating from
the owner of the "real" hardware, directed at the user
of the hardware. (From the module to the AP, but in
this case it would be from the host to the guest.) So
these details would be hidden and abstracted.
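In the GPIO protocol, for instance, that event is just a tiny one-way
message naming the line that fired (sketch paraphrased from the
in-kernel protocol header, layout approximate):

struct gb_gpio_irq_event_request {
        __u8    which;          /* which GPIO line the event is for */
} __packed;

The receiving side maps that onto its own local interrupt handling; no
register-level interrupt plumbing is exposed to it at all.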
>> What would this mean for the 1400 devices in the kernel? Would we need
>> to add a Greybus binding to the existing DT binding? That sounds like
>> the wrong way. It would be nice to leverage the DT binding that was
>> already in the kernel.
>
> I believe the majority of the devices are fairly simple, and the main
> thing they need is a character string for identification to work around
> the lack of a device/vendor ID mechanism that PCI or USB use.
If Greybus protocols are used I think all I2C devices would
simply be Greybus I2C devices.
-Alex
> The kernel supports three string based identification methods
> at the moment. Take a minimal wrapper like
> drivers/iio/imu/bmi160/bmi160_i2c.c as an example, where we have
>
> static const struct i2c_device_id bmi160_i2c_id[] = {
>         {"bmi160", 0},
>         {}
> };
> MODULE_DEVICE_TABLE(i2c, bmi160_i2c_id);
>
> static const struct acpi_device_id bmi160_acpi_match[] = {
>         {"BMI0160", 0},
>         { },
> };
> MODULE_DEVICE_TABLE(acpi, bmi160_acpi_match);
>
> #ifdef CONFIG_OF
> static const struct of_device_id bmi160_of_match[] = {
>         { .compatible = "bosch,bmi160" },
>         { },
> };
> MODULE_DEVICE_TABLE(of, bmi160_of_match);
> #endif
>
> static struct i2c_driver bmi160_i2c_driver = {
>         .driver = {
>                 .name = "bmi160_i2c",
>                 .acpi_match_table = ACPI_PTR(bmi160_acpi_match),
>                 .of_match_table = of_match_ptr(bmi160_of_match),
>         },
>         .probe = bmi160_i2c_probe,
>         .id_table = bmi160_i2c_id,
> };
> module_i2c_driver(bmi160_i2c_driver);
>
> The "i2c_device_id" structure has a list of strings that is unique in
> the Linux kernel but not standardized. I assume a greybus driver would
> be used to map the numbers from the manifest into this OS specific
> string, but this has to be done for each supported device that can
> be attached to a greybus device.
>
> The "of_device_id" is a similar list but meant to be globally unique
> through the DT binding.
>
>> I hear that ACPI can bind to SPI and I2C. Is this true and how does
>> that work?? (I ask for a reference. I am NOT suggesting we bring ACPI
>> into this.)
>
> The acpi_device_id has the same purpose as of_device_id, but uses
> a different namespace and is only used on PC-style machines.
>
> $ git grep -wl "i2c_driver\|spi_driver" drivers sound | wc -l
> 1424
> $ git grep -wl "i2c_driver\|spi_driver" drivers sound | xargs git
> grep -l of_device_id | wc -l
> 876
> $ git grep -wl "i2c_driver\|spi_driver" drivers sound | xargs git
> grep -l acpi_device_id | wc -l
> 145
>
> Arnd
>
On 1/12/21 5:57 AM, Viresh Kumar via Stratos-dev wrote:
> On 16-12-20, 14:54, Arnd Bergmann wrote:
>> The problem we get into though is once we try to make this
>> work for arbitrary i2c or spi devices. The kernel has around
>> 1400 such drivers, and usually we use a device tree description
>> based on a unique string for every such device plus additional
>> properties for things like gpio lines or out-of-band interrupts.
>>
>> If we want to use greybus manifests in place of device trees,
>> that means either needing to find a way to map DT properties
>> into the manifest, or have the same data in another format
>> for each device we might want to use behind greybus, and
>> adapting the corresponding drivers to understand this additional
>> format.
>
> I am a bit confused about this. I don't think we need to expose all
> that information over the manifest.
>
> The SPI controller will be accessible by the host OS (let's say both
> host and guest run Linux), the host will give/pass a manifest to the
> guest and the guest will send simple commands like read/write to the
> host. The Guest doesn't need minute details of how the controller is
> getting programmed, which is the responsibility of the host side
> controller driver which will have all this information from the DT
> passed from bootloader anyway. And so the manifest shouldn't be
> required to have equivalent of all DT properties.
>
> Isn't it ?
>
Yes for the interrupt for the SPI or I2C controller.
I presume Arnd is talking about side band signals from the devices.
I2C and SPI don't have a client side concept of interrupt and the
originator side (trying to not use the word master) would have to poll
otherwise. So many devices hook up a side band interrupt request to a
GPIO. Likewise an I2C EEPROM may have a write protect bit that is
controlled by a GPIO.
Coordinating this between a virtual I2C and a virtual GPIO bank would be
complicated to do in the manifest if each is a separate device.
However if we expand the definition of "I2C virtual device" to have an
interrupt request line and a couple outputs, the details are in fact on
the host side and the guest does not need to understand it all.
What would this mean for the 1400 devices in the kernel? Would we need
to add a Greybus binding to the existing DT binding? That sounds like
the wrong way. It would be nice to leverage the DT binding that was
already in the kernel.
I hear that ACPI can bind to SPI and I2C. Is this true and how does
that work?? (I ask for a reference. I am NOT suggesting we bring ACPI
into this.)
It would be great if we had connector based DT in the kernel. Then you
could generate a DT fragment for the virtual multifunction vPCI device
and apply it at Greybus enumeration.
Didn't project Ara have this problem? Did it have a solution?
-- Bill
Hi,
I wanted to bounce some ideas around about our aim of limited memory
sharing.
At the end of last year Windriver presented their shared memory approach
for hypervisor-less virtio. We have also discussed QC's iotlb approach.
So far there is no proposed draft of the Virtio spec and there are
questions about how these shared memory approaches fit within the
existing Virtio memory model and how they would interact with a Linux
guest driver API to minimise the amount of copying needed as data moves
from a primary guest to a back-end.
Given the performance requirements for high bandwidth multimedia devices
it feels like we need to get some working code published so we can
compare behaviour and implementation details. I think we are still a
fair way off from being able to propose any updates to the standard until
we can see the changes needed across guest APIs and get some measure of
performance and bottlenecks.
However there are a range of devices we are interested in that are less
performance sensitive - e.g. SPI, I2C and other "serial" buses. They
would also benefit from having a minimal memory profile. Is it worth
considering addressing a separate simpler and less performance
orientated solution?
Arnd suggested something that I'm going to call fat VirtQueues. The
idea being that both data and descriptors are stored in the same
VirtQueue structure. While it would necessitate copying data from guest
address space to the queue and back it could be kept to the lower levels
of the driver stack without the drivers themselves having to worry too
much about the details. With everything contained in the VirtQueue there
is only one bit of memory to co-ordinate between the primary guest and
service OS which makes isolation a lot easier.
Of course this doesn't solve the problem for the more performance
sensitive applications but it would be a workable demonstration of
memory isolation across VMs and a useful suggestion in its own right.
What do people think?
--
Alex Bennée
On 12/10/20 2:20 PM, Arnd Bergmann via Stratos-dev wrote:
> On Tue, Dec 8, 2020 at 8:12 AM Viresh Kumar via Stratos-dev
> <stratos-dev(a)op-lists.linaro.org> wrote:
>>
>> Hi Guys,
>>
>> There are offline discussions going on to assess the possibility of
>> re-using the Linux kernel Greybus framework for Virtio [1] use case,
>> where we can control some of the controllers on the back-end (Host),
>> like SPI, I2C, GPIO, etc, from front-end (VM), using the already well
>> defined Greybus specification [2].
>>
>> The Greybus specification and kernel source were initially developed
>> for Google's Project ARA [3], and the source code was merged into
>> the mainline kernel a long time back (in drivers/greybus/ and
>> drivers/staging/greybus/). You can find more information about how
>> Greybus works in this LWN article [4].
>>
>> Greybus broadly provides two distinct features:
>>
>> - Device discovery: at runtime, with the help of a manifest file
>> (think of it like DT, though it has a different format). This helps
>> the user of the hardware to identify the capabilities of the remote
>> hardware, which it can use.
>>
>> - Remote control/operation: of the IPs present on the remote hardware,
>> using firmware/OS independent operations; these are already well
>> defined for a lot of device types and can be extended if required.
>>
>> We wanted to share this over email to get some discussion going, so it
>> can be discussed later on the call.
>>
>> Alex Elder (Cc'd) is one of the maintainers of the Greybus core in the
>> Linux kernel, and I worked on a wide variety of the Greybus code and
>> maintain some of it.
>> being used in other applications and would like to contribute towards
>> it.
>
> I think the main capability this would add compared to having
> a simple virtio device per bus is that you can have a device that is
> composed of multiple back-ends, e.g. an i2c slave plus a set of
> GPIOs tied together for one function; this is something
> we did not discuss in the call today. The downside is that for
> each remote device, we'd still need to add a binding and a driver
> to make use of it.
In fact, Greybus has the notion of a "bundle" of connections
exactly for this purpose. So really, a device is represented
by a bundle of one or more connections (CPorts). Each connection
uses a protocol that is specific to a service it provides. Some
services represent primitive hardware (like I2C or GPIO or UART).
But for example the camera has one CPort representing management
and another representing data from the camera.
Greybus drivers register with the Greybus core, and they provide
a match table that defines what bundles (devices) should be
associated with the driver when they are probed. The bundles
and connections, etc. are defined in a module's manifest; for
a bundle this includes its vendor id, product id, and class,
which are used in matching it with a Greybus device driver.
So basically the manifest provides an encapsulated description
of hardware functionality, and built into its design is a way
to match that hardware with a (Greybus) driver. This could be
adapted for other environments.
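To make the matching side concrete, a Greybus class driver's table is
the bundle analogue of an i2c_device_id or of_device_id table, keyed on
manifest fields instead of DT strings. This is a sketch along the lines
of the existing in-kernel drivers; the example_* names are placeholders:

#include <linux/greybus.h>

static int example_probe(struct gb_bundle *bundle,
                         const struct greybus_bundle_id *id);
static void example_disconnect(struct gb_bundle *bundle);

/* The bundle's class (and optionally vendor/product) from the module's
 * manifest is what selects this driver, rather than a DT compatible
 * string or an i2c_device_id name. */
static const struct greybus_bundle_id example_id_table[] = {
        { GREYBUS_DEVICE_CLASS(GREYBUS_CLASS_BRIDGED_PHY) },
        { }
};
MODULE_DEVICE_TABLE(greybus, example_id_table);

static struct greybus_driver example_driver = {
        .name           = "example",
        .probe          = example_probe,
        .disconnect     = example_disconnect,
        .id_table       = example_id_table,
};
module_greybus_driver(example_driver);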
As an aside, let me highlight something:
- A Greybus manifest describes the hardware available in a module
- A manifest describes one or more Greybus bundles, each of
which represents a device
- A Greybus device driver has a way to identify which Greybus
bundle it should be bound with
- A Greybus bundle (device) is implemented with multiple
connections, each using a particular protocol
All, some, or none of these might be what's really wanted
here, but all are part of and possibly implied by the
term "Greybus." This is why I ask for clarity and precision
about what is really required.
Anyway, the questions I have are more about whether Greybus
as it exists now aligns well with this application.
Does the Greybus manifest address what's required to provide
virtualized access to real hardware via VirtIO? Does it
limit what could be done? Does it provide functionality
beyond what is needed? How is this better or worse than
using Device Tree (for example)? Is there a more natural
way for VirtIO to advertise available hardware?
To be clear, I'm not trying to discourage using Greybus here.
But as I said, I'm viewing things through a Greybus lens. I'm
working to understand what the Stratos "model" looks like so I
can bridge the divide in my mind between that and Greybus.
-Alex
> The alternative is to use a device tree to describe these to
> the guest kernel at boot time. The advantage is that we only
> need a really simple driver for each type of host controller
> (i2c, spi, ....), and describing the actual devices works just as
> before, using the existing DT bindings to bind the attached
> device to a driver and add auxiliary information with references
> to other devices (gpio, irq, device settings), at the cost of
> needing to configure them at boot time.
>
> Arnd
>
Hi All
Do we have agenda items for this week's call?
https://collaborate.linaro.org/display/STR/Stratos+Home
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative, the rest follows"