On 12/10/20 2:20 PM, Arnd Bergmann via Stratos-dev wrote:
> On Tue, Dec 8, 2020 at 8:12 AM Viresh Kumar via Stratos-dev
> <stratos-dev(a)op-lists.linaro.org> wrote:
>>
>> Hi Guys,
>>
>> There are offline discussions going on to assess the possibility of
>> re-using the Linux kernel Greybus framework for the Virtio [1] use
>> case, where we can control some of the controllers on the back-end
>> (Host), like SPI, I2C, GPIO, etc., from the front-end (VM), using the
>> already well-defined Greybus specification [2].
>>
>> The Greybus specification and kernel source were initially developed
>> for Google's Project ARA [3], and the source code was merged into the
>> mainline kernel long ago (in drivers/greybus/ and
>> drivers/staging/greybus/). You can find more information about how
>> Greybus works in this LWN article [4].
>>
>> Greybus broadly provides two distinct features:
>>
>> - Device discovery: at runtime, with the help of a manifest file
>> (think of it like DT, though it has a different format). This helps
>> the user of the hardware identify the capabilities of the remote
>> hardware it is going to use.
>>
>> - Remote control/operation: of the IPs present on the remote hardware,
>> using firmware/OS-independent operations. These are already well
>> defined for many device types and can be extended if required.
>>
>> We wanted to share this over email to get some discussion going
>> before we discuss it further on the call.
>>
>> Alex Elder (Cc'd) is one of the maintainers of the Greybus core in
>> the Linux kernel, and I have worked on a wide variety of Greybus code
>> and maintain some of it. Both of us worked on Project ARA, would like
>> to see Greybus used in other applications, and would like to
>> contribute towards that.
>
> I think the main capability this would add, compared to having a
> simple virtio device per bus, is that you can have a device that is
> composed of multiple back-ends, e.g. an i2c slave plus a set of
> GPIOs tied together for one function; this is something we did not
> discuss in the call today. The downside is that for each remote
> device, we'd still need to add a binding and a driver to make use
> of it.
In fact, Greybus has the notion of a "bundle" of connections
exactly for this purpose. So really, a device is represented
by a bundle of one or more connections (CPorts). Each connection
uses a protocol that is specific to the service it provides. Some
services represent primitive hardware (like I2C, GPIO, or UART).
But the camera, for example, has one CPort representing management
and another representing data from the camera.
Greybus drivers register with the Greybus core, and they provide
a match table that defines what bundles (devices) should be
associated with the driver when they are probed. The bundles
and connections, etc. are defined in a module's manifest; for
a bundle this includes its vendor id, product id, and class,
which are used in matching it with a Greybus device driver.
So basically the manifest provides an encapsulated description
of hardware functionality, and built into its design is a way
to match that hardware with a (Greybus) driver. This could be
adapted for other environments.
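To make this concrete, a minimal Greybus driver and its match table
look roughly like the sketch below. This is a from-memory
simplification of the vibrator driver in drivers/staging/greybus/
and is not compile-tested:

    #include <linux/greybus.h>

    /* Match any bundle whose manifest declares the vibrator class. */
    static const struct greybus_bundle_id gb_example_id_table[] = {
            { GREYBUS_DEVICE_CLASS(GREYBUS_CLASS_VIBRATOR) },
            { }
    };
    MODULE_DEVICE_TABLE(greybus, gb_example_id_table);

    static int gb_example_probe(struct gb_bundle *bundle,
                                const struct greybus_bundle_id *id)
    {
            /* Create and enable a connection for each CPort in the
             * bundle, then register with the relevant subsystem. */
            return 0;
    }

    static void gb_example_disconnect(struct gb_bundle *bundle)
    {
            /* Tear down the connections created in probe. */
    }

    static struct greybus_driver gb_example_driver = {
            .name           = "example",
            .probe          = gb_example_probe,
            .disconnect     = gb_example_disconnect,
            .id_table       = gb_example_id_table,
    };
    module_greybus_driver(gb_example_driver);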
As an aside, let me highlight something:
- A Greybus manifest describes the hardware available in a module
- A manifest describes one or more Greybus bundles, each of
which represents a device
- A Greybus device driver has a way to identify which Greybus
bundle it should be bound with
- A Greybus bundle (device) is implemented with one or more
connections, each using a particular protocol
All, some, or none of these might be what's really wanted
here, but all are part of and possibly implied by the
term "Greybus." This is why I ask for clarity and precision
about what is really required.
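For reference, in the input format of the "manifesto" tool a minimal
manifest looks something like this. The descriptor layout is from
memory, and the class/protocol numbers are simply the bridged-PHY
(0xa) and GPIO (0x2) values as I recall them, so treat the details
as illustrative only:

    [manifest-header]
    version-major = 0
    version-minor = 1

    [interface-descriptor]
    vendor-string-id = 1
    product-string-id = 2

    [string-descriptor 1]
    string = Acme Corp

    [string-descriptor 2]
    string = Example Module

    ; One bridged-PHY bundle exposing a single GPIO CPort
    [bundle-descriptor 1]
    class = 0xa

    [cport-descriptor 1]
    bundle = 1
    protocol = 0x2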
Anyway, the questions I have are more about whether Greybus as it
exists now aligns well with this application.
Does the Greybus manifest address what's required to provide
virtualized access to real hardware via VirtIO? Does it
limit what could be done? Does it provide functionality
beyond what is needed? How is this better or worse than
using Device Tree (for example)? Is there a more natural
way for VirtIO to advertise available hardware?
To be clear, I'm not trying to discourage using Greybus here.
But as I said, I'm viewing things through a Greybus lens. I'm
working to understand what the Stratos "model" looks like so I
can bridge the divide in my mind between that and Greybus.
-Alex
> The alternative is to use a device tree to describe these to
> the guest kernel at boot time. The advantage is that we only
> need a really simple driver for each type of host controller
> (i2c, spi, ...), and describing the actual devices works just as
> before, using the existing DT bindings to bind the attached
> device to a driver and to add auxiliary information with references
> to other devices (gpio, irq, device settings), at the cost of
> needing to configure them at boot time.
>
> Arnd
>
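To illustrate the device-tree alternative Arnd describes above, the
guest-side description might look something like the sketch below.
The "virtio,device22" compatible for a virtio-i2c adapter is
hypothetical (no such binding exists yet) and the addresses are made
up; the point is only that the attached device reuses its existing
binding unchanged:

    virtio@a003e00 {
            compatible = "virtio,mmio";
            reg = <0x0a003e00 0x200>;
            interrupts = <0 16 4>;

            i2c {
                    compatible = "virtio,device22"; /* hypothetical virtio-i2c */
                    #address-cells = <1>;
                    #size-cells = <0>;

                    /* The attached device uses its existing binding as usual. */
                    eeprom@50 {
                            compatible = "atmel,24c02";
                            reg = <0x50>;
                    };
            };
    };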
Hi All,
Thanks for the great call yesterday [1].
A reminder that we will skip the December 24th call; I am looking for
agenda items and any updates to the notes.
If Linaro members could think about the potential value of Stratos and
other major themes for our mid-cycle review, which is coming up in January,
it would help. I think that with your support we could achieve a lot more
in the next cycle.
Mike
[1]
https://collaborate.linaro.org/display/STR/2020-12-10+Project+Stratos+Sync+…
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
On Tue, Dec 8, 2020 at 8:12 AM Viresh Kumar via Stratos-dev
<stratos-dev(a)op-lists.linaro.org> wrote:
Hi Guys,
There are offline discussions going on to assess the possibility of
re-using the Linux kernel Greybus framework for the Virtio [1] use
case, where we can control some of the controllers on the back-end
(Host), like SPI, I2C, GPIO, etc., from the front-end (VM), using the
already well-defined Greybus specification [2].
The Greybus specification and kernel source were initially developed
for Google's Project ARA [3], and the source code was merged into the
mainline kernel long ago (in drivers/greybus/ and
drivers/staging/greybus/). You can find more information about how
Greybus works in this LWN article [4].
Greybus broadly provides two distinct features:
- Device discovery: at runtime, with the help of a manifest file
(think of it like DT, though it has a different format). This helps
the user of the hardware identify the capabilities of the remote
hardware it is going to use.
- Remote control/operation: of the IPs present on the remote hardware,
using firmware/OS-independent operations. These are already well
defined for many device types and can be extended if required (see
the sketch of the wire format below).
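To give a feel for how simple those operations are on the wire:
every Greybus message carries a small fixed header followed by an
operation-specific payload. The sketch below is from memory of
include/linux/greybus/greybus_protocols.h, with a GPIO request as an
example payload:

    /* Every Greybus message begins with this little-endian header. */
    struct gb_operation_msg_hdr {
            __le16  size;           /* header plus payload size */
            __le16  operation_id;   /* pairs a response with its request */
            __u8    type;           /* protocol-specific operation type */
            __u8    result;         /* filled in on the response */
            __u8    pad[2];         /* must be zero */
    } __packed;

    /* Example payload: a GPIO "set value" request. */
    struct gb_gpio_set_value_request {
            __u8    which;          /* GPIO line number */
            __u8    value;          /* 0 or 1 */
    } __packed;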
We wanted to share this over email to get some discussion going
before we discuss it further on the call.
Alex Elder (Cc'd) is one of the maintainers of the Greybus core in
the Linux kernel, and I have worked on a wide variety of Greybus code
and maintain some of it. Both of us worked on Project ARA, would like
to see Greybus used in other applications, and would like to
contribute towards that.
--
viresh
[1] https://collaborate.linaro.org/display/STR/Virtio+Interfaces
[2] https://github.com/projectara/greybus-spec
[3] https://en.wikipedia.org/wiki/Project_Ara
[4] https://lwn.net/Articles/715955/
Nataliya Korovkina via Stratos-dev <stratos-dev(a)op-lists.linaro.org> writes:
> Hello,
>
> I'm going to look into the STR-11 task, specifically into Zephyr Dom0
> on Cortex-A. Will be glad to synchronize with other people who watch
> the task as well.
Hi Nataliya,
That's awesome :-)
Can I introduce you to Akashi-san (cc'ed), who has also been looking
at Zephyr on Xen and I believe already has some patches to make things
work better. I assume you already know Stefano, who has done a bunch
of the work on scoping out this use case in the STR-11 card.
I have a few questions:
- Are you also interested in the R-profile deployments (where the VMM
sits in its own dedicated safety island)?
- What platform are you considering for your implementation?
We also have a regular open fortnightly Stratos call if you want to sync up
with others in the community and discuss any technical issues.
Thanks,
--
Alex Bennée
Hi Masami,
Have you compiled OP-TEE with CFG_VIRTUALIZATION set?
https://optee.readthedocs.io/en/latest/architecture/virtualization.html
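For reference, CFG_VIRTUALIZATION is an ordinary make flag when
building optee_os; the platform name below is just an example:

    make PLATFORM=vexpress-qemu_armv8a CFG_VIRTUALIZATION=y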
I have also been trying to bring OP-TEE up in the QEMU aarch64
environment with Xen, but without much success as yet.
Regards,
Ruchika
On Wed, 2 Dec 2020 at 06:50, Stefano Stabellini via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> CCing Volodymyr who maintains optee support in Xen
>
>
> On Wed, 2 Dec 2020, Masami Hiramatsu via Stratos-dev wrote:
> > Hi Alex,
> >
> > Have you enabled the OP-TEE support on Xen on your MacchiatoBin?
> >
> > When I ran Xen Dom0 with OP-TEE, I got below error.
> >
> > [ 6.482047] optee: probing for conduit method.
> > (XEN) d0v18 Unhandled SMC/HVC: 0xbf00ff01
> > [ 6.482301] nvme nvme0: 1/0/0 default/read/poll queues
> > [ 6.490154] optee: api uid mismatch
> > [ 6.498962] optee: probe of firmware:optee failed with error -22
> >
> > Would you know this issue?
> >
> > Thank you,
> >
> > --
> > Masami Hiramatsu
On Wed, 2 Dec 2020, Masami Hiramatsu via Stratos-dev wrote:
> Hi Joakim,
>
> On Wed, Dec 2, 2020 at 17:10, Joakim Bech <joakim.bech(a)linaro.org> wrote:
> >
> > On Wed, Dec 02, 2020 at 07:54:48AM +0000, Alex Bennée via Stratos-dev wrote:
> > >
> > > Masami Hiramatsu <masami.hiramatsu(a)linaro.org> writes:
> > >
> > > > Hi Alex,
> > > >
> > > > Have you enabled the OP-TEE support on Xen on your MacchiatoBin?
> > >
> > > No I haven't... I'm not even sure how you would do so.
> > >
> > > >
> > > > When I ran Xen Dom0 with OP-TEE, I got below error.
> > > >
> > > > [ 6.482047] optee: probing for conduit method.
> > > > (XEN) d0v18 Unhandled SMC/HVC: 0xbf00ff01
> > > > [ 6.482301] nvme nvme0: 1/0/0 default/read/poll queues
> > > > [ 6.490154] optee: api uid mismatch
> > > > [ 6.498962] optee: probe of firmware:optee failed with error -22
> > > >
> > > > Would you know this issue?
> > >
> > > No - perhaps some of the security folk have some insight?
> > >
> > When the kernel first runs, it does either an SMC or an HVC to see
> > whether OP-TEE is running and has the expected version. I believe
> > that is what is failing here. Which one it uses depends on what you
> > have configured in the DT. Since this is Xen, it must be configured
> > to use HVC. So, as Ruchika mentioned in another reply, I'd guess
> > that OP-TEE (the TEE core on the secure side) hasn't been built with
> > virtualization enabled. It's a long time since I tried this
> > personally, but the information on how to run it can be found here:
> > https://optee.readthedocs.io/en/latest/architecture/virtualization.html
>
> Thank you for the information.
> Yes, currently I configured OP-TEE to use SMC on my machine.
> OK, I'll try to follow the above instructions.
I haven't worked with OP-TEE, so I don't know how to help you with
this specific issue.
However, I just want to point out that since Xen can trap HVC and SMC
just as easily, there is no need to switch from SMC to HVC for OP-TEE.
It should work fine (or not fine) regardless, and I think EPAM mostly
tested with SMC as the transport.
In short, the issue is most probably something else.
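For reference, the conduit the kernel uses is whatever the firmware
node in the device tree specifies, along these lines (a minimal
sketch of the standard linaro,optee-tz node):

    firmware {
            optee {
                    compatible = "linaro,optee-tz";
                    method = "smc"; /* or "hvc" */
            };
    };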