Hi,
One of the goals of Project Stratos is to enable hypervisor-agnostic
backends so we can enable as much re-use of code as possible and avoid
repeating ourselves. This is the flip side of the front end, where
multiple front-end implementations are required - one per OS, assuming
you don't just want Linux guests. The resultant guests are trivially
movable between hypervisors, modulo any abstracted paravirt-type
interfaces.
In my original thumbnail sketch of a solution I envisioned vhost-user
daemons running in a broadly POSIX-like environment. The interface to
the daemon is fairly simple, requiring only some mapped memory and some
sort of signalling for events (on Linux this is eventfd). The idea was
that a stub binary would be responsible for any hypervisor-specific
setup and then launch a common binary to deal with the actual virtqueue
requests themselves.
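To make concrete how little the daemon needs from its environment, here
is a minimal sketch of the eventfd signalling mentioned above - plain
Linux APIs, nothing hypervisor specific:

#include <sys/eventfd.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    /* one eventfd per virtqueue: the VMM writes to "kick" the
     * backend, the daemon reads to learn there is work pending */
    int kick = eventfd(0, 0);
    uint64_t one = 1;

    write(kick, &one, sizeof(one));   /* VMM side: signal new buffers */

    uint64_t pending;
    if (read(kick, &pending, sizeof(pending)) == sizeof(pending)) {
        /* daemon side: drain the counter, then walk the virtqueue */
    }

    close(kick);
    return 0;
}

The stub binary's job would then reduce to conjuring up such file
descriptors (or their equivalent) on whatever hypervisor it targets.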
Since that original sketch we've seen an expansion in the sorts of ways
backends could be created. There is interest in encapsulating backends
in RTOSes or unikernels for solutions like SCMI. The interest in Rust
has prompted ideas of using the trait interface to abstract differences
away, as well as the idea of bare-metal Rust backends.
We have a card (STR-12) called "Hypercall Standardisation" which
calls for a description of the APIs needed from the hypervisor side to
support VirtIO guests and their backends. However we are some way off
from that at the moment, as I think we need to at least demonstrate one
portable backend before we start codifying requirements. To that end I
want to think about what we need for a backend to function.
Configuration
=============
In the type-2 setup this is typically fairly simple because the host
system can orchestrate the various modules that make up the complete
system. In the type-1 case (or even type-2 with delegated service VMs)
we need some sort of mechanism to inform the backend VM about key
details of the system:
- where virtqueue memory is in its address space
- how it's going to receive (interrupt) and trigger (kick) events
- what (if any) resources the backend needs to connect to
Obviously you can paper over configuration issues by having static
configurations and baking the assumptions into your guest images;
however, this isn't scalable in the long term. The obvious solution
seems to be extending a subset of Device Tree data to user space, but
perhaps there are other approaches?
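To make the list above concrete, here is a sketch of the minimal set of
facts a backend VM would need handed to it, whatever the transport
(Device Tree or otherwise). The struct and field names are purely
illustrative, not a proposed ABI:

#include <stdint.h>

struct be_config {
    uint64_t shm_gpa;      /* where the shared virtqueue memory sits in
                            * the BE's guest physical address space */
    uint64_t shm_size;     /* size of that region */
    uint32_t notify_irq;   /* how the BE receives events (interrupt) */
    uint32_t kick_method;  /* how the BE triggers events (kick) */
    char     resource[64]; /* any resource the backend connects to,
                            * e.g. a backing file or device node */
};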
Before any virtio transactions can take place the appropriate memory
mappings need to be made between the FE guest and the BE guest.
Currently the whole of the FE guest's address space needs to be visible
to whatever is serving the virtio requests. I can envision 3 approaches:
* BE guest boots with memory already mapped
This would entail the guest OS knowing which parts of its Guest Physical
Address space are already taken up and avoiding clashes. I would assume
in this case you would want a standard interface to userspace to then
make that address space visible to the backend daemon.
* BE guest boots with a hypervisor handle to memory
The BE guest is then free to map the FE's memory to wherever it wants in
the BE's guest physical address space. Activating the mapping will
require some sort of hypercall to the hypervisor. I can see two options
at this point:
- expose the handle to userspace for a daemon/helper to trigger the
mapping via existing hypercall interfaces. If using a helper you
would have a hypervisor-specific one to avoid the daemon having to
care too much about the details, or push that complexity into a
compile-time option for the daemon, which would result in different
binaries albeit from a common source base.
- expose a new kernel ABI to abstract the hypercall differences away
in the guest kernel. In this case userspace would essentially ask for
an abstract "map guest N memory to userspace ptr" and let the kernel
deal with the different hypercall interfaces. This of course assumes
the majority of BE guests would be Linux kernels and leaves the
bare-metal/unikernel approaches to their own devices; see the sketch
below.
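To illustrate that second option, here is a rough sketch of what such an
abstract kernel ABI might look like from the daemon's side. The device
node, ioctl number and struct layout are all invented for illustration;
nothing like this exists today:

#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* hypothetical request: "map guest N's memory for me" */
struct guest_map_req {
    uint32_t guest_id;   /* which FE guest to map */
    uint64_t gpa;        /* start of the region in the FE's space */
    uint64_t size;       /* length of the region */
};
#define GUEST_MAP _IOW('g', 0x01, struct guest_map_req)

void *map_guest_memory(uint32_t guest, uint64_t gpa, uint64_t size)
{
    int fd = open("/dev/guest-mem", O_RDWR);   /* invented node */
    struct guest_map_req req = { guest, gpa, size };

    if (fd < 0 || ioctl(fd, GUEST_MAP, &req) < 0)
        return NULL;

    /* the kernel has issued the hypervisor-specific hypercall;
     * the daemon just mmaps the result */
    return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}

The daemon binary would stay identical across hypervisors; only the
kernel driver behind the invented /dev/guest-mem node would differ.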
Operation
=========
The core of the operation of VirtIO is fairly simple. Once the
vhost-user feature negotiation is done it's a case of receiving update
events and parsing the resultant virtqueue for data. The vhost-user
specification handles a bunch of setup before that point, mostly to
detail where the virtqueues are and to set up FDs for memory and event
communication. This is where the envisioned stub process would be
responsible for getting the daemon up and ready to run. This is
currently done inside a big VMM like QEMU, but I suspect a modern
approach would be to use the rust-vmm vhost crate. It would then either
communicate with the kernel's abstracted ABI or be re-targeted as a
build option for the various hypervisors.
One question is how best to handle notifications and kicks. The existing
vhost-user framework uses eventfd to signal the daemon (although QEMU
is quite capable of simulating them when you use TCG). Xen has its own
IOREQ mechanism. However latency is an important factor and having
events go through the stub would add quite a lot of it.
Could we consider the kernel internally converting IOREQ messages from
the Xen hypervisor to eventfd events? Would this scale with other kernel
hypercall interfaces?
So any thoughts on what directions are worth experimenting with?
--
Alex Bennée
Hello,
This patchset adds vhost-user-i2c device support in QEMU. Initially I
tried to add the backend implementation into QEMU as well, but as I was
looking for a hypervisor-agnostic backend implementation, I decided to
keep it outside of QEMU. Eventually I implemented it in Rust and it
works very well with this patchset; it is under review [1] to be merged
into the common Rust vhost devices crate.
The kernel virtio I2C driver [2] is fully reviewed and is ready to be
merged soon.
V1->V2:
- Dropped the backend support from QEMU and minor cleanups.
I2C Testing:
------------
I didn't have access to real hardware where I could play with an I2C
client device (like an RTC, EEPROM, etc.) to verify the working of the
backend daemon, so I decided to test it on my x86 box itself with a
hierarchy of two ARM64 guests.
The first ARM64 guest was passed the "-device ds1338,address=0x20"
option, so it could emulate a ds1338 RTC device, which connects to an
I2C bus. Once the guest came up, a ds1338 device instance was created
within the guest kernel by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[
Note that this may end up binding the ds1338 device to its driver,
which won't let our i2c daemon talk to the device. For that we need to
manually unbind the device from the driver:
echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
]
After this is done, you will get /dev/rtc1. This is the device we wanted
to emulate, which will be accessed by the vhost-user-i2c backend daemon
via the /dev/i2c-0 file present in the guest VM.
At this point we need to start the backend daemon and give it a
socket-path to talk to from qemu (you can pass -v to it to get more
detailed messages):
vhost-user-i2c --socket-path=vi2c.sock -l 0:32
[ Here, 0:32 is the bus/device mapping: 0 for /dev/i2c-0 and 32 (i.e.
0x20) is the client address of the ds1338 that we used while creating
the device. ]
Now we need to start the second level ARM64 guest (from within the first
guest) to get the i2c-virtio.c Linux driver up. The second level guest
is passed the following options to connect to the same socket:
-chardev socket,path=vi2c.sock0,id=vi2c \
-device vhost-user-i2c-pci,chardev=vi2c,id=i2c
Once the second level guest boots up, we will see the i2c-virtio bus at
/sys/bus/i2c/devices/i2c-X/. From there we can now make it emulate the
ds1338 device again by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[ This time we want ds1338's driver to be bound to the device, so it
should be enabled in the kernel as well. ]
And we will get /dev/rtc1 device again here in the second level guest.
Now we can play with the RTC device with the help of the hwclock utility
and we can see the following sequence of transfers happening if we try
to update the RTC's time from the system time:
hwclock -w -f /dev/rtc1 (in guest2) ->
Reaches i2c-virtio.c (Linux bus driver in guest2) ->
transfer over virtio ->
Reaches the qemu's vhost-i2c device emulation (running over guest1) ->
Reaches the backend daemon vhost-user-i2c started earlier (in guest1) ->
ioctl(/dev/i2c-0, I2C_RDWR, ..); (in guest1) ->
reaches qemu's hw/rtc/ds1338.c (running over host)
SMBus Testing:
--------------
I didn't need such a tedious setup for testing with SMBus devices. I
was able to emulate an SMBus device on my x86 machine using the
i2c-stub driver.
$ modprobe i2c-stub chip_addr=0x20
# Boot the arm64 guest now with the i2c-virtio driver and then do:
$ echo al3320a 0x20 > /sys/class/i2c-adapter/i2c-0/new_device
$ cat /sys/bus/iio/devices/iio:device0/in_illuminance_raw
That's it.
I hope I was able to give a clear picture of my test setup here :)
--
Viresh
Viresh Kumar (3):
hw/virtio: add boilerplate for vhost-user-i2c device
hw/virtio: add vhost-user-i2c-pci boilerplate
MAINTAINERS: Add entry for virtio-i2c
MAINTAINERS | 7 +
hw/virtio/Kconfig | 5 +
hw/virtio/meson.build | 2 +
hw/virtio/vhost-user-i2c-pci.c | 69 +++++++
hw/virtio/vhost-user-i2c.c | 288 +++++++++++++++++++++++++++++
include/hw/virtio/vhost-user-i2c.h | 28 +++
6 files changed, 399 insertions(+)
create mode 100644 hw/virtio/vhost-user-i2c-pci.c
create mode 100644 hw/virtio/vhost-user-i2c.c
create mode 100644 include/hw/virtio/vhost-user-i2c.h
--
2.31.1.272.g89b43f80a514
I am top-posting as I find it difficult to identify where to make the comments.
1) BE acceleration
Network and storage backends may actually be executed in SmartNICs. As
virtio 1.1 is hardware friendly, there may be SmartNICs with virtio 1.1
PCI VFs. Is it a valid use case for the generic BE framework to be used
in this context?
DPDK is used in some BEs to significantly accelerate switching. DPDK is
also sometimes used in guests. In that case there is no event injection,
just a high-performance memory scheme. Is this considered a use case?
2) Virtio as OS HAL
Panasonic's CTO has been calling for a virtio-based HAL and, based on
the teachings of Google's GKI, an internal HAL seems inevitable in the
long term. Virtio is then a contender to the Google-promoted Android
HAL. Could the framework be used in that context?
On Wed, 11 Aug 2021 at 08:28, AKASHI Takahiro via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> On Wed, Aug 04, 2021 at 12:20:01PM -0700, Stefano Stabellini wrote:
> > CCing people working on Xen+VirtIO and IOREQs. Not trimming the original
> > email to let them read the full context.
> >
> > My comments below are related to a potential Xen implementation, not
> > because it is the only implementation that matters, but because it is
> > the one I know best.
>
> Please note that my proposal (and hence the working prototype)[1]
> is based on Xen's virtio implementation (i.e. IOREQ) and particularly
> EPAM's virtio-disk application (backend server).
> It has been, I believe, well generalized but is still a bit biased
> toward this original design.
>
> So I hope you like my approach :)
>
> [1]
> https://op-lists.linaro.org/pipermail/stratos-dev/2021-August/000546.html
>
> Let me take this opportunity to explain a bit more about my approach below.
>
> > Also, please see this relevant email thread:
> > https://marc.info/?l=xen-devel&m=162373754705233&w=2
> >
> >
> > On Wed, 4 Aug 2021, Alex Bennée wrote:
> > > Hi,
> > >
> > > One of the goals of Project Stratos is to enable hypervisor agnostic
> > > backends so we can enable as much re-use of code as possible and avoid
> > > repeating ourselves. This is the flip side of the front end where
> > > multiple front-end implementations are required - one per OS, assuming
> > > you don't just want Linux guests. The resultant guests are trivially
> > > movable between hypervisors modulo any abstracted paravirt type
> > > interfaces.
> > >
> > > In my original thumbnail sketch of a solution I envisioned vhost-user
> > > daemons running in a broadly POSIX like environment. The interface to
> > > the daemon is fairly simple requiring only some mapped memory and some
> > > sort of signalling for events (on Linux this is eventfd). The idea was
> a
> > > stub binary would be responsible for any hypervisor specific setup and
> > > then launch a common binary to deal with the actual virtqueue requests
> > > themselves.
> > >
> > > Since that original sketch we've seen an expansion in the sort of ways
> > > backends could be created. There is interest in encapsulating backends
> > > in RTOSes or unikernels for solutions like SCMI. The interest in Rust
> > > has prompted ideas of using the trait interface to abstract differences
> > > away as well as the idea of bare-metal Rust backends.
> > >
> > > We have a card (STR-12) called "Hypercall Standardisation" which
> > > calls for a description of the APIs needed from the hypervisor side to
> > > support VirtIO guests and their backends. However we are some way off
> > > from that at the moment as I think we need to at least demonstrate one
> > > portable backend before we start codifying requirements. To that end I
> > > want to think about what we need for a backend to function.
> > >
> > > Configuration
> > > =============
> > >
> > > In the type-2 setup this is typically fairly simple because the host
> > > system can orchestrate the various modules that make up the complete
> > > system. In the type-1 case (or even type-2 with delegated service VMs)
> > > we need some sort of mechanism to inform the backend VM about key
> > > details about the system:
> > >
> > > - where virt queue memory is in its address space
> > > - how it's going to receive (interrupt) and trigger (kick) events
> > > - what (if any) resources the backend needs to connect to
> > >
> > > Obviously you can elide over configuration issues by having static
> > > configurations and baking the assumptions into your guest images
> however
> > > this isn't scalable in the long term. The obvious solution seems to be
> > > extending a subset of Device Tree data to user space but perhaps there
> > > are other approaches?
> > >
> > > Before any virtio transactions can take place the appropriate memory
> > > mappings need to be made between the FE guest and the BE guest.
> >
> > > Currently the whole of the FE guests address space needs to be visible
> > > to whatever is serving the virtio requests. I can envision 3
> approaches:
> > >
> > > * BE guest boots with memory already mapped
> > >
> > > This would entail the guest OS knowing where in its Guest Physical
> > > Address space is already taken up and avoiding clashing. I would
> assume
> > > in this case you would want a standard interface to userspace to then
> > > make that address space visible to the backend daemon.
>
> Yet another way here is that we would have well-known "shared memory"
> between VMs. I think that Jailhouse's ivshmem gives us good insights on
> this matter and that it can even be an alternative for a
> hypervisor-agnostic solution.
>
> (Please note memory regions in ivshmem appear as a PCI device and can be
> mapped locally.)
>
> I want to add this shared memory aspect to my virtio-proxy, but
> the resultant solution would eventually look similar to ivshmem.
>
> > > * BE guest boots with a hypervisor handle to memory
> > >
> > > The BE guest is then free to map the FE's memory to where it wants in
> > > the BE's guest physical address space.
> >
> > I cannot see how this could work for Xen. There is no "handle" to give
> > to the backend if the backend is not running in dom0. So for Xen I think
> > the memory has to be already mapped
>
> In Xen's IOREQ solution (virtio-blk), the following information is expected
> to be exposed to BE via Xenstore:
> (I know that this is a tentative approach though.)
> - the start address of configuration space
> - interrupt number
> - file path for backing storage
> - read-only flag
> And the BE server has to call a particular hypervisor interface to
> map the configuration space.
>
> In my approach (virtio-proxy), all the Xen (or hypervisor)-specific
> stuff is contained in virtio-proxy, yet another VM, to hide all the details.
>
> # My point is that a "handle" is not mandatory for executing mapping.
>
> > and the mapping probably done by the
> > toolstack (also see below.) Or we would have to invent a new Xen
> > hypervisor interface and Xen virtual machine privileges to allow this
> > kind of mapping.
>
> > If we run the backend in Dom0 then we have no problems of course.
>
> One of the difficulties on Xen that I found in my approach is that
> calling such hypervisor interfaces (registering IOREQ, mapping memory)
> is only allowed on BE servers themselves and so we will have to extend
> those interfaces.
> This, however, will raise some concerns about security and privilege
> distribution, as Stefan suggested.
> >
> >
> > > To activate the mapping will
> > > require some sort of hypercall to the hypervisor. I can see two
> options
> > > at this point:
> > >
> > > - expose the handle to userspace for daemon/helper to trigger the
> > > mapping via existing hypercall interfaces. If using a helper you
> > > would have a hypervisor specific one to avoid the daemon having to
> > > care too much about the details or push that complexity into a
> > > compile time option for the daemon which would result in different
> > > binaries although a common source base.
> > >
> > > - expose a new kernel ABI to abstract the hypercall differences away
> > > in the guest kernel. In this case the userspace would essentially
> > > ask for an abstract "map guest N memory to userspace ptr" and let
> > > the kernel deal with the different hypercall interfaces. This of
> > > course assumes the majority of BE guests would be Linux kernels and
> > > leaves the bare-metal/unikernel approaches to their own devices.
> > >
> > > Operation
> > > =========
> > >
> > > The core of the operation of VirtIO is fairly simple. Once the
> > > vhost-user feature negotiation is done it's a case of receiving update
> > > events and parsing the resultant virt queue for data. The vhost-user
> > > specification handles a bunch of setup before that point, mostly to
> > > detail where the virt queues are and set up FDs for memory and event
> > > communication. This is where the envisioned stub process would be
> > > responsible for getting the daemon up and ready to run. This is
> > > currently done inside a big VMM like QEMU but I suspect a modern
> > > approach would be to use the rust-vmm vhost crate. It would then either
> > > communicate with the kernel's abstracted ABI or be re-targeted as a
> > > build option for the various hypervisors.
> >
> > One thing I mentioned before to Alex is that Xen doesn't have VMMs the
> > way they are typically envisioned and described in other environments.
> > Instead, Xen has IOREQ servers. Each of them connects independently to
> > Xen via the IOREQ interface. E.g. today multiple QEMUs could be used as
> > emulators for a single Xen VM, each of them connecting to Xen
> > independently via the IOREQ interface.
> >
> > The component responsible for starting a daemon and/or setting up shared
> > interfaces is the toolstack: the xl command and the libxl/libxc
> > libraries.
>
> I think that VM configuration management (or orchestration in Stratos
> jargon?) is a subject for debate in parallel.
> Otherwise, is there any good assumption to avoid it right now?
>
> > Oleksandr and others I CCed have been working on ways for the toolstack
> > to create virtio backends and setup memory mappings. They might be able
> > to provide more info on the subject. I do think we miss a way to provide
> > the configuration to the backend and anything else that the backend
> > might require to start doing its job.
> >
> >
> > > One question is how to best handle notification and kicks. The existing
> > > vhost-user framework uses eventfd to signal the daemon (although QEMU
> > > is quite capable of simulating them when you use TCG). Xen has its own
> > > IOREQ mechanism. However latency is an important factor and having
> > > events go through the stub would add quite a lot.
> >
> > Yeah I think, regardless of anything else, we want the backends to
> > connect directly to the Xen hypervisor.
>
> In my approach,
> a) BE -> FE: interrupts triggered by BE calling a hypervisor interface
> via virtio-proxy
> b) FE -> BE: MMIO to config raises events (in event channels), which is
> converted to a callback to BE via virtio-proxy
> (Xen's event channel is internally implemented by
> interrupts.)
>
> I don't know what "connect directly" means here, but sending interrupts
> to the opposite side would be the most efficient.
> Ivshmem, I suppose, takes this approach by utilizing PCI's MSI-X mechanism.
>
> >
> > > Could we consider the kernel internally converting IOREQ messages from
> > > the Xen hypervisor to eventfd events? Would this scale with other
> kernel
> > > hypercall interfaces?
> > >
> > > So any thoughts on what directions are worth experimenting with?
> >
> > One option we should consider is for each backend to connect to Xen via
> > the IOREQ interface. We could generalize the IOREQ interface and make it
> > hypervisor agnostic. The interface is really trivial and easy to add.
>
> As I said above, my proposal does the same thing that you mentioned here :)
> The difference is that I do call hypervisor interfaces via virtio-proxy.
>
> > The only Xen-specific part is the notification mechanism, which is an
> > event channel. If we replaced the event channel with something else the
> > interface would be generic. See:
> >
> https://gitlab.com/xen-project/xen/-/blob/staging/xen/include/public/hvm/io…
> >
> > I don't think that translating IOREQs to eventfd in the kernel is a
> > good idea: it feels like it would be extra complexity and that the
> > kernel shouldn't be involved as this is a backend-hypervisor interface.
>
> Given that we may want to implement a BE as a bare-metal application,
> as I did on Zephyr, I don't think that the translation would be
> a big issue, especially on RTOSes.
> It will be some kind of abstraction layer for interrupt handling
> (or nothing but a callback mechanism).
>
> > Also, eventfd is very Linux-centric and we are trying to design an
> > interface that could work well for RTOSes too. If we want to do
> > something different, both OS-agnostic and hypervisor-agnostic, perhaps
> > we could design a new interface. One that could be implementable in the
> > Xen hypervisor itself (like IOREQ) and of course any other hypervisor
> > too.
> >
> >
> > There is also another problem. IOREQ is probably not the only
> > interface needed. Have a look at
> > https://marc.info/?l=xen-devel&m=162373754705233&w=2. Don't we also need
> > an interface for the backend to inject interrupts into the frontend? And
> > if the backend requires dynamic memory mappings of frontend pages, then
> > we would also need an interface to map/unmap domU pages.
>
> My proposal document might help here; all the interfaces required for
> virtio-proxy (or hypervisor-related interfaces) are listed as
> RPC protocols :)
>
> > These interfaces are a lot more problematic than IOREQ: IOREQ is tiny
> > and self-contained. It is easy to add anywhere. A new interface to
> > inject interrupts or map pages is more difficult to manage because it
> > would require changes scattered across the various emulators.
>
> Exactly. I have no confidence yet that my approach will also apply
> to hypervisors other than Xen.
> Technically, yes, but whether people can accept it or not is a different
> matter.
>
> Thanks,
> -Takahiro Akashi
>
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
--
François-Frédéric Ozog | *Director Business Development*
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog
The I2C protocol allows zero-length requests with no data, like the
SMBus Quick command, where the command is inferred based on the
read/write flag itself.
In order to allow such a request, allocate another bit in the flags,
VIRTIO_I2C_FLAGS_M_RD (bit 1), to pass the request type, read or
write. This was earlier done using the read/write permission of the
buffer itself.
This still won't work well if multiple buffers are passed for the same
request, i.e. write-read requests, as the VIRTIO_I2C_FLAGS_M_RD flag
can only be used with a single buffer.
Coming back to it, there is no need to send multiple buffers with a
single request. All we need is a way to group several requests
together, which we can already do based on the
VIRTIO_I2C_FLAGS_FAIL_NEXT flag.
Remove support for multiple buffers within a single request.
Since we are at a very early stage of development currently, we can
make these modifications without adding new features or versioning the
protocol.
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
V1->V2:
- Name the buffer-less request as zero-length request.
Hi Guys,
I did try to follow the discussion you guys had during V4, where we
added support for multiple buffers for the same request, which I think
is unnecessary now after the introduction of the
VIRTIO_I2C_FLAGS_FAIL_NEXT flag.
https://lists.oasis-open.org/archives/virtio-comment/202011/msg00005.html
And so I am starting this discussion again, because we need to support
things like "i2cdetect -q <i2c-bus-number>", which issues a zero-length
SMBus Quick command.
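For quick reference, here is a sketch of the flag layout this change
results in. The M_RD bit position follows the patch below; I am
assuming FAIL_NEXT keeps bit 0 as in the current spec:

/* request flags (assumed layout, the patch below is normative) */
#define VIRTIO_I2C_FLAGS_FAIL_NEXT  (1 << 0) /* fail next request if this fails */
#define VIRTIO_I2C_FLAGS_M_RD       (1 << 1) /* set: read, clear: write */

/*
 * A write-read transfer (e.g. reading a device register) becomes two
 * grouped requests instead of one request with two buffers:
 *   req[0]: flags = FAIL_NEXT, buf carries the register number (write)
 *   req[1]: flags = M_RD,      buf receives the data (read)
 */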
---
virtio-i2c.tex | 60 +++++++++++++++++++++++++-------------------------
1 file changed, 30 insertions(+), 30 deletions(-)
diff --git a/virtio-i2c.tex b/virtio-i2c.tex
index 949d75f44158..ae344b2bc822 100644
--- a/virtio-i2c.tex
+++ b/virtio-i2c.tex
@@ -54,8 +54,7 @@ \subsubsection{Device Operation: Request Queue}\label{sec:Device Types / I2C Ada
\begin{lstlisting}
struct virtio_i2c_req {
struct virtio_i2c_out_hdr out_hdr;
- u8 write_buf[];
- u8 read_buf[];
+ u8 buf[];
struct virtio_i2c_in_hdr in_hdr;
};
\end{lstlisting}
@@ -84,16 +83,16 @@ \subsubsection{Device Operation: Request Queue}\label{sec:Device Types / I2C Ada
and sets it on the other requests. If this bit is set and a device fails
to process the current request, it needs to fail the next request instead
of attempting to execute it.
+
+\item[VIRTIO_I2C_FLAGS_M_RD(1)] is used to mark the request as READ or WRITE.
\end{description}
Other bits of \field{flags} are currently reserved as zero for future feature
extensibility.
-The \field{write_buf} of the request contains one segment of an I2C transaction
-being written to the device.
-
-The \field{read_buf} of the request contains one segment of an I2C transaction
-being read from the device.
+The \field{buf} of the request is optional and contains one segment of an I2C
+transaction being read from or written to the device, based on the value of the
+\field{VIRTIO_I2C_FLAGS_M_RD} bit in the \field{flags} field.
The final \field{status} byte of the request is written by the device: either
VIRTIO_I2C_MSG_OK for success or VIRTIO_I2C_MSG_ERR for error.
@@ -103,27 +102,27 @@ \subsubsection{Device Operation: Request Queue}\label{sec:Device Types / I2C Ada
#define VIRTIO_I2C_MSG_ERR 1
\end{lstlisting}
-If ``length of \field{read_buf}''=0 and ``length of \field{write_buf}''>0,
-the request is called write request.
+If \field{VIRTIO_I2C_FLAGS_M_RD} bit is set in the \field{flags}, then the
+request is called a read request.
-If ``length of \field{read_buf}''>0 and ``length of \field{write_buf}''=0,
-the request is called read request.
+If \field{VIRTIO_I2C_FLAGS_M_RD} bit is unset in the \field{flags}, then the
+request is called a write request.
-If ``length of \field{read_buf}''>0 and ``length of \field{write_buf}''>0,
-the request is called write-read request. It means an I2C write segment followed
-by a read segment. Usually, the write segment provides the number of an I2C
-controlled device register to be read.
+The \field{buf} is optional and will not be present for a zero-length request,
+like SMBus Quick.
-The case when ``length of \field{write_buf}''=0, and at the same time,
-``length of \field{read_buf}''=0 doesn't make any sense.
+The virtio I2C protocol supports write-read requests, i.e. an I2C write segment
+followed by a read segment (usually, the write segment provides the number of an
+I2C controlled device register to be read), by grouping a list of requests
+together using the \field{VIRTIO_I2C_FLAGS_FAIL_NEXT} flag.
\subsubsection{Device Operation: Operation Status}\label{sec:Device Types / I2C Adapter Device / Device Operation: Operation Status}
-\field{addr}, \field{flags}, ``length of \field{write_buf}'' and ``length of \field{read_buf}''
-are determined by the driver, while \field{status} is determined by the processing
-of the device. A driver puts the data written to the device into \field{write_buf}, while
-a device puts the data of the corresponding length into \field{read_buf} according to the
-request of the driver.
+\field{addr}, \field{flags}, and ``length of \field{buf}'' are determined by the
+driver, while \field{status} is determined by the processing of the device. A
+driver, for a write request, puts the data to be written to the device into the
+\field{buf}, while a device, for a read request, puts the data read from the device
+into the \field{buf} according to the request from the driver.
A driver may send one request or multiple requests to the device at a time.
The requests in the virtqueue are both queued and processed in order.
@@ -141,11 +140,10 @@ \subsubsection{Device Operation: Operation Status}\label{sec:Device Types / I2C
A driver MUST set the reserved bits of \field{flags} to be zero.
-The driver MUST NOT send a request with ``length of \field{write_buf}''=0 and
-``length of \field{read_buf}''=0 at the same time.
+A driver MUST NOT send the \field{buf}, for a zero-length request.
-A driver MUST NOT use \field{read_buf} if the final \field{status} returned
-from the device is VIRTIO_I2C_MSG_ERR.
+A driver MUST NOT use \field{buf}, for a read request, if the final
+\field{status} returned from the device is VIRTIO_I2C_MSG_ERR.
A driver MUST queue the requests in order if multiple requests are going to
be sent at a time.
@@ -160,11 +158,13 @@ \subsubsection{Device Operation: Operation Status}\label{sec:Device Types / I2C
A device SHOULD keep consistent behaviors with the hardware as described in
\hyperref[intro:I2C]{I2C}.
-A device MUST NOT change the value of \field{addr}, reserved bits of \field{flags}
-and \field{write_buf}.
+A device MUST NOT change the value of \field{addr}, and reserved bits of
+\field{flags}.
+
+A device MUST NOT change the value of the \field{buf} for a write request.
-A device MUST place one I2C segment of the corresponding length into \field{read_buf}
-according the driver's request.
+A device MUST place one I2C segment of the ``length of \field{buf}'', for the
+read request, into the \field{buf} according to the driver's request.
A device MUST guarantee the requests in the virtqueue being processed in order
if multiple requests are received at a time.
--
2.31.1.272.g89b43f80a514
On Fri, Aug 13, 2021 at 6:54 PM Alex Bennée via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
>
> AKASHI Takahiro via Stratos-dev <stratos-dev(a)op-lists.linaro.org> writes:
>
> > Hi All,
> >
> > # Some of you might have received a similar message, the purpose
> > # is the same, but for wider audience.
> >
> > I have been thinking of an idea for creating a hypervisor-agnostic
> > framework which will enable us to implement virtio device (backend)
> > VMs on whatever the underlying hypervisor is. The aim is to
> > create VMs as bare-metal applications using a small number of
> > "common" vm services. I name this framework "virtio proxy"[1].
> > (You can think of it as my solution proposal for the topic which
> > Alex raised a concern about yesterday.)
> >
> > Now it is time to decide whether this approach
> > can appeal to you and meet your requirements, and whether it is worth
> > my continuing this study in the *next development cycle* which will
> > start in September (or October).
> >
> > 1) Please give me your insights/feedback on my proposal.
> > If we see no positive feedback or interest in the next two weeks or so,
> > especially, from member company engineers, this study will be
> > automatically *cancelled*.
> >
> > 2) If there is any interest, I would like to set up a dedicated
> > Stratos call meeting in a two-week timeframe to discuss more.
> > What is the best time slot for you?
> > As I live in Japan, UTC-6, 7, or 8 (or even earlier) would be best,
> > but as I would like to hear from member company engineers on the West
> > coast of the USA, the timeslot is quite flexible.
> >
> >
> > Here is my draft proposal:
> > [1]
> >
> https://docs.google.com/presentation/d/1jAOKbQpv44Rje74OI4pNKXlsCfRCQf547lH…
>
> I think it would be useful to discuss this at the next Stratos meeting
> at the beginning of September as we discuss the ways forward for the
> next cycle of development.
>
> More broadly, now that we have a bunch of backends implemented, it's time
> start considering the various approaches to hypervisor agnosticism.
>
> Mike,
>
> Can we shift the Sept meeting to a morning slot to better align with
> JST?
>
Absolutely - there is no good time for this:
https://www.timeanddate.com/worldclock/meetingtime.html?iso=20210902&p1=770…
Akashi-san, can you pick a time that works for you and the people you
know you want to have on the call? Perhaps that makes it a little
easier, and those interested can chime in with their time zone or city.
Some guesses lead to every time zone :/
Akashi-san - Tokyo
San Diego - Qualcomm
London - Alex, Mike
Calgary - Mathieu
Stefano - not sure
Vincent - Paris
So
>
> >
> > Thanks,
> > -Takahiro Akashi
>
>
> --
> Alex Bennée
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
--
Mike Holmes (he / him / his) | Director, Performance & Enablement, Linaro
mike.holmes(a)linaro.org
"Work should be fun and collaborative; the rest follows."
Specify 7-bit ASCII character encoding for GPIO name strings.
Fixes: https://github.com/oasis-tcs/virtio-spec/issues/115
Suggested-by: Stefan Hajnoczi <stefanha(a)redhat.com>
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
V2:
- Use ASCII instead of UTF-8.
virtio-gpio.tex | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/virtio-gpio.tex b/virtio-gpio.tex
index 5da16d920aa3..3c614ec97b92 100644
--- a/virtio-gpio.tex
+++ b/virtio-gpio.tex
@@ -119,7 +119,8 @@ \subsubsection{requestq Operation: Get Line Names}\label{sec:Device Types / GPIO
order of the GPIO line numbers. The names of the GPIO lines are optional and may
be present only for a subset of GPIO lines. If missing, then a zero-byte must be
present for the GPIO line. If present, the name string must be zero-terminated
-and the name must be unique within a GPIO Device.
+and the name must be unique within a GPIO Device. The names of the GPIO lines
+are encoded in 7-bit ASCII.
These names of the GPIO lines should be most meaningful producer names for the
system, such as name indicating the usage. For example "MMC-CD", "Red LED Vdd"
--
2.31.1.272.g89b43f80a514
Specify UTF-8 character encoding for GPIO name strings.
Fixes: https://github.com/oasis-tcs/virtio-spec/issues/115
Suggested-by: Stefan Hajnoczi <stefanha(a)redhat.com>
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
virtio-gpio.tex | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/virtio-gpio.tex b/virtio-gpio.tex
index 5da16d920aa3..0b0689cceb08 100644
--- a/virtio-gpio.tex
+++ b/virtio-gpio.tex
@@ -119,7 +119,8 @@ \subsubsection{requestq Operation: Get Line Names}\label{sec:Device Types / GPIO
order of the GPIO line numbers. The names of the GPIO lines are optional and may
be present only for a subset of GPIO lines. If missing, then a zero-byte must be
present for the GPIO line. If present, the name string must be zero-terminated
-and the name must be unique within a GPIO Device.
+and the name must be unique within a GPIO Device. The names of the GPIO lines
+are encoded in UTF-8.
These names of the GPIO lines should be most meaningful producer names for the
system, such as name indicating the usage. For example "MMC-CD", "Red LED Vdd"
--
2.31.1.272.g89b43f80a514
virtio-gpio is a virtual GPIO controller. It provides a way to flexibly
communicate with the host GPIO controllers from the guest.
Note that the current implementation doesn't provide atomic APIs for
GPIO configuration, i.e. the driver (guest) would need to implement
sleepable versions of the APIs, as the requests are completed
asynchronously over the virtqueue.
This patch adds the specification for it.
Based on the initial work posted by:
"Enrico Weigelt, metux IT consult" <lkml(a)metux.net>.
Fixes: https://github.com/oasis-tcs/virtio-spec/issues/110
Reviewed-by: Arnd Bergmann <arnd(a)arndb.de>
Reviewed-by: Linus Walleij <linus.walleij(a)linaro.org>
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
V8->V9:
- Dropped a few occurrences of "MUST" from non-normative statements.
- Sent the patch for the base GPIO specification separately, so it can get
  merged first while the IRQ support is still being discussed.
Here is the previous version:
https://lists.oasis-open.org/archives/virtio-dev/202107/msg00232.html
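For reviewers who want the shape of the protocol without reading the
full LaTeX, here is the request/response pair the spec below defines
(fields are little-endian on the wire; the C rendering is just for
illustration):

#include <stdint.h>

struct virtio_gpio_request {
    uint16_t type;   /* VIRTIO_GPIO_MSG_*, e.g. GET_VALUE (0x0004) */
    uint16_t gpio;   /* line number, 0 <= gpio < ngpio */
    uint32_t value;  /* message specific, e.g. 1 = high for SET_VALUE */
};

struct virtio_gpio_response {
    uint8_t status;  /* VIRTIO_GPIO_STATUS_OK (0) or _ERR (1) */
    uint8_t value;   /* message specific, e.g. 0 = low for GET_VALUE */
};

/* the driver queues both buffers on the requestq and sleeps until the
 * device fills the response - hence no atomic guest-side GPIO API */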
conformance.tex | 28 +++-
content.tex | 1 +
virtio-gpio.tex | 346 ++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 371 insertions(+), 4 deletions(-)
create mode 100644 virtio-gpio.tex
diff --git a/conformance.tex b/conformance.tex
index 94d7a06db899..c52f1a40be2d 100644
--- a/conformance.tex
+++ b/conformance.tex
@@ -30,8 +30,9 @@ \section{Conformance Targets}\label{sec:Conformance / Conformance Targets}
\ref{sec:Conformance / Driver Conformance / IOMMU Driver Conformance},
\ref{sec:Conformance / Driver Conformance / Sound Driver Conformance},
\ref{sec:Conformance / Driver Conformance / Memory Driver Conformance},
-\ref{sec:Conformance / Driver Conformance / I2C Adapter Driver Conformance} or
-\ref{sec:Conformance / Driver Conformance / SCMI Driver Conformance}.
+\ref{sec:Conformance / Driver Conformance / I2C Adapter Driver Conformance},
+\ref{sec:Conformance / Driver Conformance / SCMI Driver Conformance} or
+\ref{sec:Conformance / Driver Conformance / GPIO Driver Conformance}.
\item Clause \ref{sec:Conformance / Legacy Interface: Transitional Device and Transitional Driver Conformance}.
\end{itemize}
@@ -54,8 +55,9 @@ \section{Conformance Targets}\label{sec:Conformance / Conformance Targets}
\ref{sec:Conformance / Device Conformance / IOMMU Device Conformance},
\ref{sec:Conformance / Device Conformance / Sound Device Conformance},
\ref{sec:Conformance / Device Conformance / Memory Device Conformance},
-\ref{sec:Conformance / Device Conformance / I2C Adapter Device Conformance} or
-\ref{sec:Conformance / Device Conformance / SCMI Device Conformance}.
+\ref{sec:Conformance / Device Conformance / I2C Adapter Device Conformance},
+\ref{sec:Conformance / Device Conformance / SCMI Device Conformance} or
+\ref{sec:Conformance / Device Conformance / GPIO Device Conformance}.
\item Clause \ref{sec:Conformance / Legacy Interface: Transitional Device and Transitional Driver Conformance}.
\end{itemize}
@@ -301,6 +303,15 @@ \section{Conformance Targets}\label{sec:Conformance / Conformance Targets}
\item \ref{drivernormative:Device Types / SCMI Device / Device Operation / Setting Up eventq Buffers}
\end{itemize}
+\conformance{\subsection}{GPIO Driver Conformance}\label{sec:Conformance / Driver Conformance / GPIO Driver Conformance}
+
+A General Purpose Input/Output (GPIO) driver MUST conform to the following
+normative statements:
+
+\begin{itemize}
+\item \ref{drivernormative:Device Types / GPIO Device / requestq Operation}
+\end{itemize}
+
\conformance{\section}{Device Conformance}\label{sec:Conformance / Device Conformance}
A device MUST conform to the following normative statements:
@@ -550,6 +561,15 @@ \section{Conformance Targets}\label{sec:Conformance / Conformance Targets}
\item \ref{devicenormative:Device Types / SCMI Device / Device Operation / Shared Memory Operation}
\end{itemize}
+\conformance{\subsection}{GPIO Device Conformance}\label{sec:Conformance / Device Conformance / GPIO Device Conformance}
+
+A General Purpose Input/Output (GPIO) device MUST conform to the following
+normative statements:
+
+\begin{itemize}
+\item \ref{devicenormative:Device Types / GPIO Device / requestq Operation}
+\end{itemize}
+
\conformance{\section}{Legacy Interface: Transitional Device and Transitional Driver Conformance}\label{sec:Conformance / Legacy Interface: Transitional Device and Transitional Driver Conformance}
A conformant implementation MUST be either transitional or
non-transitional, see \ref{intro:Legacy
diff --git a/content.tex b/content.tex
index 31b02e1dca0e..0008727a80df 100644
--- a/content.tex
+++ b/content.tex
@@ -6583,6 +6583,7 @@ \subsubsection{Legacy Interface: Framing Requirements}\label{sec:Device
\input{virtio-mem.tex}
\input{virtio-i2c.tex}
\input{virtio-scmi.tex}
+\input{virtio-gpio.tex}
\chapter{Reserved Feature Bits}\label{sec:Reserved Feature Bits}
diff --git a/virtio-gpio.tex b/virtio-gpio.tex
new file mode 100644
index 000000000000..5da16d920aa3
--- /dev/null
+++ b/virtio-gpio.tex
@@ -0,0 +1,346 @@
+\section{GPIO Device}\label{sec:Device Types / GPIO Device}
+
+The Virtio GPIO device is a virtual General Purpose Input/Output device that
+supports a variable number of named I/O lines, which can be configured in input
+mode or in output mode with logical level low (0) or high (1).
+
+\subsection{Device ID}\label{sec:Device Types / GPIO Device / Device ID}
+41
+
+\subsection{Virtqueues}\label{sec:Device Types / GPIO Device / Virtqueues}
+
+\begin{description}
+\item[0] requestq
+\end{description}
+
+\subsection{Feature bits}\label{sec:Device Types / GPIO Device / Feature bits}
+
+None currently defined.
+
+\subsection{Device configuration layout}\label{sec:Device Types / GPIO Device / Device configuration layout}
+
+GPIO device uses the following configuration structure layout:
+
+\begin{lstlisting}
+struct virtio_gpio_config {
+ le16 ngpio;
+ u8 padding[2];
+ le32 gpio_names_size;
+};
+\end{lstlisting}
+
+\begin{description}
+\item[\field{ngpio}] is the total number of GPIO lines supported by the device.
+
+\item[\field{padding}] has no meaning and is reserved for future use. This is
+ set to zero by the device.
+
+\item[\field{gpio_names_size}] is the size of the gpio-names memory block in
+ bytes, which can be fetched by the driver using the
+ \field{VIRTIO_GPIO_MSG_GET_LINE_NAMES} message. The device sets this to
+ 0 if it doesn't support names for the GPIO lines.
+\end{description}
+
+
+\subsection{Device Initialization}\label{sec:Device Types / GPIO Device / Device Initialization}
+
+\begin{itemize}
+\item The driver configures and initializes the \field{requestq} virtqueue.
+\end{itemize}
+
+\subsection{Device Operation: requestq}\label{sec:Device Types / GPIO Device / requestq Operation}
+
+The driver uses the \field{requestq} virtqueue to send messages to the device.
+The driver sends a pair of buffers, request (filled by driver) and response (to
+be filled by device later), to the device. The device in turn fills the response
+buffer and sends it back to the driver.
+
+\begin{lstlisting}
+struct virtio_gpio_request {
+ le16 type;
+ le16 gpio;
+ le32 value;
+};
+\end{lstlisting}
+
+All the fields of this structure are set by the driver and read by the device.
+
+\begin{description}
+\item[\field{type}] is the GPIO message type, i.e. one of
+ \field{VIRTIO_GPIO_MSG_*} values.
+
+\item[\field{gpio}] is the GPIO line number, i.e. 0 <= \field{gpio} <
+ \field{ngpio}.
+
+\item[\field{value}] is a message specific value.
+\end{description}
+
+\begin{lstlisting}
+struct virtio_gpio_response {
+ u8 status;
+ u8 value;
+};
+
+/* Possible values of the status field */
+#define VIRTIO_GPIO_STATUS_OK 0x0
+#define VIRTIO_GPIO_STATUS_ERR 0x1
+\end{lstlisting}
+
+All the fields of this structure are set by the device and read by the driver.
+
+\begin{description}
+\item[\field{status}] of the GPIO message,
+ \field{VIRTIO_GPIO_STATUS_OK} on success and \field{VIRTIO_GPIO_STATUS_ERR}
+ on failure.
+
+\item[\field{value}] is a message specific value.
+\end{description}
+
+Following is the list of messages supported by the virtio gpio specification.
+
+\begin{lstlisting}
+/* GPIO message types */
+#define VIRTIO_GPIO_MSG_GET_LINE_NAMES 0x0001
+#define VIRTIO_GPIO_MSG_GET_DIRECTION 0x0002
+#define VIRTIO_GPIO_MSG_SET_DIRECTION 0x0003
+#define VIRTIO_GPIO_MSG_GET_VALUE 0x0004
+#define VIRTIO_GPIO_MSG_SET_VALUE 0x0005
+
+/* GPIO Direction types */
+#define VIRTIO_GPIO_DIRECTION_NONE 0x00
+#define VIRTIO_GPIO_DIRECTION_OUT 0x01
+#define VIRTIO_GPIO_DIRECTION_IN 0x02
+\end{lstlisting}
+
+\subsubsection{requestq Operation: Get Line Names}\label{sec:Device Types / GPIO Device / requestq Operation / Get Line Names}
+
+The driver sends this message to receive a stream of zero-terminated strings,
+where each string represents the name of a GPIO line, present in increasing
+order of the GPIO line numbers. The names of the GPIO lines are optional and may
+be present only for a subset of GPIO lines. If missing, then a zero-byte must be
+present for the GPIO line. If present, the name string must be zero-terminated
+and the name must be unique within a GPIO Device.
+
+These names of the GPIO lines should be most meaningful producer names for the
+system, such as name indicating the usage. For example "MMC-CD", "Red LED Vdd"
+and "ethernet reset" are reasonable line names as they describe what the line is
+used for, while "GPIO0" is not a good name to give to a GPIO line.
+
+Here is an example of how the gpio names memory block may look for a GPIO
+device with 10 GPIO lines, where line names are provided only for lines 0
+("MMC-CD"), 5 ("Red LED Vdd") and 7 ("ethernet reset").
+
+\begin{lstlisting}
+u8 gpio_names[] = {
+ 'M', 'M', 'C', '-', 'C', 'D', 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 'R', 'e', 'd', ' ', 'L', 'E', 'D', ' ', 'V', 'd', 'd', 0,
+ 0,
+ 'E', 't', 'h', 'e', 'r', 'n', 'e', 't', ' ', 'r', 'e', 's', 'e', 't', 0,
+ 0,
+ 0
+};
+\end{lstlisting}
+
+The device sets the \field{gpio_names_size} to a non-zero value if this message
+is supported by the device, else it must be set to zero.
+
+This message type uses different layout for the response structure, as the
+device needs to return the \field{gpio_names} array.
+
+\begin{lstlisting}
+struct virtio_gpio_response_N {
+ u8 status;
+ u8 value[N];
+};
+\end{lstlisting}
+
+The driver must allocate the \field{value[N]} buffer in the \field{struct
+virtio_gpio_response_N} for N bytes, where N = \field{gpio_names_size}.
+
+\begin{tabularx}{\textwidth}{ |l||X|X|X| }
+\hline
+\textbf{Request} & \field{type} & \field{gpio} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_MSG_GET_LINE_NAMES} & 0 & 0 \\
+\hline
+\end{tabularx}
+
+\begin{tabularx}{\textwidth}{ |l||X|X|X| }
+\hline
+\textbf{Response} & \field{status} & \field{value[N]} & \field{Where N is} \\
+\hline
+& \field{VIRTIO_GPIO_STATUS_*} & gpio-names & \field{gpio_names_size} \\
+\hline
+\end{tabularx}
+
+\subsubsection{requestq Operation: Get Direction}\label{sec:Device Types / GPIO Device / requestq Operation / Get Direction}
+
+The driver sends this message to request the device to return a line's
+configured direction.
+
+\begin{tabularx}{\textwidth}{ |l||X|X|X| }
+\hline
+\textbf{Request} & \field{type} & \field{gpio} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_MSG_GET_DIRECTION} & line number & 0 \\
+\hline
+\end{tabularx}
+
+\begin{tabularx}{\textwidth}{ |l||X|X| }
+\hline
+\textbf{Response} & \field{status} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_STATUS_*} & 0 = none, 1 = output, 2 = input \\
+\hline
+\end{tabularx}
+
+\subsubsection{requestq Operation: Set Direction}\label{sec:Device Types / GPIO Device / requestq Operation / Set Direction}
+
+The driver sends this message to request the device to configure a line's
+direction. The driver can either set the direction to
+\field{VIRTIO_GPIO_DIRECTION_IN} or \field{VIRTIO_GPIO_DIRECTION_OUT}, which
+also activates the line, or to \field{VIRTIO_GPIO_DIRECTION_NONE}, which
+deactivates the line.
+
+The driver should set the value of the GPIO line, using the
+\field{VIRTIO_GPIO_MSG_SET_VALUE} message, before setting the direction of the
+line to output to avoid any undesired behavior.
+
+\begin{tabularx}{\textwidth}{ |l||X|X|X| }
+\hline
+\textbf{Request} & \field{type} & \field{gpio} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_MSG_SET_DIRECTION} & line number & 0 = none, 1 = output, 2 = input \\
+\hline
+\end{tabularx}
+
+\begin{tabularx}{\textwidth}{ |l||X|X| }
+\hline
+\textbf{Response} & \field{status} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_STATUS_*} & 0 \\
+\hline
+\end{tabularx}
+
+\subsubsection{requestq Operation: Get Value}\label{sec:Device Types / GPIO Device / requestq Operation / Get Value}
+
+The driver sends this message to request the device to return current value
+sensed on a line.
+
+\begin{tabularx}{\textwidth}{ |l||X|X|X| }
+\hline
+\textbf{Request} & \field{type} & \field{gpio} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_MSG_GET_VALUE} & line number & 0 \\
+\hline
+\end{tabularx}
+
+\begin{tabularx}{\textwidth}{ |l||X|X| }
+\hline
+\textbf{Response} & \field{status} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_STATUS_*} & 0 = low, 1 = high \\
+\hline
+\end{tabularx}
+
+\subsubsection{requestq Operation: Set Value}\label{sec:Device Types / GPIO Device / requestq Operation / Set Value}
+
+The driver sends this message to request the device to set the value of a line.
+The line may already be configured for output or may get configured to output
+later, at which point this output value must be used by the device for the line.
+
+\begin{tabularx}{\textwidth}{ |l||X|X|X| }
+\hline
+\textbf{Request} & \field{type} & \field{gpio} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_MSG_SET_VALUE} & line number & 0 = low, 1 = high \\
+\hline
+\end{tabularx}
+
+\begin{tabularx}{\textwidth}{ |l||X|X| }
+\hline
+\textbf{Response} & \field{status} & \field{value} \\
+\hline
+& \field{VIRTIO_GPIO_STATUS_*} & 0 \\
+\hline
+\end{tabularx}
+
+\subsubsection{requestq Operation: Message Flow}\label{sec:Device Types / GPIO Device / requestq Operation / Message Flow}
+
+\begin{itemize}
+\item The driver queues \field{struct virtio_gpio_request} and
+ \field{virtio_gpio_response} buffers to the \field{requestq} virtqueue,
+ after filling all fields of the \field{struct virtio_gpio_request} buffer as
+ defined by the specific message type.
+
+\item The driver notifies the device of the presence of buffers on the
+ \field{requestq} virtqueue.
+
+\item The device, after receiving the message from the driver, processes it and
+ fills all the fields of the \field{struct virtio_gpio_response} buffer
+ (received from the driver). The \field{status} must be set to
+ \field{VIRTIO_GPIO_STATUS_OK} on success and \field{VIRTIO_GPIO_STATUS_ERR}
+ on failure.
+
+\item The device puts the buffers back on the \field{requestq} virtqueue and
+ notifies the driver of the same.
+
+\item The driver fetches the buffers and processes the response received in the
+ \field{virtio_gpio_response} buffer.
+
+\item The driver can send multiple messages in parallel for same or different
+ GPIO line.
+\end{itemize}
+
+\drivernormative{\subsubsection}{requestq Operation}{Device Types / GPIO Device / requestq Operation}
+
+\begin{itemize}
+\item The driver MUST send messages on the \field{requestq} virtqueue.
+
+\item The driver MUST queue both \field{struct virtio_gpio_request} and
+ \field{virtio_gpio_response} for every message sent to the device.
+
+\item The \field{struct virtio_gpio_request} buffer MUST be filled by the driver
+ and MUST be read-only for the device.
+
+\item The \field{struct virtio_gpio_response} buffer MUST be filled by the
+ device and MUST be writable by the device.
+
+\item The driver MAY send multiple messages for same or different GPIO lines in
+ parallel.
+\end{itemize}
+
+\devicenormative{\subsubsection}{requestq Operation}{Device Types / GPIO Device / requestq Operation}
+
+\begin{itemize}
+\item The device MUST set all the fields of the \field{struct
+ virtio_gpio_response} before sending it back to the driver.
+
+\item The device MUST set all the fields of the \field{struct
+ virtio_gpio_config} on receiving a configuration request from the driver.
+
+\item The device MUST set the \field{gpio_names_size} field as zero in the
+ \field{struct virtio_gpio_config}, if it doesn't implement names for
+ individual GPIO lines.
+
+\item The device MUST set the \field{gpio_names_size} field, in the
+ \field{struct virtio_gpio_config}, with the size of gpio-names memory block
+ in bytes, if the device implements names for individual GPIO lines. The
+ strings MUST be zero-terminated and unique (if available) within the GPIO
+ device.
+
+\item The device MUST process multiple messages, for the same GPIO line,
+ sequentially and respond to them in the order they were received on the
+ virtqueue.
+
+\item The device MAY process messages, for different GPIO lines, out of order
+ and in parallel, and MAY send message's response to the driver out of order.
+
+\item The device MUST discard all state information corresponding to a GPIO
+ line, once the driver has requested to set its direction to
+ \field{VIRTIO_GPIO_DIRECTION_NONE}.
+\end{itemize}
--
2.31.1.272.g89b43f80a514