Hi,
The following is a breakdown (as best I can figure) of the work needed
to demonstrate VirtIO backends in Rust on the Xen hypervisor. It
requires work across a number of projects but notably core rust and virtio
enabling in the Xen project (building on the work EPAM has already done)
and the start of enabling rust-vmm crate to work with Xen.
The first demo is a fairly simple toy to exercise the direct hypercall
approach for a unikernel backend. On its own it isn't super impressive
but hopefully it serves as a proof of concept for the idea of having
backends running in a single exception level where latency will be
important.
The second is a much more ambitious bridge between Xen and vhost-user to
allow for re-use of the existing vhost-user backends with the bridge
acting as a proxy for what would usually be a full VMM in the type-2
hypervisor case. With that in mind the rust-vmm work is only aimed at
doing the device emulation and doesn't address the larger question of
how type-1 hypervisors can be integrated into the rust-vmm hypervisor
model.
A quick note about the estimates: they are exceedingly rough guesses
plucked out of the air, and I would be grateful for feedback from the
appropriate domain experts on whether I'm being overly optimistic or
pessimistic.
The links to the Stratos JIRA should be at least read-accessible to
all, although they contain the same information as the attached
document (albeit with nicer PNG renderings of my ASCII art ;-). There
is a Stratos sync-up call next Thursday:
https://calendar.google.com/event?action=TEMPLATE&tmeid=MWpidm5lbzM5NjlydnA…
and I'm sure there will also be discussion in the various projects
(hence the wide CC list). The Stratos calls are open to anyone who wants
to attend and we welcome feedback from all who are interested.
So on with the work breakdown:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATOS PLANNING FOR 21 TO 22
Alex Bennée
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────
1. Xen Rust Bindings ([STR-51])
.. 1. Upstream an "official" rust crate for Xen ([STR-52])
.. 2. Basic Hypervisor Interactions hypercalls ([STR-53])
.. 3. [#10] Access to XenStore service ([STR-54])
.. 4. VirtIO support hypercalls ([STR-55])
2. Xen Hypervisor Support for Stratos ([STR-56])
.. 1. Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
.. 2. Tweaks to tooling to launch VirtIO guests
3. rust-vmm support for Xen VirtIO ([STR-59])
.. 1. Make vm-memory Xen aware ([STR-60])
.. 2. Xen IO notification and IRQ injections ([STR-61])
4. Stratos Demos
.. 1. Rust based stubdomain monitor ([STR-62])
.. 2. Xen aware vhost-user master ([STR-63])
1 Xen Rust Bindings ([STR-51])
══════════════════════════════
There exists a [placeholder repository] with the start of a set of
x86_64 bindings for Xen and a very basic hello world unikernel
example. This forms the basis of the initial Xen Rust work and will be
made available as a [xen-sys crate] via cargo.
[STR-51] <https://linaro.atlassian.net/browse/STR-51>
[placeholder repository] <https://gitlab.com/cardoe/oxerun.git>
[xen-sys crate] <https://crates.io/crates/xen-sys>
1.1 Upstream an "official" rust crate for Xen ([STR-52])
────────────────────────────────────────────────────────
To start with we will want an upstream location for future work to be
based upon. The intention is that the crate is independent of the
version of Xen it runs on (above the baseline version chosen). This
will entail:
• ☐ agreeing with upstream the name/location for the source
• ☐ documenting the rules for the "stable" hypercall ABI
• ☐ establishing an internal interface to switch between ioctl-mediated
and direct hypercalls
• ☐ ensuring the crate is multi-arch and has feature parity for arm64
As such we expect the implementation to be standalone, i.e. not
wrapping the existing Xen libraries for mediation. There should be a
close (1-to-1) mapping between the interfaces in the crate and the
eventual hypercall made to the hypervisor.
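As a strawman for that internal interface, something like the trait
below could sit underneath the public wrappers (all names here are
illustrative placeholders, not an agreed API):
#+begin_src rust
use std::fs::File;
use std::io;

/// Hypothetical transport abstraction: the public hypercall wrappers are
/// written once against this trait and work over either backend.
pub trait XenCall {
    /// Issue hypercall number `op` with its arguments and return the raw
    /// value handed back by the hypervisor.
    fn hypercall(&self, op: u64, args: [u64; 5]) -> io::Result<u64>;
}

/// Direct trap into the hypervisor (unikernel / stub-domain case).
pub struct DirectCall;

impl XenCall for DirectCall {
    fn hypercall(&self, _op: u64, _args: [u64; 5]) -> io::Result<u64> {
        unimplemented!("arch-specific hvc/vmcall sequence goes here")
    }
}

/// Mediated through the privcmd ioctl interface (dom0 userspace case).
pub struct IoctlCall {
    /// e.g. an open handle on /dev/xen/privcmd
    privcmd: File,
}
#+end_src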
Estimate: 4w (elapsed likely longer due to discussion)
[STR-52] <https://linaro.atlassian.net/browse/STR-52>
1.2 Basic Hypervisor Interactions hypercalls ([STR-53])
───────────────────────────────────────────────────────
These are the bare minimum hypercalls implemented as both ioctl and
direct calls. These allow for a very basic binary to:
• ☐ console_io - output IO via the Xen console
• ☐ domctl stub - basic stub for domain control (different API?)
• ☐ sysctl stub - basic stub for system control (different API?)
The idea is that this provides enough of a hypercall interface to
query the list of domains and output their status via the Xen console.
There is an open question about whether the domctl and sysctl
hypercalls are the way to go.
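Continuing the sketch from section 1.1, the console_io wrapper could
end up as thin as this (the hypercall number and command value come
from Xen's public headers; everything else is illustrative):
#+begin_src rust
use std::io;

/// __HYPERVISOR_console_io and CONSOLEIO_write from xen/include/public/xen.h
const HYPERVISOR_CONSOLE_IO: u64 = 18;
const CONSOLEIO_WRITE: u64 = 0;

/// Write a string to the Xen console via whichever transport (ioctl or
/// direct) the caller provides through the XenCall trait sketched earlier.
pub fn console_write(call: &impl XenCall, msg: &str) -> io::Result<u64> {
    call.hypercall(
        HYPERVISOR_CONSOLE_IO,
        [CONSOLEIO_WRITE, msg.len() as u64, msg.as_ptr() as u64, 0, 0],
    )
}
#+end_src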
Estimate: 6w
[STR-53] <https://linaro.atlassian.net/browse/STR-53>
1.3 [#10] Access to XenStore service ([STR-54])
───────────────────────────────────────────────
This is a shared configuration storage space accessed via either Unix
sockets (on dom0) or via the Xenbus. This is used to access
configuration information for the domain.
Is this needed for a backend though? Can everything just be passed
directly on the command line?
Estimate: 4w
[STR-54] <https://linaro.atlassian.net/browse/STR-54>
1.4 VirtIO support hypercalls ([STR-55])
────────────────────────────────────────
These are the hypercalls that need to be implemented to support a
VirtIO backend. This includes the ability to map another guest's
memory into the current domain's address space, register to receive
IOREQ events when the guest knocks on the doorbell, and inject kicks
into the guest. The hypercalls we need to support would be:
• ☐ dmop - device model ops (*_ioreq_server, setirq, nr_vcpus)
• ☐ foreignmemory - map and unmap guest memory
The DMOP space is larger than what we need for an IOREQ backend so
I've based it just on what arch/arm/dm.c exports which is the subset
introduced for EPAM's virtio work.
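To give a feel for the surface area, bringing up a backend for one
guest might reduce to something like the following (every type,
method and constant below is a placeholder for the API this task has
to define; the underlying operations correspond to the
libxendevicemodel/libxenforeignmemory calls):
#+begin_src rust
/// Sketch only: `Xen`, `IoreqServer`, `ForeignMapping`, `Backend` and the
/// constants are placeholders for whatever the xen-sys crate ends up with.
fn start_backend(xen: &Xen, domid: u16) -> std::io::Result<Backend> {
    // dmop: create an IOREQ server and tell Xen which MMIO range we emulate.
    let ioreq: IoreqServer = xen.create_ioreq_server(domid)?;
    ioreq.map_io_range(VIRTIO_MMIO_BASE, VIRTIO_MMIO_SIZE)?;

    // foreignmemory: map the guest RAM the virtio queues will live in.
    let ram: ForeignMapping =
        xen.foreignmemory_map(domid, GUEST_RAM_BASE, GUEST_RAM_SIZE)?;

    Ok(Backend { ioreq, ram })
}
#+end_src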
Estimate: 12w
[STR-55] <https://linaro.atlassian.net/browse/STR-55>
2 Xen Hypervisor Support for Stratos ([STR-56])
═══════════════════════════════════════════════
These are the tasks needed to support the various deployments of
Stratos components on Xen.
[STR-56] <https://linaro.atlassian.net/browse/STR-56>
2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
───────────────────────────────────────────────────────────────
Currently the foreign memory mapping support only works for dom0 due
to reference counting issues. If we are to support backends running in
their own domains this will need to get fixed.
Estimate: 8w
[STR-57] <https://linaro.atlassian.net/browse/STR-57>
2.2 Tweaks to tooling to launch VirtIO guests
─────────────────────────────────────────────
There might not be too much to do here. The EPAM work already did
something similar for their PoC for virtio-block. Essentially we need
to ensure:
• ☐ DT bindings are passed to the guest for virtio-mmio device
discovery
• ☐ Our rust backend can be instantiated before the domU is launched
This currently assumes the tools and the backend are running in dom0.
Estimate: 4w
3 rust-vmm support for Xen VirtIO ([STR-59])
════════════════════════════════════════════
This encompasses the tasks required to get a vhost-user server up and
running while interfacing with the Xen hypervisor. This will require
the xen-sys crate for the actual interface to the hypervisor.
We need to work out how a Xen configuration option would be passed to
the various bits of rust-vmm when something is being built.
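One plausible answer is a plain Cargo feature selected at build time
and threaded through with cfg attributes; a sketch of the idea
(feature and module names are only suggestions):
#+begin_src rust
// Built with e.g. `cargo build --features xen`; purely illustrative.
#[cfg(feature = "xen")]
mod hypervisor {
    // IOREQ/foreignmemory implementation built on the xen-sys crate.
    pub use crate::xen_backend::*;
}

#[cfg(feature = "kvm")]
mod hypervisor {
    // Existing ioeventfd/irqfd implementation.
    pub use crate::kvm_backend::*;
}

// The rest of the crate only ever refers to `hypervisor::...`.
#+end_src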
[STR-59] <https://linaro.atlassian.net/browse/STR-59>
3.1 Make vm-memory Xen aware ([STR-60])
───────────────────────────────────────
The vm-memory crate is the root crate for abstracting access to the
guest's memory. It currently has multiple configuration builds to
handle the differences between mmap on Windows and Unix. Although mmap
isn't directly exposed, the public interfaces support an mmap-like
interface. We would need to:
• ☐ work out how to expose foreign memory via the vm-memory mechanism
I'm not sure if this just means implementing the GuestMemory trait for
a GuestMemoryXen or if we need to present an mmap-like interface.
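My current guess at the shape of such a type, with the caveat that I
haven't worked out how much of vm-memory's real trait surface it would
have to implement (everything below is a placeholder):
#+begin_src rust
/// Placeholder for a Xen-aware guest memory object. Instead of owning one
/// big anonymous mmap of the guest, it owns foreignmemory mappings created
/// lazily as regions are touched. `ForeignMapping` is a placeholder type.
pub struct GuestMemoryXen {
    domid: u16,
    /// (guest_base, host_mapping) pairs backed by foreignmemory.
    regions: Vec<(u64, ForeignMapping)>,
}

impl GuestMemoryXen {
    /// Translate a guest-physical address into a host pointer, mapping the
    /// containing region on first use. The open question is whether this
    /// can hide behind vm-memory's GuestMemory trait or whether the trait's
    /// mmap-flavoured assumptions leak through.
    pub fn get_host_address(&mut self, _gpa: u64, _len: usize) -> std::io::Result<*mut u8> {
        // A real implementation would find (or create) the foreignmemory
        // mapping covering [gpa, gpa + len) and return the offset into it.
        unimplemented!()
    }
}
#+end_src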
Estimate: 8w
[STR-60] <https://linaro.atlassian.net/browse/STR-60>
3.2 Xen IO notification and IRQ injections ([STR-61])
─────────────────────────────────────────────────────
The KVM world provides ioeventfd (notifications) and irqfd (injection)
to signal asynchronously between the guest and the backend. As far as
I can tell this is currently handled inside the various VMMs, which
assume a KVM backend.
While the vhost-user slave code doesn't see the
register_ioevent/register_irqfd events, it does deal with EventFDs
throughout the code. Perhaps the best approach here would be to create
an IOREQ crate that can create EventFD descriptors which can then be
passed to the slaves to use for notification and injection.
Otherwise there might be an argument for a new crate that can
encapsulate this behaviour for both KVM/ioeventfd and Xen/IOREQ setups?
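Roughly what I have in mind for such a crate, assuming the EventFd
type from vmm-sys-util and a hypothetical IOREQ ring wrapper coming
out of the xen-sys work:
#+begin_src rust
use vmm_sys_util::eventfd::EventFd;

/// Sketch of the bridge: one thread owns the Xen IOREQ ring and fans events
/// out to EventFds that the existing vhost-user slave code already consumes.
/// `IoreqRing` is a placeholder for whatever xen-sys ends up exposing.
pub struct IoreqBridge {
    ring: IoreqRing,
    /// Signalled towards the slave when the guest kicks a virtqueue.
    pub kick: EventFd,
    /// Written by the slave when it wants an interrupt injected.
    pub call: EventFd,
}

impl IoreqBridge {
    pub fn run(mut self) -> std::io::Result<()> {
        loop {
            // Block until Xen delivers an IOREQ for one of our regions.
            let req = self.ring.wait()?;
            // Doorbell write from the guest: poke the slave's kick fd.
            self.kick.write(1)?;
            // A second thread (left out here) reads self.call and turns it
            // into a set-irq dmop back into the guest.
            self.ring.complete(req)?;
        }
    }
}
#+end_src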
Estimate: 8w?
[STR-61] <https://linaro.atlassian.net/browse/STR-61>
4 Stratos Demos
═══════════════
These tasks cover the creation of demos that bring together all the
previous bits of work to demonstrate a new area of capability that has
been opened up by Stratos work.
4.1 Rust based stubdomain monitor ([STR-62])
────────────────────────────────────────────
This is a basic demo that is a proof of concept for a unikernel style
backend written in pure Rust. This work would be a useful precursor
for things such as the RTOS Dom0 on a safety island ([STR-11]) or as a
carrier for the virtio-scmi backend.
The monitor program will periodically poll the state of the other
domains and echo their status to the Xen console.
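In outline the whole demo is little more than the following loop,
built on the xen-sys pieces from section 1 (the helper names are
illustrative):
#+begin_src rust
fn monitor_loop(xen: &impl XenCall) -> ! {
    loop {
        // sysctl getdomaininfolist equivalent from the xen-sys crate
        // (placeholder helper).
        for dom in list_domains(xen).unwrap_or_default() {
            let line = format!("dom{}: {:?}\n", dom.domid, dom.state);
            // console_io hypercall wrapper as sketched in section 1.2.
            let _ = console_write(xen, &line);
        }
        // However the unikernel environment chooses to sleep/yield.
        sleep_ms(1000);
    }
}
#+end_src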
Estimate: 4w
#+name: stub-domain-example
#+begin_src ditaa :cmdline -o :file stub_domain_example.png
Dom0 | DomU | DomStub
| |
: /-------------\ :
| |cPNK | |
| | | |
| | | |
/------------------------------------\ | | GuestOS | |
|cPNK | | | | |
EL0 | Dom0 Userspace (xl tools, QEMU) | | | | | /---------------\
| | | | | | |cYEL |
\------------------------------------/ | | | | | |
+------------------------------------+ | | | | | Rust Monitor |
EL1 |cA1B Dom0 Kernel | | | | | | |
+------------------------------------+ | \-------------/ | \---------------/
-------------------------------------------------------------------------------=------------------
+-------------------------------------------------------------------------------------+
EL2 |cC02 Xen Hypervisor |
+-------------------------------------------------------------------------------------+
#+end_src
[STR-62] <https://linaro.atlassian.net/browse/STR-62>
[STR-11] <https://linaro.atlassian.net/browse/STR-11>
4.2 Xen aware vhost-user master ([STR-63])
──────────────────────────────────────────
Usually the master side of a vhost-user system is embedded directly in
the VMM itself. However in a Xen deployment there is no overarching
VMM, just a series of utility programs that query the hypervisor
directly. The Xen tooling is also responsible for setting up any
support processes that are responsible for emulating HW for the guest.
This task aims to bridge the gap between Xen's normal HW emulation path
(IOREQ) and VirtIO's userspace device emulation (vhost-user). The
process would be started with some information on where the
virtio-mmio address space is and what the slave binary will be. It
will then:
• map the guest into Dom0 userspace and attach to a MemFD
• register the appropriate memory regions as IOREQ regions with Xen
• create EventFD channels for the virtio kick notifications (one each
way)
• spawn the vhost-user slave process and mediate the notifications and
kicks between the slave and Xen itself
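A very rough sketch of the mediation loop once those pieces are in
place (all types are placeholders; the vhost-user master side would
presumably come from the existing rust-vmm vhost crates, the Xen side
from xen-sys):
#+begin_src rust
fn mediate(ioreq: IoreqBridge, master: VhostUserMaster, xen: &XenDevice) -> std::io::Result<()> {
    loop {
        match wait_for_event(&ioreq, &master)? {
            // Guest wrote to the virtio-mmio doorbell: Xen hands us an
            // IOREQ, which we turn into a kick on the slave's queue EventFd.
            Event::GuestKick { queue } => master.kick_queue(queue)?,
            // Slave signalled its call EventFd: inject the interrupt back
            // into the guest with a set-irq dmop.
            Event::SlaveCall { irq } => xen.set_irq(irq)?,
        }
    }
}
#+end_src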
#+name: xen-vhost-user-master
#+begin_src ditaa :cmdline -o :file xen_vhost_user_master.png
Dom0 DomU
|
|
|
|
|
|
+-------------------+ +-------------------+ |
| |----------->| | |
| vhost-user | vhost-user | vhost-user | : /------------------------------------\
| slave | protocol | master | | | |
| (existing) |<-----------| (rust) | | | |
+-------------------+ +-------------------+ | | |
^ ^ | ^ | | Guest Userspace |
| | | | | | |
| | | IOREQ | | | |
| | | | | | |
v v V | | \------------------------------------/
+---------------------------------------------------+ | +------------------------------------+
| ^ ^ | ioctl ^ | | | |
| | iofd/irqfd eventFD | | | | | | Guest Kernel |
| +---------------------------+ | | | | | +-------------+ |
| | | | | | | virtio-dev | |
| Host Kernel V | | | | +-------------+ |
+---------------------------------------------------+ | +------------------------------------+
| ^ | | ^
| hyper | | |
----------------------=------------- | -=--- | ----=------ | -----=- | --------=------------------
| call | Trap | | IRQ
V | V |
+-------------------------------------------------------------------------------------+
| | ^ | ^ |
| | +-------------+ | |
EL2 | Xen Hypervisor | | |
| +-------------------------------+ |
| |
+-------------------------------------------------------------------------------------+
#+end_src
[STR-63] <https://linaro.atlassian.net/browse/STR-63>
--
Alex Bennée
This series adds support for virtio-video decoder devices in Qemu
and also provides a vhost-user-video vmm implementation.
The vhost-user-video vmm currently parses the virtio-video v3 protocol
(as that is what the Linux frontend driver implements).
It then converts that to a v4l2 mem2mem stateful decoder device.
Currently this has been tested using the v4l2 vicodec test driver in
Linux [1], but it is intended to be used with Arm SoCs which often
implement v4l2 stateful decoder/encoder drivers for their video
accelerators.
The primary goal so far has been to allow continuing development
of the virtio-video Linux frontend driver and testing with Qemu. Using
vicodec on the host allows a purely virtual dev env, and allows for
CI integration in the future by KernelCI etc.
This series also adds the virtio_video.h header and adds the
FWHT format which is used by vicodec driver.
I have tested this VMM using v4l2-ctl from v4l-utils in the guest
to do a video decode to a file. This can then be validated using
ffplay. The v4l2-compliance tool has also been run in the guest, which
stresses the interface and issues lots of syscall-level tests.
See the README.md for example commands on how to configure the guest
kernel and do a video decode using Qemu and vicodec with this VMM.
Linux virtio-video frontend driver code:
https://github.com/petegriffin/linux/commits/v5.10-virtio-video-latest
Qemu vmm code:
https://github.com/petegriffin/qemu/tree/vhost-virtio-video-master-v1
This is part of a wider initiative by Linaro called
"project Stratos" for which you can find information here:
https://collaborate.linaro.org/display/STR/Stratos+Home
Applies cleanly to git://git.qemu.org/qemu.git master(a3607def89).
Thanks,
Peter.
[1] https://lwn.net/Articles/760650/
Peter Griffin (8):
vhost-user-video: Add a README.md with cheat sheet of commands
MAINTAINERS: Add virtio-video section
vhost-user-video: boiler plate code for vhost-user-video device
vhost-user-video: add meson subdir build logic
standard-headers: Add virtio_video.h
virtio_video: Add Fast Walsh-Hadamard Transform format
hw/display: add vhost-user-video-pci
tools/vhost-user-video: Add initial vhost-user-video vmm
MAINTAINERS | 8 +
hw/display/Kconfig | 5 +
hw/display/meson.build | 3 +
hw/display/vhost-user-video-pci.c | 82 +
hw/display/vhost-user-video.c | 386 ++++
include/hw/virtio/vhost-user-video.h | 41 +
include/standard-headers/linux/virtio_video.h | 484 +++++
tools/meson.build | 9 +
tools/vhost-user-video/50-qemu-rpmb.json.in | 5 +
tools/vhost-user-video/README.md | 98 +
tools/vhost-user-video/main.c | 1680 ++++++++++++++++
tools/vhost-user-video/meson.build | 10 +
tools/vhost-user-video/v4l2_backend.c | 1777 +++++++++++++++++
tools/vhost-user-video/v4l2_backend.h | 99 +
tools/vhost-user-video/virtio_video_helpers.c | 462 +++++
tools/vhost-user-video/virtio_video_helpers.h | 166 ++
tools/vhost-user-video/vuvideo.h | 43 +
17 files changed, 5358 insertions(+)
create mode 100644 hw/display/vhost-user-video-pci.c
create mode 100644 hw/display/vhost-user-video.c
create mode 100644 include/hw/virtio/vhost-user-video.h
create mode 100644 include/standard-headers/linux/virtio_video.h
create mode 100644 tools/vhost-user-video/50-qemu-rpmb.json.in
create mode 100644 tools/vhost-user-video/README.md
create mode 100644 tools/vhost-user-video/main.c
create mode 100644 tools/vhost-user-video/meson.build
create mode 100644 tools/vhost-user-video/v4l2_backend.c
create mode 100644 tools/vhost-user-video/v4l2_backend.h
create mode 100644 tools/vhost-user-video/virtio_video_helpers.c
create mode 100644 tools/vhost-user-video/virtio_video_helpers.h
create mode 100644 tools/vhost-user-video/vuvideo.h
--
2.25.1
Hello,
I submitted https://lore.kernel.org/all/CAKycSdDMxfto6oTqt06TbJxXY=S7p_gtEXWDQv8mz0d9zt…
and my attention was drawn here, so I have a few comments.
Firstly, I was wondering why you didn't create a separate *-sys crate
for these bindings?
see https://doc.rust-lang.org/cargo/reference/build-scripts.html#-sys-packages
for more information.
Secondly, I noticed when developing my aforementioned patch that
`bindgen` adds quite a few dependencies that probably aren't needed by
the average consumer of this crate.
So I was wondering: what are your thoughts on generating and
committing a bindings.rs, then optionally pulling in these
dependencies via a feature flag?
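To make the suggestion concrete, I mean the sort of build.rs
arrangement below, with bindgen behind an optional feature so it only
becomes a build-dependency when regenerating (the feature name is just
a suggestion):

// build.rs sketch: regenerate the FFI bindings only when the optional
// "generate-bindings" feature (and hence the bindgen build-dependency)
// is enabled; otherwise the committed src/bindings.rs is used as-is.

#[cfg(feature = "generate-bindings")]
fn generate() {
    bindgen::Builder::default()
        .header("wrapper.h")
        .generate()
        .expect("unable to generate libgpiod bindings")
        .write_to_file("src/bindings.rs")
        .expect("unable to write bindings.rs");
}

#[cfg(not(feature = "generate-bindings"))]
fn generate() {}

fn main() {
    generate();
}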
Lastly, with your `make` integration, it looks like we could also
remove the `cc` dependency by allowing `make` to build libgpiod
instead and just linking with that, rather than compiling libgpiod
twice.
Kind regards,
Gerard.
Hi GIC experts,
This came up last week in the Stratos sync call when we were discussing
Vincent's SCMI setup:
https://linaro.atlassian.net/wiki/spaces/STR/pages/28665741503/2021-12-09+P…
With the shared memory between the two guests, the only reason we
currently exit to the guest userspace (QEMU) is to forward
notifications of virtqueue kicks from one guest to the other. I'm
vaguely aware from my time looking at the GIC code that it can be
configured for IPI IRQs between two cores. Do we have that ability
between two vCPUs from different guests?
If not what would it take to enable such a feature?
--
Alex Bennée
Hi,
So the following is what I had in my head for a demo setup based on what
we talked about this morning. Does it make sense?
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATOS TSN NETWORKING WITH VMS
Alex Bennée
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────
1. Abstract
.. 1. Hardware
.. 2. Setup
1 Abstract
══════════
In a multi-zone/multi-VM setup it is important that higher priority
(i.e. safety critical) workloads are not impaired by lower priority
ones. In a modern automotive setup that uses something like Ethernet
to link a number of units, we need to ensure the networking also
allows for effective and timely delivery of important packets.
This demo intends to show how Time Sensitive Networking (TSN) can be
combined with an accelerated AF_XDP path from host to guest to ensure
correct behaviour. The primary guest will be ingesting a video stream
representing a reversing camera, which has low latency requirements,
while a second guest loads a large amount of mapping data for a
navigation app. We will run two scenarios. The first will use
in-kernel TSN support, with software handling the packet scheduling.
The second will use dedicated HW with TSN support, which should
achieve much lower latency than relying on SW.
1.1 Hardware
────────────
• 2 x Macchiatobin with 10G copper SFP
• 2 x PCIe network cards with TSN offload (Intel i225 or equivalent)
• 1 x PCIe video card
• 1 x USB Audio
1.2 Setup
─────────
The setup is the same as used for previous AF_XDP measurements except
that *ethT* is either using the builtin SoC networking or the TSN
accelerated PCIe network device.
The target machine (representing an automotive display console) runs
two VMs: a low priority one for the navigation function, which fetches
large amounts of non-time-critical data for the map display, and a
higher priority one which receives and decompresses the reversing
camera stream.
Both VMs display their output via a virtio-gpu device which is
composited to a single display on the host. For simplicity we shall
assume KVM virtualisation.
--
Alex Bennée
Hi All,
I suspect this week's Stratos sync will be the last one of the year as
we are about to head into the holiday season. Does anyone have any
topics they want to discuss?
--
Alex Bennée
Currently the GPIO Aggregator does not support interrupts. This means
that kernel drivers going from a GPIO to an IRQ using gpiod_to_irq(),
and userspace applications using line events do not work.
Add interrupt support by providing a gpio_chip.to_irq() callback, which
just calls into the parent GPIO controller.
Note that this does not implement full interrupt controller (irq_chip)
support, so using e.g. gpio-keys with "interrupts" instead of "gpios"
still does not work.
Signed-off-by: Geert Uytterhoeven <geert+renesas(a)glider.be>
---
I would prefer to avoid implementing irq_chip support, until there is a
real use case for this.
This has been tested with gpio-keys and gpiomon on the Koelsch
development board:
- gpio-keys, using a DT overlay[1]:
$ overlay add r8a7791-koelsch-keyboard-controlled-led
$ echo gpio-aggregator > /sys/devices/platform/frobnicator/driver_override
$ echo frobnicator > /sys/bus/platform/drivers/gpio-aggregator/bind
$ gpioinfo frobnicator
gpiochip12 - 3 lines:
line 0: "light" "light" output active-high [used]
line 1: "on" "On" input active-low [used]
line 2: "off" "Off" input active-low [used]
$ echo 255 > /sys/class/leds/light/brightness
$ echo 0 > /sys/class/leds/light/brightness
$ evtest /dev/input/event0
- gpiomon, using the GPIO sysfs API:
$ echo keyboard > /sys/bus/platform/drivers/gpio-keys/unbind
$ echo e6055800.gpio 2,6 > /sys/bus/platform/drivers/gpio-aggregator/new_device
$ gpiomon gpiochip12 0 1
[1] "ARM: dts: koelsch: Add overlay for keyboard-controlled LED"
https://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers.git/c…
---
drivers/gpio/gpio-aggregator.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/gpio/gpio-aggregator.c b/drivers/gpio/gpio-aggregator.c
index e9671d1660ef4b40..869dc952cf45218b 100644
--- a/drivers/gpio/gpio-aggregator.c
+++ b/drivers/gpio/gpio-aggregator.c
@@ -371,6 +371,13 @@ static int gpio_fwd_set_config(struct gpio_chip *chip, unsigned int offset,
return gpiod_set_config(fwd->descs[offset], config);
}
+static int gpio_fwd_to_irq(struct gpio_chip *chip, unsigned int offset)
+{
+ struct gpiochip_fwd *fwd = gpiochip_get_data(chip);
+
+ return gpiod_to_irq(fwd->descs[offset]);
+}
+
/**
* gpiochip_fwd_create() - Create a new GPIO forwarder
* @dev: Parent device pointer
@@ -411,7 +418,8 @@ static struct gpiochip_fwd *gpiochip_fwd_create(struct device *dev,
for (i = 0; i < ngpios; i++) {
struct gpio_chip *parent = gpiod_to_chip(descs[i]);
- dev_dbg(dev, "%u => gpio-%d\n", i, desc_to_gpio(descs[i]));
+ dev_dbg(dev, "%u => gpio %d irq %d\n", i,
+ desc_to_gpio(descs[i]), gpiod_to_irq(descs[i]));
if (gpiod_cansleep(descs[i]))
chip->can_sleep = true;
@@ -429,6 +437,7 @@ static struct gpiochip_fwd *gpiochip_fwd_create(struct device *dev,
chip->get_multiple = gpio_fwd_get_multiple_locked;
chip->set = gpio_fwd_set;
chip->set_multiple = gpio_fwd_set_multiple_locked;
+ chip->to_irq = gpio_fwd_to_irq;
chip->base = -1;
chip->ngpio = ngpios;
fwd->descs = descs;
--
2.25.1
Hi Bartosz,
This series adds Rust bindings for libgpiod v2.0. These are already
partially tested with the virtio Rust backend I am developing, which
uses them to talk to the host kernel.
This is based off the next/post-libgpiod-2.0 branch.
I haven't added any mock tests for this as of now and I am not sure
exactly how I am expected to add them. I did see what you mentioned in
your patchset about mock tests vs the gpio-sim stuff. Rust also has
its own test framework and I am not sure if that should be used
instead, or something else.
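For reference, using the built-in framework would just mean a module
like the one below next to the wrappers, run with `cargo test`
(Chip::open and the device path are stand-ins; whether the tests
should drive gpio-sim or a mock is exactly the open question):

#[cfg(test)]
mod tests {
    use super::*;

    // Illustrative only: a real test would point the wrappers at a
    // gpio-sim (or mocked) chip rather than a deliberately bogus path.
    #[test]
    fn open_nonexistent_chip_fails() {
        assert!(Chip::open("/dev/gpiochip-does-not-exist").is_err());
    }
}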
Since I am posting this publicly for the first time, it is still named
V1. I have not made significant changes to the code since last time,
but have just divided it into multiple files.
--
Viresh
Viresh Kumar (2):
libgpiod: Generate rust FFI bindings
libgpiod: Add rust wrappers
.gitignore | 6 +
bindings/rust/Cargo.toml | 14 +
bindings/rust/build.rs | 60 ++++
bindings/rust/src/bindings.rs | 16 ++
bindings/rust/src/chip.rs | 197 +++++++++++++
bindings/rust/src/edge_event.rs | 78 +++++
bindings/rust/src/event_buffer.rs | 59 ++++
bindings/rust/src/info_event.rs | 70 +++++
bindings/rust/src/lib.rs | 268 +++++++++++++++++
bindings/rust/src/line_config.rs | 431 ++++++++++++++++++++++++++++
bindings/rust/src/line_info.rs | 186 ++++++++++++
bindings/rust/src/line_request.rs | 218 ++++++++++++++
bindings/rust/src/request_config.rs | 118 ++++++++
bindings/rust/wrapper.h | 2 +
14 files changed, 1723 insertions(+)
create mode 100644 bindings/rust/Cargo.toml
create mode 100644 bindings/rust/build.rs
create mode 100644 bindings/rust/src/bindings.rs
create mode 100644 bindings/rust/src/chip.rs
create mode 100644 bindings/rust/src/edge_event.rs
create mode 100644 bindings/rust/src/event_buffer.rs
create mode 100644 bindings/rust/src/info_event.rs
create mode 100644 bindings/rust/src/lib.rs
create mode 100644 bindings/rust/src/line_config.rs
create mode 100644 bindings/rust/src/line_info.rs
create mode 100644 bindings/rust/src/line_request.rs
create mode 100644 bindings/rust/src/request_config.rs
create mode 100644 bindings/rust/wrapper.h
--
2.31.1.272.g89b43f80a514