Hello,
In my earlier attempt [1], I implemented the vhost-user-i2c backend
daemon for QEMU (though the code was generic enough to be used with
any hypervisor).
Here is a Rust implementation of the vhost-user-i2c backend
daemon. Again, this is generic enough to be used with any hypervisor
and can live in its own repository now:
https://github.com/vireshk/vhost-user-i2c
I am not sure what the process is to get this merged into rust-vmm.
Can someone help? Is that the right thing to do?
-------------------------8<-------------------------
Here are the other details (which are the same as in the earlier QEMU
attempt):
This is an initial implementation of a generic vhost-user backend for
the I2C bus. This is based on the virtio specifications (already merged)
for the I2C bus.
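For reference, here is a rough sketch (in Rust, since that is what the new
daemon uses) of the per-request framing the backend parses from the
virtqueue, as described by the merged virtio spec: a driver-to-device
header, an optional data buffer, and a one-byte device-to-driver status.
The names mirror virtio_i2c.h; take this as an illustration of the protocol
rather than the daemon's exact definitions (multi-byte fields are
little-endian on the wire):

  // Driver -> device header for each I2C request (see virtio_i2c.h).
  #[repr(C)]
  struct VirtioI2cOutHdr {
      addr: u16,    // client address (encoding as per the virtio spec)
      padding: u16,
      flags: u32,   // e.g. VIRTIO_I2C_FLAGS_FAIL_NEXT
  }

  // Device -> driver header: a single status byte written by the backend.
  #[repr(C)]
  struct VirtioI2cInHdr {
      status: u8,
  }

  const VIRTIO_I2C_FLAGS_FAIL_NEXT: u32 = 1 << 0;
  const VIRTIO_I2C_MSG_OK: u8 = 0;
  const VIRTIO_I2C_MSG_ERR: u8 = 1;

  // The backend fills the in_hdr after attempting the transfer on the host bus.
  fn status_for(ok: bool) -> VirtioI2cInHdr {
      VirtioI2cInHdr { status: if ok { VIRTIO_I2C_MSG_OK } else { VIRTIO_I2C_MSG_ERR } }
  }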
The kernel virtio I2C driver is still under review; here [2] is the latest
version (v10).
The backend is implemented as a vhost-user device because we want to
experiment in making portable backends that can be used with multiple
hypervisors. We also want to support backends isolated in their own
separate service VMs with limited memory cross-sections with the
principal guest. This is part of a wider initiative by Linaro called
"project Stratos", for which you can find information here:
https://collaborate.linaro.org/display/STR/Stratos+Home
I2C Testing:
------------
I didn't have access to real hardware where I could play with an I2C
client device (like an RTC, EEPROM, etc.) to verify the working of the
backend daemon, so I decided to test it on my x86 box itself with a
hierarchy of two ARM64 guests.
The first ARM64 guest was passed the "-device ds1338,address=0x20" option,
so it could emulate a ds1338 RTC device, which connects to an I2C bus.
Once the guest came up, a ds1338 device instance was created within the
guest kernel by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[
Note that this may end up binding the ds1338 device to its driver,
which won't let our i2c daemon talk to the device. For that we need to
manually unbind the device from the driver:
echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
]
After this is done, you will get /dev/rtc1. This is the device we wanted
to emulate, which will be accessed by the vhost-user-i2c backend daemon
via the /dev/i2c-0 file present in the guest VM.
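To make the /dev/i2c-0 access a bit more concrete, here is a minimal sketch
(Rust, assuming the libc crate; the ioctl number and capability bit come from
<linux/i2c-dev.h> and <linux/i2c.h>) of how a daemon might open the adapter
and check whether it supports plain I2C transfers or only SMBus commands.
The path and function name here are just for illustration:

  use std::fs::OpenOptions;
  use std::os::unix::io::AsRawFd;

  const I2C_FUNCS: u64 = 0x0705;                   // query adapter functionality
  const I2C_FUNC_I2C: libc::c_ulong = 0x00000001;  // plain I2C-level transfers

  fn adapter_supports_i2c(path: &str) -> std::io::Result<bool> {
      let file = OpenOptions::new().read(true).write(true).open(path)?;
      let mut funcs: libc::c_ulong = 0;
      // SAFETY: I2C_FUNCS only writes the functionality mask into `funcs`.
      let ret = unsafe { libc::ioctl(file.as_raw_fd(), I2C_FUNCS as _, &mut funcs) };
      if ret < 0 {
          return Err(std::io::Error::last_os_error());
      }
      Ok(funcs & I2C_FUNC_I2C != 0)
  }

  fn main() -> std::io::Result<()> {
      // In this test setup the adapter inside guest1 is /dev/i2c-0.
      println!("plain I2C supported: {}", adapter_supports_i2c("/dev/i2c-0")?);
      Ok(())
  }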
At this point we need to start the backend daemon and give it a
socket path for QEMU to talk to (you can pass -v to it to get more
detailed messages):
target/debug/vhost-user-i2c --socket-path=vi2c.sock -l 0:32
[ Here, 0:32 is the bus/device mapping: 0 for /dev/i2c-0, and 32 (i.e.
0x20) is the client address of the ds1338 device that we used while
creating it. ]
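Just to illustrate the option format, here is a small sketch of the parsing
(plain std Rust; the actual parsing in the repository may differ, and support
for more than one client address per bus is only an assumption here):

  /// Parse one device-list entry of the form "bus:addr[:addr..]", e.g. "0:32"
  /// meaning /dev/i2c-0 with a client at address 32 (0x20).
  fn parse_device_entry(arg: &str) -> Result<(u32, Vec<u16>), String> {
      let mut parts = arg.split(':');
      let bus = parts
          .next()
          .ok_or("missing bus number")?
          .parse::<u32>()
          .map_err(|e| format!("bad bus number: {e}"))?;
      let addrs = parts
          .map(|a| a.parse::<u16>().map_err(|e| format!("bad client address: {e}")))
          .collect::<Result<Vec<_>, _>>()?;
      if addrs.is_empty() {
          return Err("no client addresses given".into());
      }
      Ok((bus, addrs))
  }

  fn main() {
      // "0:32" -> /dev/i2c-0 with a client at 0x20 (decimal 32).
      assert_eq!(parse_device_entry("0:32").unwrap(), (0, vec![32]));
  }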
Now we need to start the second level ARM64 guest (from within the first
guest) to get the i2c-virtio.c Linux driver up. The second level guest
is passed the following options to connect to the same socket:
-chardev socket,path=vi2c.sock,id=vi2c \
-device vhost-user-i2c-pci,chardev=vi2c,id=i2c
Once the second level guest boots up, we will see the i2c-virtio bus at
/sys/bus/i2c/devices/i2c-X/. From there we can now make it emulate the
ds1338 device again by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[ This time we want ds1338's driver to be bound to the device, so it
should be enabled in the kernel as well. ]
And we will get /dev/rtc1 device again here in the second level guest.
Now we can play with the RTC device with the help of the hwclock utility,
and we can see the following sequence of transfers happening if we try to
update the RTC's time from the system time:
hwclock -w -f /dev/rtc1 (in guest2) ->
Reaches i2c-virtio.c (Linux bus driver in guest2) ->
transfer over virtio ->
Reaches QEMU's vhost-user-i2c device emulation (running in guest1) ->
Reaches the backend daemon vhost-user-i2c started earlier (in guest1) ->
ioctl(/dev/i2c-0, I2C_RDWR, ..); (in guest1) ->
reaches QEMU's hw/rtc/ds1338.c (running on the host)
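The last hop inside guest1 is the standard Linux i2c-dev interface. Here is a
rough sketch (Rust, assuming the libc crate; struct layouts and constants
mirror <linux/i2c.h> and <linux/i2c-dev.h>) of how the daemon could issue a
one-byte register read through I2C_RDWR; real code obviously needs proper
error handling and an SMBus fallback for adapters that only do SMBus:

  use std::fs::OpenOptions;
  use std::os::unix::io::AsRawFd;

  const I2C_RDWR: u64 = 0x0707;  // combined read/write transfer ioctl
  const I2C_M_RD: u16 = 0x0001;  // this message reads from the client

  #[repr(C)]
  struct I2cMsg {               // mirrors struct i2c_msg
      addr: u16,
      flags: u16,
      len: u16,
      buf: *mut u8,
  }

  #[repr(C)]
  struct I2cRdwrIoctlData {     // mirrors struct i2c_rdwr_ioctl_data
      msgs: *mut I2cMsg,
      nmsgs: u32,
  }

  fn main() -> std::io::Result<()> {
      // Illustrative only: read one byte from register 0 of the client at 0x20.
      let file = OpenOptions::new().read(true).write(true).open("/dev/i2c-0")?;
      let mut reg = [0u8; 1];   // register index to read from
      let mut val = [0u8; 1];   // buffer for the value read back
      let mut msgs = [
          I2cMsg { addr: 0x20, flags: 0, len: 1, buf: reg.as_mut_ptr() },
          I2cMsg { addr: 0x20, flags: I2C_M_RD, len: 1, buf: val.as_mut_ptr() },
      ];
      let mut data = I2cRdwrIoctlData { msgs: msgs.as_mut_ptr(), nmsgs: 2 };
      // SAFETY: buffers outlive the call and the lengths match the messages.
      let ret = unsafe { libc::ioctl(file.as_raw_fd(), I2C_RDWR as _, &mut data) };
      if ret < 0 {
          return Err(std::io::Error::last_os_error());
      }
      println!("register 0 of client 0x20 = 0x{:02x}", val[0]);
      Ok(())
  }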
SMBUS Testing:
--------------
I didn't need such a tedious setup for testing with SMBus devices. I was
able to emulate an SMBus device on my x86 machine using the i2c-stub
driver.
$ modprobe i2c-stub chip_addr=0x20
# Boot the arm64 guest now with the i2c-virtio driver and then do:
$ echo al3320a 0x20 > /sys/class/i2c-adapter/i2c-0/new_device
$ cat /sys/bus/iio/devices/iio:device0/in_illuminance_raw
That's it.
I hope I was able to give a clear picture of my test setup here :)
--
viresh
[1] https://lore.kernel.org/qemu-devel/cover.1617278395.git.viresh.kumar@linaro…
[2] https://lore.kernel.org/lkml/226a8d5663b7bb6f5d06ede7701eedb18d1bafa1.16164…
Hi Peter,
Thanks for noticing the announcement. We will arrange the presentation in two weeks and have it coincide with the Project Stratos call. In the meantime, you can try the QEMU-based release and provide feedback.
---Trilok Soni
-----Original Message-----
From: Stratos-dev <stratos-dev-bounces(a)op-lists.linaro.org> On Behalf Of Peter Griffin via Stratos-dev
Sent: Tuesday, April 27, 2021 2:48 AM
To: Mike Holmes <mike.holmes(a)linaro.org>
Cc: Stratos Mailing List <stratos-dev(a)op-lists.linaro.org>
Subject: Re: [Stratos-dev] Linaro Stratos virtio discussions Agenda
Hi folks,
On Tue, 27 Apr 2021 at 08:52, Mike Holmes via Stratos-dev < stratos-dev(a)op-lists.linaro.org> wrote:
> Hi All
>
> It is time to generate an agenda: https://collaborate.linaro.org/display/STR/Stratos+Home
> Do we have any topics for the call?
>
I noticed on LinkedIn (from Trilok) that QUIC has published https://github.com/quic/gunyah-hypervisor as open source.
So maybe an overview of the plans / roadmap for that project might be useful, either for this call or a future one, if it is possible to talk about it publicly?
Thanks,
Peter.
>
> Mike
>
> --
> Mike Holmes | Director, Foundation Technologies, Linaro
> Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org> "Work should be fun
> and collaborative; the rest follows."
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
--
Stratos-dev mailing list
Stratos-dev(a)op-lists.linaro.org
https://op-lists.linaro.org/mailman/listinfo/stratos-dev
Hi folks,
On Tue, 27 Apr 2021 at 08:52, Mike Holmes via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> Hi All
>
> It is time to generate an agenda: https://collaborate.linaro.org/display/STR/Stratos+Home
> Do we have any topics for the call?
>
I noticed on LinkedIn (from Trilok) that QUIC has published
https://github.com/quic/gunyah-hypervisor as open source.
So maybe an overview of the plans / roadmap for that project
might be useful, either for this call or a future one,
if it is possible to talk about it publicly?
Thanks,
Peter.
>
> Mike
>
> --
> Mike Holmes | Director, Foundation Technologies, Linaro
> Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
> "Work should be fun and collaborative; the rest follows."
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
Hi All
It is time to generate an agenda: https://collaborate.linaro.org/display/STR/Stratos+Home
Do we have any topics for the call?
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi Alex,
On Mon, Apr 12, 2021 at 08:34:54AM +0000, Alex Bennée via Stratos-dev wrote:
>
> Alex Bennée via Stratos-dev <stratos-dev(a)op-lists.linaro.org> writes:
>
> > Hi All,
> >
> > We've been discussing various ideas for Stratos in and around STR-7
> > (common virtio library). I'd originally de-emphasised the STR-7 work
> > because I wasn't sure if this was duplicate effort given we already had
> > libvhost-user as well as interest in rust-vmm for portable backends in
> > user-space. However we have seen from the Windriver hypervisor-less
> > virtio, NXP's Zephyr/Jailhouse and the requirements for the SCMI server
> > that there is a use-case for a small, liberally licensed C library that
> > is suitable for embedding in lightweight backends without a full Linux
> > stack behind it. These workloads would run in either simple command
> > loops, RTOSes or Unikernels.
> >
> > Given the multiple interested parties I'm hoping we have enough people
> > who can devote time to collaborate on the project to make the following
> > realistic over the next cycle and culminate in the following demo in 6
> > months:
> >
> > Components
> > ==========
> >
> > portable virtio backend library
> > -------------------------------
> >
> > * source based library (so you include directly in your project)
> > * liberally licensed (Apache? to facilitate above)
> > * tailored for non-POSIX, limited resource setups
> > - e.g. avoid malloc/free, provide abstraction hooks where needed
> > - not assume OS facilities (so embeddable in RTOS or Unikernel)
> > - static virtio configuration supplied from outside library (location
> > of queues etc)
> > * hypervisor agnostic
> > - provide a hook to signal when queues need processing
>
> Following on from a discussion with Vincent and Akashi-san last week we
> need to think more about how to make this hypervisor agnostic. There
> will always be some code that has to live outside the library but if it
> ends up being the same amount again what have we gained?
>
> > I suspect this should be a from scratch implementation but it's
> > certainly worth investigating the BSD implementation as Windriver have
> > suggested.
> >
> >
> > SCMI server
> > -----------
> >
> > This is the work product of STR-4, embedded in an RTOS. I'm suggesting
> > Zephyr makes the most sense here given the previous work done by Peng
> > and Akashi-san but I guess an argument could be made for a unikernel. I
> > would suggest demonstrating portability to a unikernel would be a
> > stretch goal for the demo.
> >
> > The server would be *build* time selectable to deploy either in a
> > Jailhouse partition or a Xen Dom U deployment. Either way there will
> > need to be an out-of-band communication of location of virtual queues
> > and memory maps - I assume via DTB.
>
> From our discussion last week Zephyr's DTB support is very limited and
> not designed to cope with dynamic setups. So rather than having two
> build time selectable configurations we should opt for a single binary
> with a fixed expectation of where the remote guest's memory and
> virtqueues will exist in its memory space.
I'm still a bit skeptical about the "single binary with a fixed expectation"
concept.
- Is it feasible to expect that all the hypervisors would configure
  a BE domain in the same way (in terms of assigning a memory region
  or an interrupt number)?
- What if we want a BE domain to
  - provide different types of virtio devices
  - support more than one frontend domain
  at the same time?
How can a single binary without the ability of dynamic configuration deal
with those requirements?
> It will then be up to the
> VMM setting things up to ensure everything is mapped in the appropriate
> place in the RTOS memory map. There would also be a fixed IRQ map for
> signalling to the RTOS when things change, and a fixed doorbell for
> signalling the other way.
We should think of two different phases:
1) virtio setup/negotiation via MMIO configuration space
2) (virtio device specific) operation via virtqueue
Anyway,
the signaling mechanism can be different from hypervisor to hypervisor;
on Xen, for example:
- the notification of MMIO accesses to the configuration space by the FE will be
  trapped and delivered via an event channel + a dedicated "shared IO page"
- the notification of virtqueue updates from the BE to the FE will be done
  via another event channel, not by interrupt.
Another topic is "virtio device specific configuration parameters,"
for instance, a file path as backend storage for a virtio block device.
We might need an out-of-band (side-band?) communication channel to feed that
information to a BE domain.
(For Xen, xenstore is used for this purpose in EPAM's virtio-disk
implementation.)
> >
> > I'm unfamiliar with the RTOS build process but I guess this would be a
> > single git repository with the RTOS and glue code and git sub-projects
> > for the virtio and scmi libraries?
I think that Zephyr's build process (cmake) allows us to
import a library from an external repository.
> > Deployments
> > ===========
> >
> > To demonstrate portability we would have:
> >
> > - Xen hypervisor
> > - Dom0 with Linux/xen tools
> > - DomU with Linux with a virtio-scmi front-end
> > - DomU with RTOS/SCMI server with virtio-scmi back-end
> >
> > The Dom0 in this case is just for convenience of the demo as we don't
> > currently have a fully working dom0-less setup. The key communication is
> > between the two DomU guests.
>
> However, the Dom0 will also need the glue code to set up the communication
> and memory mapping between the two DomU guests.
I initially thought so, but after looking into the Xen APIs, I found
that we have to make the IOREQ-related hypervisor calls directly from the BE domain.
At least under the current implementation, dom0 cannot call them
on behalf of a BE domain.
> This could happily link
> with the existing Xen library for setting up the guest table mappings.
>
> >
> > - Jailhouse
> > - Linux kernel partition with virtio-scmi front-end
> > - RTOS/SCMI server partition with a virtio-scmi back-end
>
> The RTOS/SCMI server would be the same binary blob as for Xen. Again
> some glue setup code would be required. I'm still unsure on how this
> would work for Jailhouse so if we don't have any Jailhouse expertise
> joining the project we could do this with KVM instead.
>
> Linux/KVM host
> - setup code in main host (kvmtool/QEMU launch)
> - KVM guest with Linux with a virtio-scmi front-end
> - KVM guest with RTOS/SCMI server with virtio-scmi back-end
The easiest way of implementing a BE for KVM is to utilize the vhost-user
library, but please note that this library internally uses socket(AF_UNIX),
eventfd and mmap(), which are in some sense hypervisor-specific interfaces,
given that Linux works as a type-2 hypervisor :)
So it's not quite straightforward to port it to an RTOS like Zephyr,
and I don't think a single binary would work on both Xen and KVM.
-Takahiro Akashi
> > This is closer to Windrivers' hypervisor-less virtio deployment as
> > Jailhouse is not a "proper" hypervisor in this case just a way of
> > partitioning up the resources. There will need to be some way for the
> > kernel and server partitions to signal each other when queues are
> > updated.
> >
> > Platform
> > ========
> >
> > We know we have working Xen on Synquacer and Jailhouse on the iMX.
> > Should we target those as well as a QEMU -M virt for those who wish to
> > play without hardware?
> >
> >
> > Stretch Goals
> > =============
> >
> > * Integrate Arnd's fat virtqueues
> >
> > Hopefully this will be ready soon enough in the cycle that we can add
> > this to the library and prototype the minimal memory cross section.
>
> This is dependent on having something at least sketched out early in the
> cycle. It would allow us to simplify the shared memory mapping to just
> plain virtqueues.
>
> > * Port the server/library to another RTOS/unikernel
> >
> > This would demonstrate the core code hasn't grown any assumptions
> > about what it is running in.
> >
> > * Run the server blob on another hypervisor
> >
> > Running in KVM is probably boring at this point. Maybe investigate
> having it in Hafnium? Or in an R-profile safety island setup?
> >
> > So what do people think? Thoughts? Comments? Volunteers?
>
>
> --
> Alex Bennée
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
All,
The meeting minutes are up, thanks for a lively discussion
https://collaborate.linaro.org/display/STR/2021-04-15+Project+Stratos+Sync+…
Mike
On Thu, Apr 15, 2021 at 11:21 AM Mike Holmes via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> Hi All
>
> We have a new agenda item
>
>
> - Vincent: Should we consider static vs dynamic users of a backend
>
>
> On Tue, Apr 13, 2021 at 12:51 PM Mike Holmes <mike.holmes(a)linaro.org>
> wrote:
>
> > Hi All
> >
> > Do we have any topic proposals for this week?
> >
> > https://collaborate.linaro.org/display/STR/Stratos+Home
> >
> > Thanks Mike
> >
> > --
> > Mike Holmes | Director, Foundation Technologies, Linaro
> > Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
> > "Work should be fun and collaborative; the rest follows."
> >
> >
>
> --
> Mike Holmes | Director, Foundation Technologies, Linaro
> Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
> "Work should be fun and collaborative; the rest follows."
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi All
Do we have any topic proposals for this week?
https://collaborate.linaro.org/display/STR/Stratos+Home
Thanks Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Alex Bennée via Stratos-dev <stratos-dev(a)op-lists.linaro.org> writes:
> Hi All,
>
> We've been discussing various ideas for Stratos in and around STR-7
> (common virtio library). I'd originally de-emphasised the STR-7 work
> because I wasn't sure if this was duplicate effort given we already had
> libvhost-user as well as interest in rust-vmm for portable backends in
> user-space. However we have seen from the Windriver hypervisor-less
> virtio, NXP's Zephyr/Jailhouse and the requirements for the SCMI server
> that there is a use-case for a small, liberally licensed C library that
> is suitable for embedding in lightweight backends without a full Linux
> stack behind it. These workloads would run in either simple command
> loops, RTOSes or Unikernels.
>
> Given the multiple interested parties I'm hoping we have enough people
> who can devote time to collaborate on the project to make the following
> realistic over the next cycle and culminate in the following demo in 6
> months:
>
> Components
> ==========
>
> portable virtio backend library
> -------------------------------
>
> * source based library (so you include directly in your project)
> * liberally licensed (Apache? to facilitate above)
> * tailored for non-POSIX, limited resource setups
> - e.g. avoid malloc/free, provide abstraction hooks where needed
> - not assume OS facilities (so embeddable in RTOS or Unikernel)
> - static virtio configuration supplied from outside library (location
> of queues etc)
> * hypervisor agnostic
> - provide a hook to signal when queues need processing
Following on from a discussion with Vincent and Akashi-san last week we
need to think more about how to make this hypervisor agnostic. There
will always be some code that has to live outside the library but if it
ends up being the same amount again what have we gained?
> I suspect this should be a from scratch implementation but it's
> certainly worth investigating the BSD implementation as Windriver have
> suggested.
>
>
> SCMI server
> -----------
>
> This is the work product of STR-4, embedded in an RTOS. I'm suggesting
> Zephyr makes the most sense here given the previous work done by Peng
> and Akashi-san but I guess an argument could be made for a unikernel. I
> would suggest demonstrating portability to a unikernel would be a
> stretch goal for the demo.
>
> The server would be *build* time selectable to deploy either in a
> Jailhouse partition or a Xen Dom U deployment. Either way there will
> need to be an out-of-band communication of location of virtual queues
> and memory maps - I assume via DTB.
From our discussion last week Zephyr's DTB support is very limited and
not designed to cope with dynamic setups. So rather than having two
build time selectable configurations we should opt for a single binary
with a fixed expectation of where the remote guest's memory and
virtqueues will exist in its memory space. It will then be up to the
VMM setting things up to ensure everything is mapped in the appropriate
place in the RTOS memory map. There would also be a fixed IRQ map for
signalling to the RTOS when things change, and a fixed doorbell for
signalling the other way.
>
> I'm unfamiliar with the RTOS build process but I guess this would be a
> single git repository with the RTOS and glue code and git sub-projects
> for the virtio and scmi libraries?
>
> Deployments
> ===========
>
> To demonstrate portability we would have:
>
> - Xen hypervisor
> - Dom0 with Linux/xen tools
> - DomU with Linux with a virtio-scmi front-end
> - DomU with RTOS/SCMI server with virtio-scmi back-end
>
> The Dom0 in this case is just for convenience of the demo as we don't
> currently have a fully working dom0-less setup. The key communication is
> between the two DomU guests.
However, the Dom0 will also need the glue code to set up the communication
and memory mapping between the two DomU guests. This could happily link
with the existing Xen library for setting up the guest table mappings.
>
> - Jailhouse
> - Linux kernel partition with virtio-scmi front-end
> - RTOS/SCMI server partition with a virtio-scmi back-end
The RTOS/SCMI server would be the same binary blob as for Xen. Again
some glue setup code would be required. I'm still unsure on how this
would work for Jailhouse so if we don't have any Jailhouse expertise
joining the project we could do this with KVM instead.
Linux/KVM host
- setup code in main host (kvmtool/QEMU launch)
- KVM guest with Linux with a virtio-scmi front-end
- KVM guest with RTOS/SCMI server with virtio-scmi back-end
> This is closer to Windrivers' hypervisor-less virtio deployment as
> Jailhouse is not a "proper" hypervisor in this case just a way of
> partitioning up the resources. There will need to be some way for the
> kernel and server partitions to signal each other when queues are
> updated.
>
> Platform
> ========
>
> We know we have working Xen on Synquacer and Jailhouse on the iMX.
> Should we target those as well as a QEMU -M virt for those who wish to
> play without hardware?
>
>
> Stretch Goals
> =============
>
> * Integrate Arnd's fat virtqueues
>
> Hopefully this will be ready soon enough in the cycle that we can add
> this to the library and prototype the minimal memory cross section.
This is dependent on having something at least sketched out early in the
cycle. It would allow us to simplify the shared memory mapping to just
plain virtqueues.
> * Port the server/library to another RTOS/unikernel
>
> This would demonstrate the core code hasn't grown any assumptions
> about what it is running in.
>
> * Run the server blob on another hypervisor
>
> Running in KVM is probably boring at this point. Maybe investigate
having it in Hafnium? Or in an R-profile safety island setup?
>
> So what do people think? Thoughts? Comments? Volunteers?
--
Alex Bennée
Hello,
This is an initial implementation of a generic vhost-user backend for
the I2C bus. This is based on the virtio specifications (already merged)
for the I2C bus.
The kernel virtio I2C driver is still under review; here is the latest
version (v10):
https://lore.kernel.org/lkml/226a8d5663b7bb6f5d06ede7701eedb18d1bafa1.16164…
The backend is implemented as a vhost-user device because we want to
experiment in making portable backends that can be used with multiple
hypervisors. We also want to support backends isolated in their own
separate service VMs with limited memory cross-sections with the
principal guest. This is part of a wider initiative by Linaro called
"project Stratos" for which you can find information here:
https://collaborate.linaro.org/display/STR/Stratos+Home
I mentioned this to explain the decision to write the daemon as a fairly
pure glib application that just depends on libvhost-user.
We are not sure where the vhost-user backend should get queued, qemu or
a separate repository. Similar questions were raised by an earlier
thread by Alex Bennée for his RPMB work:
https://lore.kernel.org/qemu-devel/20200925125147.26943-1-alex.bennee@linar…
I2C Testing:
------------
I didn't have access to real hardware where I could play with an I2C
client device (like an RTC, EEPROM, etc.) to verify the working of the
backend daemon, so I decided to test it on my x86 box itself with a
hierarchy of two ARM64 guests.
The first ARM64 guest was passed the "-device ds1338,address=0x20" option,
so it could emulate a ds1338 RTC device, which connects to an I2C bus.
Once the guest came up, a ds1338 device instance was created within the
guest kernel by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[
Note that this may end up binding the ds1338 device to its driver,
which won't let our i2c daemon talk to the device. For that we need to
manually unbind the device from the driver:
echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
]
After this is done, you will get /dev/rtc1. This is the device we wanted
to emulate, which will be accessed by the vhost-user-i2c backend daemon
via the /dev/i2c-0 file present in the guest VM.
At this point we need to start the backend daemon and give it a
socket path for QEMU to talk to (you can pass -v to it to get more
detailed messages):
vhost-user-i2c --socket-path=vi2c.sock --device-list 0:20
[ Here, 0:20 is the bus/device mapping: 0 for /dev/i2c-0, and 20 is the
client address of the ds1338 device that we used while creating it. ]
Now we need to start the second level ARM64 guest (from within the first
guest) to get the i2c-virtio.c Linux driver up. The second level guest
is passed the following options to connect to the same socket:
-chardev socket,path=vi2c.sock,id=vi2c \
-device vhost-user-i2c-pci,chardev=vi2c,id=i2c
Once the second level guest boots up, we will see the i2c-virtio bus at
/sys/bus/i2c/devices/i2c-X/. From there we can now make it emulate the
ds1338 device again by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[ This time we want ds1338's driver to be bound to the device, so it
should be enabled in the kernel as well. ]
And we will get /dev/rtc1 device again here in the second level guest.
Now we can play with the RTC device with the help of the hwclock utility,
and we can see the following sequence of transfers happening if we try to
update the RTC's time from the system time:
hwclock -w -f /dev/rtc1 (in guest2) ->
Reaches i2c-virtio.c (Linux bus driver in guest2) ->
transfer over virtio ->
Reaches QEMU's vhost-user-i2c device emulation (running in guest1) ->
Reaches the backend daemon vhost-user-i2c started earlier (in guest1) ->
ioctl(/dev/i2c-0, I2C_RDWR, ..); (in guest1) ->
reaches QEMU's hw/rtc/ds1338.c (running on the host)
SMBUS Testing:
--------------
I didn't need such a tedious setup for testing with SMBus devices. I was
able to emulate an SMBus device on my x86 machine using the i2c-stub
driver.
$ modprobe i2c-stub chip_addr=0x20
# Boot the arm64 guest now with the i2c-virtio driver and then do:
$ echo al3320a 0x20 > /sys/class/i2c-adapter/i2c-0/new_device
$ cat /sys/bus/iio/devices/iio:device0/in_illuminance_raw
That's it.
I hope I was able to give a clear picture of my test setup here :)
Thanks.
V1->V2:
- Add a separate patch (not to be merged) to keep stuff temporarily
taken from Linux kernel.
- Support SMBUS devices/busses in the backend daemon.
- Fix lots of checkpatch warnings/errors.
- Some other bug fixes, suggestions, etc.
Viresh Kumar (6):
!Merge: Update virtio headers from kernel
hw/virtio: add boilerplate for vhost-user-i2c device
hw/virtio: add vhost-user-i2c-pci boilerplate
tools/vhost-user-i2c: Add backend driver
docs: add a man page for vhost-user-i2c
MAINTAINERS: Add entry for virtio-i2c
MAINTAINERS | 9 +
docs/tools/index.rst | 1 +
docs/tools/vhost-user-i2c.rst | 75 ++
hw/virtio/Kconfig | 5 +
hw/virtio/meson.build | 2 +
hw/virtio/vhost-user-i2c-pci.c | 69 ++
hw/virtio/vhost-user-i2c.c | 285 +++++++
include/hw/virtio/vhost-user-i2c.h | 37 +
include/standard-headers/linux/virtio_i2c.h | 40 +
include/standard-headers/linux/virtio_ids.h | 1 +
tools/meson.build | 8 +
tools/vhost-user-i2c/50-qemu-i2c.json.in | 5 +
tools/vhost-user-i2c/main.c | 809 ++++++++++++++++++++
tools/vhost-user-i2c/meson.build | 10 +
14 files changed, 1356 insertions(+)
create mode 100644 docs/tools/vhost-user-i2c.rst
create mode 100644 hw/virtio/vhost-user-i2c-pci.c
create mode 100644 hw/virtio/vhost-user-i2c.c
create mode 100644 include/hw/virtio/vhost-user-i2c.h
create mode 100644 include/standard-headers/linux/virtio_i2c.h
create mode 100644 tools/vhost-user-i2c/50-qemu-i2c.json.in
create mode 100644 tools/vhost-user-i2c/main.c
create mode 100644 tools/vhost-user-i2c/meson.build
--
2.25.0.rc1.19.g042ed3e048af