Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
V2->V3:
- Remove the extra message and instead update the memory regions to carry additional data.
- Drop the one-region-one-mmap relationship and allow the back-end to map only parts of a region at once, as required for Xen grant mappings.
- Additional cleanup patch 1/2.

V1->V2:
- Make the custom mmap feature Xen specific, instead of being generic.
- Clearly define which memory regions are impacted by this change.
- Allow VHOST_USER_SET_XEN_MMAP to be called multiple times.
- Additional Bit(2) property in flags.
Viresh Kumar (2):
  docs: vhost-user: Define memory region separately
  docs: vhost-user: Add Xen specific memory mapping support

 docs/interop/vhost-user.rst | 60 ++++++++++++++++++++++++-------------
 1 file changed, 39 insertions(+), 21 deletions(-)
The same layout is defined twice, once in "single memory region description" and again in "memory regions description".
Separate out the details of a memory region from these two and reuse the same definition in both places.
While at it, also rename "memory regions description" to "multiple memory regions description" to avoid potential confusion around the similar names, and define the single region before the multiple ones.
This is just a documentation cleanup; the protocol remains the same.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 docs/interop/vhost-user.rst | 39 +++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 8a5924ea75ab..1720d681264d 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -130,18 +130,8 @@ A vring address description
 Note that a ring address is an IOVA if ``VIRTIO_F_IOMMU_PLATFORM`` has
 been negotiated. Otherwise it is a user address.
 
-Memory regions description
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-+-------------+---------+---------+-----+---------+
-| num regions | padding | region0 | ... | region7 |
-+-------------+---------+---------+-----+---------+
-
-:num regions: a 32-bit number of regions
-
-:padding: 32-bit
-
-A region is:
+Memory region description
+^^^^^^^^^^^^^^^^^^^^^^^^^
 
 +---------------+------+--------------+-------------+
 | guest address | size | user address | mmap offset |
 +---------------+------+--------------+-------------+
@@ -158,19 +148,26 @@ Memory regions description
 Single memory region description
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-+---------+---------------+------+--------------+-------------+
-| padding | guest address | size | user address | mmap offset |
-+---------+---------------+------+--------------+-------------+
++---------+--------+
+| padding | region |
++---------+--------+
 
 :padding: 64-bit
 
-:guest address: a 64-bit guest address of the region
+A region is represented by Memory region description.
 
-:size: a 64-bit size
+Multiple Memory regions description
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-:user address: a 64-bit user address
++-------------+---------+---------+-----+---------+
+| num regions | padding | region0 | ... | region7 |
++-------------+---------+---------+-----+---------+
 
-:mmap offset: 64-bit offset where region starts in the mapped memory
+:num regions: a 32-bit number of regions
+
+:padding: 32-bit
+
+A region is represented by Memory region description.
 
 Log description
 ^^^^^^^^^^^^^^^
@@ -952,8 +949,8 @@ Front-end message types
 ``VHOST_USER_SET_MEM_TABLE``
   :id: 5
   :equivalent ioctl: ``VHOST_SET_MEM_TABLE``
-  :request payload: memory regions description
-  :reply payload: (postcopy only) memory regions description
+  :request payload: multiple memory regions description
+  :reply payload: (postcopy only) multiple memory regions description
 
 Sets the memory map regions on the back-end so it can translate the
 vring addresses. In the ancillary data there is an array of file
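To make the layouts above concrete, here is a rough C rendering of the three descriptions. This is a sketch only: QEMU's libvhost-user uses very similar structures, but the type and macro names below are illustrative, and the eight-region limit is taken from the region0 ... region7 table rather than from any particular implementation.

#include <stdint.h>

/* Memory region description: one region on the wire. */
typedef struct VhostUserMemoryRegion {
    uint64_t guest_phys_addr;   /* guest address */
    uint64_t memory_size;       /* size */
    uint64_t userspace_addr;    /* user address in the front-end */
    uint64_t mmap_offset;       /* offset where the region starts in the mapped memory */
} VhostUserMemoryRegion;

/* Single memory region description: 64-bit padding, then one region. */
typedef struct VhostUserSingleMemoryRegion {
    uint64_t padding;
    VhostUserMemoryRegion region;
} VhostUserSingleMemoryRegion;

/* Multiple memory regions description, the VHOST_USER_SET_MEM_TABLE payload. */
#define VHOST_USER_MAX_REGIONS 8

typedef struct VhostUserMemory {
    uint32_t nregions;
    uint32_t padding;
    VhostUserMemoryRegion regions[VHOST_USER_MAX_REGIONS];
} VhostUserMemory;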
On Thu, Mar 09, 2023 at 02:21:00PM +0530, Viresh Kumar wrote:
The same layout is defined twice, once in "single memory region description" and again in "memory regions description".
Separate out the details of a memory region from these two and reuse the same definition in both places.
While at it, also rename "memory regions description" to "multiple memory regions description" to avoid potential confusion around the similar names, and define the single region before the multiple ones.
This is just a documentation cleanup; the protocol remains the same.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 docs/interop/vhost-user.rst | 39 +++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Viresh Kumar <viresh.kumar@linaro.org> writes:
The same layout is defined twice, once in "single memory region description" and again in "memory regions description".
Separate out the details of a memory region from these two and reuse the same definition in both places.
While at it, also rename "memory regions description" to "multiple memory regions description" to avoid potential confusion around the similar names, and define the single region before the multiple ones.
This is just a documentation cleanup; the protocol remains the same.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
The current model of memory mapping at the back-end works fine in cases where a standard call to mmap() (for the respective file descriptor) is enough before the back-end can start accessing the guest memory.
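As a sketch of that baseline model (illustrative only, error handling omitted), mapping a region is a single call on the file descriptor received alongside the memory table, using the offset and size from the region description:

#include <stdint.h>
#include <sys/mman.h>

/* Baseline model: one mmap() of the region's fd is all the back-end needs
 * before it can access the guest memory behind this region. */
static void *map_region(int fd, uint64_t mmap_offset, uint64_t size)
{
    return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                fd, (off_t)mmap_offset);
}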
There are more complex cases, though, where the back-end needs extra information and a simple mmap() isn't enough. For example, Xen, a type-1 hypervisor, currently supports memory mapping via two different methods: foreign mapping (via /dev/privcmd) and grant mapping (via /dev/gntdev). In both of these cases, the back-end needs to call mmap() and ioctl(), with extra information like the Xen domain id of the guest whose memory we are trying to map.
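For illustration, here is roughly what those two paths look like through the standard Xen userspace libraries (libxenforeignmemory and libxengnttab), which wrap the mmap()/ioctl() sequences on /dev/privcmd and /dev/gntdev. A sketch only: error handling is minimal and the library handles are deliberately leaked to keep the mappings alive.

#include <sys/mman.h>
#include <xenforeignmemory.h>
#include <xengnttab.h>

/* Foreign mapping: map 'pages' guest frames of domain 'domid'. */
static void *map_foreign(uint32_t domid, const xen_pfn_t *gfns, size_t pages)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem)
        return NULL;
    /* Wraps the privcmd mmap/ioctl sequence on /dev/privcmd. */
    return xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                pages, gfns, NULL);
}

/* Grant mapping: map one grant reference offered by domain 'domid'. */
static void *map_grant(uint32_t domid, grant_ref_t ref)
{
    xengnttab_handle *xgt = xengnttab_open(NULL, 0);
    if (!xgt)
        return NULL;
    /* Wraps IOCTL_GNTDEV_MAP_GRANT_REF followed by mmap() on /dev/gntdev. */
    return xengnttab_map_grant_ref(xgt, domid, ref, PROT_READ | PROT_WRITE);
}

Either way, the domain id (and the choice of mapping method) is exactly the information a vhost-user back-end cannot get from the memory table today, which is what the new feature carries.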
Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_XEN_MMAP', which lets the back-end know about the additional memory mapping requirements. When this feature is negotiated, the front-end will send the additional information within the memory regions themselves.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 docs/interop/vhost-user.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 1720d681264d..5a070adbc1aa 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -145,6 +145,26 @@ Memory region description
 
 :mmap offset: 64-bit offset where region starts in the mapped memory
 
+When the ``VHOST_USER_PROTOCOL_F_XEN_MMAP`` protocol feature has been
+successfully negotiated, the memory region description contains two extra
+fields at the end.
+
++---------------+------+--------------+-------------+----------------+-------+
+| guest address | size | user address | mmap offset | xen mmap flags | domid |
++---------------+------+--------------+-------------+----------------+-------+
+
+:xen mmap flags: 32-bit bit field
+
+- Bit 0 is set for Xen foreign memory mapping.
+- Bit 1 is set for Xen grant memory mapping.
+- Bit 8 is set if the memory region can not be mapped in advance, and memory
+  areas within this region must be mapped / unmapped only when required by the
+  back-end. The back-end shouldn't try to map the entire region at once, as the
+  front-end may not allow it. The back-end should rather map only the required
+  amount of memory at once and unmap it after it is used.
+
+:domid: a 32-bit Xen hypervisor specific domain id.
+
 Single memory region description
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -864,6 +884,7 @@ Protocol features
   #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
   #define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15
   #define VHOST_USER_PROTOCOL_F_STATUS 16
+  #define VHOST_USER_PROTOCOL_F_XEN_MMAP 17
 
 Front-end message types
 -----------------------
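Put together, a back-end could represent the extended region and test the flag bits along these lines. A sketch: the structure and macro names are made up here, and only the field layout and bit positions come from the description above.

#include <stdint.h>

/* Flag bits from the "xen mmap flags" field. */
#define VHOST_USER_XEN_MMAP_FOREIGN       (1u << 0) /* Xen foreign memory mapping */
#define VHOST_USER_XEN_MMAP_GRANT         (1u << 1) /* Xen grant memory mapping */
#define VHOST_USER_XEN_MMAP_MAP_ON_DEMAND (1u << 8) /* no up-front mapping allowed */

/* Memory region description with the two extra fields appended. */
typedef struct VhostUserXenMemoryRegion {
    uint64_t guest_phys_addr;
    uint64_t memory_size;
    uint64_t userspace_addr;
    uint64_t mmap_offset;
    uint32_t xen_mmap_flags;
    uint32_t domid;            /* Xen domain id of the guest */
} VhostUserXenMemoryRegion;

/* Bit 8: the back-end must not map the whole region in advance; it should
 * map only the sub-ranges it actually needs and unmap them after use. */
static int must_map_on_demand(const VhostUserXenMemoryRegion *r)
{
    return !!(r->xen_mmap_flags & VHOST_USER_XEN_MMAP_MAP_ON_DEMAND);
}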
On Thu, Mar 09, 2023 at 02:21:01PM +0530, Viresh Kumar wrote:
The current model of memory mapping at the back-end works fine in cases where a standard call to mmap() (for the respective file descriptor) is enough before the back-end can start accessing the guest memory.
There are more complex cases, though, where the back-end needs extra information and a simple mmap() isn't enough. For example, Xen, a type-1 hypervisor, currently supports memory mapping via two different methods: foreign mapping (via /dev/privcmd) and grant mapping (via /dev/gntdev). In both of these cases, the back-end needs to call mmap() and ioctl(), with extra information like the Xen domain id of the guest whose memory we are trying to map.
Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_XEN_MMAP', which lets the back-end know about the additional memory mapping requirements. When this feature is negotiated, the front-end will send the additional information within the memory regions themselves.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 docs/interop/vhost-user.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Viresh Kumar <viresh.kumar@linaro.org> writes:
The current model of memory mapping at the back-end works fine in cases where a standard call to mmap() (for the respective file descriptor) is enough before the back-end can start accessing the guest memory.
There are more complex cases, though, where the back-end needs extra information and a simple mmap() isn't enough. For example, Xen, a type-1 hypervisor, currently supports memory mapping via two different methods: foreign mapping (via /dev/privcmd) and grant mapping (via /dev/gntdev). In both of these cases, the back-end needs to call mmap() and ioctl(), with extra information like the Xen domain id of the guest whose memory we are trying to map.
Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_XEN_MMAP', which lets the back-end know about the additional memory mapping requirements. When this feature is negotiated, the front-end will send the additional information within the memory regions themselves.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
On 09-03-23, 14:20, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
V2->V3:
- Remove the extra message and instead update the memory regions to carry additional data.
- Drop the one-region-one-mmap relationship and allow the back-end to map only parts of a region at once, as required for Xen grant mappings.
- Additional cleanup patch 1/2.
Stefan,
Does this version look better?
On Thu, Mar 09, 2023 at 02:20:59PM +0530, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
V2->V3:
- Remove the extra message and instead update the memory regions to carry additional data.
- Drop the one-region-one-mmap relationship and allow the back-end to map only parts of a region at once, as required for Xen grant mappings.
- Additional cleanup patch 1/2.
V1->V2:
- Make the custom mmap feature Xen specific, instead of being generic.
- Clearly define which memory regions are impacted by this change.
- Allow VHOST_USER_SET_XEN_MMAP to be called multiple times.
- Additional Bit(2) property in flags.
Looks good, thanks!
Michael is the maintainer and this patch series will go through his tree.
Stefan
On 09-03-23, 14:20, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
Can we apply this now? I have developed code for the rust-vmm crates based on this, and we need to get this merged/finalized before merging those changes.
Thanks.
Viresh Kumar <viresh.kumar@linaro.org> writes:
On 09-03-23, 14:20, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
Can we apply this now? I have developed code for the rust-vmm crates based on this, and we need to get this merged/finalized before merging those changes.
I've queued it into my virtio/vhost-user-device series, so it'll get merged with that series unless mst wants to take it now.
Thanks.
On Wed, Apr 05, 2023 at 11:00:34AM +0100, Alex Bennée wrote:
Viresh Kumar <viresh.kumar@linaro.org> writes:
On 09-03-23, 14:20, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
Can we apply this now? I have developed code for the rust-vmm crates based on this, and we need to get this merged/finalized before merging those changes.
I've queued it into my virtio/vhost-user-device series, so it'll get merged with that series unless mst wants to take it now.
Well, the patches are tagged and I was going to take these after the release. Probably easier not to work on this in two trees. Still, if there's something in your tree being blocked by these patches, then:

Acked-by: Michael S. Tsirkin <mst@redhat.com>

Let me know.
Thanks.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
"Michael S. Tsirkin" mst@redhat.com writes:
On Wed, Apr 05, 2023 at 11:00:34AM +0100, Alex Bennée wrote:
Viresh Kumar <viresh.kumar@linaro.org> writes:
On 09-03-23, 14:20, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
Can we apply this now? I have developed code for the rust-vmm crates based on this, and we need to get this merged/finalized before merging those changes.
I've queued it into my virtio/vhost-user-device series, so it'll get merged with that series unless mst wants to take it now.
Well, the patches are tagged and I was going to take these after the release. Probably easier not to work on this in two trees. Still, if there's something in your tree being blocked by these patches, then:

Acked-by: Michael S. Tsirkin <mst@redhat.com>

Let me know.
The virtio/vhost-user-device tree work is orthogonal to this vhost-user enhancement, although all the work is related to our latest VirtIO project inside Linaro, Orko: https://linaro.atlassian.net/wiki/spaces/ORKO/overview
So if you are happy, please take these patches now for when the tree re-opens.
Thanks.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
On Wed, Apr 05, 2023 at 11:24:43AM +0100, Alex Bennée wrote:
"Michael S. Tsirkin" mst@redhat.com writes:
On Wed, Apr 05, 2023 at 11:00:34AM +0100, Alex Bennée wrote:
Viresh Kumar <viresh.kumar@linaro.org> writes:
On 09-03-23, 14:20, Viresh Kumar wrote:
Hello,
This patchset updates the vhost-user protocol to support the special memory mapping required by the Xen hypervisor.
The first patch is mostly cleanup and the second one introduces a new Xen-specific feature.
Can we apply this now? I have developed code for the rust-vmm crates based on this, and we need to get this merged/finalized before merging those changes.
I've queued it into my virtio/vhost-user-device series, so it'll get merged with that series unless mst wants to take it now.
Well, the patches are tagged and I was going to take these after the release. Probably easier not to work on this in two trees. Still, if there's something in your tree being blocked by these patches, then:

Acked-by: Michael S. Tsirkin <mst@redhat.com>

Let me know.
The virtio/vhost-user-device tree work is orthogonal to this vhost-user enhancement, although all the work is related to our latest VirtIO project inside Linaro, Orko: https://linaro.atlassian.net/wiki/spaces/ORKO/overview
So if you are happy, please take these patches now for when the tree re-opens.
Yes, I tagged them for when the tree reopens.
Thanks.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro