On Fri, Oct 16, 2020 at 11:06:07AM +0100, Alex Bennée wrote:
Jean-Philippe Brucker <jean-philippe@linaro.org> writes:
On Fri, Oct 02, 2020 at 04:26:10PM +0200, Arnd Bergmann wrote:
At the moment the bounce buffer is allocated from a global pool in the low physical pages. However, a recent proposal by Chromium would add support for per-device swiotlb pools:
https://lore.kernel.org/linux-iommu/20200728050140.996974-1-tientzu@chromium...
And quoting Tomasz from the discussion on patch 4:
For this, I'd like to propose a "restricted-dma-region" (feel free to suggest a better name) binding, which is explicitly specified to be the only DMA-able memory for this device and make Linux use the given pool for coherent DMA allocations and bouncing non-coherent DMA.
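To illustrate the bouncing half of that proposal, here is a toy userspace sketch (not kernel code; the pool and helper names are made up for illustration): the device is assumed to reach only one fixed window, so buffers that live elsewhere have to be copied through it before and after each transfer.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE 4096

/* The only memory the (hypothetical) device is allowed to touch. */
static uint8_t restricted_pool[POOL_SIZE];

/* "Map" a driver buffer for the device: copy it into the pool and return
 * the offset the device should use as its bus address. */
static size_t bounce_map(const void *buf, size_t len)
{
        if (len > POOL_SIZE)
                return (size_t)-1;
        memcpy(restricted_pool, buf, len);
        return 0; /* offset 0 inside the pool */
}

/* Copy the (possibly device-modified) data back out after the transfer. */
static void bounce_unmap(void *buf, size_t off, size_t len)
{
        memcpy(buf, restricted_pool + off, len);
}

int main(void)
{
        char msg[] = "payload living outside the restricted region";
        size_t off = bounce_map(msg, sizeof(msg));

        if (off == (size_t)-1)
                return 1;

        /* ... the device would now DMA to/from restricted_pool + off ... */

        bounce_unmap(msg, off, sizeof(msg));
        printf("bounced %zu bytes through the pool\n", sizeof(msg));
        return 0;
}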
Right, I think this can work, but there are very substantial downsides to it:
- it is a fairly substantial departure from the virtio specification, under which transfers can be made to any part of the guest physical address space
Coming back to this point, that was originally true but prevented implementing hardware virtio devices or putting a vIOMMU in front of the device. The spec now defines feature bit VIRTIO_F_ACCESS_PLATFORM (previously called VIRTIO_F_IOMMU_PLATFORM):
A device SHOULD offer VIRTIO_F_ACCESS_PLATFORM if its access to memory is through bus addresses distinct from and translated by the platform to physical addresses used by the driver, and/or if it can only access certain memory addresses with said access specified and/or granted by the platform. A device MAY fail to operate further if VIRTIO_F_ACCESS_PLATFORM is not accepted.
With this the driver has to follow the DMA layout given by the platform, in our case given by the device tree.
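To make that concrete, here is a minimal sketch of how the driver side can tell which case it is in. virtio_has_feature() and VIRTIO_F_ACCESS_PLATFORM are the real kernel interfaces; the use_dma_api() helper is only illustrative (the in-tree equivalent of this check lives in the virtio ring code).

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/*
 * Illustrative helper (not an existing kernel symbol): decide whether
 * buffers must go through the DMA API, which is where an IOMMU or a
 * platform-provided bounce pool would impose its layout.
 */
static bool use_dma_api(struct virtio_device *vdev)
{
        /*
         * Without VIRTIO_F_ACCESS_PLATFORM the device is assumed to access
         * guest physical addresses directly, so the DMA API can be bypassed.
         */
        return virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM);
}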
Is there any overlap between this concept and the blob resources concept that the virtio-gpu drivers have?
No, I don't think so.
From my limited following of the discussion, they have:
(1) shared resources (basically udmabuf support).
(2) blob resources (what VIRTIO_GPU_F_MEMORY used to be, but simpler).
(3) hostmem (host-allocated resources).
Looking at the proposed Linux implementation, the last two seem to rely on the shmem feature from the upcoming virtio release:
"Shared memory regions are an additional facility available to devices that need a region of memory that’s continuously shared between the device and the driver, rather than passed between them in the way virtqueue elements are." https://lists.oasis-open.org/archives/virtio-dev/201907/msg00021.html
It was introduced for virtio-fs, to allow mapping a file for direct access. The virtio-sound device will probably use them as well, though the current spec forbids using shmem for streaming data.
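To give an idea of the driver side, a rough sketch assuming a recent kernel: struct virtio_shm_region and virtio_get_shm_region() are the actual interfaces in linux/virtio_config.h, while the probe helper and the error handling here are just illustrative.

#include <linux/errno.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

static int probe_shm_window(struct virtio_device *vdev)
{
        struct virtio_shm_region region;

        /*
         * Shared memory regions are identified by a small per-device-type
         * id; virtio-fs uses id 0 for its DAX cache window.
         */
        if (!virtio_get_shm_region(vdev, &region, 0))
                return -ENODEV;

        /*
         * region.addr/region.len describe the guest-physical window the
         * driver can now map (for virtio-fs, this is what backs DAX).
         */
        dev_info(&vdev->dev, "shm region: addr 0x%llx len 0x%llx\n",
                 (unsigned long long)region.addr,
                 (unsigned long long)region.len);
        return 0;
}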
The way it works for virtio-fs: a memory region (guest physical address, size) is exposed through a virtio-pci capability or via virtio-mmio registers. The virtio-fs driver then sends a "setupmapping" request to the device, asking it to map the file at a specific offset into that region. When that succeeds, the guest can access the file directly through the region.
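For reference, the setupmapping payload looks roughly like this; the field names follow FUSE_SETUPMAPPING in include/uapi/linux/fuse.h, but treat the comments as my reading of it rather than the authoritative definition.

#include <stdint.h>

/*
 * Approximate layout of the FUSE_SETUPMAPPING request payload used by
 * virtio-fs (see include/uapi/linux/fuse.h for the real definition).
 */
struct fuse_setupmapping_in {
        uint64_t fh;      /* handle of the already-open file */
        uint64_t foffset; /* offset into the file to start the mapping */
        uint64_t len;     /* length of the mapping */
        uint64_t flags;   /* e.g. FUSE_SETUPMAPPING_FLAG_WRITE */
        uint64_t moffset; /* offset into the shared memory window */
};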
Is this just a case of GPUs being special, or is there some overlap we could use here?
The mechanism for using shared regions is specific to each device type (it can't be applied to a virtio-net device, for example, without amending the virtio-net specification). It does seem close in concept to the pre-share solution, but I'm not sure how it could be generalized to virtqueues.
Thanks,
Jean