Hi All,
There have been discussions about virtio-camera before, and more recently I've heard the term virtio-sensor used. I think using "sensor" alludes to the fact that there is a whole class of devices that provide some sort of 2D plane view of the world (cameras, fingerprint readers, LIDAR?) that would benefit from being consumed by a workload in a standard, non-bespoke way.
Why not virtio-video?
=====================
There is already a specification and various implementations of virtio-video in various states of upstreaming. It is tempting to think of a camera as a simplified subset of processing video streams. However, while virtio-video allows the consumption and display of various video formats, it offers no direct control of the source itself.
Complex control plane
=====================
Modern cameras are more than a simple CCD recording photons. Aside from controlling things like f-stop/exposure/position there are also more complex computational photography aspects. The camera SoC might be capable of doing edge or object detection, or even facial and feature recognition. Cameras are no longer simple webcams and have long since gone past the relatively simple API that V4L2 presents (cf. libcamera).
Cloud native
============
One of the drivers for these virtio devices is the concept of cloud-native development. That is, developing your workload in the cloud and feeding it data through standardised VirtIO interfaces. Once you are happy with its behaviour you can take that workload and run the same binaries on your edge device, but this time with the data provided by a real sensor exposed via the same VirtIO interface.
Competing Requirements?
=======================
I've heard about use cases across a wide range of deployment scenarios including:
Virtualised Mobile Devices
Here the backend containing the vendor's secret sauce exists in its own isolated VM with access to the real camera HW and exports virtio-camera to a standardised main OS.
Desktop Virtualisation
Here the aim is to expose host camera devices (such as webcams) to a guest system, which would be an otherwise sandboxed VM that needs access to the system camera for a particular task.
Automotive
Cars are rapidly gaining cameras, both as driver aids and for more advanced use cases such as self-driving. The cloud-native case is particularly strong here, as a lot of validation and iteration will take place in the relatively limitless resources of the cloud before being run in the carefully controlled and isolated safety-critical environs of the car itself.
Do these use-cases have competing demands? Can a solution be found that satisfies all of them?
So for the next Stratos sync-up call I'd like to discuss virtio-camera and whether there is enough interest to specify a work package to define and upstream the device. I'm casting a wide net for people who are interested in the topic so we can talk through the issues and see if we can arrive at a consensus for a minimum viable product.
To help with that I would welcome people submitting ahead of time any:
- use-cases and user stories
- concrete requirements
and also any:
- previous work and RFCs
Please forward this email to anyone else who you think might find this discussion relevant.
The next Stratos meeting will be on the 14th October, 15:00 UTC / 16:00 BST @ https://meet.google.com/rct-xgqm-woi
The meeting notes will be:
https://linaro.atlassian.net/wiki/spaces/STR/pages/28771778789/2022-10-14+Pr...
And as ever the project jumping off page is at:
https://linaro.atlassian.net/wiki/spaces/STR/overview
Thanks,
+Tomasz Figa tfiga@google.com as requested.
+Shahbazian, Andy andyshah@amazon.com as requested.
+Umair Jameel Hashim umair.hashim@arm.com
Thanks Anmar. Is there still a possibility of changing the time of the meeting? I'm located in Tokyo, Japan and it would be midnight in my time zone.
Tomasz Figa tfiga@google.com writes:
Thanks Anmar. Is there still a possibility of changing the time of the meeting? I'm located in Tokyo, Japan and it would be midnight in my time zone.
I'm afraid we already had a ballot for the current slot a few weeks back. However I will ask people at the meeting if they are OK with me recording the session for catching up asynchronously.
If there are any particular issues you want to raise or get discussed please do raise them on this thread.
Thanks,
-- Alex Bennée
On Thu, Oct 6, 2022 at 6:23 PM Alex Bennée alex.bennee@linaro.org wrote:
I'm afraid we already had a ballot for the current slot a few weeks back. However I will ask people at the meeting if they are OK with me recording the session for catching up asynchronously.
I see. A recording would be nice, thanks.
If there are any particular issues you want to raise or get discussed please do raise them on this thread.
I don't have any specific concerns, but it may be useful to note that from the ChromeOS point of view it's important that the new virtualization specification be suitable for implementing the Android Camera HAL3 API on top of it, with a capability level similar to modern mobile phones (e.g. Camera HAL3 LEVEL3 capabilities).
Best regards, Tomasz
Tomasz Figa tfiga@google.com writes:
I don't have any specific concerns, but it may be useful to note that from the ChromeOS point of view it's important that the new virtualization specification be suitable for implementing the Android Camera HAL3 API on top of it, with a capability level similar to modern mobile phones (e.g. Camera HAL3 LEVEL3 capabilities).
For others on the list:
https://source.android.com/docs/core/camera/camera3
I've had a skim through the API and I noticed that when it comes to buffer management there has been a change in v3 where requests and the resultant buffers have been separated. Is this just for internal Android buffer-management purposes or for a wider reason? I think any virtio implementation will need to have requests tied to buffers, as the "hw device" on the other end won't be able to allocate its own buffers and will need somewhere to write the data. The alternative would be to pre-transfer buffers into the device, which would be unusual for a virtio device.
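To make that concrete, here is a purely hypothetical sketch of what a request-tied buffer model might look like on a virtqueue (spec-style le32/le64 pseudo-C; every structure and field name below is invented for illustration, not taken from any existing spec or draft):

  /* A driver-allocated buffer handed to the device as part of a request. */
  struct virtio_camera_buffer {
      le64 addr;      /* guest address of the buffer to fill */
      le32 length;    /* size of the buffer in bytes */
      le32 stream_id; /* which configured stream this buffer serves */
  };

  /* One capture request on the request virtqueue. The device writes the
   * captured data into the buffers supplied here and then completes the
   * request, so it never needs to allocate guest memory itself. */
  struct virtio_camera_capture_req {
      le32 frame_number;
      le32 num_buffers;
      struct virtio_camera_buffer buffers[];
  };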
The mode controls look like things such as auto-[focus|exposure|white-balance] can either be delegated to the HAL or controlled explicitly. Would the same autonomy be something passed down to the HW implementation? I guess image stabilisation is similar here?
The range of formats (RGB, YUV, HEIC, 10-bit) can probably be managed in a similar way to the virtio-video configuration. Are they potentially expected to change on a frame-by-frame basis? Does handling monochrome fall into this, or is there more information needed about what that range of values covers (e.g. IR or maybe even feature detection)?
The fact Android has added an extensions API to cover all the weird and wonderful things cameras can do (Bokeh, HDR, night vision) implies there will always have to be some degree of understanding by the eventual client of the camera about what its capabilities are. The question then becomes: do we represent this with a slowly growing bitfield of standard feature bits (because a fixed set will undoubtedly run out) or something more dynamic and extensible? For example, is one vendor's Bokeh effect another vendor's tilt-shift effect? Should the feature be treated the same at the device level?
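As a strawman for that trade-off (all names below are invented purely for illustration), the two approaches might look something like:

  /* Option A: fixed feature bits negotiated at device initialisation,
   * one bit per effect - simple, but the space is finite and every new
   * vendor effect needs a new standardised bit. */
  #define VIRTIO_CAMERA_F_BOKEH       (1 << 0)
  #define VIRTIO_CAMERA_F_HDR         (1 << 1)
  #define VIRTIO_CAMERA_F_NIGHT_MODE  (1 << 2)

  /* Option B: an extensible capability list the driver enumerates at
   * runtime, with a standard id space plus a vendor range, so exotic
   * effects can be described without burning a feature bit each. */
  struct virtio_camera_capability {
      le32 id;        /* standard capability id or vendor-specific range */
      le32 flags;
      u8   name[32];  /* e.g. "bokeh" or "tilt-shift" */
  };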
The motion tracking capabilities look like they are there to cover after-the-fact image correction in place of image stabilisation on the camera HW itself?
Torch strength is an interesting one from the phone world (who knew 20 years ago that phones would replace real torches as our go-to help during power cuts). Thinking of other use-cases in the automotive world, I guess there is a broader general case of controlling the illumination of the camera's scene.
-- Alex Bennée
Hi Alex,
On Tue, Oct 11, 2022 at 6:44 PM Alex Bennée alex.bennee@linaro.org wrote:
I've had a skim through the API and
Thanks for looking into this!
I noticed that when it comes to buffer management there has been a change in v3 where requests and the resultant buffers have been separated. Is this just for internal Android buffer-management purposes or for a wider reason? I think any virtio implementation will need to have requests tied to buffers, as the "hw device" on the other end won't be able to allocate its own buffers and will need somewhere to write the data. The alternative would be to pre-transfer buffers into the device, which would be unusual for a virtio device.
Prior to v3.5 (IIRC) the Android frameworks would allocate the buffers and then pass them to the HAL as needed for each capture request, and the requirement was that the same buffers had to be returned in the corresponding capture result for that frame. In v3.5 a new mode was introduced that allows the HAL to request buffers from the frameworks at any given time and then use them in an arbitrary order to fulfil the capture requests.
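For anyone on the list not familiar with the HAL, a rough and very simplified model of the two flows might look like this (illustrative C types only, paraphrasing the HAL3 documentation rather than quoting the real camera3.h declarations):

  #include <stdint.h>
  #include <stddef.h>

  struct buffer { uint32_t stream_id; void *data; size_t len; };

  /* Pre-3.5 flow: the framework attaches its own buffers to every capture
   * request and expects exactly those buffers back in the matching result. */
  struct capture_request {
      uint32_t frame_number;
      size_t num_output_buffers;
      struct buffer *output_buffers;  /* framework-allocated */
  };

  /* 3.5+ optional flow: the HAL pulls buffers on demand (via a
   * request_stream_buffers-style callback) and may return them in results
   * in an arbitrary order across requests. */
  struct capture_result {
      uint32_t frame_number;
      size_t num_output_buffers;
      struct buffer *output_buffers;  /* pre-3.5: same as the request;
                                         3.5+: any previously pulled buffers */
  };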
The range of formats (RGB, YUV, HEIC, 10-bit) can probably be managed in a similar way to the virtio-video configuration. Are they potentially expected to change on a frame-by-frame basis? Does handling monochrome fall into this, or is there more information needed about what that range of values covers (e.g. IR or maybe even feature detection)?
My understanding is that the initial configure_streams stage would define the set of streams for the capture session, which implies the formats. To change those, the capture session needs to be flushed and then another configure_streams invoked. One thing to note, though, is that a capture request might refer to only a subset of the configured streams.
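So presumably a virtio equivalent would follow the same shape: a one-off stream configuration that fixes the formats, and per-frame requests that name only the streams they want filled. A minimal illustrative sketch (names invented, not taken from camera3.h or any spec):

  /* Set once per capture session; changing it means flushing the session
   * and configuring again. */
  struct stream_config {
      uint32_t stream_id;
      uint32_t width, height;
      uint32_t pixel_format;
  };

  /* Issued per frame; may reference only a subset of the configured
   * streams. */
  struct frame_request {
      uint32_t frame_number;
      uint32_t num_streams;
      uint32_t stream_ids[8];  /* subset of configured stream_ids */
  };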
Torch strength is an interesting one from the phone world (who knew 20 years ago that phones would replace real torches as our go-to help during power cuts). Thinking of other use-cases in the automotive world, I guess there is a broader general case of controlling the illumination of the camera's scene.
Torch might be a thing that could possibly be handled separately, via a different virtual device, although sometimes it needs to be synchronized closely with the camera (e.g. when used as the flashlight).
By the way, is your team in contact with Laurent Pinchart or Eddy Talvala by any chance?
Best regards, Tomasz
+Changyeon Jo changyeon@google.com and +Jyoti Bhayana jbhayana@google.com for AAOS camera
Thanks,
- Enrico