+Changyeon Jo changyeon@google.com and +Jyoti Bhayana jbhayana@google.com for AAOS camera
Thanks, - Enrico
On Tue, Oct 11, 2022 at 6:08 AM Tomasz Figa tfiga@google.com wrote:
Hi Alex,
On Tue, Oct 11, 2022 at 6:44 PM Alex Bennée alex.bennee@linaro.org wrote:
Tomasz Figa tfiga@google.com writes:
On Thu, Oct 6, 2022 at 6:23 PM Alex Bennée alex.bennee@linaro.org wrote:
Tomasz Figa tfiga@google.com writes:
Thanks Anmar. Is there still a possibility of changing the time of the meeting? I'm located in Tokyo, Japan and it would be midnight in my time zone.
I'm afraid we already had a ballot for the current slot a few weeks back. However, I will ask people at the meeting if they are OK with me recording the session so it can be caught up on asynchronously.
I see. A recording would be nice, thanks.
If there are any particular issues you want to raise or get discussed please do raise them on this thread.
I don't have any specific concerns, but it may be useful to note that, from the ChromeOS point of view, it's important that the new virtualization specification is suitable for implementing the Android Camera HAL3 API on top of it, with a capability level similar to modern mobile phones (e.g. Camera HAL3 LEVEL3 capabilities).
For others on the list:
https://source.android.com/docs/core/camera/camera3
I've had a skim through the API and have a few comments and questions.
Thanks for looking into this!
I noticed that when it comes to buffer management there has been a change in v3 where requests and the resulting buffers have been separated. Is this just for internal Android buffer management purposes or is there a wider reason? I think any virtio implementation will need to have requests tied to buffers, as the "hw device" on the other end won't be able to allocate its own buffers and will need somewhere to write the data. The alternative would be to pre-transfer buffers into the device, which would be unusual for a virtio device.
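To make that concrete, the sort of request layout I have in mind is sketched below. All structure and field names are made up purely for illustration, not taken from any existing spec:

/* Hypothetical virtio-camera capture request: the guest supplies the
 * destination buffers together with the request, and the device writes
 * the captured frame into them before completing the request. */
#include <stdint.h>

struct vcam_buffer {
        uint64_t addr;       /* guest address of the buffer/plane */
        uint32_t length;     /* size in bytes */
        uint32_t stream_id;  /* which configured stream it belongs to */
};

struct vcam_capture_req {
        uint32_t frame_number;        /* matches the request to its result */
        uint32_t num_buffers;
        struct vcam_buffer buffers[]; /* buffers tied to this request */
};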
Prior to v3.5 (IIRC) the Android framework would allocate the buffers and then pass them to the HAL as needed for each capture request, and the requirement was that the same buffers had to be returned in the corresponding capture result for that frame. In v3.5 a new mode was introduced that allows the HAL to request buffers from the framework at any given time and then use them in an arbitrary order to fulfil the capture requests.
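Very roughly, the difference between the two modes looks like the sketch below, with made-up types and function names that are heavily simplified compared to the real HAL3/HIDL interfaces:

#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t stream_id; void *data; } buffer_t;
typedef struct { uint32_t frame_number; buffer_t *out; size_t n_out; } request_t;

/* Stubs standing in for the framework and the HAL. */
static buffer_t pool[4];
static buffer_t *framework_alloc_buffers(size_t n) { (void)n; return pool; }
static buffer_t *framework_request_stream_buffers(uint32_t s, size_t n)
{ (void)s; (void)n; return pool; }
static void hal_process_capture_request(request_t *r) { (void)r; }

/* Pre-3.5: the framework attaches the buffers to every capture request
 * and the HAL must return exactly those buffers in the matching result. */
static void submit_frame_legacy(uint32_t frame)
{
        request_t req = { .frame_number = frame };
        req.out = framework_alloc_buffers(2);
        req.n_out = 2;
        hal_process_capture_request(&req);
}

/* 3.5+ buffer management mode: the HAL asks the framework for buffers
 * whenever it likes and may use them across requests in any order. */
static void hal_fill_frame_v35(request_t *req)
{
        req->out = framework_request_stream_buffers(0 /* stream */, 1);
        req->n_out = 1;
}

int main(void)
{
        request_t r = { .frame_number = 2 };
        submit_frame_legacy(1);
        hal_fill_frame_v35(&r);
        return 0;
}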
The mode controls look like auto-[focus|exposure|white-balance] can either be delegated to the HAL or explicitly controlled. Would the same autonomy be something passed down to the HW implementation? I guess image stabilisation is similar here?
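In virtio terms I could imagine a per-request control block along these lines, purely hypothetical field names, just to illustrate the delegated-vs-explicit split:

/* Hypothetical per-request controls: each 3A function can either be left
 * to the device ("auto") or driven explicitly by the guest. */
#include <stdint.h>

enum vcam_3a_mode {
        VCAM_3A_OFF  = 0,   /* guest supplies explicit values */
        VCAM_3A_AUTO = 1,   /* device/HW runs its own algorithm */
};

struct vcam_request_controls {
        uint8_t  af_mode;          /* enum vcam_3a_mode */
        uint8_t  ae_mode;          /* enum vcam_3a_mode */
        uint8_t  awb_mode;         /* enum vcam_3a_mode */
        uint8_t  stabilization;    /* 0 = off, 1 = device-side stabilisation */
        uint32_t exposure_time_ns; /* used when ae_mode == VCAM_3A_OFF */
        uint32_t focus_distance;   /* used when af_mode == VCAM_3A_OFF */
};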
The range of formats (RGB, YUV, HEIC, 10bit) can probably be managed in a similar way to the virtio-video configuration. Are they potentially expected to change on a frame-by-frame basis? Does handling mono-chrome fall into this, or is there more information that needs to be known about what that range of values covers (e.g. IR or maybe even feature detection)?
My understanding is that the initial configure_streams stage would define the set of streams for the capture session, which implies the formats. To change those, the capture session needs to be flushed and then another configure_streams invoked. One thing to note, though, is that a capture request might refer to only a subset of the configured streams.
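As a rough illustration of that model (simplified, with made-up types rather than the actual camera3 structures):

/* The stream set and formats are fixed for the whole capture session by
 * configure_streams; each capture request then names only the subset of
 * streams it wants filled for that frame. */
#include <stddef.h>
#include <stdint.h>

struct stream_config {
        uint32_t stream_id;
        uint32_t width, height;
        uint32_t pixel_format;   /* e.g. YUV420, RGB, 10-bit RAW, ... */
};

struct capture_session {
        struct stream_config *streams;  /* set once at configure_streams time */
        size_t num_streams;
};

struct capture_request {
        uint32_t frame_number;
        uint32_t *stream_ids;    /* subset of the configured streams */
        size_t num_stream_ids;
};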
The fact Android has added an extensions API to cover all the weird and wonderful things cameras can do (Bokeh, HDR, Night-vision) implies there will always have to be some degree of understanding by the eventual client of the camera about what capabilities it has. The question then becomes: do we represent this with a slowly growing bitfield of standard feature bits (because a fixed set will undoubtedly run out) or something more dynamic and extensible? For example, is one vendor's Bokeh effect another vendor's Tilt-shift effect? Should the feature be treated the same at the device level?
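For the sake of argument, the two approaches could look something like this (hypothetical names and layout):

/* Two ways a virtio camera could advertise "extra" capabilities. */
#include <stdint.h>

/* (a) A slowly growing set of standardised feature bits: simple, but it
 * will run out and can't express vendor-specific effects. */
#define VCAM_F_BOKEH      (1u << 0)
#define VCAM_F_HDR        (1u << 1)
#define VCAM_F_NIGHT_MODE (1u << 2)

/* (b) A dynamic capability list the guest enumerates at probe time. */
struct vcam_capability {
        uint32_t id;        /* spec- or vendor-assigned identifier */
        uint32_t version;
        char     name[32];  /* e.g. "bokeh", "tilt-shift", ... */
};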
The motion tracking capabilities look like they are there to cover after-the-fact image correction in place of image stabilisation on the camera HW itself?
Torch strength is an interesting one from the phone world (who knew 20 years ago that phones would replace real torches as our go-to help during power cuts). Thinking of other use-cases in the automotive world, I guess there is a broader general case of controlling the illumination of the camera's scene.
Torch might be a thing that could possibly be handled separately, via a different virtual device, although sometimes it needs to be synchronized closely with the camera (e.g. when used as the flashlight).
By the way, is your team in contact with Laurent Pinchart or Eddy Talvala by any chance?
Best regards, Tomasz
Thanks,
To help with that I would welcome people submitting ahead of time any:
- use-cases and user stories
- concrete requirements
and also any:
- previous work and RFCs
Please forward this email to anyone else who you think might find this discussion relevant.
The next Stratos meeting will be on the 14th October, 15:00 UTC / 16:00 BST @ https://meet.google.com/rct-xgqm-woi
The meeting notes will be:
https://linaro.atlassian.net/wiki/spaces/STR/pages/28771778789/2022-10-14+Pr...
And as ever the project jumping off page is at:
https://linaro.atlassian.net/wiki/spaces/STR/overview
Thanks,
-- Alex Bennée