Re: Advice on extending CanvasRenderingContext2D to support Depth Streams

It's also not clear to me how these APIs will be used.
Can you write some very high-level pseudo-code that shows how a typical
author would get/set the data and present it to the user?
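
For example, is the intent something along the lines of the sketch below?
(This is only a guess at the shape of the API; the depth constraint to
getUserMedia and the depthData attribute on the 2D context are names I'm
inferring from the draft and this thread, so please correct me where I've
guessed wrong.)

  // Hypothetical sketch -- { depth: true } and ctx.depthData are my reading
  // of the draft, not necessarily what it actually defines.
  navigator.getUserMedia({ depth: true }, function (depthStream) {
    var video = document.createElement('video');
    video.src = URL.createObjectURL(depthStream);
    video.play();

    var canvas = document.querySelector('canvas');
    var ctx = canvas.getContext('2d');

    (function draw() {
      // Present the depth frame to the user like any other video frame...
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      // ...and then read the per-pixel depth values back somehow?
      var depth = ctx.depthData; // <-- the part that is unclear to me
      requestAnimationFrame(draw);
    })();
  }, function (error) {
    console.error(error);
  });

Even a rough outline along those lines would make it much easier to see how
depthData is meant to interact with drawImage, getImageData and the rest of
the 2D API.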

On Mon, Nov 3, 2014 at 7:42 PM, Rik Cabanier <cabanier@gmail.com> wrote:

>
>
> On Fri, Oct 31, 2014 at 5:53 AM, Kostiainen, Anssi <
> anssi.kostiainen@intel.com> wrote:
>
>> > On 30 Oct 2014, at 23:55, Rik Cabanier <cabanier@gmail.com> wrote:
>> >
>> > Did you get any feedback from implementors so far?
>>
>> We have an experimental Chromium implementation. Ningxin can share more
>> information on that. Also, folks from Google’s Project Tango have been
>> informed and have been providing feedback.
>>
>> That said, this is a FPWD, and thus the expectation is that we’ll get more
>> feedback and can improve the shape of the API. This is what we state in the
>> SoTD section to set expectations:
>>
>> [[
>>
>> This document is not complete and is subject to change. Early
>> experimentations are encouraged to allow the Media Capture Task Force to
>> evolve the specification based on technical discussions within the Task
>> Force, implementation experience gained from early implementations, and
>> feedback from other groups and individuals.
>>
>> ]]
>>
>> It appears public-canvas-api is a good place to gather feedback on
>> canvas-related extensions, so we’re happy to get your comments and improve
>> the spec accordingly.
>>
>> > It seems a bit odd to add something like depthData to the canvas 2D
>> > object, which is basically just a holder of pixels.
>>
>> Do you have a concrete proposal in mind for an improved model? I’d like
>> to capture your feedback and open an issue to track this and discuss it
>> within the TF, but I’d need a bit more detail to make the feedback
>> actionable.
>
>
> Can you explain why you would like to see this integrated with canvas?
> It seems that the extension to Canvas 2D could be implemented as a new
> standalone object that works in conjunction with a media element (one that
> is possibly drawn to a canvas).
>
>> > It's also unclear from reading the spec what happens when you draw over
>> > the regions or if this affects canvas tainting.
>>
>> By drawing over regions, you mean interactions with the drawing model
>> [1]? The spec is currently silent on this aspect, i.e. it is unspecified.
>>
>> With respect to canvas tainting, the expectation is that the behaviour is
>> identical to that of a regular video drawn on a canvas, as specified by the
>> resource fetch algorithm for a media element (in the HTML spec) and the
>> MediaStream-specific modifications/restrictions (spec’d in getUserMedia
>> [2]).
>>
>
> The model states that this should taint the canvas [1].
> Is it possible today to draw a local video stream from your camera to a 2D
> context and read the pixels?
>
>
>> We’ll open new issues for these after getting your clarifications.
>>
>
> 1:
> http://www.w3.org/TR/html5/embedded-content-0.html#concept-media-load-resource
>
>
>

Received on Tuesday, 4 November 2014 15:43:00 UTC