Re: Advice on extending CanvasRenderingContext2D to support Depth Streams

On Wed, Nov 12, 2014 at 7:57 PM, Rob Manson <roBman@buildar.com> wrote:

> However, canvas is really the only way to currently access video frame
> pixels so we have been focused on that.


Yes, but it was not intended for video grabbing. It is just an indirect
path that happens to work for what you are doing. Canvases, though they
can be offscreen, are intended to contain displayable pixel data. Packing
16-bit depth values into multiple color channels of a canvas 2D buffer (as
in your original proposal) is a hack that worked fine for your
proof-of-concept implementation, but it is not a good idea for a standard.
A dedicated "depth" context is better, but I still don't think a canvas
context should be designed to be a vehicle for data that is not intended
to be displayed. Your getDepthTracks().getPixels() approach seems a lot
more sensible, and it avoids making an unnecessary intermediate copy of
the data in the canvas. Did you get any feedback on that from video
people? I suggest you propose that idea on the whatwg mailing list, where
both video and canvas people will chime in on the same thread.
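For the benefit of others on the list, here is roughly what the two
options look like in practice. This is only a sketch: the channel packing
in the first function is my guess at your original proposal, and the
second uses the names floated in this thread, which are not a shipping
API.

  // Today: round-trip the depth stream through a 2D canvas (the hack).
  // Assumes the low byte of each 16-bit depth value is packed into R and
  // the high byte into G -- the exact packing in the proposal may differ.
  function depthViaCanvas(video: HTMLVideoElement): Uint16Array {
    const canvas = document.createElement("canvas");
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const ctx = canvas.getContext("2d")!;
    ctx.drawImage(video, 0, 0);                        // intermediate copy #1
    const rgba = ctx.getImageData(0, 0, canvas.width, canvas.height).data; // copy #2
    const depth = new Uint16Array(canvas.width * canvas.height);
    for (let i = 0; i < depth.length; i++) {
      depth[i] = rgba[i * 4] | (rgba[i * 4 + 1] << 8); // reassemble 16-bit value
    }
    return depth;
  }

  // Proposed: read depth samples straight off the track, no canvas, no copies.
  // getDepthTracks()/getPixels() are the names from this thread, not shipping
  // API, so the shapes below (and the Uint16Array result) are my assumptions.
  function depthViaTrack(stream: MediaStream): Uint16Array {
    const [depthTrack] = (stream as any).getDepthTracks();
    return depthTrack.getPixels();
  }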


>
> But for now we are left using rAF and then comparing currentTime and
> readyState to try to minimise the amount of re-processing we're doing.
>
>
That makes me sad. Unfortunately, the behavior you describe is technically
spec-compliant. I did some skimming around on the web and found that
Microsoft's documentation is even kind enough to state: "Note  This event
is fired approximately four times a second."  This sucks. Do all major
browsers have this behavior?
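For context, the workaround being described looks roughly like this (a
sketch, assuming a <video> element fed by the depth stream). It re-checks
state every animation frame, and even then it cannot guarantee that every
frame is seen exactly once:

  // Sketch of the rAF polling workaround: process a frame only when
  // currentTime has advanced and enough data is available. Frames shorter
  // than one rAF interval can still be missed, and nothing guarantees each
  // frame is delivered exactly once.
  function pollFrames(video: HTMLVideoElement,
                      onFrame: (video: HTMLVideoElement) => void): void {
    let lastTime = -1;
    function tick() {
      if (video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA &&
          video.currentTime !== lastTime) {
        lastTime = video.currentTime;
        onFrame(video);          // e.g. drawImage + getImageData as above
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
  }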


> There's also the developing Image Capture API[1] that is designed to
> include a grabFrame() method. But you'll see the example in that spec uses
> setInterval and the strong feedback from the other members of the Working
> Group/Task Force when I raised this a year or two ago was that
> setInterval/rAF are the way this should be handled.
>

What?  Noooooooo.

I would even add that there should be a way to guarantee no dropped frames
when grabbing, even if that means queuing a backlog. An API that cannot
provide that service will not meet the requirements of any serious video
application.
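To make the frame-drop concern concrete: a setInterval-driven grabFrame()
loop only samples whatever frame is current when the timer fires, so
frames in between are silently lost. A push-style delivery with an
application-side queue would not have that problem. Both sketches below
are illustrative only; I'm assuming grabFrame() resolves to an
ImageBitmap, and the onframe callback in the second function is
hypothetical, not a real API.

  // Pull model (what the spec example suggests): sample whatever frame is
  // current every 100 ms. Frames displayed between ticks are simply lost.
  function sampleFrames(capture: { grabFrame(): Promise<ImageBitmap> }): void {
    setInterval(async () => {
      const bitmap = await capture.grabFrame();
      process(bitmap);
    }, 100);
  }

  // Push model (what I am arguing for): the UA hands the application every
  // frame and the application drains a queue, so nothing is dropped even
  // when processing falls behind. onframe here is hypothetical.
  function consumeEveryFrame(
      source: { onframe: ((f: ImageBitmap) => void) | null }): void {
    const backlog: ImageBitmap[] = [];
    source.onframe = (frame) => backlog.push(frame);  // queue, never drop
    (function drain() {
      while (backlog.length > 0) process(backlog.shift()!);
      requestAnimationFrame(drain);
    })();
  }

  declare function process(frame: ImageBitmap): void; // stand-in for app work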


> I'd love to hear support to the contrary from "anyone" else 8)
>
> Sorry if this seems like a bit of a rant but it's a really key issue for
> me that extends beyond the Depth Camera work and into all Computer Vision
> on the Web Platform.
>

I agree. I have not been following the discussion you speak of, but it
sounds like some fundamental requirements are being overlooked.


> roBman
>
> [1] http://w3c.github.io/mediacapture-image/index.html
>
>
>
