- From: Rob Manson <roBman@buildAR.com>
- Date: Fri, 14 Nov 2014 10:06:12 +1100
- To: Justin Novosad <junov@google.com>
- CC: "public-canvas-api@w3.org" <public-canvas-api@w3.org>, "Hu, Ningxin" <ningxin.hu@intel.com>, "Kostiainen, Anssi" <anssi.kostiainen@intel.com>
Hi Justin,

thanks, that's great feedback 8)

We (the authors of the depth extension) are working on an alternative
proposal now, so we'll post back here when we have that ready (if it's
relevant to this list). Personally, I'll definitely follow up with a
proposal to the whatwg as you suggest too.

Thanks Justin, Rik and everyone else who provided feedback, we really
appreciate your time on this.

roBman

On 14/11/14 4:34 AM, Justin Novosad wrote:
> On Wed, Nov 12, 2014 at 7:57 PM, Rob Manson <roBman@buildar.com> wrote:
>
>> However, canvas is really the only way to currently access video
>> frame pixels so we have been focused on that.
>
> Yes, but it was not intended for video grabbing. It is just an indirect
> path that kind of works for what you are doing. Canvases, though they
> can be offscreen, are intended to contain displayable pixel data.
> Packing 16-bit depth values into multiple color channels of a canvas 2d
> buffer (as in your original proposal) is a hack that worked fine for
> your proof-of-concept implementation, but is not a good idea for a
> standard. A dedicated "depth" context is better, but I still don't
> think a canvas context should be designed to be a vehicle for data that
> is not intended to be displayed. Your getDepthTracks().getPixels()
> approach seems a lot more sensible, and it avoids making an unnecessary
> intermediate copy of the data in the canvas. Did you get any feedback
> on that from video people? I suggest you propose that idea on the
> whatwg mailing list where both video and canvas people will chime in on
> the same thread.
>
>> But for now we are left using rAF and then comparing currentTime and
>> readyState to try to minimise the amount of re-processing we're doing.
>
> That makes me sad. Unfortunately, the behavior you describe is
> technically spec compliant. I did some skimming around on the web and
> found that Microsoft's documentation is even kind enough to state:
> "Note: This event is fired approximately four times a second." This
> sucks. Do all major browsers have this behavior?
>
>> There's also the developing Image Capture API [1] that is designed to
>> include a grabFrame() method. But you'll see the example in that spec
>> uses setInterval, and the strong feedback from the other members of
>> the Working Group/Task Force when I raised this a year or two ago was
>> that setInterval/rAF are the way this should be handled.
>
> What? Noooooooo.
>
> I would even add that there should be a way to guarantee no dropped
> frames when grabbing, even if that means queuing a backlog. An API that
> cannot provide that service will not meet the requirements of any
> serious video application.
>
>> I'd love to hear support to the contrary from "anyone" else 8)
>>
>> Sorry if this seems like a bit of a rant, but it's a really key issue
>> for me that extends beyond the Depth Camera work and into all Computer
>> Vision on the Web Platform.
>
> I agree. I have not been following the discussion you speak of, but it
> sounds like some fundamental requirements are being overlooked.
>
>> roBman
>>
>> [1] http://w3c.github.io/mediacapture-image/index.html
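For readers following along: the "pack 16-bit depth values into multiple color channels" hack Justin refers to can be sketched as a pair of pure functions. This is an illustrative reconstruction, not code from the proposal; the function names `packDepth16`/`unpackDepth16` are hypothetical. The idea is simply that one 16-bit depth sample is split across two 8-bit channels (here R and G) of an RGBA pixel so it survives a round trip through a 2d canvas buffer.

```javascript
// Illustrative sketch of the depth-in-color-channels hack (hypothetical names).
// A 16-bit depth sample is split across two 8-bit channels of an RGBA pixel.

function packDepth16(depth) {
  // Low byte in "R", high byte in "G"; B unused, A fully opaque so the
  // pixel is not affected by alpha premultiplication surprises.
  return [depth & 0xff, (depth >> 8) & 0xff, 0, 255];
}

function unpackDepth16(rgba) {
  // Reassemble the 16-bit value from the two channels on read-back.
  return rgba[0] | (rgba[1] << 8);
}
```

In practice lossy steps in the canvas pipeline (alpha premultiplication, color management) can corrupt such packed values, which is one reason this approach is criticised above as unsuitable for a standard.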
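The rAF-plus-currentTime/readyState workaround Rob describes can also be sketched. This is a hedged illustration of the pattern, not his actual code; `hasNewFrame` is a hypothetical helper. Each requestAnimationFrame tick checks whether the video element actually has a new frame before re-running the (expensive) pixel processing.

```javascript
// Illustrative sketch of the rAF polling workaround (hypothetical helper).
// HTMLMediaElement.HAVE_CURRENT_DATA === 2 per the HTML spec.
const HAVE_CURRENT_DATA = 2;

// Returns true only if the element has decodable data for the current
// position AND the playback position moved since we last processed a frame.
function hasNewFrame(video, lastProcessedTime) {
  return video.readyState >= HAVE_CURRENT_DATA &&
         video.currentTime !== lastProcessedTime;
}

// In a browser this would be driven by requestAnimationFrame, e.g.:
//   let last = -1;
//   (function tick() {
//     if (hasNewFrame(video, last)) {
//       processFrame(video);        // draw to canvas, read pixels, etc.
//       last = video.currentTime;
//     }
//     requestAnimationFrame(tick);
//   })();
```

Note the limitation being complained about: currentTime granularity is not frame-accurate, so this can both miss frames and re-process duplicates, which is exactly why a real frame-grabbing API is being asked for.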
Received on Thursday, 13 November 2014 22:59:09 UTC