- From: Johannes Odland <johannes.odland@gmail.com>
- Date: Tue, 12 Feb 2013 07:33:38 +0100
- To: Travis Leithead <travis.leithead@microsoft.com>
- Cc: "public-media-capture@w3.org" <public-media-capture@w3.org>
- Message-ID: <3140459160996900730@unknownmsgid>
Is there a specific motivation or use scenario for such a solution, or is this just hypothetical?

My main concern is to write clean and reusable code. I use JavaScript MVC frameworks to separate my view/DOM code from functional code, and I try as much as possible not to reach into the DOM. As a result you get more modular code that lends itself better to unit testing.

I've been playing around with WebRTC and getUserMedia, and I find the tight coupling with the DOM problematic. Whenever I want to capture a frame from the LocalMediaStream I'm forced to create a video and a canvas element. It would be far preferable to capture directly from the LocalMediaStream into a 2D context, and leave the DOM for actual UX.

Johannes Odland

On 11 Feb 2013, at 19:20, Travis Leithead <travis.leithead@microsoft.com> wrote:

>> Has the possibility and benefits of creating a media sink outside the DOM been considered?

Interesting question. I don't think this group has thought much about this scenario. However, as long as permissions are applied consistently, there's no reason why a PeerConnection sink couldn't be created in a Worker, nor why the user media couldn't be obtained from a Worker. Making a MediaStream transferable is also within the realm of possibility. I wonder if Media Source Extensions could be used to pass the stream into the DOM for display in a video tag.

Is there a specific motivation or use scenario for such a solution, or is this just hypothetical?

*From:* Johannes Odland [mailto:johannes.odland@gmail.com]
*Sent:* Friday, February 8, 2013 10:48 AM
*To:* public-media-capture@w3.org
*Subject:* Capture to CanvasRenderingContext2D (without DOM)

I just read the MediaStream Capture Scenarios draft from 04 January 2013 (https://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html). In chapter 8.
Design Considerations and Remarks, under 8.6 Post-processing, point 2 notes that "*Canvas' drawing API allows for drawing frames from a **video** element, which is the link between the media capture sink and the effects made possible via Canvas.*"

While using the HTMLMediaElement as a stream sink and the HTMLCanvasElement as an image-processing API works, you end up creating two DOM elements for tasks that will not necessarily affect the UI. Often only the result of the post-processing should be rendered into the UI, and there are use cases where the result is not rendered into the view at all but stored locally.

As a developer I would benefit greatly from being able to create a drawing context outside the DOM, and to draw into that context from the stream without creating a video element as a sink. This makes for more loosely coupled code, where the code for views (DOM handling) and functionality (image manipulation) can be kept separate. It would also make it possible to implement the image capture and manipulation APIs where a DOM is not available (such as on Node.js).

Ian Hickson specced a proposal for an image-processing API that supports asynchronous image processing using Web Workers. The DOM is not available in workers, so Hickson's proposal includes making it possible to instantiate CanvasRenderingContext2D without the canvas element: http://lists.w3.org/Archives/Public/public-whatwg-archive/2012Nov/0199.html

His proposal eliminates the need for the canvas element, as the entire image-manipulation API lives on the rendering context, which can now be instantiated directly, but we're still left with creating a video element as a sink.

Has the possibility and benefits of creating a media sink outside the DOM been considered?

-Johannes Odland
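To make the coupling discussed in this thread concrete, here is a minimal sketch of the video-plus-canvas pattern as it stood at the time. The function name and structure are illustrative only, not from the thread or any spec:

```javascript
// Sketch: grabbing one frame from a getUserMedia stream requires two DOM
// elements -- a <video> purely as the MediaStream sink, and a <canvas>
// purely to reach the 2D drawing API. Neither ever needs to be attached
// to the document.
function captureFrame(stream) {
  // A video element is needed solely as the stream sink.
  var video = document.createElement('video');
  video.src = URL.createObjectURL(stream); // later engines: video.srcObject = stream
  video.play();

  // In real code you would wait for the video's 'loadedmetadata' event
  // before reading its dimensions or drawing; omitted here for brevity.

  // A canvas element is needed solely to obtain a CanvasRenderingContext2D.
  var canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  var ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0); // copy the current video frame into the context

  // Only now can the pixels be processed or stored.
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}
```

Under Hickson's proposal, the canvas step would collapse to something like a directly constructed CanvasRenderingContext2D, but the video element would still be required as the sink, which is exactly the remaining gap the post asks about.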
Received on Tuesday, 12 February 2013 06:34:07 UTC