- From: Adam Bergkvist <adam.bergkvist@ericsson.com>
- Date: Tue, 5 Nov 2013 09:09:38 +0100
- To: Rob Manson <robman@mob-labs.com>, <public-media-capture@w3.org>
On 2013-11-05 08:26, Rob Manson wrote:
> Hi Adam,
>
> it could be that I've totally missed something (and jumped in half way
> through a conversation), but this thread made me think about "how would I
> use the output from a Video/Canvas pipeline to generate a stream/track",
> e.g. related to the post-processing use cases.
>
> http://lists.w3.org/Archives/Public/public-media-capture/2013Sep/0062.html
>
> I'd definitely be happy if you have a pointer to how the output from
> canvas_2d_context.getImageData() could be used to create a stream that
> could be added to an RTCPeerConnection (e.g. to send to a peer)? As far
> as I'm aware this is only possible with gUM... but happy to be shown the
> error of my ways 8)

Thanks for the clarification. You're right that the API currently doesn't
have a way to create a new MediaStreamTrack from the output of a canvas.
What we would need to make that possible is an API that, given an input,
creates a MediaStreamTrack with kind="video". What's being talked about
here doesn't prevent us from doing that.

/Adam
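For concreteness, a minimal sketch of the canvas-to-peer path being discussed, assuming an API of the shape Adam describes (the canvas.captureStream() call below is purely illustrative of such an API and was not something the spec defined at the time of this message):

    // Illustrative sketch only: captureStream() stands in for a
    // "given an input, create a MediaStreamTrack with kind='video'" API.
    const canvas = document.querySelector('canvas');
    const ctx = canvas.getContext('2d');

    // Draw (or post-process) frames into the canvas.
    function draw() {
      ctx.fillStyle = 'green';
      ctx.fillRect(0, 0, canvas.width, canvas.height);
      // ...per-pixel work via ctx.getImageData()/putImageData() could go here...
      requestAnimationFrame(draw);
    }
    draw();

    // Capture the canvas as a MediaStream and hand its video track to a peer.
    const stream = canvas.captureStream(30); // 30 fps is an assumption
    const pc = new RTCPeerConnection();
    stream.getVideoTracks().forEach(track => pc.addTrack(track, stream));

The point of the sketch is only the last three lines: with such an API, the track produced from the canvas would be an ordinary MediaStreamTrack and could be sent over an RTCPeerConnection like any gUM-sourced track.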
Received on Tuesday, 5 November 2013 08:10:04 UTC