- From: Rob Manson <robman@mob-labs.com>
- Date: Tue, 05 Nov 2013 19:24:38 +1100
- To: public-media-capture@w3.org
OK... I was thinking a MediaStreamTrack constructor could possibly have enabled that... but obviously this hasn't been fleshed out yet. Good to know this discussion doesn't prevent this type of option in the future. Thanks.

roBman

On 5/11/13 7:09 PM, Adam Bergkvist wrote:
> On 2013-11-05 08:26, Rob Manson wrote:
>> Hi Adam,
>>
>> It could be that I've totally missed something (and jumped in halfway
>> through a conversation), but this thread made me think about "how would
>> I use the output from a Video/Canvas pipeline to generate a
>> stream/track?", e.g. related to the post-processing use cases.
>>
>> http://lists.w3.org/Archives/Public/public-media-capture/2013Sep/0062.html
>>
>> I'd definitely be happy if you have a pointer to how to use the output
>> from canvas_2d_context.getImageData() to create a stream that could be
>> added to an RTCPeerConnection (e.g. to send to a peer). As far as I'm
>> aware this is only possible with gUM... but happy to be shown the error
>> of my ways 8)
>>
> Thanks for the clarification.
>
> You're right that the API currently doesn't have a way to create a new
> MediaStreamTrack from the output of a canvas. What we would need to
> make that possible is an API that, given an input, creates a
> MediaStreamTrack with kind="video". What's being talked about here
> doesn't prevent us from doing that.
>
> /Adam
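[Editor's note: at the time of this thread no such API existed. The sketch below is illustrative only and assumes HTMLCanvasElement.captureStream(), which was specified later in "Media Capture from DOM Elements" and is not the mechanism under discussion here; it simply shows the canvas-to-MediaStream-to-RTCPeerConnection pipeline Rob describes.]

```js
// Illustrative sketch only -- assumes canvas.captureStream(), which was
// specified after this thread and is not part of the API discussed above.

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Post-process frames by drawing into the canvas, e.g. from a <video> element.
function drawFrame(video) {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // ...getImageData()/putImageData() processing could go here...
  requestAnimationFrame(() => drawFrame(video));
}

// Capture the canvas as a MediaStream (30 fps) and attach its video track
// to an RTCPeerConnection to send it to a peer.
const stream = canvas.captureStream(30);
const pc = new RTCPeerConnection();
stream.getVideoTracks().forEach(track => pc.addTrack(track, stream));
```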
Received on Tuesday, 5 November 2013 08:25:08 UTC