- From: Young, Milan <Milan.Young@nuance.com>
- Date: Tue, 15 May 2012 22:45:50 +0000
- To: "robert@ocallahan.org" <robert@ocallahan.org>
- CC: "public-audio@w3.org" <public-audio@w3.org>
Robert wrote:
> That's really a WebRTC question. Actually at Mozilla the #1 priority for MediaStream right now is not peer communication but using getUserMedia to capture audio and video streams and take photos. I think we'll need to resuscitate MediaStreamRecorder very soon, to record video clips in camera-using applications --- if someone else doesn't get to it first. If you just want to capture a bundle of video and transmit it to a server in non-real-time, then that's what you want too I guess.

The recording use case I had in mind was quasi-realtime. It is similar to WebRTC, except that I'd like the application layer to have control over the transport. In this model, the "recorder" portion of MediaStreamRecorder is just a way to get at an encoded version of the audio data. If the Audio API could provide an encoded version of the stream, recording would not be necessary.

Thanks
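For a concrete picture of the model Milan describes, here is a minimal sketch. It assumes the MediaRecorder interface that browsers later shipped (this email predates it) and a hypothetical WebSocket endpoint; the recorder acts purely as an encoder tap, while the application layer owns the transport.

```typescript
// Sketch of the quasi-realtime model described above, using the
// MediaRecorder API that was standardized after this email was written.
// The recorder yields encoded chunks; the application controls the
// transport -- here, a WebSocket to a hypothetical server endpoint.

async function streamEncodedAudio(serverUrl: string): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const ws = new WebSocket(serverUrl); // application-controlled transport
  await new Promise((resolve) => (ws.onopen = resolve));

  const recorder = new MediaRecorder(stream, {
    // Codec choice is browser-dependent; Opus-in-WebM is an assumption.
    mimeType: "audio/webm;codecs=opus",
  });

  // Each dataavailable event delivers an encoded Blob. Nothing is
  // "recorded" locally; the encoded data is forwarded immediately.
  recorder.ondataavailable = (event: BlobEvent) => {
    if (event.data.size > 0) ws.send(event.data);
  };

  // A 250 ms timeslice approximates the quasi-realtime behavior:
  // encoded chunks are handed to the application as they are produced.
  recorder.start(250);
}

// Usage (the URL is hypothetical):
// streamEncodedAudio("wss://example.com/audio-ingest");
```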
Received on Tuesday, 15 May 2012 22:46:24 UTC