
RE: CodecNode

From: Young, Milan <Milan.Young@nuance.com>
Date: Tue, 15 May 2012 22:45:50 +0000
To: "robert@ocallahan.org" <robert@ocallahan.org>
CC: "public-audio@w3.org" <public-audio@w3.org>
Message-ID: <B236B24082A4094A85003E8FFB8DDC3C1A45AE00@SOM-EXCH04.nuance.com>
Robert wrote:

That's really a WebRTC question. Actually at Mozilla the #1 priority for MediaStream right now is not peer communication but using getUserMedia to capture audio and video streams and take photos. I think we'll need to resuscitate MediaStreamRecorder very soon, to record video clips in camera-using applications --- if someone else doesn't get to it first. If you just want to capture a bundle of video and transmit it to a server in non-real-time, then that's what you want too I guess.


The recording use case I had in mind was quasi-realtime: similar to WebRTC, except that I'd like the application layer to have control over the transport. In this model, the "recorder" portion of MediaStreamRecorder is just a way to get at an encoded version of the audio data. If the Audio API could provide an encoded version of the stream, recording would not be necessary.
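
For concreteness, here is a minimal sketch of that model. It is written against the MediaRecorder shape that the MediaStreamRecorder idea could take, so the names, the wss://example.com/upload endpoint, and the 250 ms timeslice are illustrative assumptions rather than anything in the current drafts. The point is simply that the application receives encoded chunks and owns the transport:

  // Sketch only: assumes a MediaRecorder-style API; the endpoint URL and
  // the 250 ms timeslice are illustrative, not taken from any draft.
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const socket = new WebSocket('wss://example.com/upload');
    const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });

    // Each dataavailable event carries an encoded chunk; the application,
    // not the user agent, decides how and when it goes over the wire.
    recorder.ondataavailable = (event) => {
      if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
        socket.send(event.data);
      }
    };

    socket.onopen = () => recorder.start(250); // ask for a chunk roughly every 250 ms
  });

With the encoded chunks in hand, the transport could just as well be XHR or anything else the application chooses, which is exactly the kind of control I am asking about.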

Thanks



Received on Tuesday, 15 May 2012 22:46:24 GMT
