- From: Randell Jesup <randell-ietf@jesup.org>
- Date: Fri, 19 Aug 2011 17:51:38 -0400
- To: "rtcweb@ietf.org" <rtcweb@ietf.org>, "public-webrtc@w3.org" <public-webrtc@w3.org>
On 8/19/2011 5:49 AM, Stefan Håkansson LK wrote:
> John,
>
> this was a good way to sort things out.
>
> In my view we should definitely support local recording of streams
> (regardless of whether they are generated by local devices or received
> via RTP), and this could be done in parallel to rendering them or not
> (up to the app).

Agreed. Note that there are legal requirements in various locations around recording conversations; that's up to the application IMHO -- however, we'll want to make sure it's reasonably easy for the application to comply. While I'm not an expert, recording someone in many jurisdictions requires periodic beeps, etc. The application would have to mix the beep into the outgoing stream, and it would have to remain there even if the user "muted". I want to make sure we're providing something that won't be a hassle for application developers.

> The recorded media should also be possible to render locally (be the
> source for a video element).

Yes.

> I'm less sure that the recorded media should also be an RTP source -
> couldn't you just as well send the file over and then play it at the
> remote end?

That might not work for cases where the two users are talking through different providers/apps, and it would also imply a much longer delay in many cases, plus local storage requirements, etc. Think of a recorded greeting played to callers if no one answers, for example.

Assuming that recording and playback are of encoded media: there are issues with recording and playback having to do with error recovery.

For recording an incoming stream, it's less of an issue - you do normal error recovery, and on playback it would look the same as it would have if the call had been live.

For sending a pre-recorded stream (greeting, etc.): I'd assume it was recorded without loss. However, the other side may experience loss in receiving it. To deal with this, we can a) decode and re-encode the media, allowing us to react to incoming loss reports, or b) include periodic IDRs or the equivalent. I would lean toward a) (decode & re-encode), which also handles issues with codec parameters, codec choice, etc. So the input would be a decoded stream. This may use more resources than b); the application could (at its discretion) not render incoming media while playing back.

-- 
Randell Jesup
randell-ietf@jesup.org
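[Editor's sketch] One way an application could keep a periodic recording beep in the outgoing audio even while the user is "muted" is to mix the microphone and the beep in the Web Audio API and send the mixed track. This is only an illustration of the point above, not a proposal; the names localMicStream (from getUserMedia) and pc (an RTCPeerConnection), the 1400 Hz tone, and the 15-second interval are all assumptions.

    // Mix mic + beep; "mute" by zeroing micGain instead of disabling the track,
    // so the beep keeps reaching the remote side.
    const ctx = new AudioContext();
    const micSource = ctx.createMediaStreamSource(localMicStream);
    const micGain = ctx.createGain();            // set micGain.gain.value = 0 to "mute"
    const mixDest = ctx.createMediaStreamDestination();
    micSource.connect(micGain).connect(mixDest);

    // Inject a short tone periodically; it reaches mixDest regardless of micGain.
    setInterval(() => {
      const osc = ctx.createOscillator();
      osc.frequency.value = 1400;                // beep pitch (arbitrary)
      osc.connect(mixDest);
      osc.start();
      osc.stop(ctx.currentTime + 0.3);           // roughly a 300 ms beep
    }, 15000);                                   // interval is jurisdiction-dependent

    // Send the mixed track instead of the raw microphone track.
    pc.addTrack(mixDest.stream.getAudioTracks()[0], mixDest.stream);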
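[Editor's sketch] And a minimal sketch of recording a received stream locally and then rendering the recording in a video element, assuming the MediaRecorder API is available; remoteStream, the WebM mime type, and the "playback" element id are illustrative names, not part of the proposal.

    // Record the incoming stream into in-memory chunks.
    const chunks = [];
    const recorder = new MediaRecorder(remoteStream, { mimeType: 'video/webm' });
    recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
    recorder.onstop = () => {
      // Assemble the recording and use it as the source for a video element.
      const blob = new Blob(chunks, { type: 'video/webm' });
      const video = document.getElementById('playback');
      video.src = URL.createObjectURL(blob);
      video.play();
    };
    recorder.start(1000);                        // collect data in 1-second chunks
    // later: recorder.stop() when the call ends or the user stops recording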
Received on Friday, 19 August 2011 21:54:18 UTC