- From: Harald Alvestrand <harald@alvestrand.no>
- Date: Thu, 08 Dec 2011 14:43:30 +0100
- To: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
- CC: public-webrtc@w3.org, public-audio@w3.org
Thanks for the list! Some comments on the "F" entries...

On 12/07/2011 11:54 AM, Olivier Thereaux wrote:
> Hello WebRTC WG,
>
> From the record of the joint session between WebRTC and Audio groups
> at TPAC, I see that our groups were to make sure the requirements from
> WebRTC are taken into account by the Audio WG:
>
> [[ The WebRTC WG will send requirements to the Audio WG to ensure they
> get properly addressed by the Audio WG. ]]
> -- http://www.w3.org/2011/04/webrtc/wiki/Santa_Clara_F2F_Summary#Audio_WG
>
> Given the requirements document published here:
> http://tools.ietf.org/html/draft-ietf-rtcweb-use-cases-and-requirements-06
>
> My understanding is that the applicable requirements for the Audio WG
> are A8, A13, A14, A15 and A16, and that to a lesser extent F5, F6, F9,
> F13, F14 and F18 are also relevant (see below for expanded list).
> Could you confirm this is a reasonable assessment, and point out
> requirements which I may have forgotten but which the Audio WG should
> look at?
>
> On a side note, I would suggest systematically disambiguating the
> acronyms used in the use cases and requirements document. That would
> make reading and understanding easier for the non-initiated.
>
> Thank you,
>
> Olivier
>
>
> F5      The browser MUST be able to render good quality
>         audio and video even in the presence of reasonable
>         levels of jitter and packet losses.
>
>         TBD: What is a reasonable level?

I think this is a WebRTC-only thing, since the Audio WG in principle
ignores codecs. Most of the tricks appropriate here are RTP-specific,
such as FEC, retransmission, and loss concealment algorithms.

> ----------------------------------------------------------------
> F6      The browser MUST be able to handle high loss and
>         jitter levels in a graceful way.

Same here.
> ----------------------------------------------------------------
> F9      When there are both incoming and outgoing audio
>         streams, echo cancellation MUST be made available to
>         avoid disturbing echo during conversation.
>
>         QUESTION: How much control should be left to the
>         web application?

I think echo cancellation is potentially relevant to audio; it needs
access to the correlation between the stream that is played out and the
stream that comes in (from the microphone). If audio processing happens
between the PeerConnection and the audio interfaces, echo cancellation
can turn out to be very difficult to implement without access to the
close-to-speaker audio stream. But I'm no expert...

> ----------------------------------------------------------------
> F13     The browser MUST be able to apply spatialization
>         effects to audio streams.
>
> ----------------------------------------------------------------
> F14     The browser MUST be able to measure the level
>         in audio streams.
>
> ----------------------------------------------------------------
> F15     The browser MUST be able to change the level
>         in audio streams.

These three seem highly relevant to audio.

> ----------------------------------------------------------------
> F16     The browser MUST be able to render several
>         concurrent video streams.

This is video, so it isn't an audio requirement.

> ----------------------------------------------------------------
> F17     The browser MUST be able to mix several
>         audio streams.
>
> ----------------------------------------------------------------
> F18     The browser MUST be able to process and mix
>         sound objects (media that is retrieved from another
>         source than the established media stream(s) with the
>         peer(s)) with audio streams.

These are all audio-relevant. If we can satisfy a lot of these by
saying "if you need them, invoke the appropriate audio components", I
would be very happy.
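[Editor's note: to make F13, F14, F15 and F17 concrete, here is a
minimal sample-level sketch of the operations those requirements name:
equal-power panning, RMS level measurement, gain, and mixing by
summation. The function names are hypothetical, and a browser would
expose these capabilities through dedicated audio-processing components
rather than raw loops over samples; this is only an illustration of the
underlying arithmetic.]

```javascript
// Hypothetical sample-level sketches of requirements F13, F14, F15, F17.
// Samples are plain numbers in [-1, 1]; streams are arrays of samples.

// F13: spatialization, here as equal-power stereo panning.
// pan is in [-1, 1]: -1 = hard left, 0 = center, 1 = hard right.
function panSample(sample, pan) {
  const angle = ((pan + 1) / 2) * (Math.PI / 2);
  return [sample * Math.cos(angle), sample * Math.sin(angle)];
}

// F14: measure the level of an audio stream (RMS over a block of samples).
function measureLevel(samples) {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  return Math.sqrt(sumSquares / samples.length);
}

// F15: change the level of an audio stream (apply a linear gain).
function changeLevel(samples, gain) {
  return samples.map((s) => s * gain);
}

// F17: mix several audio streams by summing corresponding samples
// (truncated to the shortest stream for simplicity).
function mixStreams(streams) {
  const length = Math.min(...streams.map((s) => s.length));
  const mixed = new Array(length).fill(0);
  for (const stream of streams) {
    for (let i = 0; i < length; i++) mixed[i] += stream[i];
  }
  return mixed;
}
```

[Equal-power panning keeps perceived loudness roughly constant across
the stereo field because cos² + sin² = 1; a plain linear crossfade would
dip in the middle.]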
Received on Thursday, 8 December 2011 13:44:02 UTC