- From: tim panton <thp@westhawk.co.uk>
- Date: Thu, 25 Jul 2013 09:17:47 +0100
- To: lonce <lonce.wyse@zwhome.org>
- Cc: public-webrtc@w3.org
- Message-Id: <92DF3889-3668-4E60-A9C0-36B190EEDBB5@westhawk.co.uk>
When we started discussing the constraints API, this issue came up: you would be able to mark an audio stream as 'for live music' and the codec params would be set accordingly (low latency, high quality, no voice enhancement).

Even though we have settled on Opus, I think it would be a bad plan to expose the codec-specific 'knobs'. Better to allow the developer to express their needs in more generic terms and have the browser interpret those needs in the context of the codec. (Heck, it might decide to do lin16 at 48 kHz!) (A constraints sketch follows below the quoted message.)

T.

On 14 Jun 2013, at 08:41, lonce <lonce.wyse@zwhome.org> wrote:

> Hello -
>
> I have a couple of questions I have not been able to answer myself after looking over the published docs. I am interested in maximum-speed, uncompromised-quality transmission (for musical purposes), which leads to these questions:
>
> 1) What exactly is the strategy of the "components to conceal packet loss"? Is there a strategy specifically for audio packet loss?
>
> 2) Can the audio echo cancellation (AEC), automatic gain control (AGC), and noise reduction be turned off (not used)?
>
> 3) Can compression be turned off completely (to avoid the algorithmic delay of encoding/decoding)?
>
> 4) If you cannot bypass the compression algorithm, what is the minimum delay one can achieve? It appears to me (from http://www.webrtc.org/reference/architecture and http://en.wikipedia.org/wiki/Opus_%28codec%29) that analysis frame sizes down to 2.5 ms (CELT layer) and 10 ms (SILK layer) are possible. This, in addition to "look ahead" and algorithmic delay, puts the minimum delay at 20 ms or more, right?
>
> 5) Does one have control over how many analysis frames are sent per packet (could I set it to 1)?
>
> Musicians have been using a system called JackTrip (CCRMA, Stanford University), which supports uncompressed transmission and sub-millisecond frame (and packet) sizes. To recover from UDP losses, it sends redundant streams, and the receiver takes the first packet that arrives with the time stamp it needs next to reconstruct the audio. My questions above are all about how close WebRTC can come to achieving the same performance.
>
> Thanks!
> - lonce
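A minimal sketch of the "generic terms" approach Tim describes, using the constraint names that later landed in the Media Capture and Streams spec (echoCancellation, autoGainControl, noiseSuppression); at the time of this thread the exact constraint surface was still being settled, and the latency hint in particular is not honoured by every browser, so treat these as best-effort hints rather than guarantees.

```typescript
// Sketch only: request a "music-grade" audio track by stating needs as
// constraints and letting the browser choose codec parameters. Constraint
// names are from the Media Capture and Streams spec; support varies by browser.
const musicConstraints = {
  audio: {
    echoCancellation: false,  // question 2: no AEC
    autoGainControl: false,   // no AGC
    noiseSuppression: false,  // no noise reduction
    channelCount: 2,          // stereo, if the capture device offers it
    sampleRate: 48000,        // full-band capture
    latency: 0.01             // ~10 ms capture latency where supported (hint only)
  },
  video: false
};

navigator.mediaDevices.getUserMedia(musicConstraints)
  .then((stream) => {
    // The browser, not the page, still chooses Opus frame size, FEC, etc.;
    // getSettings() reports which constraints were actually applied.
    console.log(stream.getAudioTracks()[0].getSettings());
  })
  .catch((err) => console.error('getUserMedia failed:', err));
```

The point of the design is visible here: the page never names Opus, CELT, or SILK; it only states what kind of audio it needs, and the browser maps that onto whatever codec and frame size it negotiates.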
Received on Thursday, 25 July 2013 08:18:15 UTC