Re: Audio and Opus configurability for speed and quality (at the expense of bandwidth)

On 7/29/13 12:28 AM, Cullen Jennings (fluffy) wrote:
>
> I agree with Silvia and Tim,
>
> My vague recollection is that at one of the early meetings this same
> suggestion was made, and people agreed to add an API (probably
> constraints) to turn off AGC and another one to indicate the signal
> was music, or something like that. That was a long time ago and I'm
> not sure exactly what we need here, but my guess is much of the WG
> will agree with the idea of letting the JS say what it wants in a
> non-codec-specific way.

I think this is even in the use-case document ("distributed music band") 
with the API requirement:

"A19     The Web API must provide means for the web
            application to indicate the type of audio signal
            (speech, audio) for audio stream(s)/stream
            component(s)."

Stefan

>
>
> On Jul 25, 2013, at 2:27 PM, Silvia Pfeiffer
> <silviapfeiffer1@gmail.com> wrote:
>
>> I love that idea! There can't be that many different use cases to
>> consider. Do you have a start of a list? Do we separate one-way
>> (streaming) from synchronised two-way (collaboration)? E.g. music
>> streaming (even live) has different needs from a distributed live
>> music band.
>>
>> Silvia
>>
>> On 25 Jul 2013 18:19, "tim panton" <thp@westhawk.co.uk> wrote:
>>
>> When we started discussing the constraints API, this was an issue
>> that came up: you would be able to mark an audio stream as 'for live
>> music' and the codec params would be set accordingly (low latency,
>> high quality, no voice enhancement).
>>
>> Even though we have settled on Opus, I think it would be a bad plan
>> to expose the codec-specific 'knobs'. Better to allow the developer
>> to express their needs in more generic terms and have the browser
>> interpret those needs in the context of the codec. (Heck, it might
>> decide to do lin16 at 48 kHz!)
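>>
>> Something in this spirit, say; the constraint names are illustrative
>> only, nothing here is agreed anywhere, and gotStream/logError are
>> just placeholder callbacks:
>>
>>     navigator.getUserMedia(
>>       { audio: { optional: [ { echoCancellation: false },   // no AEC
>>                              { autoGainControl:  false },   // no AGC
>>                              { noiseSuppression: false } ] } },
>>       gotStream, logError);
>>
>> The browser then maps those needs onto whatever the codec and the
>> capture pipeline can actually do.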
>>
>> T.
>>
>> On 14 Jun 2013, at 08:41, lonce <lonce.wyse@zwhome.org> wrote:
>>
>>>
>>> Hello -
>>>
>>> I have a couple of questions I have not been able to answer
>>> after looking over the published docs. I am interested in maximum
>>> speed and uncompromised transmission quality (for musical
>>> purposes), which leads to these questions:
>>>
>>> 1) What exactly is the strategy of the "components to conceal
>>> packet loss"? Is there a strategy specifically for audio packet
>>> loss?
>>>
>>> 2) Can the audio echo cancellation (AEC), automatic gain control
>>> (AGC), and noise reduction be turned off (not used)?
>>>
>>> 3) Can compression be turned off completely (to avoid the
>>> algorithmic delay of encoding/decoding)?
>>>
>>> 4) If you cannot bypass the compression algorithm, what is the
>>> minimum delay one can achieve? It appears to me (from
>>> http://www.webrtc.org/reference/architecture and
>>> http://en.wikipedia.org/wiki/Opus_%28codec%29 ) that analysis
>>> frame sizes down to 2.5 ms (CELT layer) and 10 ms (SILK layer) are
>>> possible. This, in addition to look-ahead and algorithmic delay,
>>> puts the minimum delay at 20 ms or more, right?
>>>
>>> 5) Does one have control over how many analysis frames are sent
>>> per packet (could I set it to 1)?
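>>>
>>> (From the Opus RTP payload draft, my guess is that this maps to the
>>> "ptime" knob in SDP; asking for one 10 ms frame per packet might
>>> look like the following, with the payload type number arbitrary and
>>> assuming the browser honours it at all:
>>>
>>>     a=rtpmap:111 opus/48000/2
>>>     a=fmtp:111 minptime=10
>>>     a=ptime:10
>>> )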
>>>
>>> Musicians have been using a system called JackTrip (CCRMA,
>>> Stanford University) which supports uncompressed transmission
>>> and sub-millisecond frame (and packet) sizes. To recover from UDP
>>> losses, it sends redundant streams, and the receiver uses the
>>> first arriving packet carrying the time stamp it needs next to
>>> reconstruct the audio. My questions above are all about how close
>>> WebRTC can come to achieving the same performance.
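>>>
>>> (The receive side of that redundancy scheme is roughly the
>>> following, in illustrative JS; JackTrip itself is C++ and the real
>>> logic also buffers ahead, so take this only as the idea:
>>>
>>>     var nextTimestamp = 0;
>>>     function onPacket(packet) {
>>>       // redundant copies and anything already played are ignored
>>>       if (packet.timestamp !== nextTimestamp) return;
>>>       playFrame(packet.samples);            // hypothetical audio-out hook
>>>       nextTimestamp += packet.frameLength;  // advance to the next frame
>>>     }
>>> )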
>>>
>>> Thanks! - lonce
>>
>
>
>


Received on Monday, 29 July 2013 07:00:25 UTC