Re: WebRTC NV Use Cases

On Mon, Jun 18, 2018 at 8:22 PM Ben Schwartz <bemasc@google.com> wrote:

> On Mon, Jun 18, 2018 at 11:23 AM Mészáros Mihály <misi@niif.hu> wrote:
>
>> You are right, it is low-latency audio and video.
>>
>> The key is how low it can be. Ultra-low latency is important for
>> performing arts.
>>
>> Requirements I can think of include turning off the jitter buffer
>> entirely, a raw audio codec (maybe skipping encryption), and no packet
>> checksum computation, so as to avoid anything that could add latency.
>>
> I don't think "turning off the jitter buffer" is really well-defined,
> but I can imagine exposing an explicit setting to limit the jitter buffer
> length
>


True.  The way it interacts with the OS, the OS is going to ask for audio
to play every X (say, 10) ms.  So you'll have a buffer for receiving from
the network and feeding into that.  But you could go crazy and make that
1-2 packets and live with the audio sounding bad when it underruns.
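
For concreteness, here's a minimal sketch of that arrangement (TypeScript;
the 48 kHz rate, 10 ms packets, and the two-packet cap are all assumed for
illustration, not taken from libwebrtc):

    // Minimal playout buffer: the OS pulls a fixed chunk every 10 ms; we
    // queue at most two decoded packets and play silence on underrun.
    const SAMPLES_PER_PULL = 480;       // 10 ms at 48 kHz (assumed)
    const MAX_QUEUED_PACKETS = 2;       // the "go crazy" setting

    const queue: Float32Array[] = [];

    // Network side: called with each packet's decoded samples.
    function onDecodedPacket(samples: Float32Array): void {
      if (queue.length >= MAX_QUEUED_PACKETS) {
        queue.shift();                  // drop oldest rather than grow delay
      }
      queue.push(samples);
    }

    // OS side: called every 10 ms with a buffer to fill.
    function onOsAudioPull(out: Float32Array): void {
      const next = queue.shift();
      if (next) {
        out.set(next.subarray(0, out.length));
      } else {
        out.fill(0);                    // underrun: this is where it sounds bad
      }
    }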


> or increase the acceptable loss rate due to late packets.
>

I think it's a good idea to expose knobs like that on the jitter buffer.
The current NetEq code hardcodes a 5% target loss rate (if I'm reading
this right:
https://cs.chromium.org/chromium/src/third_party/webrtc/modules/audio_coding/neteq/delay_manager.h?type=cs&g=0&l=120),
and I think it would make sense to allow changing that.  It could also
make sense to have a max buffer size.  In fact, that's already implemented
in the low-level parts of libwebrtc (
https://cs.chromium.org/chromium/src/third_party/webrtc/modules/audio_coding/acm2/audio_coding_module.cc?g=0&l=1089
).
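
To make that concrete, a hypothetical API shape for those two knobs could
look like this (purely imagined; RTCRtpReceiver exposes nothing like
setJitterBufferSettings today):

    interface JitterBufferSettings {
      targetLatePacketLossRate?: number;  // e.g. 0.05, NetEq's hardcoded value
      maxBufferMs?: number;               // cap on buffered audio depth
    }

    declare const receiver: RTCRtpReceiver;

    // Imagined extension method, named here only for illustration.
    const settings: JitterBufferSettings = {
      targetLatePacketLossRate: 0.10,     // tolerate more late loss for less delay
      maxBufferMs: 40,
    };
    (receiver as any).setJitterBufferSettings(settings);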

But if we have a way for wasm to feed audio to the OS performantly
enough, then we don't need to define what "turn off the jitter buffer"
means, because it would be up to the app to decide.
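
As a sketch of that app-controlled path (all names below are made up; it
assumes the peer sends raw float32 PCM over a data channel, and that
"playout-processor.js" registers an AudioWorklet processor implementing a
queue like the one sketched earlier):

    async function setupAppPlayout(dc: RTCDataChannel): Promise<void> {
      const ctx = new AudioContext({ latencyHint: "interactive" });

      // The worklet module registers an "app-playout" processor whose
      // buffering policy (depth, drop rules) is entirely the app's choice.
      await ctx.audioWorklet.addModule("playout-processor.js");
      const node = new AudioWorkletNode(ctx, "app-playout");
      node.connect(ctx.destination);

      // Feed audio from the wire straight into the worklet; whatever
      // "jitter buffer" exists is whatever the processor chooses to keep.
      dc.binaryType = "arraybuffer";
      dc.onmessage = (e) => node.port.postMessage(new Float32Array(e.data));
    }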


> Also, you may need sub-frame video, if you care about video latency.  That
> can affect all the video processing APIs: anything that would pass a frame
> (raw or compressed) would need to be able to pass a fraction of a frame.
>

What would you do with subframe video?  Have partial frame updates
(because you lost the other frames' updates)?  No one has ever brought that
up as a use case before, as far as I know.  But it sounds interesting.


> Misi
>>
>> On 2018-06-18 17:05, Peter Thatcher wrote:
>>
>> How is it different from an audio call, which already attempts to be as
>> low-latency as possible?  Is there a requirement for this use case that we
>> don't already have?
>>
>> On Mon, Jun 18, 2018 at 11:43 AM Mészáros Mihály <bakfitty@gmail.com>
>> wrote:
>>
>>> On 2018-05-09 21:29, Bernard Aboba wrote:
>>>
>>> On June 19-20 the WebRTC WG will be holding a face-to-face meeting in Stockholm, which will focus largely on WebRTC NV.
>>>
>>> Early on in the discussion, we would like to have a discussion of the use cases that WebRTC NV will address.
>>>
>>> Since the IETF has already published RFC 7478, we are largely interested in use cases that are either beyond those articulated in RFC 7478, or use cases in the document that somehow can be done better with WebRTC NV than they could with WebRTC 1.0.
>>>
>>> As with any successful effort, we are looking for volunteers to develop a presentation for the F2F, and perhaps even a document.
>>>
>>>
>>> Hi,
>>>
>>> Let me add one possible WebRTC use case: ultra-low-latency audio/video
>>> for musical performances and other performing arts, i.e., a WebRTC
>>> stack tuned for ultra-low latency.
>>>
>>> SW/HW solutions that we currently use to solve this use case,
>>> e.g.:
>>>
>>>    - http://www.ultragrid.cz/
>>>    - https://www.garr.it/en/communities/music-and-art/lola
>>>
>>> Read more about the tools and the use case at https://npapws.org/ :
>>>
>>>    -
>>>    https://npapws.org/wp-content/uploads/2017/01/S.Ubik-J.Melnikov-Network-delay-management-2017.pptx
>>>    -
>>>    https://npapws.org/wp-content/uploads/2016/02/Performing-Arts-and-Advanced-Networking.pptx
>>>
>>> Regards,
>>> Misi
>>>
>>
>>

Received on Monday, 18 June 2018 18:45:42 UTC