
Re: WebRTC NV Use Cases

From: Ben Schwartz <bemasc@google.com>
Date: Mon, 18 Jun 2018 15:06:43 -0400
Message-ID: <CAHbrMsDcpo2ZD6nZdrGu8FhJfHk3EDDu04fESAccW4xBx7D+eg@mail.gmail.com>
To: Peter Thatcher <pthatcher@google.com>
Cc: misi@niif.hu, public-webrtc@w3.org
On Mon, Jun 18, 2018 at 2:45 PM Peter Thatcher <pthatcher@google.com> wrote:

> On Mon, Jun 18, 2018 at 8:22 PM Ben Schwartz <bemasc@google.com> wrote:
>
>> On Mon, Jun 18, 2018 at 11:23 AM Mészáros Mihály <misi@niif.hu> wrote:
>>
>>> You are right, it is low-latency audio and video.
>>>
>>> The key is how low it can be.  Ultra-low latency is important for the
>>> performing arts.
>>>
>>> The requirements I can think of: turning off the jitter buffer entirely,
>>> a raw audio codec, possibly skipping encryption, and no packet checksum
>>> computation, so as to avoid anything that could add latency.
>>>
>> I don't think "turning off the jitter buffer" is really well-defined,
>> but I can imagine exposing an explicit setting to limit the jitter buffer
>> length
>>
>
>
> True.  The way it interacts with the OS is that the OS is going to ask for
> audio to play every X ms (say, 10).  So you'll have a buffer for receiving
> from the network and feeding into that.  But you could go crazy and make
> that 1-2 packets and live with the audio sounding bad when it underruns.
>
>
>> or increase the acceptable loss rate due to late packets.
>>
>
> I think it's a good idea to expose knobs like that on the jitter buffer.
>  The current NetEq code hardcodes a 5% target loss rate (if I'm reading
> this right:
> https://cs.chromium.org/chromium/src/third_party/webrtc/modules/audio_coding/neteq/delay_manager.h?type=cs&g=0&l=120),
> and I think it would make sense to allow changing that.  It could also make
> sense to have a max buffer size.  In fact, it's already implemented in the
> low-level parts of libwebrtc (
> https://cs.chromium.org/chromium/src/third_party/webrtc/modules/audio_coding/acm2/audio_coding_module.cc?g=0&l=1089
> )
>
> But if we have a way for wasm to feed audio to the OS performantly
> enough, then we don't need to define what "turn off the jitter buffer"
> means, because it would be up to the app to decide.
>
>
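To make the kinds of knobs being discussed here concrete, here's a rough Python sketch of a pull-model jitter buffer with two app-visible settings: a cap on buffer depth (in packets) and a running count of late-arrival losses.  Everything in it (class name, methods, the silence-fill concealment) is invented for illustration; it is not the NetEq API or a proposed web API.

```python
import heapq

class TinyJitterBuffer:
    """Illustrative pull-model jitter buffer with two app-visible knobs:
    a max depth in packets and a running count of late-arrival drops.
    Assumes max_depth_packets >= 1."""

    def __init__(self, max_depth_packets=2):
        self.max_depth_packets = max_depth_packets  # e.g. 1-2 for ultra low latency
        self.heap = []       # min-heap of (sequence_number, payload)
        self.next_seq = 0    # next sequence number due for playout
        self.late_drops = 0  # packets that arrived after their slot was played

    def on_packet(self, seq, payload):
        if seq < self.next_seq:
            self.late_drops += 1  # too late to play; contributes to loss rate
            return
        heapq.heappush(self.heap, (seq, payload))
        # Depth cap exceeded: drop the oldest packet and skip ahead,
        # trading an audible glitch for lower latency.
        while len(self.heap) > self.max_depth_packets:
            heapq.heappop(self.heap)
            self.next_seq = self.heap[0][0]

    def get_audio(self, frame_bytes=480):
        """The OS pulls audio every ~10 ms via a callback like this."""
        if self.heap and self.heap[0][0] == self.next_seq:
            _, payload = heapq.heappop(self.heap)
            self.next_seq += 1
            return payload
        self.next_seq += 1         # gap or underrun: move on regardless
        return bytes(frame_bytes)  # silence; real concealment would do better
```

With max_depth_packets=1 this behaves like the "go crazy" configuration described above: at most one packet of buffering, and every hiccup becomes an underrun.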
>> Also, you may need sub-frame video, if you care about video latency.
>> That can affect all the video processing APIs: anything that would pass a
>> frame (raw or compressed) would need to be able to pass a fraction of a
>> frame.
>>
>
> What would you do with sub-frame video?  Have partial frame updates
> (because you lost the other frames' updates)?
>

I'm referring to "slice encoding".  Typically, this is done by encoding the
top 64 rows of pixels, then the next 64, etc.
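In sketch form (the 64-row band height is just the typical figure I mentioned), the split step itself is trivial:

```python
def split_into_slices(frame_rows, slice_height=64):
    """Split a frame, given as a list of pixel rows, into horizontal bands.
    Each band can be encoded and sent as soon as its rows are available,
    instead of waiting for the full frame."""
    return [frame_rows[i:i + slice_height]
            for i in range(0, len(frame_rows), slice_height)]
```

The interesting part is everything downstream: each band has to flow through the encoder, network, and decoder independently.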


>   No one has ever brought that up as a use case before as far as I know.
> But it sounds interesting.
>

It's mostly used in proprietary video systems, I think.  Roughly, the idea
is that an optimally designed video system is fully utilizing its network
bandwidth, CPU time, etc.  If you have a new frame every 33ms, then ideally
you would spend 33ms encoding each frame, 33ms transmitting each frame, and
maybe even 33ms decoding each frame.  Otherwise, you could be getting
higher quality by sending a higher bitrate or spending more CPU time on the
codec.  However, if each pipeline stage only handles full frames, then each
stage adds 33ms of latency (in addition to network latency).

Slice encoding saves latency because the receiver can be decoding the top
of the frame before the sender has started encoding the bottom.
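A toy model of that arithmetic (my own simplification: each stage takes exactly one frame time, split evenly across slices, with per-slice overhead and network propagation ignored):

```python
def pipeline_latency_ms(frame_ms=33.0, stages=3, num_slices=1):
    """Latency from end of capture until the frame is fully decoded, for a
    pipeline of `stages` stages (encode, transmit, decode), each taking
    frame_ms in total, split evenly across num_slices and overlapped."""
    slice_ms = frame_ms / num_slices
    # The last slice waits for the num_slices - 1 slices ahead of it to
    # enter the pipeline, then passes through every stage itself.
    return (num_slices - 1) * slice_ms + stages * slice_ms

full_frame = pipeline_latency_ms(num_slices=1)  # 99 ms of pipeline latency
sliced = pipeline_latency_ms(num_slices=12)     # 38.5 ms
```

So a 768-row frame in 64-row slices (12 slices) drops the same three-stage pipeline from roughly 99 ms to under 40 ms of added latency in this model.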

Analogous slice streaming can also apply on the webcam link (limited by USB
bandwidth), and even on the decoder->display link (limited by GPU upload
bandwidth).

> Misi
>>>
>>> On 2018-06-18 17:05, Peter Thatcher wrote:
>>>
>>> How is it different from an audio call, which already attempts to be as
>>> low-latency as possible?  Is there a requirement for this use case that we
>>> don't already have?
>>>
>>> On Mon, Jun 18, 2018 at 11:43 AM Mészáros Mihály <bakfitty@gmail.com>
>>> wrote:
>>>
>>>> On 2018-05-09 21:29, Bernard Aboba wrote:
>>>>
>>>> On June 19-20 the WebRTC WG will be holding a face-to-face meeting in Stockholm, which will focus largely on WebRTC NV.
>>>>
>>>> Early on in the discussion, we would like to have a discussion of the use cases that WebRTC NV will address.
>>>>
>>>> Since the IETF has already published RFC 7478, we are largely interested in use cases that are either beyond those articulated in RFC 7478, or use cases in the document that somehow can be done better with WebRTC NV than they could with WebRTC 1.0.
>>>>
>>>> As with any successful effort, we are looking for volunteers to develop a presentation for the F2F, and perhaps even a document.
>>>>
>>>>
>>>> Hi,
>>>>
>>>> Let me add one possible WebRTC use case: ultra-low-latency audio/video
>>>> for musical performances and other performing arts, with a WebRTC stack
>>>> tuned for ultra-low latency.
>>>>
>>>> Examples of the SW/HW solutions that we currently use to address this
>>>> use case:
>>>>
>>>>    - http://www.ultragrid.cz/
>>>>    - https://www.garr.it/en/communities/music-and-art/lola
>>>>
>>>> Read more about the tools and the use case at https://npapws.org/:
>>>>
>>>>    -
>>>>    https://npapws.org/wp-content/uploads/2017/01/S.Ubik-J.Melnikov-Network-delay-management-2017.pptx
>>>>    -
>>>>    https://npapws.org/wp-content/uploads/2016/02/Performing-Arts-and-Advanced-Networking.pptx
>>>>
>>>> Regards,
>>>> Misi
>>>>
>>>
>>>

Received on Monday, 18 June 2018 19:07:20 UTC
