Re: Call for Consensus (CfC): WebRTC-NV Use Cases

Just a minor follow-up on Jan-Ivar’s input below.

The HTML media element already has the autoplay attribute; it should be made clear why it would not satisfy N36.
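For context, part of what the autoplay attribute alone does not give pages today is a testable signal of whether playback was actually allowed. A minimal sketch of how that signal is already observable, using the promise returned by HTMLMediaElement.play() (tryAutoplay is a hypothetical helper name, not an existing API):

```javascript
// Hypothetical helper: attempt playback and report whether the user
// agent's autoplay policy allowed it. play() returns a promise that
// rejects (typically with NotAllowedError) when autoplay is blocked.
async function tryAutoplay(mediaElement) {
  try {
    await mediaElement.play();
    return true; // playback started
  } catch (err) {
    // Blocked by autoplay policy; the caller can fall back to muted
    // playback or a user-gesture-driven play button.
    return false;
  }
}
```

Whether N36 is asking for more than this (e.g. predictability across first-time visits) is exactly the kind of thing the use case should spell out.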


From: Jan-Ivar Bruaroey <>
Date: Tuesday, 14 December 2021 at 02:12
To: Bernard Aboba <>
Cc: <>
Subject: Re: Call for Consensus (CfC): WebRTC-NV Use Cases
(Chair-hat off) Firstly, my apologies for not giving feedback on these use cases sooner.

> Section 3.2: Low latency P2P broadcast:<>

This seems broad and confusing to me as written. The term 'broadcast' isn't well-defined, and I worry different people will read different things into this, including:
1. A mandate to somehow tackle mass audiences in WebRTC's p2p model
2. A mandate to support DRM in WebRTC's native media stack or data channels
3. A mandate to officially support the practice of sending realtime media over data channels (which if adopted, might encumber us to provide more congestion control options)
4. A mandate to officially support higher-latency non-realtime media use cases, such as "HLS extended with P2P caching" (from Bernard's response)

That's a lot of different things, some of them competing. I'm glad WebRTC is being used for things the WG couldn't have envisioned. But it's hard for one API to do many things well, and officially adopting use cases means committing vendors to tweaking WebRTC to fit each use case rather than tweaking each use case to fit WebRTC. So I think we need a high bar here, with the spec at REC, which I think means requiring new use cases to be more narrowly defined.

Auto-play (N36) seems outside the scope of the WebRTC WG, and probably belongs in the Media WG.

This leaves DRM (and subtitles?), where I agree with Bernard that it's not clear what the ask is that cannot be met with existing APIs.

I therefore object to the inclusion of this use case, in its present form.

I'm sensitive to there appearing to be concrete underlying use cases driving this, so I want to be careful not to reject that there may be a need that is not served here, and I'd be willing to help untangle it further. I just don't see it as written.

> Section 3.4: Decentralized Internet:<>

This (N34) is basically WebRTC in Service Workers. Since Service Workers outlive the pages that create them, we cannot merely expose RTCDataChannel like we recently did in Web Workers. We'd have to expose the RTCPeerConnection API wholesale to Service Workers, I think.

This seems like a heavy lift, so I'd be reluctant to undertake it without better understanding the value-add and without first hearing about other approaches considered, such as maybe shimming fetch in JS libraries.
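To illustrate the shimming alternative: instead of exposing RTCPeerConnection to Service Workers, a library could serialize requests over some message channel and correlate the replies. A rough sketch, where `channel` is assumed to be any object with a send() method whose peer calls handleChannelMessage (in a browser this would be an RTCDataChannel held by a page; the function names here are made up):

```javascript
// Pending requests, keyed by a per-request id so replies can be
// matched to the promise that is waiting for them.
let nextId = 0;
const pending = new Map();

// Send a request over the channel and resolve when the peer replies.
function fetchOverChannel(channel, url) {
  return new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, resolve);
    channel.send(JSON.stringify({ id, url }));
  });
}

// Called with each message arriving from the peer; resolves the
// matching pending request with the served body.
function handleChannelMessage(data) {
  const { id, body } = JSON.parse(data);
  const resolve = pending.get(id);
  if (resolve) {
    pending.delete(id);
    resolve(body);
  }
}
```

The obvious limitation, and presumably the motivation for N34, is that this only works while a page holding the connection is alive, which is the gap any supporting documentation should quantify.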

I therefore object to the inclusion of this use case at this time, without additional supporting documentation of the obstacles that could not be solved any other way than through Service Workers.

> Section 3.9: Reduced complexity signaling:<>
Isn't this WISH?<>

On (N39), why is a new URI "format" required in W3C? Is this use case meant to support development in the IETF?
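To make the question concrete, the kind of URI I read N39 as describing might look something like the following. This is entirely hypothetical: the "webrtc:" scheme and the query keys are invented for illustration and are not taken from any draft.

```javascript
// Parse a hypothetical URI carrying the transport fields the
// requirement lists (service address/port, ICE credentials, DTLS
// fingerprint). Scheme and key names are illustrative only.
function parseConnectionUri(uri) {
  const url = new URL(uri);
  return {
    host: url.hostname,
    port: Number(url.port),
    iceUfrag: url.searchParams.get('ufrag'),
    icePwd: url.searchParams.get('pwd'),
    dtlsFingerprint: url.searchParams.get('fingerprint'),
  };
}
```

Note that nothing here needs a new W3C API: it is plain string handling, which is partly why I'd expect the format itself to be specified in the IETF.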

I'm sensitive to the value of keeping use cases here for other WGs to reference, but I'm not sure how to respond as a WebRTC member, since it's not clear to me what APIs our WG would be committing to in support of this new format. It also seems like we should ask the IETF about this.

Regarding the remaining requirements N30, N31, N32, N35, N37, N38, I agree with Bernard that they seem doable without new APIs, for the reasons Bernard mentions.

.: Jan-Ivar :.

On Wed, Dec 1, 2021 at 4:08 PM Bernard Aboba <<>> wrote:
My opinion is as follows:

1.      Section 3.2: Low latency P2P broadcast:<>

[BA] I support inclusion of this use case. Since P2P has also been used to extend the scale of conventional streaming, it may also have value in scenarios which may not require "low latency" (e.g. HLS extended with P2P caching).

2.      Section 3.4: Decentralized Internet:<>
[BA] I support inclusion of this use case.

3.      Section 3.9: Reduced complexity signaling:<>
[BA] I support inclusion of this use case.

Of the requirements, I support inclusion of N34, N36 and N39.

It seems to me that requirements N30, N31, N32, N35, N37, N38 may possibly be met without new APIs. For example, N30 and N32 can be accomplished via additional signaling, and N31 can be accomplished by downloading the hold music prior to the interruption. N35 seems like it can be accomplished using a mesh topology. N37 seems like it might be satisfied by transporting containerized content over data channel and using EME for content protection (or is the idea to protect the media without containerization?).

N33 seems like it would require support for link local name resolution facilities beyond what is commonly implemented in most operating systems.

The new requirements include:

The user agent must provide the ability to re-establish media after an interruption.
The user agent must provide the ability to play selected media to the remote party during an interruption (cf. on-hold music).
The user agent must provide the ability to 'park' a connection such that it can be retrieved and continued by a newly loaded page to prevent accidental 'browsing away' from dropping a call irretrievably.
A 'long-term connection' must be able to be re-established without access to external services in the event of the local network becoming isolated from the wider network without compromising e2e security.
Ability to intercept the fetch API and service it over a P2P link. One way to do this would be to support data channels in Service Workers which can already intercept fetch.
A group member can encrypt and send copies of the encoded media directly to multiple group members without the intervention of the media server.
Predictable auto-play for media elements that works for first time users and is testable.
Ability to reuse DRM assets streamed over data channels.
Ability to reuse subtitle assets streamed over data channels.
A URI format that defines the remaining transport related fields (e.g. service address/port, ICE credentials, DTLS fingerprint).

We would now like to do a Call for Consensus (CfC) on addition of these new use cases and requirements.  For each of the new use cases and requirements, please indicate:

· I support inclusion within the WebRTC-NV Use Cases document.
· I object to inclusion within the WebRTC-NV Use Cases document.
If you would like to provide further explanation of your objections, you can file an Issue in the repo:<>

The CfC will end on December 13, 2021 at midnight Pacific Time.

For the Chairs

.: Jan-Ivar :.

Received on Tuesday, 14 December 2021 07:46:11 UTC