- From: Tim Panton <tim@pi.pe>
- Date: Tue, 14 Dec 2021 11:35:17 +0000
- To: Jan-Ivar Bruaroey <jib@mozilla.com>
- Cc: Bernard Aboba <Bernard.Aboba@microsoft.com>, "public-webrtc@W3.org" <public-webrtc@w3.org>
- Message-Id: <637ABD2E-B252-4D67-A9DA-296643112FE9@pi.pe>
> On 14 Dec 2021, at 01:10, Jan-Ivar Bruaroey <jib@mozilla.com> wrote:
>
> (Chair-hat off) Firstly, my apologies for not giving feedback on these use cases sooner.

I too should have followed up sooner, but now that the CfC has expired, perhaps we can reformulate the document so it is clearer.

> Section 3.2: Low latency P2P broadcast: https://w3c.github.io/webrtc-nv-use-cases/#auction
>
> This seems broad and confusing to me as written. The term 'broadcast' isn't well-defined, and I worry different people will read different things into this, including:
> 1. A mandate to somehow tackle mass audiences in WebRTC's p2p model
> 2. A mandate to support DRM in WebRTC's native media stack or data channels
> 3. A mandate to officially support the practice of sending realtime media over data channels (which if adopted, might encumber us to provide more congestion control options)
> 4. A mandate to officially support higher-latency non-realtime media use cases, such as "HLS extended with P2P caching" (from Bernard's response)

Agreed, broadcast is the wrong term. I’m thinking of multiple unidirectional realtime streams - multiple viewers watching a concert, sports event or church service.

> That's a lot of different things, some of them competing. I'm glad WebRTC is being used for things the WG couldn't have envisioned. But it's hard for one API to do many things well, and officially adopting use cases means committing vendors to tweaking WebRTC to fit each use case rather than tweaking each use case to fit WebRTC. So I think we need a high bar here, with the spec at REC, which I think means requiring new use cases to be more narrowly defined.
>
> Auto-play (N36) seems outside the scope of the WebRTC WG, and probably belongs in the Media WG.

Autoplay is currently unusable for small realtime sites. In Chrome the rules for autoplay change depending on how frequently a site is visited, so a new site will not autoplay - except for its developers. Just this week I spent an hour explaining to a developer why a JavaScript-originated click isn’t enough to start a media flow; he had a classic case of ‘works for me’ - which indeed it did. What I’m looking for is a) a level playing field - the incumbents shouldn’t get a free ride while new entrants have to ask for an extra click - and b) a definition of the behaviour that a web dev can understand in under 30 minutes. Both of these are in the purview of the WebRTC WG, IMHO.
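For the record, this is roughly the dance we end up teaching new sites today (a minimal sketch; the helper names are mine, purely illustrative):

    // play() only succeeds inside a genuine user gesture on a
    // not-yet-trusted site, so gate the media start on a real click
    // and fall back to a visible "tap to start" control.
    const video = document.querySelector('video');

    async function startMedia() {
      try {
        await video.play(); // resolves if the autoplay policy allows it
      } catch (err) {
        if (err.name === 'NotAllowedError') {
          // A synthetic element.click() from JS does not count as
          // user activation, so we must wait for a real gesture.
          showTapToPlayButton();
        }
      }
    }

    function showTapToPlayButton() {
      const btn = document.createElement('button');
      btn.textContent = 'Tap to start';
      btn.addEventListener('click', () => {
        video.play(); // now inside a real user gesture
        btn.remove();
      });
      document.body.appendChild(btn);
    }

    startMedia();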
> This leaves DRM (and subtitles?) where I agree with Bernard it's not clear what the ask is that cannot be done with existing APIs.
>
> I therefore object to the inclusion of this use case, in its present form.
>
> I'm sensitive to there appearing to be concrete underlying use cases driving this, so I want to be careful not to reject that there may be a need that is not served here, and I'd be willing to help untangle it further. I just don't see it as written.

Agree - we seem to have lost the original narrative/needs somewhere in the process. Perhaps we can get some expert input to help define this - I’m just aware that there is _something_ here...

> Section 3.4: Decentralized Internet: https://w3c.github.io/webrtc-nv-use-cases/#decent
>
> This (N34) is basically WebRTC in Service Workers. Since Service Workers outlive the pages that create them, we cannot merely expose RTCDataChannel like we recently did in Web Workers. We'd have to expose the RTCPeerConnection API wholesale to Service Workers, I think.
>
> This seems like a heavy lift, so I'd be reluctant to undertake it without better understanding the value-add and without first hearing about other approaches considered, such as maybe shimming fetch in JS libraries.
>
> I therefore object to the inclusion of this use case at this time, without some additional supporting documentation of what the obstacles were that could not be solved any other way than through Service Workers.

We should invite the decentralised folks to describe the problem space.

> Section 3.9: Reduced complexity signaling: https://w3c.github.io/webrtc-nv-use-cases/#urisig
>
> Isn't this WISH? https://datatracker.ietf.org/wg/wish/about/

No, WISH is for ingest; this would be for egress. The hope is that one could use a single URL as the src attribute of a video tag and get (unidirectional) realtime video over WebRTC from a server. WISH requires an O/A negotiation; this wouldn’t - just as a .m3u URL isn’t a negotiation.
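To make the ask concrete, here is a hypothetical polyfill shape - the HTTP convention below (one POST of a recvonly offer, SDP answer in the response body) is invented purely for illustration, not WISH and not any spec. The point is that the page author only ever sees a URL, the way hls.js hides all the .m3u8 handling:

    // Hypothetical 'realtime src URL' shim: one URL in, media out.
    async function attachRealtimeSrc(video, url) {
      const pc = new RTCPeerConnection();
      pc.addTransceiver('video', { direction: 'recvonly' });
      pc.addTransceiver('audio', { direction: 'recvonly' });

      const stream = new MediaStream();
      pc.ontrack = (e) => stream.addTrack(e.track);
      video.srcObject = stream;

      await pc.setLocalDescription(await pc.createOffer());

      // Wait for ICE gathering so a single POST can carry the whole
      // offer (no trickle, no further round trips).
      await new Promise((resolve) => {
        if (pc.iceGatheringState === 'complete') return resolve();
        pc.onicegatheringstatechange = () =>
          pc.iceGatheringState === 'complete' && resolve();
      });

      // Invented server convention: POST the offer SDP, get the
      // answer SDP back in the response body.
      const res = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/sdp' },
        body: pc.localDescription.sdp,
      });
      await pc.setRemoteDescription({ type: 'answer', sdp: await res.text() });
      return pc;
    }

    // Usage - a video tag plus one URL, no signalling code in the page
    // (the autoplay rules above still apply when playback starts):
    // attachRealtimeSrc(document.querySelector('video'),
    //                   'https://example.com/stream/main');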
> On (N39), why is a new URI "format" required in W3C? Is this use case meant to support development in the IETF?

See above.

> I'm sensitive to the value of keeping use cases here for other WGs to reference, but I'm not sure how to respond as a WebRTC member, since it's not clear to me what the ask from our WG here will be as far as APIs we'd be committing to that would support this new format. It also seems like we should ask in the IETF about this.

The IETF has no active rtcweb group to ask. I suppose one could ‘dispatch’ it - but without a use case/requirements/API shape, that seems like a lost cause.

> Regarding the remaining requirements N30, N31, N32, N35, N37, N38, I agree with Bernard that they seem doable without new APIs, for the reasons Bernard mentions.

It is true that some of these can be partially accomplished in very roundabout ways, but my original aim was to make usage easier and simpler, as well as to cover more of the problem space. (See my reply to Bernard for more details.)

Tim.

Received on Tuesday, 14 December 2021 11:35:34 UTC