
Re: Summary of e2e encryption discussions

From: youenn fablet <yfablet@apple.com>
Date: Fri, 22 Jun 2018 17:33:54 -0700
Cc: WebRTC WG <public-webrtc@w3.org>, Alexandre GOUAILLARD <agouaillard@gmail.com>, Harald Alvestrand <harald@alvestrand.no>
Message-id: <7E73E319-0A9F-483F-A0D0-A010B22C7CC4@apple.com>
To: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
It seems we have two independent discussion items to dig into.

> On 22/06/2018 11:19, Harald Alvestrand wrote:
>> A critical part of this is also that the Web application has to attach
>> enough information to the encrypted packets that the SFU can do its job
>> of choosing which packets / frames to forward; in SVC or simulcast, this
>> includes labelling packets containing video with enough information to
>> reconstruct the dependency graph, for instance; if "loudest audio"
>> switching is in effect, the outside of the packet has to contain audio
>> level information.
> I don't quite agree; the only information the SFU requires is already provided by the client-to-mixer audio level and video frame marking header extensions.
> That should be implemented by the browser and the app would just need to enable them.
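For concreteness (this sketch is not from the original mail), the two extensions referred to above are the client-to-mixer audio level extension (RFC 6464) and the frame marking extension (draft-ietf-avtext-framemarking). An application could check whether an offer negotiated them by scanning the SDP for `a=extmap` lines, roughly like this; the helper name is hypothetical, and the frame marking URI varied across draft versions:

```javascript
// Standard URN for RFC 6464; frame marking URN as in later draft versions.
const AUDIO_LEVEL_URI = "urn:ietf:params:rtp-hdrext:ssrc-audio-level";
const FRAME_MARKING_URI = "urn:ietf:params:rtp-hdrext:framemarking";

// Hypothetical helper: report which SFU-relevant header extensions an
// SDP offer/answer advertises via "a=extmap:<id>[/direction] <uri>" lines.
function negotiatedExtensions(sdp) {
  const uris = new Set();
  for (const line of sdp.split(/\r?\n/)) {
    const m = line.match(/^a=extmap:\d+(?:\/\w+)? (\S+)/);
    if (m) uris.add(m[1]);
  }
  return {
    audioLevel: uris.has(AUDIO_LEVEL_URI),
    frameMarking: uris.has(FRAME_MARKING_URI),
  };
}
```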

The double encryption done by the browser should allow SFUs to work properly without additional work from the web application.
Do we have everything available in terms of IETF standards, or are we missing some pieces?
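As a rough illustration of the layering being discussed (a toy, not a conformant PERC double encryption implementation: XOR stands in for a real cipher, and the forwarding metadata is left in the clear rather than hop-by-hop protected), the point is that the SFU can strip its hop layer and read the metadata it needs to forward, while the media payload stays end-to-end encrypted:

```javascript
// Toy cipher: XOR the bytes with a repeating key. Stands in for SRTP here.
function xor(bytes, key) {
  return bytes.map((b, i) => b ^ key[i % key.length]);
}

// Inner (end-to-end) layer over the payload, outer (hop-by-hop) layer on
// top. Metadata (audio level, frame dependencies) stays outside the inner
// layer so an SFU can use it without holding the e2e key.
function protectFrame(payload, metadata, e2eKey, hopKey) {
  const inner = xor(payload, e2eKey); // only the endpoints hold e2eKey
  return { metadata, payload: xor(inner, hopKey) };
}

// What the SFU sees after removing its hop layer: readable metadata,
// still-encrypted payload.
function sfuView(frame, hopKey) {
  return { metadata: frame.metadata, payload: xor(frame.payload, hopKey) };
}
```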

> On 22/06/2018 4:55, youenn fablet wrote:
>> Use case 2 has a somewhat wider scope and a limited complexity. It should first be proved that opaque streams would actually be deployed, as they can cause potential user experience issues. For instance, in multi-party video conference scenarios, it is desirable to update the UI based on who is speaking, silence detection might help improve audio quality, a microphone level meter is often available…
> IMHO, this is an issue we should try to solve, even without e2ee. Having to use webaudio to process all the samples to get the audio level of a track doesn't seem like a good way to go. It is such a common use case that we should provide an API on the MediaTrack for that.
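For reference, the webaudio workaround mentioned above boils down to pulling raw samples (e.g. via AnalyserNode.getFloatTimeDomainData) and crunching a level per frame; the per-frame arithmetic that a track-level API could expose directly looks roughly like this (function name is hypothetical):

```javascript
// Hypothetical helper: RMS level of one frame of PCM samples in [-1, 1],
// such as AnalyserNode.getFloatTimeDomainData() would fill in. Doing this
// continuously in script is the cost the mail argues against.
function rmsLevel(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}
```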

It seems we agree that isolated streams cannot be deployed as is in SFU-based applications.
At a minimum, it is missing:
1. end-to-end encryption
2. some restricted access to the content so that applications can provide a proper user experience

Item 2 is particularly important to evaluate, since there is a contradiction between the desire for a proper user experience and the desire for a secure user experience.
It is unclear to me whether this contradiction can be solved.
Received on Saturday, 23 June 2018 00:34:25 UTC
