- From: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
- Date: Fri, 22 Jun 2018 11:26:51 +0200
- To: youenn fablet <yfablet@apple.com>, WebRTC WG <public-webrtc@w3.org>
- Cc: Alexandre GOUAILLARD <agouaillard@gmail.com>
On 22/06/2018 4:55, youenn fablet wrote:

> Use case 2 has a somewhat wider scope and a limited complexity. It should first be proved that opaque streams would be actually deployed as it can cause potential user experience issues. For instance, in multi-party video conference scenarios, it is desirable to update the UI based on who is speaking, silence detection might help improve audio quality, a microphone level meter is often available…

IMHO, this is an issue we should try to solve even without e2ee. Having to use WebAudio to process all the samples just to get the audio level of a track doesn't seem like a good way to go. It is such a common use case that we should provide an API on the MediaTrack for that.

Best regards

Sergio
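For reference, the WebAudio workaround alluded to above looks roughly like the following sketch: the track is routed through an AnalyserNode and the level is computed by hand from raw samples. The `rms()` helper is illustrative, not part of any spec; the commented browser-side wiring assumes a `stream` obtained from `getUserMedia`.

```javascript
// Browser-side wiring (sketch, assuming `stream` is a MediaStream
// from getUserMedia):
//
//   const ctx = new AudioContext();
//   const source = ctx.createMediaStreamSource(stream);
//   const analyser = ctx.createAnalyser();
//   source.connect(analyser);
//   const buf = new Float32Array(analyser.fftSize);
//   analyser.getFloatTimeDomainData(buf);
//   const level = rms(buf);   // drive the meter UI from this
//
// The only part that runs outside a browser is the level computation
// itself: root-mean-square of the time-domain samples.
function rms(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}
```

Polling this on a timer for every remote track in a multi-party call is exactly the per-sample processing burden the reply argues a direct audio-level API on the track would avoid.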
Received on Friday, 22 June 2018 09:26:09 UTC