- From: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
- Date: Thu, 29 Nov 2018 09:38:49 +0100
- To: public-webrtc@w3.org
On 29/11/2018 8:49, Bernard Aboba wrote:
> On Nov 23, 2018, at 07:59, Alexandre GOUAILLARD <agouaillard@gmail.com> wrote:
>> - In any case, unless the streams are isolated, the application already has access to the media. I believe this is doable today for audio, since Web Audio can pipe in a media track.
> [BA] Yes, it is already possible to gain access to raw video (e.g. via canvas) as well as audio and do lots of things with it, including machine learning, encode/decode and encrypt/decrypt. Some of the machine learning demos (e.g. pose estimation) can even run at less than glacial speed on a high-end laptop.
>
> So one could argue that several of the use cases, such as e2e and funny hats, are less about new functionality than about better performance.

I agree. Even if we removed the e2ee use case from the document, it would still be trivial to implement with QUIC, given the other use cases for NV (and even with RTP, if we go low-level enough in the APIs).

Best regards
Sergio
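[Editor's sketch, not part of the thread: a rough TypeScript illustration of the raw-media access the thread describes, i.e. pulling video pixels through a canvas and audio samples through Web Audio from a getUserMedia stream. The function name and element handling are hypothetical; ScriptProcessorNode is used for brevity even though it is deprecated in favour of AudioWorklet.]

```typescript
// Sketch: tapping raw video and audio from a MediaStream with today's APIs.
// Assumes camera/microphone permission; names are illustrative only.
async function tapRawMedia(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // Raw video: play the stream in a <video> element, copy frames to a canvas,
  // and read the RGBA pixels back with getImageData().
  const video = document.createElement("video");
  video.srcObject = stream;
  video.muted = true;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  function grabFrame(): void {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // pixels.data is the raw RGBA buffer: feed it to ML, encryption, re-encoding, etc.
    requestAnimationFrame(grabFrame);
  }
  requestAnimationFrame(grabFrame);

  // Raw audio: pipe the MediaStream into Web Audio and process PCM samples.
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);
  const processor = audioCtx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    const samples = event.inputBuffer.getChannelData(0);
    // samples is raw PCM: analyze, encrypt, or re-encode as needed.
  };
  source.connect(processor);
  processor.connect(audioCtx.destination);
}
```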
Received on Thursday, 29 November 2018 08:35:32 UTC