Re: [minutes] October 14 2021

Thanks for doing this - I'd like to summarise my sense of the meeting whilst it is still fresh in my mind.

This is with my Web developer hat on.

It is pretty clear to me that the streams API is a better fit than callbacks from a dev PoV - it is the natural choice.
But it is also clear that some of the streams metaphors don't quite map onto realtime (e.g. memory management/close(), tee()/clone()).
Both active proposals are streams with added realtime semantics. From a dev PoV they are probably largely equivalent.
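
To make that concrete, here is a rough sketch of the streams shape as it looks in the current Chrome implementation of mediacapture-transform (MediaStreamTrackProcessor / MediaStreamTrackGenerator - names and details may still change, so treat this as illustrative only, not as either proposal verbatim):

    const [videoTrack] =
      (await navigator.mediaDevices.getUserMedia({ video: true })).getVideoTracks();
    const processor = new MediaStreamTrackProcessor({ track: videoTrack });
    const generator = new MediaStreamTrackGenerator({ kind: 'video' });

    const transformer = new TransformStream({
      transform(frame, controller) {
        // ... per-frame processing on the VideoFrame goes here ...
        controller.enqueue(frame);
        // Any frame you drop instead of enqueuing must be close()d,
        // or the camera's frame pool runs dry - this is the memory
        // management wrinkle mentioned above.
      }
    });

    processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
    const processedStream = new MediaStream([generator]);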

The sharp points of disagreement were:
1) should the API be usable from the main thread?
2) should the API cover audio from day 1?

From a high-level (Dev) point of view these questions resolve to the same thing:

"Should there be 2 ways to do this ?”

Should a Dev have to decide between a Web Audio worklet and a MediaStream transform?
(The answer is probably going to be that they will have to implement both behind a polyfill, because
the browsers won't implement both APIs with the same quality - we haven't even got Web Audio working consistently across platforms.)

Should a Dev have to decide whether this is better done on the main thread or in a worker?
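
For reference, and again assuming the Chrome implementation (where these streams are transferable), the worker hand-off is roughly just moving the pipeline from the sketch above into a worker:

    // main thread: hand both ends of the pipeline to a worker
    const worker = new Worker('transform-worker.js');
    worker.postMessage(
      { readable: processor.readable, writable: generator.writable },
      [processor.readable, generator.writable]   // transfer, don't copy
    );

    // transform-worker.js: the per-frame work never touches the main thread
    onmessage = ({ data: { readable, writable } }) => {
      readable
        .pipeThrough(new TransformStream({
          transform(frame, controller) { controller.enqueue(frame); }
        }))
        .pipeTo(writable);
    };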

My view is that the first version of this standard should not include audio.
It should also not mandate main-thread support. (If implementations happen to work on the main thread, that's their choice.)

It may be that we subsequently decide to add audio and main-thread support to the spec, but then we end up supporting audio processing on the main thread - and we have tried that before, and it wasn't good.

Tim.



> On 14. Oct 2021, at 19:08, Dominique Hazael-Massieux <dom@w3.org> wrote:
> 
> Hi,
> 
> The minutes of our call today are available at:
>  https://www.w3.org/2021/10/14-webrtc-minutes.html
> (can you spot the new experimental feature of the IRC-based minutes
> generator?)
