Re: [mediacapture-transform] Out-of-main-thread processing by default (#23)

> > > Audio worklet is one example.
> 
> I think this is a perfect example, and a strong corollary to video:
> 
> * [ScriptProcessorNode](https://webaudio.github.io/web-audio-api/#ScriptProcessorNode) exposed audio data on the main-thread
> * Everyone agreed this was a terrible mistake, and deprecated it, slating it for removal.
> * Interesting trivia: Its immediate replacement was [initially going to be](http://www.devdoc.net/web/developer.mozilla.org/en-US/docs/Web/API/ScriptProcessorNode.html) "Audio Workers"
> * But the Audio WG found this improvement insufficient, further isolating exposure to a highly controlled AudioWorklet environment.
> 
> There's clear and relevant precedent here for controlling exposure of realtime data away from main thread, possibly even away from workers.
>
> > I was referring specifically to APIs exposed on DedicatedWorker, but not on Window.
> 
> @guidou Is the argument we shouldn't expose methods to workers without also exposing them to main-thread, because that would be unexpected somehow? What's the worry there exactly? If symmetry is of concern, we can do VideoWorklet. 😉
> 

I'm saying that going against the established pattern of exposing in both places requires a stronger argument than "the user might do something wrong in certain circumstances", which is the only argument given so far. That could be said of basically any API, including the audio worklet.
The comparison with ScriptProcessorNode is not apt, since it could run only on the main thread and was intended for applications that might require a real-time thread. Neither of those two things is true for the streams-based proposal.
If you think the way to go is a VideoWorklet, then I'm very interested in seeing more details about that proposal.
My impression so far is that introducing a VideoWorklet would be more complex than streams, since streams are already specified, implemented, and proven in production. The benefits of a VideoWorklet are unclear to me, but a more concrete proposal might make them clearer. One advantage worklets can provide is the ability to run on any thread, including real-time threads, but that is not needed for our intended use cases.

> > How would a worklet support the use cases that require generating a new track in JS?
> > In particular, the use case where you want to generate a new track based on the contents of two (or more) existing tracks (i.e., the weather report use case).
> 
> It can be done with audio today, so I don't think whether it is possible or not is the discussion.
> 

I was asking just in case you had something concrete in mind.

> Instead, worker vs. worklet I think comes down to what benefits there may be from controlling the environment of exposure.
> 

I'm not very familiar with the history of the audio worklet, but I think a stronger argument than controlled exposure is the ability to run on a real-time thread, which you can't do with workers. A real-time thread is ideal for very low-latency, relatively low-CPU workloads, which are common among audio applications that need to render audio locally. That is not needed for the use cases we intend to support (it would even be a negative in some cases).

> I don't know what those are atm, but if we plan to expose `VideoFrame`s from a GPU buffer pool [w3c/webcodecs#83](https://github.com/w3c/webcodecs/issues/83) we might wish we had some control, so JS failing to close them quickly doesn't stall camera capture or WebGL in the browser.

We have control. The application can run processing on a worker if the main thread is a concern. What other controls do you envision?
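For concreteness, here is a minimal sketch of what I mean by running the processing on a worker with the streams-based API. It assumes `MediaStreamTrackProcessor`/`MediaStreamTrackGenerator` from this proposal plus standard transferable streams; the function names and the `worker.js` split are illustrative, not part of any spec.

```javascript
// main.js (illustrative): hand the track's frame streams off to a worker.
function startOffMainThreadProcessing(videoTrack, worker) {
  // Exposes the track's frames as a ReadableStream of VideoFrames.
  const processor = new MediaStreamTrackProcessor({ track: videoTrack });
  // A new video track whose frames are supplied via a WritableStream.
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });

  // Transfer both stream ends; all per-frame work now happens in the worker.
  worker.postMessage(
    { readable: processor.readable, writable: generator.writable },
    [processor.readable, generator.writable]
  );

  // A regular MediaStreamTrack, usable in a PeerConnection, <video>, etc.
  return generator;
}

// worker.js (illustrative): per-frame processing entirely off the main thread.
function handleStreams({ readable, writable }) {
  const transform = new TransformStream({
    async transform(frame, controller) {
      // ...process the VideoFrame here...
      controller.enqueue(frame); // close() any frames you drop instead
    }
  });
  readable.pipeThrough(transform).pipeTo(writable);
}
// self.onmessage = (event) => handleStreams(event.data);
```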


-- 
GitHub Notification of comment by guidou
Please view or discuss this issue at https://github.com/w3c/mediacapture-transform/issues/23#issuecomment-842712719 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Monday, 17 May 2021 23:36:48 UTC