Re: Audio Workers - please review

On Fri, Sep 12, 2014 at 5:04 PM, Joseph Berkovitz <joe@noteflight.com>
wrote:

> I suggest that we approach this issue not from the standpoint of what we
> should do now in the API, or even soon — rather, the question is, should we
> adopt a stance that rules out what may be a useful approach in the future,
> whose feasibility might be open to doubt today and later become very clear.
> I propose that the API avoid a stance that implicit parallelization is to
> be avoided for all time, and avoid being skewed in favor of explicit
> parallelization in a permanent fashion.
>

If you are saying the Web Audio system should reserve the right to insert
latency, arbitrarily from the developer's perspective, at various points in
the node graph in order to parallelize, then I feel we're past the point
where that behavior can be left undefined.  My vocoder demo, for example,
would get pretty messed up if some of the graph got moved onto another
thread with added latency, and some didn't.
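
To make that concrete, here's a minimal sketch of a single vocoder band
using standard Web Audio nodes (the function name and the precomputed
rectifier curve are hypothetical, but the structure is the usual wiring).
The carrier and modulator branches must stay sample-aligned, because the
modulator's envelope gates the carrier band directly:

    function makeBand(ctx, carrier, modulator, freq) {
      // Matching bandpass filters on the carrier and the modulator.
      var carrierBand = ctx.createBiquadFilter();
      carrierBand.type = "bandpass";
      carrierBand.frequency.value = freq;
      var modBand = ctx.createBiquadFilter();
      modBand.type = "bandpass";
      modBand.frequency.value = freq;

      // Envelope follower: rectify the modulator band, then low-pass it.
      var rectifier = ctx.createWaveShaper();
      rectifier.curve = rectifierCurve;     // precomputed abs() curve (assumed)
      var follower = ctx.createBiquadFilter();
      follower.type = "lowpass";
      follower.frequency.value = 50;

      // The carrier band's level is driven by the modulator's envelope.
      var vca = ctx.createGain();
      vca.gain.value = 0;

      carrier.connect(carrierBand);
      carrierBand.connect(vca);
      modulator.connect(modBand);
      modBand.connect(rectifier);
      rectifier.connect(follower);
      follower.connect(vca.gain);           // audio-rate connection to an AudioParam

      return vca;
    }

If the UA moved, say, the follower chain onto another thread and buffered a
quantum of latency there, the envelope would gate the wrong carrier samples
in every band.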

If you are saying we want to reserve the possibility of parallelizing
transparently, in cases where the implementation can work ahead to avoid
adding latency to parts of the graph, then I don't have a problem with
that; I'm just highly skeptical it's worthwhile.  (The vocoder, for
example, probably couldn't be broken apart.)

As I said previously, I think parallelization should be possible for
developers to implement themselves if they wish, inserting latency
intelligently where they know it is acceptable.
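
A hedged sketch of what I mean (ctx, source, and the effect-chain
endpoints here are hypothetical): a developer who chooses to run a heavy
chain in parallel, at a known latency cost, can delay the dry path by the
same amount so the mix stays aligned:

    var COMPENSATION = 0.005;                 // assumed: the chain's known 5ms latency

    var dryDelay = ctx.createDelay(1.0);
    dryDelay.delayTime.value = COMPENSATION;  // delay the dry path to match

    source.connect(effectChainInput);         // the chain the developer chose to
    source.connect(dryDelay);                 // parallelize, with known latency

    effectChainOutput.connect(ctx.destination);
    dryDelay.connect(ctx.destination);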


> We need to leave room for flexibility and avoid reaching premature
> conclusions. Mostly this just means avoiding global scopes that rule out
> parallelism, and avoiding overly specific definitions of behavior in the
> spec.
>

I agree that we should not have global scopes (i.e. nodes sharing things
they don't need to share); I disagree about avoiding specific definitions
of behavior in the spec.  The spec needs to be much MORE specific than it
is today.


> I know for a fact that some native DAWs do arbitrary parallelization as a
> matter of course in isolated linear effect chains, and that it does not
> incur an unacceptable latency cost.
>

But that's not arbitrary (nor "automatic" in the sense that I mean).
They're inserting parallelization at specific points (isolated effect
chains), and the developer is choosing to do it.  That's exactly the kind
of parallelization I want to make sure we have enabled.  What I'm skeptical
of is something more akin to "Core Audio may now arbitrarily put 20ms of
latency into random nodes."


> I think we all know a couple of pro audio app builders, but perhaps not
> the same ones :-)  So automatic parallelization is done already outside the
> web, and it’s apparently considered quite a good idea in at least some
> contexts. Don’t UAs already parallelize lots of activity on the user’s
> behalf without exposing it?
>

Some of it, when it can be parallelized without side effects, but largely
not.  That's why the main thread is so congested.  :)


> Also, the cost of graph analysis will drop over time. I don’t see offhand
> why the latency it adds is necessarily of a showstopper variety. You said,
> "But I think inserting latency at ANY point in the graph connections, NOT
> at the explicit request of the developer, is a bad idea.” However, if
> inserting latency at some point in a subgraph decreases overall latency in
> the graph as a whole… I don’t know, that seems like a pretty good thing to
> me, in theory.
>

But that's not what we're doing.  There isn't "overall latency in the
graph": the graph's latency is zero (the latency in Web Audio is at the
input and the output).  The tradeoff here would be latency for CPU
bandwidth: balancing the work across multiple cores, and therefore lowering
the likelihood of glitching, at the cost of added latency.
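
Rough numbers, assuming each thread hop buffers one 128-sample render
quantum at 44.1kHz:

    var sampleRate = 44100;
    var quantum = 128;                        // samples buffered per thread hop
    var msPerHop = (quantum / sampleRate) * 1000;
    console.log(msPerHop.toFixed(2));         // ~2.90ms of added latency per hop

That ~3ms per hop is the price paid for the extra CPU headroom, and it
compounds with every boundary the graph is split across.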


> But perhaps we don’t have to prove it either way right now :-)
>

If you plan on changing the API at some point so that it inserts latency
into the node graph, I absolutely think that behavior needs to be carefully
defined and predictable.
