
Re: Audio Workers - please review

From: Joseph Berkovitz <joe@noteflight.com>
Date: Fri, 12 Sep 2014 11:04:52 -0400
Cc: Robert O'Callahan <robert@ocallahan.org>, Ehsan Akhgari <ehsan@mozilla.com>, Jussi Kalliokoski <jussi.kalliokoski@gmail.com>, "public-audio@w3.org" <public-audio@w3.org>
Message-Id: <4F84F1B8-B93B-4307-A318-73A4A2B144F2@noteflight.com>
To: Chris Wilson <cwilso@google.com>

I suggest we approach this issue not from the standpoint of what we should do in the API now, or even soon. Rather, the question is: should we adopt a stance that rules out what may be a useful approach in the future, one whose feasibility is open to doubt today but may later become very clear? I propose that the API avoid declaring, for all time, that implicit parallelization is off the table, and avoid being permanently skewed in favor of explicit parallelization. We need to leave room for flexibility and avoid reaching premature conclusions. Mostly this just means avoiding global scopes that rule out parallelism, and avoiding overly specific definitions of behavior in the spec.

I know for a fact that some native DAWs do arbitrary parallelization as a matter of course in isolated linear effect chains, and that it does not incur an unacceptable latency cost. I think we all know a couple of pro audio app builders, but perhaps not the same ones :-)  So automatic parallelization is done already outside the web, and it’s apparently considered quite a good idea in at least some contexts. Don’t UAs already parallelize lots of activity on the user’s behalf without exposing it?
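(As a purely illustrative aside: the precondition both sides seem to agree on for the isolated-chain case is that the candidate subgraphs share no connections. Treating the node graph as undirected and taking its connected components is one simple way to identify such isolated chains. The node names and edge-list representation below are invented for the sketch; nothing here is from the Web Audio API itself.)

```python
from collections import defaultdict

def independent_chains(edges):
    """Group nodes into connected components: each isolated component is a
    candidate for rendering on its own thread."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(sorted(comp))
    return components

# Two isolated linear effect chains, each a candidate for its own thread.
edges = [("osc1", "filter1"), ("filter1", "gain1"),
         ("osc2", "delay2"), ("delay2", "gain2")]
print(independent_chains(edges))
# → [['filter1', 'gain1', 'osc1'], ['delay2', 'gain2', 'osc2']]
```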

Also, the cost of graph analysis will drop over time. I don't see offhand why the latency it adds is necessarily of a showstopper variety. You said, "But I think inserting latency at ANY point in the graph connections, NOT at the explicit request of the developer, is a bad idea." However, if inserting latency at some point in a subgraph decreases overall latency in the graph as a whole… I don't know, that seems like a pretty good thing to me, in theory.
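(To make the tradeoff concrete with a back-of-envelope model, with all numbers invented for illustration: at a 128-frame render quantum and 48 kHz, each audio callback has a real-time budget of 128/48000 s. If two effect chains each cost 1.8 ms of CPU per quantum, rendering them serially on one thread overruns that budget, while pipelining them onto two threads means each callback pays for only one chain, at the price of one extra quantum of delay inside that subgraph.)

```python
QUANTUM_FRAMES = 128
SAMPLE_RATE = 48_000
budget_ms = QUANTUM_FRAMES / SAMPLE_RATE * 1000   # ~2.67 ms per callback

chain_cost_ms = 1.8                               # hypothetical per-chain cost
serial_cost_ms = 2 * chain_cost_ms                # 3.6 ms: deadline missed
pipelined_cost_ms = chain_cost_ms                 # 1.8 ms: deadline met
pipeline_latency_ms = budget_ms                   # one quantum of added delay

print(f"budget {budget_ms:.2f} ms: "
      f"serial {'misses' if serial_cost_ms > budget_ms else 'meets'} it, "
      f"pipelined {'meets' if pipelined_cost_ms <= budget_ms else 'misses'} it "
      f"(+{pipeline_latency_ms:.2f} ms latency in the subgraph)")
```

The point of the arithmetic is only that a localized latency insertion can buy back real-time feasibility for the graph as a whole; whether that trade is acceptable is exactly what's at issue in this thread.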

But perhaps we don’t have to prove it either way right now :-)


On Sep 12, 2014, at 9:30 AM, Chris Wilson <cwilso@google.com> wrote:

> I'm probably going to sound like a jerk in this response, so I'll apologize in advance; I've thought through parallelism scenarios a lot as a prelude to the audio worker proposal, and been left with a very strong "best left to the developer making informed decisions" feeling.
> Are you presuming that the system would auto-analyze to find optimal subgraphs, and then the user just gets latency added to that sub-graph, or you would try to "work ahead" in the subtree to make up for the inherent inter-thread communication latency?  How frequently would you analyze to make that decision and would you inform the developer somehow?  Would you let them override?  Would any streaming (live input, media element sources) disable parallelism, then?
> I get that it's possible to analyze a subgraph and decide it's 1) computationally expensive and 2) not interconnected, and thus might be a good candidate.  But I think inserting latency at ANY point in the graph connections, NOT at the explicit request of the developer, is a bad idea.  If you inserted a 50ms delay in one side of a graph that's supposed to play at the same time as another subgraph that isn't moved into another thread, that is going to have bad side effects.  And given the standard shape of most mixer graphs (e.g. the standard channel strip/sends model), the computationally expensive bits ARE going to be interconnected.  
> If you say the point here is to enable automatic parallelism with zero additional latency (by only parallelizing in situations where you can work ahead in time and get it to align properly), then I think that's possible, but a big challenge, relatively low return on investment, and much lower priority than 99% of the things I feel we have left on our plate.
> If you're implying auto-parallelization that introduces observable latency, I think that is a bad idea.
> On Fri, Sep 12, 2014 at 3:11 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
> On Sat, Sep 13, 2014 at 1:02 AM, Chris Wilson <cwilso@google.com> wrote:
> 1) I do not believe arbitrary parallelization of the graph is a good idea. It will be moderately difficult to examine the graph to decide that it's "okay" to parallelize (i.e. that there are no interconnections or other dependencies), and far worse, the process will needfully insert latency into the graph.  I believe if authors think they should parallelize their app, they can do that (with workers and inserting their own latency) - I've discussed this with a couple of pro audio app builders, and they were comfortable with this.  (that is to say: you don't get arbitrary parallelization as optimization in any other system I know of.  It has side effects.)
> A parallelism analysis could be done in, er, parallel with the actual audio processing.
> I'm reasonably optimistic that auto-parallelization of Web Audio graphs could pay off.
> Rob
> -- 
> "I tell you that anyone who is angry with a brother or sister will be
> subject to judgment. Again, anyone who says to a brother or sister,
> 'Raca,' is answerable to the court. And anyone who says, 'You fool!'
> will be in danger of the fire of hell."

.            .       .    .  . ...Joe

Joe Berkovitz

Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271
"Your music, everywhere"
Received on Friday, 12 September 2014 15:05:34 UTC
