
Re: Audio Workers - please review

From: Chris Wilson <cwilso@google.com>
Date: Fri, 12 Sep 2014 15:02:04 +0200
Message-ID: <CAJK2wqWNUhKu_cR5BDR=qxktVnFB+UScKU9g5zskjqbf4oMaQw@mail.gmail.com>
To: Ehsan Akhgari <ehsan@mozilla.com>
Cc: Joseph Berkovitz <joe@noteflight.com>, Jussi Kalliokoski <jussi.kalliokoski@gmail.com>, "public-audio@w3.org" <public-audio@w3.org>
I do want to be clear, by the way - sorry, I'm travelling, and somewhat
distracted - on two points:

1) I do not believe arbitrary parallelization of the graph is a good idea.
It will be moderately difficult to examine the graph to decide that it's
"okay" to parallelize (i.e. that there are no interconnections or other
dependencies), and far worse, the process will necessarily insert latency
into the graph.  I believe that if authors think they should parallelize
their app, they can do so themselves (with workers, inserting their own
latency) - I've discussed this with a couple of pro audio app builders, and
they were comfortable with it.  (That is to say: you don't get arbitrary
parallelization as an optimization in any other system I know of.  It has
side effects.)

2) I'm not optimistic about trying to batch nodes together just in order to
save some instantiation cost.  I'd far rather run this by the TAG and
script-coord first, because I feel that's not the right optimization.

On Fri, Sep 12, 2014 at 2:52 PM, Chris Wilson <cwilso@google.com> wrote:

> On Thu, Sep 11, 2014 at 9:22 PM, Ehsan Akhgari <ehsan@mozilla.com> wrote:
>
>> On Thu, Sep 11, 2014 at 12:46 PM, Joseph Berkovitz <joe@noteflight.com>
>> wrote:
>>
>>> main thread set custom wave shapes, read analyzer data, etc. etc.)  It
>>>> seems as though if the goal is to be able to implement native nodes with
>>>> scripted nodes, this would be necessary, yes?
>>>>
>>> Is that really the goal here?  I mean, I agree that it would be nice to
>> be able to implement other nodes on top of worker nodes in JS, but if that
>> is really the goal, there are easier ways of achieving it.  As food for
>> thought, all that one needs to implement the *entire* Web Audio API in JS
>> is a way to schedule the playback of an array of audio samples at a
>> specific time from a Web Worker.  But I'm much more interested in solving
>> the problem of allowing efficient and low latency audio synthesis through
>> JS on top of Web Audio right now.
>>
>
> To be clear - yes, this is the goal.  Not to reimplement ALL of the Web
> Audio system at once, but to build a clearly layered system, in Extensible
> Web Manifesto fashion - and yes, at the bottom, there needs to be a system
> for the audio output device to continually get a stream of sample blocks
> from "the Web Audio system".  But just as importantly, I want to enable
> efficient, ZERO-latency audio processing nodes, in JS, to enable
> experimentation and development of new node types, and that basically means
> you should be able to implement native nodes with script.
>
>
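The bottom-layer primitive Ehsan describes - "schedule the playback of an array of audio samples at a specific time" - can be sketched as a toy sample-accurate mixer over fixed-size blocks. Everything below is invented for illustration (no part of this API exists in any spec):

```javascript
// Illustrative sketch: mix scheduled sample arrays into fixed-size output
// blocks at sample-accurate offsets. This is the kind of primitive on top of
// which the rest of a graph could, in principle, be layered in JS.
const BLOCK = 128;

function makeScheduler(sampleRate) {
  const events = []; // { startSample, samples }
  let cursor = 0;    // absolute sample index of the next block to render
  return {
    schedule(samples, startTime) {
      events.push({ startSample: Math.round(startTime * sampleRate), samples });
    },
    renderBlock() {
      const out = new Float32Array(BLOCK);
      for (const ev of events) {
        // Overlap of [startSample, startSample + length) with this block.
        const begin = Math.max(ev.startSample, cursor);
        const end = Math.min(ev.startSample + ev.samples.length, cursor + BLOCK);
        for (let i = begin; i < end; i++) {
          out[i - cursor] += ev.samples[i - ev.startSample];
        }
      }
      cursor += BLOCK;
      return out;
    },
  };
}

// Schedule a 4-sample burst to start exactly at absolute sample 130.
const s = makeScheduler(1); // sampleRate of 1 so startTime == sample index
s.schedule(Float32Array.from([1, 1, 1, 1]), 130);
const block0 = s.renderBlock(); // samples 0..127: silence
const block1 = s.renderBlock(); // samples 128..255: burst lands at offset 2
console.log(block0[0], block1[2], block1[6]); // 0 1 0
```

The real low-latency version of this would run the render loop off the main thread (the point of Audio Workers), but the scheduling and mixing logic is the same.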
Received on Friday, 12 September 2014 13:02:36 UTC