W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2014

Re: Audio Workers - please review

From: Chris Wilson <cwilso@google.com>
Date: Thu, 28 Aug 2014 08:38:50 -0700
Message-ID: <CAJK2wqUM86TdmzXGy8XCdCKB--ZgY=poiW_Fq6fjXiFw6q-j7w@mail.gmail.com>
To: Norbert Schnell <Norbert.Schnell@ircam.fr>
Cc: "public-audio@w3.org" <public-audio@w3.org>
On Thu, Aug 28, 2014 at 7:16 AM, Norbert Schnell <Norbert.Schnell@ircam.fr> wrote:

> I am not sure if I followed all discussions that led to the "worker"-name
> for the node, but from my point of view the name is slightly misleading.
> Wouldn't something like "SynchronousScriptProcessorNode" be more
> appropriate?

Not particularly.  The major difference in programming AudioWorkers (from
ScriptProcessors) is that the code lives in a Worker - and thus all
communication needs to go through the messaging interface.  The current
design of ScriptProcessor is technically a synchronous programming
interface as it is exposed to the developer - there's just an async "leave
plenty of latency and pray we get a response back before we glitch" layer
underneath that, with some developer control of the amount of latency.  To
the developer, however, it still feels synchronous.  (you're called with
buffers, you do your work, you return.)
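Roughly, the shape a developer sees is the following. (A minimal sketch: the gain math is just a stand-in for real DSP, and the browser wiring, shown in comments, needs a live AudioContext.)

```javascript
// The synchronous-feeling ScriptProcessor model: you are handed input
// buffers, you fill output buffers, you return. The per-sample work is
// kept in a pure function so the shape is clear on its own.
function processBlock(input, output, gain) {
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] * gain; // trivial stand-in for real DSP
  }
}

// In a page this pairs with the node roughly like so (needs a browser):
//
//   const node = audioCtx.createScriptProcessor(1024, 1, 1);
//   node.onaudioprocess = (e) => {
//     processBlock(e.inputBuffer.getChannelData(0),
//                  e.outputBuffer.getChannelData(0),
//                  0.5);
//   };
```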

> … or ultimately: Is it really not possible to formalise all this as an
> option of the "ScriptProcessorNode" (given at creation)?
> It already has the right name and the same "onaudioprocess" event.

We could share the top-level name; however, the programming model for each
is rather different in terms of communicating parameters, analysis and
events; and furthermore, as previously discussed, I would really like to
remove the current ScriptProcessor entirely.  It's a poor way to do audio
programming, and yet it is so seductively easy that it will be (in fact,
already is) naively used widely.

> It seems to me that, following this path, another future extension of
> the ScriptProcessorNode could be to schedule the "onaudioprocess" events
> neither in the main thread nor the audio thread but in an arbitrary worker
> thread.
> In neither of these cases could the introduced formalisation of audio-rate
> parameters of the proposed AudioWorkerNode hurt.
> Just that we need a "latency" parameter somewhere (why not writeable or at
> least configurable) and/or a separate input and output time.
> Does that make sense? Or am I missing something essential here?

There are a lot of choices there.  Keep in mind that in order to call the
asynchronous onaudioprocess, you have to have the input to hand the call -
inputS, in fact, since AudioParams can contain inputs.  That introduces
latency into each of those: you need to wait to receive a full block
(blocksize) of every input before you can ask for the output, and then you
need to wait for the response to make its way back across threads.
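To make the first half of that concrete: before the async callback can even fire, a whole block of input has to be collected. A tiny sketch of that accumulation (a hypothetical helper, not part of any spec):

```javascript
// Collects incoming audio chunks and fires a callback only once a full
// block is available - this collection is one source of the unavoidable
// latency in the async model; the cross-thread round trip adds more.
class BlockBuffer {
  constructor(blockSize, onBlock) {
    this.blockSize = blockSize;
    this.onBlock = onBlock; // fires only when a whole block is ready
    this.samples = [];
  }
  push(chunk) {
    this.samples.push(...chunk);
    while (this.samples.length >= this.blockSize) {
      this.onBlock(this.samples.splice(0, this.blockSize));
    }
  }
}
```

With, say, a 1024-frame block size and input arriving in small render quanta, at least 1024 frames of delay accrue before the first callback can be invoked at all.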

With the AudioWorker design, I'm confident developers could implement the
current ScriptProcessor design as an AudioWorker - simply by doing the
buffering and async message posting across to the main thread themselves.
I doubt I will get to building that as a sample soon - as I want to build
a couple of other samples first, and I'm a bit swamped as always - but I
have done the mental exercise.
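The mental exercise goes roughly like this (a hedged sketch, not the proposal's normative API - the event-handler names in the comments follow the 2014 AudioWorker draft and are unverified here). The worker ships each input block to the main thread and plays back whatever processed blocks have returned, falling back to silence when the answer hasn't arrived in time - the "glitch" case. The queue logic is pure, so it can be exercised on its own:

```javascript
// Hypothetical helper for emulating ScriptProcessor inside an AudioWorker:
// processed blocks returned from the main thread queue up here, and the
// audio callback drains them - or emits silence when none came back yet.
class LatencyQueue {
  constructor(blockSize) {
    this.blockSize = blockSize;
    this.ready = []; // processed blocks returned from the main thread
  }
  enqueue(block) {
    this.ready.push(block);
  }
  dequeue() {
    // Fall back to silence if the main thread hasn't answered in time.
    return this.ready.length > 0
      ? this.ready.shift()
      : new Float32Array(this.blockSize);
  }
}

// Inside the worker this would pair with the messaging interface, roughly:
//
//   onaudioprocess = (e) => {
//     postMessage({ input: copyOf(e.inputs[0]) }); // ship input out
//     e.outputs[0].set(queue.dequeue());           // play what came back
//   };
//   onmessage = (e) => queue.enqueue(e.data.processed);
```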
Received on Thursday, 28 August 2014 15:39:18 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:50:14 UTC