
Re: Audio Workers - please review

From: Ehsan Akhgari <ehsan@mozilla.com>
Date: Thu, 11 Sep 2014 15:22:47 -0400
Message-ID: <CANTur_5neAJG7ZK8LpovBp9mByTc=WfRT7=OBm98LC_VosW5YA@mail.gmail.com>
To: Joseph Berkovitz <joe@noteflight.com>
Cc: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>, Chris Wilson <cwilso@google.com>, "public-audio@w3.org" <public-audio@w3.org>
On Thu, Sep 11, 2014 at 12:46 PM, Joseph Berkovitz <joe@noteflight.com> wrote:

>> Since the recent inclusion of add/removeParameter in the proposal, did
>> anyone (especially Chris :-) consider whether we still truly need
>> postMessage/onmessage support? If we removed it, it would render moot a lot
>> of arguments about what happens when nodes try to talk to each other and
>> might simplify everything a lot. Internode communication seems to me a way
>> to cause a lot of mistaken assumptions re: synchronicity (see above).
>> AudioParams seem cleaner and more in line with what native nodes do.
> I'd argue otherwise. The native nodes have features such as start(),
> stop(), setting the AudioBuffer of an AudioBufferSourceNode, the custom
> waveshape of an oscillator, reading data from an analyzer, the type of a
> filter, etc., none of which are representable reasonably with AudioParams.
> I understand your argument. But instead of just accepting that this
> implies we must support postMessage,

FWIW I think this just means that we need to have a way to send some kinds
of messages to these nodes after they are created.  Supporting postMessage
as the current spec says, however, is a bad idea IMO for reasons that I
have explained before.
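To make the distinction concrete: what I have in mind is typed, node-specific commands rather than an open-ended postMessage pipe. Here is a hypothetical sketch of the worker-side dispatch (all of the message and handler names are invented for illustration; none of them come from the proposal):

```javascript
// Hypothetical worker-side command dispatch: each message kind maps to a
// known handler, so the node's behavior stays well-defined instead of
// depending on arbitrary postMessage traffic.
function makeCommandHandler(state) {
  const handlers = {
    // e.g. replace the buffer an AudioBufferSourceNode-like node plays
    "set-buffer": (state, msg) => { state.buffer = msg.samples; },
    // e.g. switch a filter type, which no AudioParam can express
    "set-filter-type": (state, msg) => { state.filterType = msg.filterType; },
  };
  return function onCommand(msg) {
    const handler = handlers[msg.type];
    if (!handler) throw new Error("unknown command: " + msg.type);
    handler(state, msg);
  };
}

const state = { buffer: null, filterType: "lowpass" };
const onCommand = makeCommandHandler(state);
onCommand({ type: "set-filter-type", filterType: "highpass" });
```

The point of the closed handler table is that the set of things a node can be asked to do is enumerable and auditable, unlike free-form messages.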

> I’d like to ask a follow-on question: having allowed exposure of
> AudioParams in scripted nodes, do we also want to expose functions and
> attributes (e.g. to let the main thread set custom wave shapes, read
> analyzer data, etc. etc.)  It seems as though if the goal is to be able to
> implement native nodes with scripted nodes, this would be necessary, yes?

Is that really the goal here?  I mean, I agree that it would be nice to be
able to implement other nodes on top of worker nodes in JS, but if that is
really the goal, there are easier ways of achieving it.  As food for
thought, all that one needs to implement the *entire* Web Audio API in JS
is a way to schedule the playback of an array of audio samples at a
specific time from a Web Worker.  But I'm much more interested in solving
the problem of allowing efficient and low latency audio synthesis through
JS on top of Web Audio right now.
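For what it's worth, that primitive already nearly exists on the main thread with the standard AudioContext, AudioBuffer, and AudioBufferSourceNode interfaces; what is missing is access to it from a Web Worker. A minimal sketch (mono samples assumed to arrive as a Float32Array; the worker-side transfer is elided):

```javascript
// Schedule an array of raw samples to play at an absolute context time.
// `samples` is a Float32Array of mono PCM data; `when` is in seconds on
// the AudioContext clock.
function scheduleSamples(ctx, samples, when) {
  const buffer = ctx.createBuffer(1, samples.length, ctx.sampleRate);
  buffer.copyToChannel(samples, 0);
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(when);
  return src;
}

// Helper: how long a block of frames lasts, for picking the next `when`
// so consecutive blocks abut seamlessly.
function blockDuration(frameCount, sampleRate) {
  return frameCount / sampleRate;
}

// Guarded so the sketch also loads outside a browser.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const block = new Float32Array(ctx.sampleRate); // one second of silence
  scheduleSamples(ctx, block, ctx.currentTime + 0.1);
}
```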

> By the way, I think start/stop are understandable solely in terms of a
> node’s lifetime and sequencing of its onaudioprocess calls (granted they
> have nothing to do with AudioParams, but I don’t see why they have to do
> with
> postMessage support either).

I think batching up the updates from the main thread and sending them off
to the audio thread in one go addresses those concerns, unless I'm missing
something.
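A hypothetical sketch of what that batching might look like on the main thread (none of these names are in the proposal; the transport stands in for the worker channel): updates are queued as they happen and flushed as a single message, so the audio thread applies them atomically rather than observing interleaved postMessage calls.

```javascript
// Hypothetical batching helper: collects parameter updates on the main
// thread and flushes them as one message, so the audio thread sees a
// consistent snapshot of all updates at once.
class UpdateBatcher {
  constructor(send) {
    this.send = send;    // e.g. node.postMessage.bind(node) in a browser
    this.pending = [];
  }
  queue(name, value, time) {
    this.pending.push({ name, value, time });
  }
  flush() {
    if (this.pending.length === 0) return 0;
    const batch = this.pending;
    this.pending = [];
    this.send({ type: "param-batch", updates: batch });
    return batch.length;
  }
}

// Usage with a stub transport in place of the worker channel:
const sent = [];
const batcher = new UpdateBatcher(msg => sent.push(msg));
batcher.queue("frequency", 440, 0.0);
batcher.queue("gain", 0.5, 0.0);
batcher.flush(); // one message carrying both updates
```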

Received on Thursday, 11 September 2014 19:24:01 UTC
