
Re: AudioWorker status

From: Paul Adenot <padenot@mozilla.com>
Date: Thu, 24 Sep 2015 09:56:24 +0200
Message-ID: <CANWt0Wp0-bik0gRsjUwC5C4q9yB=bALT_j81fy1FbQ2ZoVfPtg@mail.gmail.com>
To: Chris Wilson <cwilso@google.com>
Cc: Joe Berkovitz <joe@noteflight.com>, Audio Working Group <public-audio@w3.org>

This is already somewhat the case: in Firefox and Chrome (at least; I
haven't checked other implementations), additional threads are used to
compute big convolutions. They are not "rendering threads" per se, and
their use is not observable.

Implicit multi-threaded rendering of OfflineAudioContext graphs is
straightforward (not saying that it's more efficient in every case, but
it is straightforward to implement naively). Multi-threaded rendering of
an AudioContext is trickier. It has been shown to be possible in [0], and
it's been implemented in SuperCollider with explicit control from users
(but that looks like a limitation of SuperCollider).
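
To illustrate why the naive offline case is straightforward (a toy sketch,
not anything from the spec draft; the function name `connectedComponents`
and the node names are made up): subgraphs that share no nodes can be
rendered concurrently, since no ordering between them is observable.

```javascript
// Toy sketch: group nodes into connected components. Each component is an
// independent subgraph that could, in principle, be handed to its own
// rendering thread without any observable difference in output.
function connectedComponents(nodes, edges) {
  const adj = new Map(nodes.map((n) => [n, []]));
  for (const [a, b] of edges) {
    adj.get(a).push(b);
    adj.get(b).push(a);
  }
  const seen = new Set();
  const components = [];
  for (const start of nodes) {
    if (seen.has(start)) continue;
    const component = [];
    const stack = [start];
    seen.add(start);
    while (stack.length) {
      const n = stack.pop();
      component.push(n);
      for (const m of adj.get(n)) {
        if (!seen.has(m)) {
          seen.add(m);
          stack.push(m);
        }
      }
    }
    components.push(component);
  }
  return components;
}

// Two oscillator->gain chains that never connect to each other form two
// independent subgraphs, so they could render on two threads.
const comps = connectedComponents(
  ["osc1", "gain1", "osc2", "gain2"],
  [["osc1", "gain1"], ["osc2", "gain2"]]
);
```

Real graphs (fan-in at the destination, shared buses) make the partition
less clean, which is part of why this is only the naive case.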

That said, I think it complicates the model. I was thinking of having one
control thread and one rendering thread, per the spec (in fact, that's
what I've written in my notes: the control message queue only deals with
one thread), and adding a note for implementors saying that it is possible
to use more than one thread in an implementation, as long as it's not
observable.
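
In queue terms, the model boils down to something like the following (an
illustrative simulation only, not spec text; `ControlMessageQueue`,
`renderQuantum`, and the message shapes are invented for the sketch): the
control (main) thread turns API calls into messages, and the rendering
thread drains the queue at the start of each render quantum before
producing audio.

```javascript
// Toy simulation of a single control-message queue between the control
// thread and the (single) rendering thread.
class ControlMessageQueue {
  constructor() {
    this.messages = [];
  }
  // Control-thread side: an API call becomes a message, not a direct
  // mutation of the rendering graph.
  enqueue(message) {
    this.messages.push(message);
  }
  // Rendering-thread side: take all pending messages at once.
  drain() {
    const pending = this.messages;
    this.messages = [];
    return pending;
  }
}

const queue = new ControlMessageQueue();

// "Control thread": e.g. osc.frequency.value = 880 and osc.connect(dest)
// are expressed as messages.
queue.enqueue({ type: "setParam", node: "osc1", param: "frequency", value: 880 });
queue.enqueue({ type: "connect", from: "osc1", to: "destination" });

// "Rendering thread": apply pending messages, then render one quantum.
function renderQuantum(graphState) {
  for (const msg of queue.drain()) {
    if (msg.type === "setParam") graphState[msg.node][msg.param] = msg.value;
    if (msg.type === "connect") graphState.connections.push([msg.from, msg.to]);
  }
  // ... here the real implementation would produce 128 frames of audio ...
}

const graph = { osc1: { frequency: 440 }, connections: [] };
renderQuantum(graph);
```

Because all graph mutations funnel through the one queue, nothing about
the number of helper threads behind the rendering side is observable from
script.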

And then, when we get bored, we can maybe add more things to the model and
go crazy with parallel subgraphs and explicit multi-threading, but not
now.

Paul.

[0]: https://www.complang.tuwien.ac.at/Diplomarbeiten/blechmann11.pdf

On Wed, Sep 23, 2015 at 6:02 PM, Chris Wilson <cwilso@google.com> wrote:

> I'm all for the approach of defining concepts and vocabulary up front.
> One question, how can there be more than one rendering thread, without
> introducing latency at arbitrary points in the graph?
>
> On Wed, Sep 23, 2015 at 8:53 AM, Paul Adenot <padenot@mozilla.com> wrote:
>
>> Joe,
>>
>> Sorry for the delay. I've started converting my raw notes into spec text,
>> and putting them online for (very early!) review. I've done a
>> first bit today, here:
>> http://padenot.github.io/web-audio-api/#processing-model. For now, I've
>> only converted how main-thread JS API calls send operations to the
>> rendering thread, laying out some vocabulary and concepts.
>>
>> I'm freeing up from Gecko work at the moment and should be full time on
>> this starting next week (or so). My goal is to have something in good shape
>> before TPAC so we can discuss.
>>
>> I plan to leave some TODO items inline so we can discuss those sooner
>> rather than later; if needed, look for the ugly red TODO boxes.
>>
>> Cheers,
>> Paul.
>>
>>
>>
>> On Fri, Sep 11, 2015 at 3:30 PM, Joe Berkovitz <joe@noteflight.com>
>> wrote:
>>
>>> Hi Paul,
>>>
>>> The group is making good progress on resolving small issues. At the
>>> same time, we're not sure where the AudioWorker spec stands, and TPAC is
>>> approaching quickly.
>>>
>>> Are you able to give the group an update on the progress of AudioWorker?
>>>
>>> Thanks so much.
>>>
>>> Best,
>>> .            .       .    .  . ...Joe
>>>
>>> *Joe Berkovitz*
>>> President
>>>
>>> *Noteflight LLC*
>>> 49R Day Street / Somerville, MA 02144 / USA
>>> phone: +1 978 314 6271
>>> www.noteflight.com
>>> "Your music, everywhere"
>>>
>>
>>
>
Received on Thursday, 24 September 2015 07:57:15 UTC
