Re: AudioWorker status

More news on this:
- I've made some progress on the "Processing model" section. I invite
everyone to have a look. It describes the underlying infrastructure we've
been using for a while now.
- I've drafted an early node ordering algorithm (see the sketch after
this list). I think I'm missing a couple of edge cases, though; feedback
welcome.
- I've annotated the sections that have to be synchronous with respect to
script as such (mainly exceptions for now). I'll continue in the next few
days; it's quite a lot of stuff to analyze. Look for a little hourglass
icon.
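
For reference, here is a minimal C++ sketch of one plausible node
ordering: a plain depth-first topological sort over the graph. All names
(AudioNodeRecord, OrderNodes) are illustrative, not spec text, and it
deliberately punts on cycles, which the graph allows when they pass
through a DelayNode and which are probably among the missing edge cases:

    #include <algorithm>
    #include <unordered_set>
    #include <vector>

    struct AudioNodeRecord {
      std::vector<AudioNodeRecord*> outputs;  // nodes this node feeds into
    };

    // Depth-first visit: a node is appended only after everything it
    // feeds into has been appended, so reversing the list at the end
    // puts sources first.
    static void Visit(AudioNodeRecord* node,
                      std::unordered_set<AudioNodeRecord*>& visited,
                      std::vector<AudioNodeRecord*>& ordered) {
      if (!visited.insert(node).second) {
        return;  // already visited (cycles are not handled here)
      }
      for (AudioNodeRecord* next : node->outputs) {
        Visit(next, visited, ordered);
      }
      ordered.push_back(node);
    }

    // Returns the nodes in processing order: each node comes before
    // every node it feeds into.
    std::vector<AudioNodeRecord*> OrderNodes(
        const std::vector<AudioNodeRecord*>& graph) {
      std::unordered_set<AudioNodeRecord*> visited;
      std::vector<AudioNodeRecord*> ordered;
      for (AudioNodeRecord* node : graph) {
        Visit(node, visited, ordered);
      }
      std::reverse(ordered.begin(), ordered.end());
      return ordered;
    }

A complete algorithm would also have to break cycles (only legal when
they go through a DelayNode) and decide what to do with nodes that are
not connected to the destination at all.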

Unrelated to the current effort, but it may be of interest: I was tired
of working with weird indentation and markup, so I reformatted the whole
thing with tidy-html5 (in a separate commit, so we can still review
things properly), and set up a travis-ci job to make sure we don't
regress. I plan to extend it with link checks (we depend on some external
resources) and respec checks. For now, everything is pointing at my fork;
we can change that when we're ready to merge.

The preview page is still at https://padenot.github.io/web-audio-api. The
travis-ci job is linked from the README.md at
https://github.com/padenot/web-audio-api.

I plan to start working on actually running the script in the audio
thread tomorrow or next week. Most of the discussion on the API has
happened, and most of the design work has been done by Chris Wilson; I
just need to slightly change the prose so that it works with the new
infrastructure I'm going to write to run script somewhere other than the
main thread or a worker.

Cheers, and see you soon for the issue resolution session,
Paul.

On Thu, Sep 24, 2015 at 11:38 AM, Stéphane Letz <letz@grame.fr> wrote:

>
> You can perfectly compute an audio DAG without adding any latency. A
> naive implementation would use a global shared stack of ready tasks,
> used by several rendering threads: each thread gets a ready task from
> the shared stack, computes it, and possibly pushes output tasks (that
> is, tasks that become "ready" as a result of the computation of the
> given task) onto the shared stack, starting from the inputs of the DAG
> until the outputs are reached. But having a unique shared stack usually
> causes "contention issues" very rapidly.
>
> In the context of the Faust project, we have experimented with a more
> efficient model based on "work-stealing" approaches, described in the
> following paper:
>
> http://www.grame.fr/ressources/publications/FAUST_LAC2010.pdf
>
> But as Paul said, this is probably not the most urgent thing to work on.
>
> Stéphane Letz
>
>
> On 24 Sep 2015, at 09:56, Paul Adenot <padenot@mozilla.com> wrote:
>
> > This is already somewhat the case in Firefox and Chrome (at least; I
> haven't checked others): additional threads are used to compute big
> convolutions. They are not "rendering threads" per se, and their use is
> not observable.
> >
> > Implicit multi-thread rendering of OfflineAudioContext graphs is
> straightforward (not saying that it's more efficient in every case, but
> straightforward to implement naively). Multi-thread rendering of an
> AudioContext is trickier. It has been shown to be possible in [0], and
> it's been implemented in SuperCollider, with explicit control from users
> (but that looks like a limitation of SuperCollider).
> >
> > That said, I think it complicates the model. I was thinking of having
> one control thread and one rendering thread, per spec (in fact, that's
> what I've written in my notes: the control message queue only deals with
> one thread), and adding a note for implementors saying that it is
> possible to use more than one thread in the implementation, as long as
> it's not observable.
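
For illustration, a minimal C++ sketch of such a single control message
queue, under the usual assumption that the control thread enqueues graph
mutations requested by script and the rendering thread drains them at the
start of each render quantum. The names are illustrative, and a real-time
implementation would likely use a lock-free queue rather than a mutex, so
the rendering thread never blocks on the control thread:

    #include <functional>
    #include <mutex>
    #include <queue>

    // A control message describes a graph mutation (connect, disconnect,
    // parameter change, ...) requested by script on the control thread.
    using ControlMessage = std::function<void()>;

    class ControlMessageQueue {
     public:
      // Called on the control thread by the JS-facing API.
      void Enqueue(ControlMessage msg) {
        std::lock_guard<std::mutex> guard(mLock);
        mQueue.push(std::move(msg));
      }

      // Called on the rendering thread at the start of a render quantum.
      void DrainAndRun() {
        std::queue<ControlMessage> pending;
        {
          std::lock_guard<std::mutex> guard(mLock);
          std::swap(pending, mQueue);  // take everything in one grab
        }
        while (!pending.empty()) {
          pending.front()();
          pending.pop();
        }
      }

     private:
      std::mutex mLock;
      std::queue<ControlMessage> mQueue;
    };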
> >
> > And then, when we get bored, we can maybe add more things to the
> model and go crazy with parallel subgraphs and explicit multi-threading,
> but not now.
> >
> > Paul.
> >
> > [0]: https://www.complang.tuwien.ac.at/Diplomarbeiten/blechmann11.pdf
> >
> > On Wed, Sep 23, 2015 at 6:02 PM, Chris Wilson <cwilso@google.com> wrote:
> > I'm all for the approach of defining concepts and vocabulary up
> front. One question: how can there be more than one rendering thread
> without introducing latency at arbitrary points in the graph?
> >
> > On Wed, Sep 23, 2015 at 8:53 AM, Paul Adenot <padenot@mozilla.com>
> wrote:
> > Joe,
> >
> > Sorry for the delay. I've started converting my raw notes into spec
> text and putting them publicly online for (very early!) review. I've
> done a first bit today, here:
> http://padenot.github.io/web-audio-api/#processing-model. For now, I've
> only converted how main-thread JS API calls send operations to the
> rendering thread, laying out some vocabulary and concepts.
> >
> > I'm freeing up from Gecko work at the moment and should be full time on
> this starting next week (or so). My goal is to have something in good shape
> before TPAC so we can discuss.
> >
> > I plan to leave some TODO items inline so we can discuss those sooner
> rather than later, if needed; look for the ugly red TODO boxes.
> >
> > Cheers,
> > Paul.
> >
> >
> >
> > On Fri, Sep 11, 2015 at 3:30 PM, Joe Berkovitz <joe@noteflight.com>
> wrote:
> > Hi Paul,
> >
> > The group is making good progress on resolving small issues. At the
> same time, we're not sure where the AudioWorker spec stands, and TPAC is
> approaching quickly.
> >
> > Are you able to give the group an update on the progress of AudioWorker?
> >
> > Thanks so much.
> >
> > Best,
> > .            .       .    .  . ...Joe
> >
> > Joe Berkovitz
> > President
> >
> > Noteflight LLC
> > 49R Day Street / Somerville, MA 02144 / USA
> > phone: +1 978 314 6271
> > www.noteflight.com
> > "Your music, everywhere"
> >
> >
> >
>
>

Received on Thursday, 1 October 2015 15:32:08 UTC