
Re: Web Audio working draft comments and questions

From: Chris Rogers <crogers@google.com>
Date: Tue, 20 Mar 2012 12:27:08 -0700
Message-ID: <CA+EzO0ma=DvSkLntyqoHqn=NRc9MQE3Ym=54DXSdY7Jy9+UTow@mail.gmail.com>
To: Per Nyblom <perny843@hotmail.com>
Cc: public-audio@w3.org
On Thu, Mar 15, 2012 at 2:54 PM, Per Nyblom <perny843@hotmail.com> wrote:

> Hi,
> I have some comments and questions for the latest Web Audio working draft
> (15 March 2012).
> The questions are about stuff that I could not find in the specification
> but I think should be in there.
> The AudioBufferSourceNode
> -------------------------
> What happens with the playback state when noteGrainOn() is called?

I need to add these states to the .idl, but here's what happens:
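For reference, a sketch of the playbackState constants as the draft defines them on AudioBufferSourceNode (the numeric values match the draft's IDL; that noteGrainOn() walks through the same states as noteOn() is my assumption):

```javascript
// playbackState constants on AudioBufferSourceNode, per the draft's IDL.
// A node starts in UNSCHEDULED_STATE; noteOn()/noteGrainOn() moves it to
// SCHEDULED_STATE, then PLAYING_STATE while rendering, then FINISHED_STATE.
var AudioBufferSourceNodeStates = {
  UNSCHEDULED_STATE: 0, // initial state, nothing scheduled yet
  SCHEDULED_STATE: 1,   // noteOn()/noteGrainOn() called, start time not reached
  PLAYING_STATE: 2,     // currently producing sound
  FINISHED_STATE: 3     // playback has completed
};
```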


> Can you make several calls to noteGrainOn() or is it removed after the
> first scheduled call?
> If you can make several calls to noteGrainOn(), what happens when the
> noteOn/Offs overlap? Can you view it as adding
> the buffers together or simply as a gate that is opened?

You cannot make several calls to noteOn() or noteGrainOn() on the same node.
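Since each source node is one-shot, the usual pattern is to create a fresh AudioBufferSourceNode per grain. A hypothetical helper (the playGrain name and its parameters are mine, not from the spec):

```javascript
// Hypothetical helper: AudioBufferSourceNode is one-shot, so create a new
// node for every grain rather than calling noteGrainOn() twice on one node.
function playGrain(context, buffer, when, grainOffset, grainDuration) {
  var source = context.createBufferSource(); // fresh node per grain
  source.buffer = buffer;
  source.connect(context.destination);
  source.noteGrainOn(when, grainOffset, grainDuration);
  return source; // keep the reference if you need it later
}
```

Overlapping grains then simply sum at the destination, which matches the "adding the buffers together" reading rather than a gate.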

> The playback state should also be specified more clearly when the loop
> property is changed. If you set the loop property
> to false, does this make the playback state go to FINISHED after the next
> loop iteration is done?

Good point, I should update the spec to explain that .loop may *only* be
modified in the initial UNSCHEDULED_STATE.
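A hypothetical guard sketching that rule (setLoopSafely is my own name, not part of the API):

```javascript
// Hypothetical guard: per the reply above, .loop may only be changed while
// the node is still in its initial UNSCHEDULED_STATE.
function setLoopSafely(source, loop) {
  if (source.playbackState === source.UNSCHEDULED_STATE) {
    source.loop = loop;
    return true;
  }
  return false; // already scheduled or playing; too late to change .loop
}
```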

> The AudioParam Interface
> ------------------------
> Is it the exponential with base 10 that is used?

As Raymond explained in a previous response, the base doesn't matter.
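To see why: the exponentialRampToValueAtTime() curve is v(t) = v0 * (v1/v0)^((t - t0)/(t1 - t0)), and rewriting it with base 10 (or e) only rescales the exponent, leaving the curve unchanged. A sketch (function names are mine):

```javascript
// Exponential ramp from v0 at t0 to v1 at t1, written two ways; the curves
// are identical because changing the base just rescales the exponent.
function expRamp(v0, v1, t0, t1, t) {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}
function expRampBase10(v0, v1, t0, t1, t) {
  var k = Math.log(v1 / v0) / Math.LN10; // log10(v1 / v0)
  return v0 * Math.pow(10, k * (t - t0) / (t1 - t0));
}
```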

> What exactly is the timeConstant in setTargetValueAtTime()? It would be
> great to have a formula like 10^(-timeConstant * t) or something.

I should update the spec to be more clear about this, but "time constant"
has its standard scientific meaning: it is the time it takes a first-order
linear system to move 1 - 1/e (roughly 63.2%) of the way from its starting
value to its target value.
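In formula terms (my notation, not the spec's): starting from value v0 at t = 0 and heading toward target, the parameter follows v(t) = target + (v0 - target) * e^(-t / timeConstant). A sketch:

```javascript
// Exponential approach to a target value, as in setTargetValueAtTime():
// after one timeConstant the value has covered 1 - 1/e (~63.2%) of the gap.
function targetValueAt(v0, target, t, timeConstant) {
  return target + (v0 - target) * Math.exp(-t / timeConstant);
}
```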

> Dynamic Lifetime
> ----------------
> What if a very long filter chain is set up that you want to reuse, and then
> you add a one-shot sound and call noteOn()?
> When the sound stops, the complete filter chain must be set up again and
> this seems a bit wasteful. A better way might be
> to have a flag for all nodes that determines whether they can be
> automatically removed or not.
> You should be able to reuse graphs without having to worry about these
> types of automatic removals.

Thanks for thinking so deeply about the details here.  I should be more
clear about the intended design in the spec.  For "very long filter chains"
as you describe where you want to control the dynamic lifetime, the idea is
to make sure to maintain a JavaScript reference to the "head" AudioNode in
the chain.  Thus if there's some JS variable representing the AudioNode and
it's not subject to being garbage collected, then the chain will stay alive.
In other words, the automatic removals only happen if the JS object goes
out of scope (has no more references and is garbage collected).
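A hypothetical sketch of that pattern (buildChannel and the particular nodes chosen are illustrative assumptions; the draft spells the gain factory createGainNode()):

```javascript
// Hypothetical sketch: build a reusable filter chain and return its head.
// As long as the caller keeps a JS reference to the returned head node,
// the whole chain stays alive and is never automatically removed.
function buildChannel(context) {
  var filter = context.createBiquadFilter();
  var gain = context.createGainNode(); // spelled createGainNode() in this draft
  filter.connect(gain);
  gain.connect(context.destination);
  return filter; // the "head" of the chain; keep this referenced
}
```

Holding `var channel = buildChannel(context);` in a long-lived variable keeps the channel around between notes, while one-shot source nodes connected into it can still come and go.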

> An example is when you want to create a synthesizer for Midi with a lot of
> notes playing at the same time, sending data through
> a filter chain (channel). When you use the current specification, the
> channel can suddenly get removed by the system
> when no notes are played at the moment. You only have the choice of
> creating a copy of the complete channel each time a note
> is played (wasteful) or create some kind of dummy, streaming node that
> prevents the chain from getting removed.
> A flag would be better I think. It could default to automatic remove, but
> it is important to be able to prevent this.
> Best regards
> Per Nyblom
Received on Tuesday, 20 March 2012 19:27:42 UTC
