Re: Precise control over transient effects in immediate sounds

Hi Karl,

I think this is one symptom of a larger issue that has been discussed (but not resolved) in the past: it is not possible today to discover how much de facto latency is present in the audio engine. If it were, the xxxAtTime() calls would be completely effective, because you would know what “minimal latency” meant in absolute terms.
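
To make this concrete: if the de facto latency were discoverable, an envelope could be scheduled at absolute times the audio thread can actually honor. A rough sketch follows; the outputLatency argument is hypothetical (nothing in the current API exposes such a value, which is exactly the problem), and the envelope parameters are illustrative only.

```javascript
// Hypothetical sketch: schedule an ADSR envelope on an AudioParam,
// offsetting every absolute time by a known engine latency so that the
// attack does not begin in the past. `outputLatency` is NOT part of
// the current API; it stands in for the undiscoverable de facto lag.
function scheduleAdsr(param, now, outputLatency, opts) {
  const { attack, decay, sustain, release, duration, peak } = opts;
  const t0 = now + outputLatency; // earliest time the engine can honor
  param.setValueAtTime(0, t0);
  param.linearRampToValueAtTime(peak, t0 + attack);
  param.linearRampToValueAtTime(sustain, t0 + attack + decay);
  param.setValueAtTime(sustain, t0 + duration);
  param.linearRampToValueAtTime(0, t0 + duration + release);
  return t0 + duration + release; // absolute end time of the envelope
}
```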

See this issue for more detail:
https://github.com/WebAudio/web-audio-api/issues/12

I am glad to see this coming to the fore because I think it is a serious problem with the API.

.            .       .    .  . ...Joe

Joe Berkovitz
President

Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271
www.noteflight.com
"Your music, everywhere"


On Nov 25, 2013, at 2:13 AM, Karl Tomlinson <karlt+public-audio@karlt.net> wrote:

> [1] attempts to implement an ADSR curve for a simple DTMF dialer.
> It tries to play the sound with minimal latency, but loses control
> over the envelope and duration of the sound because the sound
> cannot start precisely at currentTime.
> 
> The setTargetAtTime() description says it is "useful for
> implementing the "decay" and "release" portions of an ADSR
> envelope", but it is only useful if precise times of phase
> transitions are known.  These are known when the sound is started
> in the future, but not if it is started with minimal latency.
> 
> The only way I can see to handle this with the current API might
> be to use setValueCurveAtTime() because that has a duration
> parameter, but behaviour when startTime is in the past is not
> defined AFAICS and it would seem unfortunate to need to specify
> per-sample values on the envelope, even during linear or sustained
> portions.
> 
> Is there a reason why the API provides control at absolute times
> but not at relative times?  At one stage AudioParam times at least
> were spec'd as relative times [2] but that was considered an error
> [3].
> 
> Do we need to add linearRampToValueAtInterval() and
> setTargetAtInterval()?
> 
> Is it better to try to schedule with a consistent lag than to
> start the sounds as soon as possible?  If so, I don't know how to
> pick this lag.  The client doesn't know how much lag the
> implementation might have between the js and audio threads, and
> neither the implementation nor the client know how much longer the
> script will take to run before the next stable state.
> 
> [1] https://github.com/mozilla-b2g/gaia/blob/add12c96c3e9281baa7f483f7ba751ce63df5749/apps/communications/dialer/js/tone_player.js#L38
> [2] http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0149.html
> [3] https://github.com/WebAudio/web-audio-api/issues/158
> 
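
FWIW, the setValueCurveAtTime() workaround Karl describes would look roughly like the sketch below: the whole envelope has to be baked into a Float32Array, per-sample even through the linear and sustained portions, which illustrates his objection. The sample rate and envelope parameters here are illustrative only, and (as he notes) behaviour when startTime is already in the past remains undefined.

```javascript
// Rough sketch of the setValueCurveAtTime() workaround: bake the whole
// ADSR shape into a Float32Array, including the linear and sustained
// portions. All parameters are illustrative.
function buildAdsrCurve(sampleRate, attack, decay, sustain, hold, release) {
  const total = attack + decay + hold + release;
  const n = Math.max(2, Math.round(total * sampleRate));
  const curve = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = (i / (n - 1)) * total;
    if (t < attack) {
      curve[i] = t / attack;                                 // 0 -> 1
    } else if (t < attack + decay) {
      curve[i] = 1 - ((1 - sustain) * (t - attack)) / decay; // 1 -> sustain
    } else if (t < attack + decay + hold) {
      curve[i] = sustain;                                    // plateau
    } else {
      curve[i] = sustain * (1 - (t - attack - decay - hold) / release); // -> 0
    }
  }
  return curve;
}
// Usage (in a real context): compute total as above, then
//   gain.gain.setValueCurveAtTime(curve, ctx.currentTime, total);
```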

Received on Monday, 25 November 2013 14:56:41 UTC