Re: Sync of scriptProcessorNode and native Node

Hmm, interesting.  Any Windows/Linux sound API consumers want to contribute
here?  (I couldn't find a comparable property in Windows APIs with a quick
look.)


On Wed, May 7, 2014 at 9:58 PM, Srikumar K. S. <srikumarks@gmail.com> wrote:

>  (e.g., your Bluetooth example - I'm not sure there's a way to detect that
> latency!)
>
>
>  There is ... at least on iOS and Mac OS X. I use it in my iOS app. When the
> audio route changes, I just ask for the "CurrentHardwareOutputLatency"
> property
> of the AudioSession.
>
> Even if the internal buffering is the only latency that the system can
> access, it would still be better to expose it explicitly via the API than
> not to have it at all. This would permit API implementations to account
> for such latency information where and when it is available.
>
> -Kumar
>
> On 7 May, 2014, at 11:43 pm, Chris Wilson <cwilso@google.com> wrote:
>
> Although this is definitely still an issue (Issue #12, as a matter of
> fact!  https://github.com/WebAudio/web-audio-api/issues/12), I would like
> to caution that we cannot necessarily fix this entirely.  IIRC, in a number
> of cases, we simply do not know what latency is caused by the hardware
> device itself; I think we can only account for the buffering latency in our
> own systems.  (e.g., your Bluetooth example - I'm not sure there's a way to
> detect that latency!)
>
>
> On Tue, May 6, 2014 at 9:54 PM, Srikumar K. S. <srikumarks@gmail.com> wrote:
>
>> There is also a different "sync" issue that is yet to be addressed.
>> Currently, we do not have a way to translate a time expressed in
>> AudioContext.currentTime coordinates into DOMHighResTimeStamp
>> coordinates. Times in requestAnimationFrame are DOMHighResTimeStamp times
>> (IIRC), and synchronizing visuals with computed audio is near impossible
>> without a straightforward way to translate between them. This gets worse on
>> mobile devices, where a Bluetooth speaker can get connected while an audio
>> context is running and add 300 ms of latency on the fly.
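>>
>> For illustration, a minimal sketch of the kind of translation being asked
>> for, assuming only that both clocks can be sampled close together on the
>> main thread (the result is accurate to within roughly one main-thread
>> turn, and it ignores output latency entirely, which is exactly the
>> information such a property would supply):
>>
>> // Hypothetical helper, not part of any API: map a time in
>> // AudioContext.currentTime coordinates (seconds) to an approximate
>> // DOMHighResTimeStamp (milliseconds), e.g. for requestAnimationFrame.
>> function audioTimeToDOMTime(audioContext, audioTime) {
>>     var domNow = window.performance.now();   // ms since navigationStart
>>     var audioNow = audioContext.currentTime; // s since context creation
>>     return domNow + (audioTime - audioNow) * 1000;
>> }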
>>
>> I don't think I've missed any development towards this, but if I have, I
>> apologize for raising this again and am all ears to hear the solution.
>>
>> -Kumar
>> sriku.org
>>
>>
>> On 7 May, 2014, at 12:36 am, Joseph Berkovitz <joe@noteflight.com> wrote:
>>
>> To echo Chris W, it is *possible* to sync by paying attention to the
>> playbackTime of an audio processing event, and by scheduling parameter
>> changes and actions on native nodes in relation to that time (which is
>> expressed in the AudioContext timebase).
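>>
>> A minimal sketch of that approach, assuming a browser that implements
>> playbackTime (gainNode, pendingRamp and rampDuration are placeholders):
>>
>> scriptNode.onaudioprocess = function (e) {
>>     // e.playbackTime is in the AudioContext timebase, so automation on
>>     // native nodes can be scheduled against it directly.
>>     if (pendingRamp) {
>>         gainNode.gain.setValueAtTime(0, e.playbackTime);
>>         gainNode.gain.linearRampToValueAtTime(1,
>>                 e.playbackTime + rampDuration);
>>         pendingRamp = false;
>>     }
>>     // ... fill e.outputBuffer from e.inputBuffer as usual ...
>> };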
>>
>> However, because the code in a ScriptProcessorNode runs on the main JS
>> thread, it is difficult in practice to do such syncing reliably and
>> robustly without glitching. There are also some browser portability
>> issues, as Chris mentioned.
>>
>> Hence the urgency to find a better solution.
>>
>>    .            .       .    .  . ...Joe
>>
>> *Joe Berkovitz*
>> President
>>
>> *Noteflight LLC*
>> Boston, Mass. phone: +1 978 314 6271
>>   www.noteflight.com
>> "Your music, everywhere"
>>
>> On May 6, 2014, at 4:13 AM, Arnau Julia <Arnau.Julia@ircam.fr> wrote:
>>
>> Hello,
>>
>> I'm aware of the public-audio list conversations about the use of workers
>> for the scriptProcessorNode, and I'm very excited about the possibilities
>> of this solution, but I supposed that it was possible to sync a
>> scriptProcessorNode and a native node with the current implementation. Am I
>> wrong? And if not, how is it possible to achieve this?
>>
>> Thank you,
>>
>> Arnau
>>
>> On 5 May 2014, at 18:42, Chris Wilson <cwilso@google.com> wrote:
>>
>> Lonce,
>>
>> this is one of the biggest and most important issues on my Web Audio
>> plate right now.  I'm working on figuring out how to bring implementers
>> together over the summer to come up with a workable solution.
>>
>>
>> On Fri, May 2, 2014 at 9:38 PM, lonce <lonce.audio@sonic.zwhome.org> wrote:
>>
>>>
>>> Hi -
>>>
>>>     I think the real question is not how to hack this, but the status of
>>> progress on a fundamental solution to this Achilles' heel of the current
>>> system. From what I gather, the solution will probably be in the form of
>>> web workers (?), but I don't know how much attention this is getting now.
>>>     Once this is solved, the system becomes truly extensible and I am
>>> sure it will open up an explosive era of community development just waiting
>>> to happen!
>>>
>>> Best,
>>>              - lonce
>>>
>>>
>>> On 5/2/2014 4:39 PM, Arnau Julia wrote:
>>>
>>>> Hello,
>>>>
>>>> First of all, thanks for all your answers.
>>>>
>>>>> The first thing to note is that all script processor node processing
>>>>> happens on the main javascript thread.  This means if you change a
>>>>> global variable in another part of your javascript program, it will
>>>>> definitely show up on the next AudioProcessingEvent.  So, that
>>>>> answers your first problem - once you set the variable in your
>>>>> javascript, on the next buffer the change will be there.  There's no
>>>>> parallelism at all in the javascript - there's only one thing
>>>>> happening at once.
>>>>>
>>>> I would like to understand how this works. The difference I found
>>>> between the scriptProcessorNode and the 'native' AudioNode interface is
>>>> that the first uses an event handler while the AudioNodes are
>>>> EventTargets. Is that the reason why the global variables are updated
>>>> only once per buffer? Does someone have more documentation for
>>>> understanding this more deeply?
>>>>
>>>>> For your second question, you need some sort of timestamp on the
>>>>> buffer.  The web audio api provides this as the playbackTime field on
>>>>> the AudioProcessingEvent.  Of course, you only have access to the
>>>>> playback time of the buffer you are currently processing, but you can
>>>>> guess when the next playbackTime will be by setting the last processed
>>>>> time as a global variable, and then adding one buffer's worth of time
>>>>> to that to get the next playbackTime.  This will be fine unless you
>>>>> drop buffers, in which case you're probably not worried about a smooth
>>>>> ramp :-).  So, one easy solution to your second problem is to always
>>>>> store the last playback time that each of your script nodes processed,
>>>>> and then start the ramp on the *next* buffer.  The spec guarantees
>>>>> that the playbackTime and ramping is sample accurate, so no worries
>>>>> there.  In practice, the last time I checked, which was over a year
>>>>> ago, firefox had serious problems with the playbackTime field (I don't
>>>>> remember if it was just absent or if it had some other problem that
>>>>> made it unusable.)
>>>>>
>>>> It seems a good solution! I didn't find playbackTime in the latest
>>>> stable version of Chrome, but I found it in Firefox. Is there any
>>>> alternative for Chrome?
>>>>
>>>> I have done some basic experiments with playbackTime in Firefox, and it
>>>> seems that it is not totally in sync, or maybe I don't understand how to
>>>> use it. I uploaded the experiment to jsfiddle (Firefox only!):
>>>> http://jsfiddle.net/PgeLv/11/
>>>> The experiment structure is:
>>>> oscillatorNode (source) ----> scriptProcessorNode -----> GainNode
>>>> -------> Destination
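>>>>
>>>> In code, that graph is roughly the following (a sketch; the buffer size
>>>> is chosen arbitrarily, and Chrome still needs the webkit prefix):
>>>>
>>>> var context = new AudioContext(); // webkitAudioContext in Chrome
>>>> var osc = context.createOscillator();
>>>> var script = context.createScriptProcessor(1024, 1, 1);
>>>> var gain = context.createGain();
>>>> osc.connect(script);
>>>> script.connect(gain);
>>>> gain.connect(context.destination);
>>>> osc.start(0);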
>>>>
>>>> On the other hand, I would like to understand what exactly the
>>>> playbackTime is. I guess it could be something like this:
>>>>
>>>> playbackTime = bufferSize/sampleRate + 'processing time' + 'wait interval
>>>> until the event returns the data to the audio thread'
>>>>
>>>> If this hypothesis is true, it means that the playbackTime is different
>>>> for each event, because it depends on the activity of the main thread.
>>>>
>>>> Thanks,
>>>>
>>>> Arnau
>>>>
>>>> On 22 Apr 2014, at 01:51, Russell McClellan <russell.mcclellan@gmail.com> wrote:
>>>>
>>>>> Hey Arnau -
>>>>>
>>>>> Yes, this is probably underdocumented.  The good news is, the
>>>>> designers of the web audio api do actually have an answer for linking
>>>>> native nodes and script processor nodes.
>>>>>
>>>>> The first thing to note is that all script processor node processing
>>>>> happens on the main javascript thread.  This means if you change a
>>>>> global variable in another part of your javascript program, it will
>>>>> definitely show up on the next AudioProcessingEvent.  So, that
>>>>> answers your first problem - once you set the variable in your
>>>>> javascript, on the next buffer the change will be there.  There's no
>>>>> parallelism at all in the javascript - there's only one thing
>>>>> happening at once.
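>>>>>
>>>>> A minimal sketch of that, with placeholder names:
>>>>>
>>>>> var coefficients = initialCoefficients; // shared main-thread state
>>>>> scriptNode.onaudioprocess = function (e) {
>>>>>     // This callback runs on the main thread too, so it always sees
>>>>>     // the latest value of `coefficients`, once per buffer.
>>>>>     var c = coefficients;
>>>>>     // ... filter e.inputBuffer into e.outputBuffer using c ...
>>>>> };
>>>>> // Later, anywhere else in the program:
>>>>> coefficients = newCoefficients; // visible on the next event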
>>>>>
>>>>> For your second question, you need some sort of timestamp on the
>>>>> buffer.  The web audio api provides this as the playbackTime field on
>>>>> the AudioProcessingEvent.  Of course, you only have access to the
>>>>> playback time of the buffer you are currently processing, but you can
>>>>> guess when the next playbackTime will be by setting the last processed
>>>>> time as a global variable, and then adding one buffer's worth of time
>>>>> to that to get the next playbackTime.  This will be fine unless you
>>>>> drop buffers, in which case you're probably not worried about a smooth
>>>>> ramp :-).  So, one easy solution to your second problem is to always
>>>>> store the last playback time that each of your script nodes processed,
>>>>> and then start the ramp on the *next* buffer.  The spec guarantees
>>>>> that the playbackTime and ramping is sample accurate, so no worries
>>>>> there.  In practice, the last time I checked, which was over a year
>>>>> ago, firefox had serious problems with the playbackTime field (I don't
>>>>> remember if it was just absent or if it had some other problem that
>>>>> made it unusable.)
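>>>>>
>>>>> Concretely, something like this (a sketch; bufferSize is whatever you
>>>>> passed when creating the script node):
>>>>>
>>>>> var lastPlaybackTime = 0;
>>>>> scriptNode.onaudioprocess = function (e) {
>>>>>     lastPlaybackTime = e.playbackTime;
>>>>>     // The next buffer should begin one buffer's duration later:
>>>>>     var nextPlaybackTime =
>>>>>         e.playbackTime + bufferSize / context.sampleRate;
>>>>>     // Schedule ramps on native nodes at nextPlaybackTime so they line
>>>>>     // up with the first sample of the next processed buffer.
>>>>> };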
>>>>>
>>>>> Thanks,
>>>>> -Russell
>>>>>
>>>>> On Fri, Apr 18, 2014 at 10:50 AM, Casper Schipper
>>>>> <casper.schipper@monotonestudio.nl> wrote:
>>>>>
>>>>>> Dear Arnau,
>>>>>>
>>>>>> this is indeed a frustrating (but probably necessary, performance-wise)
>>>>>> limitation of the normal web audio nodes:
>>>>>> parameters in a scriptProcessorNode can only be updated once every
>>>>>> vector, which is a minimum of 256 samples.
>>>>>>
>>>>>> Maybe you could solve your problem by using one of the javascript
>>>>>> libraries that bypass most of the web audio API and do everything in
>>>>>> JS itself. What comes to mind first is the Gibberish.js library by
>>>>>> Charlie Roberts, which gives you the ability to control parameters
>>>>>> per sample and to easily schedule synchronized parameter changes,
>>>>>> also with sample accuracy:
>>>>>> http://www.charlie-roberts.com/gibberish/docs.html
>>>>>> It should be quite easy to extend it with your own nodes.
>>>>>> There are other libraries as well, like Flocking.js and Timbre.js.
>>>>>>
>>>>>> Of course it comes with some performance penalties, but Gibberish at
>>>>>> least tries to generate javascript code that is as efficient as
>>>>>> possible for its JIT compilation style, as far as its own nodes are
>>>>>> concerned.
>>>>>>
>>>>>> Hope it helps,
>>>>>> Casper
>>>>>>
>>>>>> casper.schipper@monotonestudio.nl
>>>>>> Mauritskade 55C (the thinking hut)
>>>>>> 1092 AD  Amsterdam
>>>>>> +316 52 322 590
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 18 Apr 2014, at 10:55, Arnau Julia <Arnau.Julia@ircam.fr> wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I'm trying to synchronize the buffer in a scriptProcessorNode with
>>>>>> native/regular web audio nodes and I'm having some problems.
>>>>>> Specifically, I want to synchronize the scriptProcessorNode with a
>>>>>> ramp of a GainNode.
>>>>>>
>>>>>> My program looks like the attached diagram. Each scriptProcessorNode
>>>>>> is a
>>>>>> filter with n coefficients and these coefficients are in a global
>>>>>> variable.
>>>>>> My problem comes when I try to update these coefficients and do a
>>>>>> ramp in
>>>>>> the gain through an audioParam at the "same time".
>>>>>>
>>>>>> The start scenario is (in pseudo-code):
>>>>>>
>>>>>> audioBufferSourceNode.connect(scriptProcessorNode0);
>>>>>> audioBufferSourceNode.connect(scriptProcessorNode1);
>>>>>>
>>>>>> scriptProcessorNode0.connect(gainNode0);
>>>>>> scriptProcessorNode1.connect(gainNode1);
>>>>>>
>>>>>> gainNode0.connect(audioContext.destination);
>>>>>> gainNode1.connect(audioContext.destination);
>>>>>>
>>>>>> gainNode1.gain.value = 0;
>>>>>> globalVariableOfCoefficients0 = coefficients0;
>>>>>> globalVariableOfCoefficients1 = null;
>>>>>>
>>>>>> audioBufferSourceNode.start(0);
>>>>>>
>>>>>> The reason for having two scriptProcessorNodes is that I want to do
>>>>>> a smooth transition of the coefficients, so I do a crossfade between
>>>>>> the 'old' coefficients (scriptProcessorNode0) and the 'new'
>>>>>> coefficients (scriptProcessorNode1) with the ramps of gainNode0 and
>>>>>> gainNode1. So when I receive the notification to update the
>>>>>> coefficients, the global variable is updated and the ramps are
>>>>>> started.
>>>>>> The first problem is that when I change
>>>>>> globalVariableOfCoefficients1, I don't know if the value of the
>>>>>> variable is really updated in the scriptProcessorNode. It seems that
>>>>>> the scriptProcessorNode has to wait until it gets a new buffer before
>>>>>> the values of its global variables are updated. On the other hand,
>>>>>> there is a second problem: if I change the value of
>>>>>> globalVariableOfCoefficients1 and wait for a new buffer so that the
>>>>>> global variables are updated, how can I know when the first sample of
>>>>>> this new buffer "is" really in the gainNode?
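>>>>>>
>>>>>> For reference, the update step looks roughly like this (a sketch;
>>>>>> `now` stands for audioContext.currentTime and rampTime for the
>>>>>> crossfade duration):
>>>>>>
>>>>>> globalVariableOfCoefficients1 = newCoefficients;
>>>>>> gainNode0.gain.setValueAtTime(1, now);
>>>>>> gainNode0.gain.linearRampToValueAtTime(0, now + rampTime);
>>>>>> gainNode1.gain.setValueAtTime(0, now);
>>>>>> gainNode1.gain.linearRampToValueAtTime(1, now + rampTime);
>>>>>> // The race: nothing guarantees the ramps begin on the same buffer in
>>>>>> // which scriptProcessorNode1 first sees the new coefficients.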
>>>>>>
>>>>>> On the other hand, I would like to find some documentation where the
>>>>>> relation between the scriptProcessorNode and the audio thread is
>>>>>> explained, in order to clearly understand the problem.
>>>>>>
>>>>>> Thank you very much in advance,
>>>>>>
>>>>>> Arnau Julià
>>>>>>
>>>>>>
>>>>>> <diagram_webAudio.png>
>>>
>>> --
>>> Lonce Wyse
>>> Dept. of Communications and New Media
>>> National University of Singapore
>>>

Received on Thursday, 8 May 2014 20:09:40 UTC