Great point!
For the realtime case, I believe passing messages back from an audio worker node to the main thread would be enough. For an offline audio context it won’t suffice, and we will need a sync mechanism. In fact, I just realized that even to access “currentTime” we’d need access to the audio context from within a worker node, which the current proposal doesn’t provide, and therefore playbackTime is not redundant as the proposal stands.
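
To make that concrete, here is a rough sketch of the kind of message passing I have in mind. I’m assuming an AudioWorker-style node roughly along the lines of the proposal; the createAudioWorker factory, the shape of the audioprocess event, and the postMessage plumbing are my guesses for illustration, not the proposal’s settled API:

    // --- worker script (worker-processor.js), assumed AudioWorker-style API ---
    // Assumed: the worker global gets an audioprocess event carrying
    // playbackTime, inputs and outputs, and can postMessage back to the node.
    onaudioprocess = function (e) {
        var input = e.inputs[0], output = e.outputs[0];
        var channels = Math.min(input.length, output.length);
        for (var ch = 0; ch < channels; ch++) {
            output[ch].set(input[ch]); // simple pass-through
        }
        // No AudioContext (and hence no currentTime) is visible in here,
        // so playbackTime is the only clock we have for this block.
        postMessage({ playbackTime: e.playbackTime });
    };

    // --- main thread ---
    var ctx = new AudioContext();
    // createAudioWorker(url) is the assumed factory name, for illustration only.
    var workerNode = ctx.createAudioWorker('worker-processor.js');
    workerNode.onmessage = function (e) {
        // For the realtime case this is enough: as messages arrive, the main
        // thread can see where the worker is relative to ctx.currentTime.
        console.log('block at', e.data.playbackTime, 'vs', ctx.currentTime);
    };

For an OfflineAudioContext the rendering runs as fast as possible rather than in step with the main thread, so messages like the above arrive too late to steer the graph; that is why I think we’d need an explicit sync mechanism there.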
-Kumar
> On 13 Aug 2014, at 7:03 am, Alan deLespinasse <adelespinasse@gmail.com> wrote:
>
> What about synchronous graph manipulation? Is there a consensus on that, and will that be part of the new spec?
>
> That is, the ability to modify the node graph (add/remove nodes or connections, modify parameters) from the script node callback. Or maybe from another timed callback that doesn't produce or consume audio samples (presumably also in the worker thread).
>
> As mentioned in yet another bug: https://github.com/WebAudio/web-audio-api/issues/69#issuecomment-24244290
>
>
> On Tue, Aug 12, 2014 at 8:56 PM, Srikumar K. S. <srikumarks@gmail.com> wrote:
>> To me, it would even be acceptable to support *only* 128 since I’m happy to build in any
>> other buffering I’d need. The buffer length argument of main thread script node was, I believe,
>> introduced to give some control over potential UI/layout related stuttering by increasing latency.
>> This would no longer be necessary with worker nodes since they’ll be running in the same
>> thread.
>
> Also, if worker nodes are running synchronously in the audio thread, the playbackTime field
> in the event can perhaps be removed. It seems to me that playback time would be the same as
> currentTime or currentTime + bufferLength/samplingRate in all cases?
>