- From: Joseph Berkovitz <joe@noteflight.com>
- Date: Thu, 11 Sep 2014 15:44:58 -0400
- To: Ehsan Akhgari <ehsan@mozilla.com>
- Cc: Chris Wilson <cwilso@google.com>, "public-audio@w3.org" <public-audio@w3.org>
- Message-Id: <60D71F87-EA2A-4FE9-A1C6-4D573C704A32@noteflight.com>
> The assumptions are going to be made one way or another, which is why I'm interested in not opening anything to interpretation. I think we should try to specify the observable behavior as strictly as we can. And yes, admittedly some of that may cost some optimization opportunities. But as I mentioned before, those optimizations are inherently going to entail latency, which would make these nodes not much more useful than ScriptProcessorNodes.

I think that any specified behavior which rules out multiple audio threads in the future is a bad idea. It would put Web Audio at a permanent disadvantage to DAWs, which can parallelize audio processing pipelines on an as-needed basis. Modern DAW applications accept a small amount of latency (typically < 10 ms, with small buffer sizes) in order to allow inter-thread handoffs, because the gains from subgraph parallelization are huge on multi-CPU devices. If the entire graph is forced to be processed sequentially, the lack of parallelization would wind up imposing a much larger degree of latency on many graphs taken as a whole.

This type of latency is far, far smaller than the de facto latency imposed by ScriptProcessorNode, which is largely caused by the need to synchronize with respect to all other main-thread activity. I don't agree that these small latencies make such scripted nodes anywhere near as bad as the old design; they are well within modern audio design tolerances.

In short, let's avoid specifying any behavior that mandates the ordering and/or timing of onaudioprocess invocations across multiple nodes.

. . . . . ...Joe

Joe Berkovitz
President
Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271
www.noteflight.com
"Your music, everywhere"
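A minimal sketch of the graph shape at issue, assuming two independent ScriptProcessorNodes on parallel branches feeding one destination (the node names, the gain/invert processing, and the 256-frame buffer size are illustrative assumptions, not from the message). Because neither onaudioprocess handler shares state with the other, an engine that promises no cross-node callback ordering could render the two branches on separate audio threads:

    const ctx = new AudioContext();
    const source = ctx.createOscillator();

    // Branch A: halve the signal. The handler touches only its own
    // input/output buffers -- no state shared with branch B.
    const nodeA = ctx.createScriptProcessor(256, 1, 1);
    nodeA.onaudioprocess = (e) => {
      const input = e.inputBuffer.getChannelData(0);
      const output = e.outputBuffer.getChannelData(0);
      for (let i = 0; i < input.length; i++) output[i] = input[i] * 0.5;
    };

    // Branch B: invert the signal; equally self-contained.
    const nodeB = ctx.createScriptProcessor(256, 1, 1);
    nodeB.onaudioprocess = (e) => {
      const input = e.inputBuffer.getChannelData(0);
      const output = e.outputBuffer.getChannelData(0);
      for (let i = 0; i < input.length; i++) output[i] = -input[i];
    };

    // Parallel subgraphs: source -> nodeA -> destination and
    //                     source -> nodeB -> destination.
    // With no mandated ordering of the two onaudioprocess callbacks,
    // each branch could run on its own audio thread at the cost of one
    // buffer of handoff latency: 256 frames at 48 kHz is about 5.3 ms,
    // within the "< 10 ms" tolerance cited above.
    source.connect(nodeA);
    nodeA.connect(ctx.destination);
    source.connect(nodeB);
    nodeB.connect(ctx.destination);
    source.start();

Conversely, a handler on nodeB that read a variable written by nodeA's handler in the same render quantum would be depending on exactly the cross-node ordering the message argues the spec should leave unspecified.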
Received on Thursday, 11 September 2014 19:45:37 UTC