Re: [minutes] Audio WG teleconference, 2014-09-18

> 
> Actually, not true - in this case, you can easily still expose non-AudioParam attributes and methods on the AudioWorkerNode; you just then need to hook them up (via getters and setters on the attribute) to a postMessage implementation.  Yes, postMessage is in the path; that doesn't mean the user has to see it as postMessage( {"type":"sawtooth"} );

Thanks. I see that one would have to keep some locally cached value around to deliver a synchronous result, but that's no big deal.
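For concreteness, here is roughly the facade I have in mind. This is only a sketch, assuming an AudioWorkerNode-like object with a postMessage() method as in the proposal; the "type" attribute and the message shape are illustrations of mine, not anything from a spec:

    // A minimal sketch of hiding postMessage behind an ordinary attribute.
    // "workerNode" is assumed to be an AudioWorkerNode-like object with a
    // postMessage() method; "type" and the message shape are illustrative.
    function addTypeAttribute(workerNode) {
      var cachedType = "sawtooth"; // local cache makes the getter synchronous
      Object.defineProperty(workerNode, "type", {
        get: function () {
          return cachedType;                       // synchronous read
        },
        set: function (value) {
          cachedType = value;                      // update the cache first
          workerNode.postMessage({ type: value }); // then notify the worker
        }
      });
    }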

>  
> Personally I am on the fence about this. But I suspect that API developers may not realize that to implement the rather fundamental idea of “method on a scripted node delivering some result” they'll have to implement a little postMessage-based RPC of their own.
> 
> I think most developers won't need to understand this; they'll be consumers of it.  Most library implementers will understand it, though. 

That makes sense to me. Perhaps the spec could explain this goal (and illustrate the expected way to implement attributes and methods) somewhere, for clarity's sake; something like the sketch below, say.
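Here is the sort of thing I mean: a sketch of the "little postMessage-based RPC" a library might build so that a method can deliver a result. Again, the { id, method, args } message shape and the names are assumptions of mine, not anything from the proposal:

    // A minimal sketch of a postMessage-based RPC: a method call posts a
    // tagged request and resolves a Promise when the worker replies.
    // Assumes workerNode exposes postMessage()/onmessage; the message
    // shape ({ id, method, args }) is illustrative only.
    function makeRpcCaller(workerNode) {
      var nextId = 0;
      var pending = {};                  // request id -> resolve function
      workerNode.onmessage = function (event) {
        var reply = event.data;
        var resolve = pending[reply.id];
        if (resolve) {
          delete pending[reply.id];
          resolve(reply.result);         // complete the matching call
        }
      };
      return function (method, args) {
        return new Promise(function (resolve) {
          var id = nextId++;
          pending[id] = resolve;
          workerNode.postMessage({ id: id, method: method, args: args });
        });
      };
    }

    // Usage: a "method delivering some result" becomes an async call.
    // var call = makeRpcCaller(node);
    // call("getLatestAnalysis", []).then(function (result) { /* ... */ });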

>  
> Also I have a sense that thinking about AudioWorkers through an offline lens may also shed light on how we can best tighten up the spec on what a scripted node can expect re timing and sequencing of onaudioprocess callbacks, both online and offline. That tightening-up is something you’ve said you want to do and I share your sense that it’s important. Perhaps that’s yet a third TBD for the AudioWorker spec change? What do you think?
> 
> Well, now I'm on the fence, because I think it's sane to write a section that illustrates to users how the audio processing is constructed, but if that were normative, it would likely remove your freedom to come up with some way to parallelize. 

Hmmm. On this point, I wonder if we’re shooting for the same goal.

I think we should say enough that people understand what is guaranteed in terms of the timing and sequencing of audio callbacks, and know what the browser is attempting to optimize overall (e.g. the trade-off between preventing underruns due to end-to-end graph processing time and minimizing latency). But it sounds as though you want a more exact description of how Web Audio works today, which could be much more specific than that.

To the extent that such a description bakes in a serialized approach to audio processing (or even rules out flexibility in the order of serial processing), I think that would be a bad outcome, and I don't yet see how the extra specificity helps anyone, since exact synchronization between the internal implementations of scripted nodes is forbidden anyway. As long as we clearly state the UA's performance goals and visible guarantees, is that not enough?

Perhaps you can take a crack at writing something up and I can too, and we can try to meld our drafts into something that works for both of us.

.            .       .    .  . ...Joe

Joe Berkovitz
President

Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271
www.noteflight.com
"Your music, everywhere"
