Re: [minutes] Audio WG teleconference, 2014-09-18

On Fri, Sep 19, 2014 at 12:21 PM, Joseph Berkovitz <joe@noteflight.com>
wrote:

> Actually, not true - in this case, you can easily still expose
> non-AudioParam attributes and methods on the AudioWorkerNode; you just then
> need to hook them up (via getters and setters on the attribute) to a
> postMessage implementation.  Yes, postMessage is in the path; that doesn't
> mean the user has to see it as 'postMessage({"type":"sawtooth"});'.
>
> Thanks — I see that one would have to have some locally cached value to
> deliver a synchronous result, but no big deal.
>

On which side?  There is no such thing as a synchronous call across the
thread boundary (there isn't today, either) - but on the main-thread side
your getter and setter would keep the value, as they would on the worker
side, so it would act as a synchronous property in each thread.

> Also I have a sense that thinking about AudioWorkers through an offline
>> lens may also shed light on how we can best tighten up the spec on what a
>> scripted node can expect re timing and sequencing of onaudioprocess
>> callbacks, both online and offline. That tightening-up is something you’ve
>> said you want to do and I share your sense that it’s important. Perhaps
>> that’s yet a third TBD for the AudioWorker spec change? What do you think?
>>
> Well, now I'm on the fence, because I think it's sane to write a section
> that illustrates to users how the audio processing is constructed, but if
> that were normative, it would likely remove your freedom to come up with
> some way to parallelize.
>
> Hmmm. On this point, I wonder if we’re shooting for the same goal.
>
> I think we should say enough that people understand what is guaranteed in
> terms of timing/sequencing of audio callbacks, and know what the browser is
> attempting to optimize for overall (e.g. the trade-off between preventing
> underruns due to end-to-end graph processing time and minimizing latency).
> But it sounds as though you want a more exact description of how WebAudio
> works today, which could be much more specific than that.
>

Not really.  Obviously, I think we should be more exact in, say, how a
DynamicsCompressorNode works.  Your goal of making guarantees about the
timing and sequencing of audio callbacks, and about what the browser is
trying to optimize for, is actually going to be far more limiting than
that, though.  This is why I was saying auto-parallelizing is not something
I think we should do: it takes that one tradeoff - latency vs. CPU
overrun - and turns it into a complex relationship between CPU, number of
threads, thread-communication cost, and latency.  The decision to move to
multiple cores would knowingly jack up latency (even if it's consistent
latency for the whole graph) in order to optimize for lower CPU usage in
the (single) audio thread.
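
Rough numbers, just to illustrate: a 128-frame block at 44.1kHz gives the
audio thread about a 2.9ms budget per block.  Single-threaded, the rule is
simple - total graph processing time per block must stay under 2.9ms or you
underrun.  Pipeline the graph across two cores and each stage gets its own
2.9ms budget, but every pipeline stage adds a full block (~2.9ms) of
latency plus the cross-thread handoff cost; you've bought CPU headroom by
knowingly increasing latency.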


> To the extent that such a description bakes in a serialized approach to
> audio processing (or even rules out flexibility in the order of serial
> processing), I think that would be a bad outcome, and I don’t yet see how
> the extra specificity helps anyone, since exact synchronization between the
> interior impls of scripted nodes is forbidden. As long as we are clearly
> stating the UA’s performance goals and visible guarantees, is that not
> enough?
>

You must be thinking I'm suggesting more than I am.  I don't think the
order of processing can be observed (though I think it would be useful to
non-normatively describe how connections work), and I don't think we can
guarantee timing of any sort - the vagaries of underlying hardware and APIs
make that challenging.  I'd like to say we can describe the tradeoff
between latency and underruns, but I don't think we can in any normative
way if you want to keep the door open for automatic parallelism.

Received on Friday, 19 September 2014 21:27:23 UTC