Other comments on Web Audio

We've had some discussions about sample rates before. Is there a good
reason to expose sampleRate and specify that "sample-rate converters ...
are not supported in real-time processing"? Apart from JS processing, it
seems to me that the API can and should be independent of what sample rates
are used internally, and we should allow for implementations that want to
maintain different sample rates in a single graph.
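
To illustrate the coupling (a sketch, assuming the AudioContext
constructor as specified in the draft):

  var ctx = new AudioContext();
  // The context's internal rate leaks out as an implementation detail:
  var framesPerSecond = ctx.sampleRate;   // e.g. 44100
  // Yet apart from a JavaScriptAudioNode's onaudioprocess callback,
  // script shouldn't need to know this number at all.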


> AudioBuffer createBuffer(in ArrayBuffer buffer, in boolean mixToMono)

Boolean parameters are generally a bad idea. E.g. reading
"createBuffer(buffer, false)", the casual reader will have no idea what
the boolean means. If it's not too late to change this, please change it.
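
For concreteness, here's the call site as specified, next to a
hypothetical named-option alternative (the options-object form is not in
the draft; it's just an illustration):

  var buf1 = context.createBuffer(arrayBuffer, false);              // false... what?
  var buf2 = context.createBuffer(arrayBuffer, {mixToMono: false}); // self-describing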

Why isn't numberOfChannels on AudioNode?

What use is numberOfOutputs since you don't actually provide any kind of
access to the outputs?

Is there a use for numberOfInputs other than just looping to disconnect
them all? If not, why not just add a disconnectAll method?
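
Something like the following sketch; note that disconnect() in the
current IDL takes an *output* index, so the expressible loop runs over
outputs, and there is no input-side analogue at all:

  // Hypothetical helper, not in the draft:
  function disconnectAll(node) {
    for (var i = 0; i < node.numberOfOutputs; i++)
      node.disconnect(i);
  }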

Why allow only one audio listener, on the context? Wouldn't it be more
flexible, and simpler, to have an audio listener per AudioPannerNode?
In fact, 4.14.2 lists 'listener' as an attribute, but it's not in the IDL.
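
Compare the two shapes (the first is as specified, using AudioListener's
setPosition; the second is the per-panner alternative I'm suggesting,
which is hypothetical):

  context.listener.setPosition(0, 0, 0);   // one listener for the whole graph
  panner.listener.setPosition(0, 0, 0);    // hypothetical: one per panner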

I still think a.connect(b) obscures which of 'a' and 'b' is the
destination, but I suppose we're stuck with it.
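
The ambiguity in miniature:

  source.connect(filter);   // source feeds filter, but nothing at the call
                            // site rules out the opposite reading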

In AudioParam, what are 'minValue' and 'maxValue' useful for? I'm unsure
about 'name' and 'units' too.

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]
