Re: Aiding early implementations of the web audio API

Hi Chris,

On 2012-05-22, at 5:00 PM, Chris Rogers wrote:
> That said, I think you'll find that as you continue to go deeper into synthesis and audio processing, you won't be able to avoid the need for new processing units that don't ship with the Web Audio API. For example, if you wanted to create a realistic-sounding model of an analog synthesizer, you'll need band-limited oscillators along the lines of:
> 
> http://www-ccrma.stanford.edu/%7Estilti/papers/blit.pdf
> 
> Maybe you're not aware that we already have high-quality band-limited oscillators implemented in WebKit right now.

That's great news! I saw nothing in the spec to indicate this, nor any reference to the algorithms WebKit uses, so I'd assumed they weren't present. Did I miss something in the spec?

If not, can you elaborate on the algorithm you plan to specify for other browsers to implement?
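For context, here's the sort of thing an author has to resort to in script today: a naive additive sawtooth, band-limited by construction because it sums harmonics only up to Nyquist. This is just a sketch against the createJavaScriptNode() interface in the current draft (I'm assuming a webkitAudioContext and that zero input channels are permitted; some implementations may want at least one), and it's nothing like the BLIT technique in the paper above -- but it shows why a cheap native implementation matters:

    // A band-limited sawtooth built by summing sine harmonics up to
    // Nyquist inside a JavaScriptAudioNode. The per-sample inner loop
    // over harmonics makes this far too slow for polyphony -- which is
    // the point: a native oscillator can do this cheaply.
    var context = new webkitAudioContext();
    var node = context.createJavaScriptNode(1024, 0, 1); // 0 in, 1 out
    var phase = 0;
    var frequency = 220;

    node.onaudioprocess = function (event) {
        var output = event.outputBuffer.getChannelData(0);
        var sampleRate = context.sampleRate;
        var maxHarmonic = Math.floor((sampleRate / 2) / frequency);

        for (var i = 0; i < output.length; i++) {
            var sample = 0;
            // Sawtooth Fourier series: harmonics 1..N at 1/n amplitude.
            for (var n = 1; n <= maxHarmonic; n++) {
                sample += Math.sin(2 * Math.PI * n * phase) / n;
            }
            output[i] = sample * (2 / Math.PI); // normalize to roughly [-1, 1]
            phase += frequency / sampleRate;
            if (phase >= 1) { phase -= 1; }
        }
    };

    node.connect(context.destination);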

> I think we *are* exposing the most important primitives in the Web Audio API already.  And we've already discussed that worker-based processing is a good thing, which should be part of the spec.

I think that's a really good start, yes! The key, as Jussi has just mentioned, is to think through how we might expose the behaviour of the built-in AudioNodes in a manner that authors of JavaScriptAudioNodes can harness. If a native FFT can blow away one written in JavaScript (such as Ofm Labs'), perhaps it should be exposed in a way that doesn't depend on the RealtimeAnalyserNode?
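To be concrete about the shape of the thing I'm imagining -- and I want to stress this is purely hypothetical, the FFT constructor, forward(), spectrum, and inverse() below don't exist anywhere in the spec today:

    // Hypothetical: a native FFT exposed as a standalone primitive,
    // usable inside any JavaScriptAudioNode without routing the signal
    // through a RealtimeAnalyserNode first.
    var context = new webkitAudioContext();
    var processor = context.createJavaScriptNode(2048, 1, 1);
    var fft = new FFT(2048, context.sampleRate); // made-up native object

    processor.onaudioprocess = function (event) {
        var input = event.inputBuffer.getChannelData(0);
        var output = event.outputBuffer.getChannelData(0);
        fft.forward(input);        // native-speed analysis
        // ...spectral processing on fft.spectrum would go here...
        fft.inverse(output);       // back to the time domain
    };

The point isn't this particular interface, just that the expensive primitive is reachable from script directly rather than only as a side effect of one built-in node.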

I'm still coming up to speed on the spec, so I'll continue to mull it over with this in mind. Another thing that stands out, off the top of my head, is the noteOn/noteGrainOn/noteOff methods that some AudioNodes implement. It wasn't clear to me from reading the spec whether a JavaScriptAudioNode can implement this behaviour as well.
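As far as I can tell, an author would have to emulate that scheduling by hand today, something like the rough sketch below. The noteOn() method on the custom node is my own invention (mirroring AudioBufferSourceNode's), and I'm assuming the playbackTime property the draft defines on AudioProcessingEvent, with context.currentTime as a coarser fallback where it's unimplemented:

    // Hand-rolled noteOn() scheduling for a custom JavaScriptAudioNode
    // voice: the node emits silence until the scheduled start time.
    function makeScheduledVoice(context, renderVoice) {
        var node = context.createJavaScriptNode(1024, 0, 1);
        var startTime = Infinity;

        node.noteOn = function (when) { // mirrors AudioBufferSourceNode
            startTime = when;
        };

        node.onaudioprocess = function (event) {
            var output = event.outputBuffer.getChannelData(0);
            var time = event.playbackTime || context.currentTime;
            if (time < startTime) {
                for (var i = 0; i < output.length; i++) { output[i] = 0; }
                return;
            }
            renderVoice(output); // fill the buffer with the voice's samples
        };

        return node;
    }

    // Usage: start the voice two seconds from now, as one would with
    // an AudioBufferSourceNode.
    var context = new webkitAudioContext();
    var voice = makeScheduledVoice(context, function (output) {
        for (var i = 0; i < output.length; i++) {
            output[i] = Math.random() * 2 - 1; // noise, just for the example
        }
    });
    voice.connect(context.destination);
    voice.noteOn(context.currentTime + 2);

It works, but the scheduling is only as accurate as the buffer size, whereas the built-in nodes get sample-accurate timing. That asymmetry is really what I'm asking about.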

Colin

Received on Tuesday, 22 May 2012 21:23:23 UTC