Re: Aiding early implementations of the web audio API

On Wed, May 23, 2012 at 9:22 AM, Colin Clark <colinbdclark@gmail.com> wrote:

> I think that's a really good start, yes! The key, as Jussi has just
> mentioned, is to think through how we might expose the behaviour of the
> built-in AudioNodes in a manner that authors of JavaScriptAudioNodes can
> harness. If a native FFT can blow away one implemented in JavaScript (such
> as the one implemented by Ofm Labs), perhaps it should be exposed in a way
> that is not dependent on use of the RealtimeAnalyserNode?
>

I think exposing an FFT library directly to JS (operating on JS typed
arrays) is a no-brainer. It should be fairly easy to spec and implement.
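To make the comparison concrete, here is a sketch (tied to no particular library) of the kind of pure-JS transform a native FFT would replace, operating directly on typed arrays:

```javascript
// Naive O(n^2) DFT over typed arrays -- a sketch of the pure-JS
// transform a native FFT API could replace. Real JS libraries use an
// O(n log n) FFT, but the input/output shape is the same:
// a Float32Array in, separate real and imaginary arrays out.
function dft(signal) {
  var n = signal.length;
  var re = new Float32Array(n);
  var im = new Float32Array(n);
  for (var k = 0; k < n; k++) {
    for (var t = 0; t < n; t++) {
      var angle = -2 * Math.PI * k * t / n;
      re[k] += signal[t] * Math.cos(angle);
      im[k] += signal[t] * Math.sin(angle);
    }
  }
  return { real: re, imag: im };
}
```

A native implementation could expose essentially this signature while running SIMD-optimized code underneath.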

Output signals from AudioNodes can be piped into a JavaScript processing
node, giving you some reuse there.
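For example (a sketch against the current draft: createJavaScriptNode and onaudioprocess are the draft's names for the JS processing node's factory and callback; the gain value and function names are mine):

```javascript
// Per-sample processing kept as a pure function so it can be reused
// (and tested) outside the audio graph. Here: a simple gain.
function applyGain(input, output, gain) {
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i] * gain;
  }
}

// Wiring sketch: pipe a native node's output through a JS node.
// Assumes a running AudioContext and a mono source; createJavaScriptNode
// is the current draft's factory for the JS processing node.
function connectThroughJsNode(context, sourceNode) {
  var jsNode = context.createJavaScriptNode(4096, 1, 1);
  jsNode.onaudioprocess = function (event) {
    applyGain(event.inputBuffer.getChannelData(0),
              event.outputBuffer.getChannelData(0),
              0.5); // halve the amplitude (arbitrary choice)
  };
  sourceNode.connect(jsNode);
  jsNode.connect(context.destination);
  return jsNode;
}
```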

> I'm still coming up to speed on the spec, so I'll continue to mull it over
> with this in mind. Another thing, off the top of my head, that stands out
> is the noteOn/noteGrainOn/noteOff methods that some AudioNodes implement.
> It wasn't clear to me from reading the spec whether JavaScriptAudioNodes
> can also implement this behaviour.
>

No. The ability to schedule the turning on and off of arbitrary
streams/nodes is one of the features the MediaStreams Processing proposal
has that Web Audio doesn't.
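So today you would have to emulate that scheduling yourself inside the JS
node, for example by converting the scheduled time to a sample offset and
counting samples across process callbacks (a sketch; the helper name is
hypothetical):

```javascript
// Sketch of emulating noteOn-style scheduling inside a
// JavaScriptAudioNode: convert a start time (seconds) into a sample
// offset and keep a running sample counter across process callbacks.
function makeScheduledGate(startTime, sampleRate) {
  var startSample = Math.round(startTime * sampleRate);
  var samplesSeen = 0;
  // Returns a per-buffer processor: silences samples that fall
  // before the scheduled start, passes the rest through.
  return function gate(input, output) {
    for (var i = 0; i < input.length; i++) {
      output[i] = (samplesSeen + i >= startSample) ? input[i] : 0;
    }
    samplesSeen += input.length;
  };
}
```

This only gets you buffer-quantized accuracy relative to the graph's own clock, which is part of why native scheduling support matters.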

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]

Received on Tuesday, 22 May 2012 22:29:24 UTC