
Re: Aiding early implementations of the web audio API

From: Chris Wilson <cwilso@google.com>
Date: Tue, 22 May 2012 16:46:06 -0700
Message-ID: <CAJK2wqU0M+NV4Kimg1Lx8qF0G0pzH1kWH1DkW7f5cSfN-EmMMw@mail.gmail.com>
To: robert@ocallahan.org
Cc: Colin Clark <colinbdclark@gmail.com>, Chris Rogers <crogers@google.com>, Jussi Kalliokoski <jussi.kalliokoski@gmail.com>, Marcus Geelnard <mage@opera.com>, public-audio@w3.org, Alistair MacDonald <al@signedon.com>
On Tue, May 22, 2012 at 3:28 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Wed, May 23, 2012 at 9:22 AM, Colin Clark <colinbdclark@gmail.com> wrote:
>
>> I think that's a really good start, yes! The key, as Jussi has just
>> mentioned, is to think through how we might expose the behaviour of the
>> built-in AudioNodes in a manner that authors of JavaScriptAudioNodes can
>> harness. If a native FFT can blow away one implemented in JavaScript (such
>> as the one implemented by Ofm Labs), perhaps it should be exposed in a way
>> that is not dependent on use of the RealtimeAnalyzerNode?
>>
>
> I think exposing an FFT library directly to JS (operating on JS typed
> arrays) is a no-brainer. It should be fairly easy to spec and implement.
>
> Output signals from AudioNodes can be piped into a Javascript processing
> node, giving you some reuse there.
>

Indeed.
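To make the "FFT operating on JS typed arrays" idea concrete, here's roughly the shape of interface I read Robert as proposing. The function name and signature are my own invention (nothing like this is in the spec yet), and a naive O(n^2) DFT stands in for the native FFT, which a real implementation would replace with a fast algorithm — that speed difference is the whole point of exposing it natively:

```javascript
// Hypothetical typed-array FFT utility. A naive O(n^2) DFT stands in
// for a native FFT here; the interface shape is what matters.
function dft(realIn, realOut, imagOut) {
  var n = realIn.length;
  for (var k = 0; k < n; k++) {
    var re = 0, im = 0;
    for (var t = 0; t < n; t++) {
      var angle = -2 * Math.PI * k * t / n;
      re += realIn[t] * Math.cos(angle);
      im += realIn[t] * Math.sin(angle);
    }
    realOut[k] = re;
    imagOut[k] = im;
  }
}

// Example: a pure cosine at bin 1 of an 8-sample frame.
var input = new Float32Array(8);
for (var i = 0; i < 8; i++) input[i] = Math.cos(2 * Math.PI * i / 8);
var re = new Float32Array(8), im = new Float32Array(8);
dft(input, re, im);
// Energy concentrates in bins 1 and 7 (re[1] ≈ re[7] ≈ 4).
```

Because it reads and writes plain Float32Arrays, a utility like this would be usable from inside a JavaScriptAudioNode callback without going through RealtimeAnalyserNode at all.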

One question - "exposing the behaviour of built-in AudioNodes in a manner
that authors of JavaScriptAudioNodes can harness" sounds like subclassing
those nodes to me, which isn't the same thing as providing only lower-level
libraries (like FFT) and asking developers to do the hook-up in JS nodes.
 What's the desire here?  I read Robert and Jussi as suggesting we not have
the native nodes at all; Colin seems to be saying "just make sure you can
utilize the underlying bits in a JSNode".  Is that an accurate reading?

>> I'm still coming up to speed on the spec, so I'll continue to mull it over
>> with this in mind. Another thing, off the top of my head, that stands out
>> is the noteOn/noteGrainOn/noteOff methods that some AudioNodes implement.
>> It wasn't clear to me from reading the spec whether JavaScriptAudioNodes
>> can also implement this behaviour.
>>
>
> No. Having the ability to schedule the turning on and off of arbitrary
> streams/nodes is one of the features MediaStreams Processing has that Web
> Audio doesn't.
>

That statement (scheduling the turning on and off of arbitrary
streams/nodes is in MSP but not WA) is true; however, you COULD implement
noteOn/noteGrainOn/noteOff on your own JavaScriptAudioNode, could you not,
if you were trying to implement a scenario that needed a JS node
functioning similarly to AudioBufferSourceNode?

noteOn/noteGrainOn/noteOff are not intended, in the WA API, to be a stream
start/pause.  Those are, to me, two separate functions.
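To sketch what I mean: a JS node can keep its own schedule and emit samples only inside the scheduled window, which gives you noteOn/noteOff-like behavior with no new spec surface. Everything below is illustrative — "SchedulableSource" and its methods are names I made up, not anything in the draft:

```javascript
// Sketch of noteOn/noteOff-style scheduling done entirely in JS.
// "SchedulableSource" is hypothetical, not part of the Web Audio draft.
function SchedulableSource(buffer, sampleRate) {
  this.buffer = buffer;         // Float32Array of source samples
  this.sampleRate = sampleRate;
  this.startTime = Infinity;    // set by noteOn()
  this.stopTime = Infinity;     // set by noteOff()
}

SchedulableSource.prototype.noteOn = function (when) {
  this.startTime = when;        // seconds, in the node's timebase
};

SchedulableSource.prototype.noteOff = function (when) {
  this.stopTime = when;
};

// The guts of an onaudioprocess handler: fill `output` for the block
// beginning at `blockTime` (seconds), emitting buffer samples only
// inside [startTime, stopTime), and silence elsewhere.
SchedulableSource.prototype.process = function (output, blockTime) {
  for (var i = 0; i < output.length; i++) {
    var t = blockTime + i / this.sampleRate;
    if (t >= this.startTime && t < this.stopTime) {
      var idx = Math.floor((t - this.startTime) * this.sampleRate);
      output[i] = idx < this.buffer.length ? this.buffer[idx] : 0;
    } else {
      output[i] = 0;
    }
  }
};
```

In a real graph you'd hook this up via context.createJavaScriptNode() and call process() with the output channel data and the block's playback time from inside onaudioprocess. The precision won't match a native AudioBufferSourceNode (you can only learn the block start time, and JS callback jitter is real), but the scheduling semantics themselves are implementable.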

-C
Received on Tuesday, 22 May 2012 23:46:57 GMT
