
Re: Aiding early implementations of the web audio API

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Wed, 23 May 2012 11:43:36 +0300
Message-ID: <CAJhzemW6HysychxQdG_Q1vhSasR+sVorro1Y72sLz8tPbWmuQQ@mail.gmail.com>
To: Marcus Geelnard <mage@opera.com>
Cc: robert@ocallahan.org, Chris Wilson <cwilso@google.com>, Colin Clark <colinbdclark@gmail.com>, Chris Rogers <crogers@google.com>, public-audio@w3.org, Alistair MacDonald <al@signedon.com>
On Wed, May 23, 2012 at 11:17 AM, Marcus Geelnard <mage@opera.com> wrote:

> On 2012-05-23 01:46:06, Chris Wilson <cwilso@google.com> wrote:
>
>> One question - "exposing the behaviour of built-in AudioNodes in a manner
>> that authors of JavaScriptAudioNodes can harness" sounds like subclassing
>> those nodes to me, which isn't the same thing as providing only
>> lower-level libraries (like FFT) and asking developers to do the hook-up
>> in JS nodes. What's the desire here?
>>
>
> I think the cleanest and most useful approach would be to provide
> functions/classes independent of the Audio API, so that you can use them in
> any way you want, including applications other than audio. For instance,
> compare this to how typed arrays originally emerged from WebGL (they were a
> requirement for making WebGL work) but have since found widespread use in
> many other applications too.
>
> /Marcus
>

My thoughts exactly.
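
To make that concrete, here's a rough sketch of how such a primitive could be
shared between audio and non-audio code. The FFT class below is purely
hypothetical and only meant as an illustration (nothing like it is specified);
webkitAudioContext and createJavaScriptNode are as currently shipped in WebKit.

  // Assumed: a hypothetical standalone FFT class with forward()/inverse()
  // methods operating on Float32Arrays. Not part of any current spec.

  // 1) Non-audio use: spectral analysis of arbitrary data.
  var fft = new FFT(1024);                // hypothetical constructor (block size)
  var samples = new Float32Array(1024);   // e.g. sensor readings, not audio
  var spectrum = fft.forward(samples);    // hypothetical: returns frequency bins

  // 2) Audio use: the very same primitive inside a JavaScriptAudioNode.
  var context = new webkitAudioContext();
  var node = context.createJavaScriptNode(1024, 1, 1);

  node.onaudioprocess = function (event) {
    var input = event.inputBuffer.getChannelData(0);
    var output = event.outputBuffer.getChannelData(0);
    var bins = fft.forward(input);        // analyse the current block
    // ... process bins here (filtering, convolution, visualization) ...
    fft.inverse(bins, output);            // hypothetical in-place inverse
  };

  node.connect(context.destination);

The point being that the FFT object knows nothing about the audio graph, so the
same code path serves visualization, offline analysis or anything else.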
Received on Wednesday, 23 May 2012 08:44:30 GMT
