Re: Aiding early implementations of the web audio API

On Wed, May 23, 2012 at 1:00 AM, Marcus Geelnard <mage@opera.com> wrote:

> On 2012-05-22 19:55:54, Chris Wilson <cwilso@google.com> wrote:
>
>> I have to disagree with the definition of "trivial," then.  The only node
>> types I think could really be considered trivial are Gain, Delay and
>> WaveShaper - every other type is significantly non-trivial to me.
>>
>
> I'd say that at least BiquadFilterNode, RealtimeAnalyserNode (given our
> suggested simplifications), AudioChannelSplitter and AudioChannelMerger are
> trivial too. In fact, if the spec actually specified what the nodes should
> do, the corresponding JavaScript implementations would be quite close to
> copy+paste versions of the spec.


Again - we must have radically different ideas of what "trivial" means.
AudioChannelSplitter/Merger, perhaps - I haven't used them, so I haven't
closely examined them - but I definitely wouldn't put filters and analysers
in that bucket.
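
For concreteness, here's roughly what a single biquad kernel looks like in
JS - a minimal sketch of a direct-form-I lowpass using the common Audio EQ
Cookbook recipe (which the spec doesn't currently mandate; the coefficient
choice is exactly the kind of detail that isn't copy+paste):

    // Direct-form-I biquad lowpass over Float32Arrays, one channel.
    // Coefficient formulas follow the widely used Audio EQ Cookbook
    // recipe; the spec does not pin these down today.
    function makeLowpass(sampleRate, frequency, Q) {
      var w0 = 2 * Math.PI * frequency / sampleRate;
      var alpha = Math.sin(w0) / (2 * Q);
      var cosw0 = Math.cos(w0);
      var b0 = (1 - cosw0) / 2, b1 = 1 - cosw0, b2 = (1 - cosw0) / 2;
      var a0 = 1 + alpha, a1 = -2 * cosw0, a2 = 1 - alpha;
      var x1 = 0, x2 = 0, y1 = 0, y2 = 0;   // filter state across calls
      return function process(input, output) {
        for (var i = 0; i < input.length; i++) {
          var x0 = input[i];
          var y0 = (b0 / a0) * x0 + (b1 / a0) * x1 + (b2 / a0) * x2
                 - (a1 / a0) * y1 - (a2 / a0) * y2;
          output[i] = y0;
          x2 = x1; x1 = x0;
          y2 = y1; y1 = y0;
        }
      };
    }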

>> And even then, when you layer on the complexity involved with handling
>> AudioParams
>> (for the gain on Gain and the delayTime on Delay), and the interpolation
>> between curve points on WaveShaper, I'm not convinced they're actually
>> trivial.
>>
>
> If handling AudioParams is actually a complex thing, I think we should
> seriously consider simplifying the corresponding requirements or dropping
> it altogether.


Again, perhaps we disagree on "complex".  I think it is sufficiently
complex that I'd rather have the underlying platform support it than have
to implement the timing and interpolation myself.  Naively handling them
probably IS pretty easy.
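
To be concrete about "naive": evaluating one scheduled linear ramp is easy
(a hypothetical helper, ignoring the event timeline, exponential ramps,
setValueCurveAtTime and sample-accurate scheduling - which is where the
real complexity lives):

    // Naive per-sample evaluation of a single linear ramp.
    // The full AudioParam model (an ordered event list, multiple
    // curve types, cancelScheduledValues) is considerably more work.
    function linearRampValue(t, startTime, startValue, endTime, endValue) {
      if (t <= startTime) return startValue;
      if (t >= endTime) return endValue;
      var k = (t - startTime) / (endTime - startTime);
      return startValue + k * (endValue - startValue);
    }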


>> The easiest interface would just be to have an output device stream.
>>  However, I think having a basic audio toolbox in the form of node types
>> will cause an explosion of audio applications -
>>
>
> ...which is why there are JS libs. The Web Audio API is already too
> complex to use for most Web developers, so there are already libs/wrappers
> available for making it easier to build basic audio applications.
>

I'm not sure what you're trying to say.  There's too much complexity, it's
already having to be wrapped for real-world developers, so let's push more
complexity on them?

At any rate, I disagree categorically that "the Web Audio API is already
too complex to use for most Web developers."  It needs the spec to be
improved a bit, and it needs more complete tutorials, like any other API
(IndexedDB, for example), but the complexity is there for a reason, and
it's not that hard to use for simple cases.
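
For example, the simple playback case is only a handful of lines (sketched
against today's WebKit names - createGainNode/noteOn - and assuming `buffer`
is an AudioBuffer you've already decoded, e.g. via decodeAudioData):

    // Play a decoded buffer through a gain node - about as simple
    // as audio playback gets.
    var ctx = new webkitAudioContext();
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    var gain = ctx.createGainNode();   // createGain() in later drafts
    gain.gain.value = 0.5;
    source.connect(gain);
    gain.connect(ctx.destination);
    source.noteOn(0);                  // start(0) in later drafts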


> I'd much prefer a JS lib to implement all the common nodes
> (typically the ones that are already in the spec + more). Not only would it
> be 100% cross-browser interoperable, it would also be extensible at any
> time, without requiring spec updates and adoption by clients.
>
>> building the vocoder example was illustrative to me, because I ended up
>> using about half of the node types, and found them to be fantastically
>> easy to build on.
>>
>
> That would have been just as easy if the nodes were implemented in a JS
> lib, wouldn't it?


Just as easy?  Well, considering that to me (the developer) they're just
objects, the only additional complexity is including an external JS file
(and thereby either keeping the lib up to date myself or including
external scripts, but that's relatively trivial).  However, I will point
out that my vocoder currently uses approximately 400 AudioNodes in its
base state (28 bands, 14 nodes each, and a half-dozen or so nodes for
convenience, analysis, input signals and miscellany).  Performance does
count for a lot.
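
To give a sense of the scale involved, here is a purely hypothetical
band-construction loop - NOT my actual topology; the band spacing, the
`modulator`/`carrier`/`outputMix` parameters, and the per-band chain are
all made up for illustration, and use far fewer than 14 nodes per band:

    // Hypothetical vocoder band construction. `modulator` and
    // `carrier` are source nodes; `outputMix` is a gain node that
    // feeds the destination.
    function buildBands(ctx, modulator, carrier, outputMix) {
      var NUM_BANDS = 28;
      for (var i = 0; i < NUM_BANDS; i++) {
        var freq = 110 * Math.pow(1.15, i);        // made-up spacing
        var modFilter = ctx.createBiquadFilter();  // modulator bandpass
        modFilter.type = modFilter.BANDPASS;       // numeric constant today
        modFilter.frequency.value = freq;
        var carFilter = ctx.createBiquadFilter();  // carrier bandpass
        carFilter.type = carFilter.BANDPASS;
        carFilter.frequency.value = freq;
        var bandGain = ctx.createGainNode();       // band envelope applied here
        modulator.connect(modFilter);              // per-band envelope
        carrier.connect(carFilter);                //   follower omitted
        carFilter.connect(bandGain);
        bandGain.connect(outputMix);
      }
    }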

> With a JS lib designed and implemented by audio & signal processing
> experts, this would not be a problem. In fact, I personally think that the
> current convolver node is way too abstract for most Web developers anyway.
> How do you make reverb from an array? It's quite a difficult area for
> someone without enough understanding of signal processing & acoustics. I
> guess most developers would likely use pre-created impulse responses and
> copy/paste tutorial code without understanding much of how it works. An
> algorithmic (feedback-based) reverb node with a few simple parameters would
> be much easier to use IMO (even if it wouldn't produce as good/accurate
> results).


You don't HAVE to "make reverb from an array" - in fact, I wouldn't expect
anyone to do this.  I would expect them to do what I did - grab a
pre-created impulse response for the scenario they want, and set up the
node using effectively tutorial code.  I don't see this as a bad thing -
and it will have fantastically more powerful capabilities than any given
simple algorithmic reverb node with a few simple parameters.  (As an aside
- I expect it would be QUITE easy to wrap the current convolution node in a
JS library that creates such a simple parameter-based algorithmic reverb,
if you thought it had value.)
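
For example, a sketch of such a wrapper (hypothetical name and parameters;
it fakes the "algorithmic" part by synthesizing an exponentially decaying
noise impulse response rather than building a feedback network):

    // Wrap ConvolverNode behind two simple parameters by generating
    // a decaying-noise impulse response - a common cheap substitute
    // for a recorded IR.
    function createSimpleReverb(ctx, seconds, decay) {
      var rate = ctx.sampleRate;
      var length = Math.floor(rate * seconds);
      var ir = ctx.createBuffer(2, length, rate);
      for (var ch = 0; ch < 2; ch++) {
        var data = ir.getChannelData(ch);
        for (var i = 0; i < length; i++) {
          data[i] = (Math.random() * 2 - 1) *
                    Math.pow(1 - i / length, decay);
        }
      }
      var convolver = ctx.createConvolver();
      convolver.buffer = ir;
      return convolver;
    }

A developer would then call createSimpleReverb(ctx, 2, 3) and connect it
like any other node, never touching an array themselves.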

As the developer became more adept, and more interested in tweaking their
audio experience, they'd look at other impulse responses, and possibly
(though likely not, imo) investigate recording their own.  This is no
different from beginning to experiment with other types of reverb than the
"hall" and "room" presets on your reverb box in the studio.

 -C

Received on Wednesday, 23 May 2012 17:43:09 UTC