Re: Simplifying specing/testing/implementation work

I'd like to request that we not plan any grand changes here until Chris is
back from vacation (end of the month).  I'd also like to explicitly
separate my opinion detailed below from his, since we are coming at the API
from distinctly different angles (I'm mostly a consumer of the API, he's an
API designer) and backgrounds (he's an audio engineering expert, and I'm a
hack who likes playing around with things that go bing!), and, despite both
working for Google, we aren't always in agreement. :)

My opinion, in short: I oppose the idea of having a "core spec" as captured
above.  I think it will simply become a way for implementers to skip large
parts of the API, while causing confusion and compatibility problems for
developers using the API.

I think considering JSNode* as the core around which most audio apps will
be built is incorrect.  I've now built a half-dozen relatively complex
audio applications - the Vocoder <http://webaudiovocoder.appspot.com/>, the Web
Audio Playground <http://webaudioplayground.appspot.com/>, my in-progress
DJ deck <http://cwilsotest.appspot.com/wubwubwub/index.html>, a couple of
synthesizers, and a few others I'm not ready to show off yet. If I had to
use JSNode to create my own delays, build my own filters by setting up my
own FFT machinery, and so on, quite frankly I would be off doing something
else. I think
recognizing these features as basic audio tools is critical; the point of
the API, as I've gotten to know it, is to enable powerful audio
applications WITHOUT requiring a degree in digital signal processing.  In
the Web Audio coding I've done, I've used JSNode exactly once - and that
was just to test it out. I have found zero need for it in the apps I've
built, because the built-in nodes have been both more performant and far,
far easier to use than anything I would have written myself.
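
To make that concrete, here is roughly all it takes to get a filtered,
delayed source playing with the built-in nodes - a minimal sketch, assuming
an existing AudioContext in a variable called context and a decoded
AudioBuffer in buffer; the method names here (createDelayNode, noteOn) are
what current WebKit builds ship:

    var source = context.createBufferSource();
    source.buffer = buffer;

    var filter = context.createBiquadFilter();   // built-in IIR filter
    filter.frequency.value = 1000;               // AudioParam, automatable

    var delay = context.createDelayNode();       // built-in delay line
    delay.delayTime.value = 0.25;

    source.connect(filter);
    filter.connect(delay);
    delay.connect(context.destination);
    source.noteOn(0);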

If the "core spec" is buffers, JSNodes, and AudioNode, I see this as an
ultimately futile and delaying tactic for getting powerful audio apps built
by those without - very much like we had a "CSS1 Core" spec for a while.
 If the goal is simply to expose the audio output (and presumably input)
mechanism, then I'm not sure why an AudioData API-like write() API is not a
much simpler solution - if there's no other node types than JSNode, I'm not
sure what value the Node routing system provides.
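
For illustration: with only JSNode available, the "routing" collapses to a
single programmable node feeding the destination, which is functionally just
a callback-driven write() loop. A minimal sketch, again assuming an existing
AudioContext in context (createJavaScriptNode is the factory name in current
WebKit builds):

    // buffer size, input channels, output channels
    var node = context.createJavaScriptNode(2048, 1, 1);
    var phase = 0;

    node.onaudioprocess = function (e) {
      var out = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < out.length; i++) {
        out[i] = Math.sin(phase);              // hand-rolled oscillator
        phase += 2 * Math.PI * 440 / context.sampleRate;
        if (phase > 2 * Math.PI) phase -= 2 * Math.PI;
      }
    };

    node.connect(context.destination);         // the only "routing" left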

Ultimately, I think a lot of game developers in particular will want to use
the built-in native processing.  If the AudioNode types like Filter and
Convolver aren't required in an implementation, then either we are creating
a much more complex compatibility matrix - like we did with CSS1 Core, but
worse - or they won't be able to rely on those features, in which case I'm
not sure why we have a routing system.

That said - I do agree (as I think Chris does also) that JSNode isn't where
it needs to be. It DOES need support for AudioParam, support for varying
numbers of inputs/outputs/channels, and especially worker-based processing.
 But just because it COULD be used to implement DelayNode doesn't mean
DelayNode shouldn't be required.
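
To show what "could be used to implement DelayNode" actually costs today,
here is a rough sketch of a fixed, single-channel delay built on JSNode
(createNaiveDelay is just a hypothetical helper name; no interpolation, no
automation of the delay time) next to what the built-in node gives you:

    function createNaiveDelay(context, delaySeconds) {
      var node = context.createJavaScriptNode(2048, 1, 1);
      var ring = new Float32Array(Math.floor(delaySeconds * context.sampleRate));
      var writeIndex = 0;

      node.onaudioprocess = function (e) {
        var input = e.inputBuffer.getChannelData(0);
        var output = e.outputBuffer.getChannelData(0);
        for (var i = 0; i < input.length; i++) {
          output[i] = ring[writeIndex];        // sample written delaySeconds ago
          ring[writeIndex] = input[i];
          writeIndex = (writeIndex + 1) % ring.length;
        }
      };
      return node;
    }

    // versus the built-in node, which also exposes delayTime as an AudioParam:
    // var delay = context.createDelayNode();
    // delay.delayTime.value = 0.25;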

I'm also not opposed to a new API for doing signal processing on Typed
Arrays in JavaScript. But again, I'd much rather have the simple interface
of BiquadFilterNode to use than have to implement my own filter on top of
such an API - I see that as a much more complex tool, reserved for the cases
when I really do NEED to build my own tools.
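
For comparison, "implement my own filter" on such an API means writing and
maintaining something like the Direct Form I biquad below (processBiquad is
a hypothetical sketch; the b0..b2/a1..a2 coefficients would come from the
usual cookbook formulas, which I'd also have to get right), versus setting a
couple of AudioParams on a BiquadFilterNode:

    // y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    // x: input Float32Array, y: output Float32Array, c: coefficients,
    // state: filter memory the caller must carry across blocks.
    function processBiquad(x, y, c, state) {
      for (var n = 0; n < x.length; n++) {
        var out = c.b0 * x[n] + c.b1 * state.x1 + c.b2 * state.x2
                - c.a1 * state.y1 - c.a2 * state.y2;
        state.x2 = state.x1;  state.x1 = x[n];
        state.y2 = state.y1;  state.y1 = out;
        y[n] = out;
      }
    }

    // versus:
    // var filter = context.createBiquadFilter();
    // filter.frequency.value = 1000;
    // filter.Q.value = 5;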

All this aside, I do believe the spec has to be specified clearly enough
that independent implementations are interoperable, and I recognize that it
is not there today.

-Chris

*I use "JSNode" as shorthand for "programmable node that the developer has
to implement themselves" - that is, independent of whether it's JavaScript
or some other programming language.

On Thu, Jul 19, 2012 at 9:44 AM, Raymond Toy <rtoy@google.com> wrote:

>
>
> On Thu, Jul 19, 2012 at 7:11 AM, Jussi Kalliokoski <
> jussi.kalliokoski@gmail.com> wrote:
>
>>
>> Obviously SIMD code is faster than addition in JS now, for example. And
>> yes, an IIR filter is a type of convolution, but I don't think it's possible
>> to write an efficient IIR filter algorithm using a convolution engine —
>> after all, a convolution engine should be designed to deal with FIRs. Not
>> to mention that common IIR filters have 4 (LP, HP, BP, N) kernels, which
>> would be really inefficient for a FastConvolution algorithm, even if it
>> supported FIR. And as far as IIR filter performance goes, I think SIMD
>> instructions offer very little usefulness in IIR algorithms, since they're
>> so inherently serial.
>>
>>
>  https://bugs.webkit.org/show_bug.cgi?id=75528 says that adding SIMD
> gives a 45% improvement.
>
> Ray
>
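
(One brief note on the IIR point quoted above: an IIR filter is recursive -
each output sample depends on previous output samples, like the y[n-1] and
y[n-2] terms in the biquad sketch earlier in this mail. That recursion is
why it can't simply be handed to an FFT-based convolution engine as a finite
kernel, and why SIMD across samples buys less than it does for FIR - though,
per the WebKit bug Ray cites, there are clearly still worthwhile SIMD gains
to be had in practice.)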
