
Re: Simplifying specing/testing/implementation work

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Thu, 19 Jul 2012 15:07:06 +0300
Message-ID: <CAJhzemVh_56Jrfhzdr8PwUNLEutKcoixF0wX1s-XeY3KaKFPRw@mail.gmail.com>
To: Marcus Geelnard <mage@opera.com>
Cc: public-audio@w3.org
This! +9999. Exactly what I've been wanting all along.

My colleague Jens Nockert is working on something called Hydrazine, a JS
extension that provides an API taking advantage of SIMD instructions and
the like, effectively giving typed arrays a kind of high-level assembly
language for manipulating them efficiently. It doesn't really overlap with
this, though, since things like FFT and convolution are quite
application-specific. That said, given a high-performance FFT library (on
the other hand, it's actually really surprising how fast an FFT you can
write in JS), better ways to do parallel processing in JS, and what
Hydrazine aims to do, the performance difference between a convolution
engine written in JS and a native one would be insignificant.
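To illustrate the "fast FFT in JS" point: here's a minimal sketch of an
in-place radix-2 Cooley-Tukey FFT over plain Float64Arrays. The function
name and signature are just mine for illustration, not a proposal for the
DSP interface:

```javascript
// Minimal in-place radix-2 Cooley-Tukey FFT (illustrative sketch, not tuned).
// re/im hold the real/imaginary parts; their length n must be a power of two.
function fft(re, im) {
  const n = re.length;
  // Bit-reversal permutation.
  for (let i = 1, j = 0; i < n; i++) {
    let bit = n >> 1;
    for (; j & bit; bit >>= 1) j ^= bit;
    j ^= bit;
    if (i < j) {
      [re[i], re[j]] = [re[j], re[i]];
      [im[i], im[j]] = [im[j], im[i]];
    }
  }
  // Butterfly passes, doubling the sub-transform length each time.
  for (let len = 2; len <= n; len <<= 1) {
    const ang = -2 * Math.PI / len;
    for (let i = 0; i < n; i += len) {
      for (let k = 0; k < len / 2; k++) {
        const wr = Math.cos(ang * k), wi = Math.sin(ang * k);
        const ur = re[i + k], ui = im[i + k];
        const vr = re[i + k + len / 2] * wr - im[i + k + len / 2] * wi;
        const vi = re[i + k + len / 2] * wi + im[i + k + len / 2] * wr;
        re[i + k] = ur + vr;
        im[i + k] = ui + vi;
        re[i + k + len / 2] = ur - vr;
        im[i + k + len / 2] = ui - vi;
      }
    }
  }
}
```

It's not many lines, and a JIT handles the inner loops well; a native DSP
interface would still win on SIMD and cache behavior, but not by as much
as people tend to assume.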

That said, we need to think about the current situation and cater to it
accordingly. I don't think IIR filters need to be provided by the API;
they hardly have the same performance gap between JS and native as FFT and
convolution do, which are the things the API should provide.
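To make the IIR point concrete, here's roughly what a biquad looks like in
plain JS (a transposed Direct Form II sketch; the name and signature are
mine). Each sample costs only a handful of multiply-adds, so there isn't
much for a native implementation to win back, unlike a long convolution:

```javascript
// Transposed Direct Form II biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
//                                          - a1*y[n-1] - a2*y[n-2]
// Illustrative sketch; coefficient naming follows the usual convention.
function biquad(input, b0, b1, b2, a1, a2) {
  const out = new Float32Array(input.length);
  let z1 = 0, z2 = 0; // filter state
  for (let i = 0; i < input.length; i++) {
    const x = input[i];
    const y = b0 * x + z1;
    z1 = b1 * x - a1 * y + z2;
    z2 = b2 * x - a2 * y;
    out[i] = y;
  }
  return out;
}
```

Five multiplies and four adds per sample is the kind of loop JS engines
already run at near-native speed, whereas an FFT or a long convolution has
the memory-access patterns where native SIMD code really pulls ahead.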

Agreed that JavaScriptAudioNode needs to be renamed. What about something
like CustomAudioNode or ProgrammableAudioNode? Also, since the current
JavaScriptAudioNode should be the core building block of the API, we might
as well rename createJavaScriptNode to createNode.

Also, I'd like to add MediaElementAudioSource (a name that I think needs
simplifying as well) to the list of core elements. Although I'd rather see
a MediaStreamSourceNode, with the media elements exposing a .stream
property that could be hooked up to the same node; I'm not sure why media
streams and media elements need to be represented by different nodes.


On Thu, Jul 19, 2012 at 2:03 PM, Marcus Geelnard <mage@opera.com> wrote:

> Hi group!
>
> We have been over this many times before, but since some things are taking
> quite some time (getting the semantics detailed in the spec, getting
> started with test cases, making the API support more use cases etc) I'd
> like to get back to what Olivier brought up in [1], i.e. splitting the spec
> into two (or more) levels.
>
> We could basically have the "core" part of the API as the most primitive
> level. I suppose it would include:
> * AudioContext
> * AudioNode
> * JavaScriptAudioNode (new name, please)
> * AudioDestinationNode
> * AudioParam
> * AudioBuffer
>
> The rest, which would mostly fall under the category "signal processing",
> would be included in the next level (or levels).
>
> This way we can start creating tests and doing implementation much faster,
> not to mention that the "core" spec will become much more manageable.
>
> Now, if we make sure to "fix" the JavaScriptAudioNode so that it becomes a
> first class citizen (e.g. support for AudioParam, support for varying
> number of inputs/outputs/channels, worker-based processing, etc), most of
> the higher level functionality should be possible to implement using the
> JavaScriptAudioNode (except possibly MediaElementAudioSourceNode?).
>
> Furthermore, I would like to suggest (as has been discussed before) that
> the Audio WG introduces a new API for doing signal processing on Typed
> Arrays in JavaScript. Ideally it would expose a number of methods that are
> hosted in a separate interface (e.g. named "DSP") that is available to both
> the main context and Web worker contexts, similarly to how the Math
> interface works.
>
> I've done some work on a draft for such an interface, and based on what
> operations are typical for the Audio API and also based on some
> benchmarking (JS vs native), the interface should probably include: FFT,
> filter (IIR), convolve (special case of filter), interpolation, plus a
> range of simple arithmetic and Math-like operations.
>
> The merits of such an API would be many:
> * Very simple to specify, implement & test.
> * It would bring JS-based processing performance pretty much to par with
> native AudioNodes.
> * The specification of higher level AudioNodes could refer to the DSP spec
> for implementation details.
> * As a Web developer you're free to customize AudioNodes if they do not
> fulfill all your needs, by re-implementing and extending them in JS, or
> even create new exciting nodes.
> * You would be able to use the native DSP horsepowers of your computer for
> other things than the Audio API (e.g. for things like voice recognition,
> SETI@home-like applications, etc) without having to make ugly abuses of
> the AudioContext.
> * The time-to-market for new Audio API functionality would be close to
> zero, since you can likely shim it using JS+DSP.
>
> Any comments? Would this be a good strategy?
>
> /Marcus
> [1] http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0388.html
> --
> Marcus Geelnard
> Core Graphics Developer
> Opera Software ASA
Received on Thursday, 19 July 2012 12:07:37 UTC