
Re: Simplifying specing/testing/implementation work

From: olivier Thereaux <olivier.thereaux@bbc.co.uk>
Date: Thu, 19 Jul 2012 12:28:40 +0100
Cc: <public-audio@w3.org>
Message-Id: <0192993B-DA34-41E0-A9D5-5EB237F32B49@bbc.co.uk>
To: Marcus Geelnard <mage@opera.com>
Hi Marcus, thanks for bringing this discussion back to light. I realise I should have pushed for it a little more before we started the whole rechartering process…

On 19 Jul 2012, at 12:03, Marcus Geelnard wrote:
> We could basically have the "core" part of the API as the most primitive level.
> […]
> The rest, which would mostly fall under the category "signal processing", would be included in the next level (or levels).
> This way we can start creating tests and doing implementation much faster, not to mention that the "core" spec will become much more manageable.

Yes, I'd be curious to hear from members currently looking at implementing the API about this. I am quite positive about the idea of splitting the spec into core and modules (or levels) in principle. However, the split, if any, has to 1) make architectural sense and 2) not create such a complex net of dependencies that each spec will wait for the others before progressing through the standard process.

> Furthermore, I would like to suggest (as has been discussed before) that the Audio WG introduces a new API for doing signal processing on Typed Arrays in JavaScript. Ideally it would expose a number of methods that are hosted in a separate interface (e.g. named "DSP") that is available to both the main context and Web worker contexts, similarly to how the Math interface works.
> I've done some work on a draft for such an interface, and based on what operations are typical for the Audio API and also based on some benchmarking (JS vs native), the interface should probably include: FFT, filter (IIR), convolve (special case of filter), interpolation, plus a range of simple arithmetic and Math-like operations.

This has been floated a few times indeed. Again, the big question for me is whether layered specs, wonderful as they are in principle, would prove horrendous to implement and bad for performance.
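To make the discussion concrete: as I read Marcus's proposal, the "DSP" interface would be a stateless namespace object exposed to both the main context and workers, much like Math. A minimal sketch might look like the following; all names and signatures here are assumptions for illustration, not anything agreed by the group.

```javascript
// Hypothetical sketch of the proposed "DSP" interface: a Math-like
// namespace operating on Typed Arrays, available in both the main
// context and Web Workers. Names and signatures are illustrative only.
const DSP = {
  // Element-wise addition, one of the "simple arithmetic" operations:
  // out[i] = a[i] + b[i].
  add(out, a, b) {
    for (let i = 0; i < out.length; i++) out[i] = a[i] + b[i];
    return out;
  },

  // Direct-form convolution of a signal x with a kernel h -- the
  // "special case of filter" Marcus mentions. A native implementation
  // would presumably vectorise or use FFT-based convolution.
  convolve(x, h) {
    const out = new Float32Array(x.length + h.length - 1);
    for (let i = 0; i < x.length; i++)
      for (let j = 0; j < h.length; j++)
        out[i + j] += x[i] * h[j];
    return out;
  },
};

// Example: smooth a short signal with a 2-tap averaging kernel.
const y = DSP.convolve(new Float32Array([1, 2, 3]),
                       new Float32Array([0.5, 0.5]));
// y is Float32Array [0.5, 1.5, 2.5, 1.5]
```

The appeal of the Math-like shape is precisely that it carries no AudioContext baggage, which is what would make it usable from workers and for non-audio workloads.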

> * You would be able to use the native DSP horsepower of your computer for other things than the Audio API (e.g. for things like voice recognition, SETI@home-like applications, etc) without having to make ugly abuses of the AudioContext.

Would video processing also be a use case for this? Do we know of other groups for which this would solve one of their needs? Do we know of any similar work being done?

Received on Thursday, 19 July 2012 11:28:56 UTC