Re: TAG feedback on Web Audio

From: Marcus Geelnard <mage@opera.com>
Date: Thu, 08 Aug 2013 12:16:17 +0200
Message-ID: <52036FF1.7060809@opera.com>
To: Noah Mendelsohn <nrm@arcanedomain.com>
CC: robert@ocallahan.org, Jer Noble <jer.noble@apple.com>, "K. Gadd" <kg@luminance.org>, Srikumar Karaikudi Subramanian <srikumarks@gmail.com>, Chris Wilson <cwilso@google.com>, Alex Russell <slightlyoff@google.com>, Anne van Kesteren <annevk@annevk.nl>, Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>, "public-audio@w3.org" <public-audio@w3.org>, "www-tag@w3.org List" <www-tag@w3.org>
On 2013-08-08 03:50, Noah Mendelsohn wrote:
>
>
> On 8/7/2013 5:44 PM, Robert O'Callahan wrote:
>> On Thu, Aug 8, 2013 at 8:11 AM, Noah Mendelsohn <nrm@arcanedomain.com
>> <mailto:nrm@arcanedomain.com>> wrote:
>>
>>     Now ask questions like: how many bytes per second will be copied in
>>     aggressive usage scenarios for your API? Presumably the answer is much
>>     higher for video than for audio, and likely higher for multichannel
>>     audio (24-track mixing) than for simpler scenarios.
>>
>>
>> For this we need concrete, realistic test cases. We need people who are
>> concerned about copying overhead to identify test cases that they're
>> willing to draw conclusions from. (I.e., test cases where, if we
>> demonstrate low overhead, they won't just turn around and say "OK I'll
>> look for a better testcase" :-).)
>
> Right, but with one caveat: an API like this should have a lifetime of 
> a decade or two minimum IMO, so there should be some effort to 
> consider what's likely to change in terms of both use cases and 
> hardware. If there's no clear vision for that, then one could make the 
> case for leaving a very significant performance cushion today: I.e. 
> the API should be implementable for today's use cases not just with 
> acceptable overhead, but with significant "power to spare".

If we're going to discuss future performance scenarios, I personally 
think that the single most important factor to consider is new and 
alternative hardware & software architectures.

The current API was clearly designed with modern general-purpose CPUs in 
mind, which I think is too narrow a view if we want to look to the 
future. I can easily see the benefit of using dedicated hardware such as 
DSPs or GPGPUs for audio processing, especially on hand-held devices. 
Interesting things are happening in this field all the time, and I think 
we should stay open to whatever architectures could be on the market 
5-10 years from now.

I strongly believe that the part of the current API most likely to block 
such future directions is the requirement that audio data buffers be 
shared between the audio engine and the JS engine. Sharing might seem 
like the optimal solution (performance-wise) right now, but it 
effectively rules out many attempts to move audio processing off-CPU 
(e.g. try to imagine the limitations imposed on a hardware-accelerated 
3D graphics processor if it had to be able to observe data mutations 
made by the CPU in shared RAM).

/Marcus

>
> Noah


-- 
Marcus Geelnard
Technical Lead, Mobile Infrastructure
Opera Software
Received on Thursday, 8 August 2013 10:16:50 UTC