
Issues with ROC's proposal

From: Marcus Geelnard <mage@opera.com>
Date: Mon, 02 Sep 2013 11:56:42 +0200
Message-ID: <522460DA.5000201@opera.com>
To: WebAudio <public-audio@w3.org>
Hi all!

Since we have a CfC coming up, I'd like to get some feedback on a few 
points about ROC's proposal for AudioBuffers [1], since I'm a bit short 
on ideas myself.

Generally speaking I'm in the "can live with" camp. That is, I'm not 
happy with the syntax/semantics, but I understand the position of 
supporting existing content, and it's hard to do anything much different 
from ROC's proposal without breaking existing content (if that is indeed 
the main goal).


_The main things that bother me about the proposal are_:


1) The naming/functionality of getChannelData().

If we disregard the case of the AudioProcessingEvent (see below), the 
main purpose (or at least a very important function) of getChannelData() 
seems to be to *set* the channel data.

We now have the copyChannelDataTo() method for getting a persistent 
version of the channel data, while getChannelData() is used for getting 
a volatile (sometimes persistent!) version of the channel data that may 
later be transferred back to the AudioBuffer.

IMO this is confusing (i.e. "get" ~= set/modify, "copy" == get), to say 
the least.
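To illustrate the inversion in plain JS, here is a toy mock of the 
proposal's semantics as I read them (MockAudioBuffer and the 
single-channel signature are my stand-ins for illustration; only the two 
method names come from [1]):

```javascript
// Toy mock: "get" is really how you modify, "copy" is really how you get.
class MockAudioBuffer {
  constructor(length) {
    this._data = new Float32Array(length);
  }
  // Hands out the live array, so in practice it is used to *set* data:
  getChannelData() {
    return this._data;
  }
  // Used to *get* a stable, detached snapshot of the data:
  copyChannelDataTo(dest) {
    dest.set(this._data);
  }
}

const buf = new MockAudioBuffer(4);
buf.getChannelData()[0] = 0.5;      // "get" used to modify the buffer
const snapshot = new Float32Array(4);
buf.copyChannelDataTo(snapshot);    // "copy" used to read it back
```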


2) The getChannelData() method can return a persistent copy.

If you call getChannelData() on an AudioBuffer that is in use, you will 
get a persistent copy of the AudioBuffer data (i.e. modifying the data 
does nothing to the AudioBuffer, and the array will never be neutered), 
which kind of goes against the purpose of getChannelData(). Again, I 
find this quite confusing.

I think that a better solution would be to throw in that case (or 
possibly return an empty array).
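Roughly what I have in mind, sketched with a mock (the mock itself and 
its inUse flag are my invention, not proposal text):

```javascript
// Sketch: throw while the buffer is in use, instead of silently
// handing back a detached copy.
class MockAudioBuffer {
  constructor(length) {
    this._data = new Float32Array(length);
    this.inUse = false;  // e.g. playing through an AudioBufferSourceNode
  }
  getChannelData() {
    if (this.inUse) {
      // Fail loudly rather than return data that won't be acquired back:
      throw new Error("InvalidStateError: AudioBuffer is in use");
    }
    return this._data;
  }
}

const buf = new MockAudioBuffer(128);
buf.inUse = true;
let threw = false;
try {
  buf.getChannelData();
} catch (e) {
  threw = true;  // the caller finds out immediately, not via a silent copy
}
```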


3) The AudioProcessingEvent still uses AudioBuffers.

I realize that not all agree that this is a problem, but this 
complicates the semantics of the AudioBuffer (among other things). For 
instance, in an AudioProcessingEvent getChannelData() is used both for 
getting the input channel data and for setting the output channel data.

IMO the better solution would be to pass pure Float32Arrays in the event 
handler instead (using neutering and ownership transfer as necessary).
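A rough sketch of what that could look like. The event field names 
(inputData/outputData) are hypothetical, and a plain object simulates 
the event here, so the neutering/ownership-transfer machinery is elided:

```javascript
// Hypothetical handler shape if the event carried plain Float32Arrays
// instead of AudioBuffers.
function onAudioProcess(e) {
  // e.inputData is read, e.outputData is written in place:
  for (let i = 0; i < e.inputData.length; i++) {
    e.outputData[i] = e.inputData[i] * 0.5;  // simple gain stage
  }
}

// Simulated invocation with plain arrays:
const evt = {
  inputData: Float32Array.from([1.0, -1.0]),
  outputData: new Float32Array(2),
};
onAudioProcess(evt);
```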


4) Data transfer from JS to an AudioBuffer is implicit.

The data transfer from JS to an AudioBuffer is implicit by design rather 
than explicit, which is confusing and could lead to hard-to-find bugs.

In general it's also sub-optimal from a performance perspective, since 
it's easier to design a performance-critical application if you can 
limit possible performance hits to explicit points in your code (e.g. 
let them happen during pre-processing/loading stages rather than during 
playback stages). Now, since the proposal relies heavily on neutering, 
this might not be much of an issue, but I still think it's a good idea 
to at least *consider* an implementation that does "acquire the 
contents" using a copy operation.
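For comparison, this is what an explicit transfer point looks like, with 
plain Float32Arrays standing in for an AudioBuffer's channel storage 
(just a minimal sketch of the idea, not API text):

```javascript
// Explicit copy-in: the cost happens at a point the author chose.
const channel = new Float32Array(4);             // "AudioBuffer" storage
const scratch = Float32Array.from([1, 2, 3, 4]); // data prepared in JS

// The (potential) performance hit happens here, visibly, e.g. during a
// loading stage rather than during playback:
channel.set(scratch);

scratch[0] = 99;  // later edits to the JS-side array don't leak in
```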


_Possible solutions_?

It's hard to both be backwards compatible and offer solutions to the 
above issues without introducing new interfaces and keeping the old 
interfaces only as deprecated. We've already been over that, and it 
seems to be a bad idea to have deprecated interfaces in the v1 spec.

However, I can think of at least two solutions for the future:

A) Once we introduce worker-based processing nodes we could consider 
using a slightly different design for those, hopefully one that does not 
include AudioBuffers at all.

B) We could also consider deprecating getChannelData() in favor of a 
more explicit interface in a future version of the API.


/Marcus



[1] https://wiki.mozilla.org/User:Roc/AudioBufferProposal

-- 
Marcus Geelnard
Technical Lead, Mobile Infrastructure
Opera Software
Received on Monday, 2 September 2013 09:57:34 UTC
