
Re: Missing information in the Web Audio spec

From: Philip Jägenstedt <philipj@opera.com>
Date: Wed, 16 May 2012 17:19:29 +0200
To: public-audio@w3.org, "Robert O'Callahan" <robert@ocallahan.org>
Message-ID: <op.weep6rwisr6mfa@kirk>
On Thu, 10 May 2012 03:19:23 +0200, Robert O'Callahan  
<robert@ocallahan.org> wrote:

> Imprecision in the spec has been discussed a bit before but the issues
> haven't been resolved so I want to itemize some details that need to be
> clarified. This is only based on a quick skim of the spec, there are
> probably a lot more like this.

There is a lot more, indeed.

> It needs to be specified (or derivable) what happens when someone  
> creates a
> cycle in the graph.

We agree: https://www.w3.org/2011/audio/track/issues/24

If circular routing is not to be allowed, I quite like the solution in  
MSP, which is to consider such a graph blocked. Requiring an error to be  
thrown synchronously would mean checking for loops on every operation :(
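To illustrate the cost concern: synchronous detection means running  
something like the following depth-first search on every connect() call.  
This is only a sketch with made-up node names, not anything from the spec:

```javascript
// Detect a cycle in a node graph (adjacency lists keyed by node name)
// using a three-colour DFS: a GREY node reached again is a back edge.
function hasCycle(graph) {
  const WHITE = 0, GREY = 1, BLACK = 2;
  const color = new Map();
  for (const node of Object.keys(graph)) color.set(node, WHITE);

  function visit(node) {
    color.set(node, GREY);
    for (const next of graph[node] || []) {
      if (color.get(next) === GREY) return true;               // back edge: cycle
      if (color.get(next) === WHITE && visit(next)) return true;
    }
    color.set(node, BLACK);
    return false;
  }

  for (const node of Object.keys(graph)) {
    if (color.get(node) === WHITE && visit(node)) return true;
  }
  return false;
}

// A feedback loop (gain routed back into the delay) versus a straight chain:
const looped = { source: ['delay'], delay: ['gain'], gain: ['delay'] };
const straight = { source: ['gain'], gain: ['destination'], destination: [] };
console.log(hasCycle(looped));   // true
console.log(hasCycle(straight)); // false
```

Doing this eagerly on every graph mutation is O(nodes + edges) each time,  
which is why checking lazily (or treating the graph as blocked) is appealing.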

> In AudioParam, a lot of the processing model is unclear. I assume that
> nominally an AudioParam is a function from time to floats. So, for
> example, what does setValueAtTime actually do? Does it set the value for  
> all
> times >= 'time'? Does setting the 'value' attribute make the function
> constant over all times? And what does it mean to be "relative to the
> AudioContext currentTime"? Does that mean passing 0 changes the value at
> the current time, or that 'time' and 'context.currentTime' are simply on
> the same timeline? If the former, clarify by saying that 0 corresponds to
> the context's currentTime.
>
> The actual values computed by the AudioParam curves must be specified
> mathematically. The current text is too vague to be implemented.

We agree:

https://www.w3.org/2011/audio/track/issues/34
https://www.w3.org/2011/audio/track/issues/35
https://www.w3.org/2011/audio/track/issues/36
https://www.w3.org/2011/audio/track/issues/37
https://www.w3.org/2011/audio/track/issues/38
https://www.w3.org/2011/audio/track/issues/39
https://www.w3.org/2011/audio/track/issues/40
https://www.w3.org/2011/audio/track/issues/41
https://www.w3.org/2011/audio/track/issues/42
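For what it's worth, the reading that seems most natural to us is that a  
scheduled value holds for all times >= the event time, until a later event  
overrides it. A toy evaluator for that interpretation (names and the  
piecewise-constant behaviour are our assumption, not the spec's text):

```javascript
// Sketch of one reading of setValueAtTime: each event pins the value
// from its time onward, and the latest event at or before t wins.
function makeParam(defaultValue) {
  const events = []; // { time, value }, kept sorted by time
  return {
    setValueAtTime(value, time) {
      events.push({ time, value });
      events.sort((a, b) => a.time - b.time);
    },
    valueAt(t) {
      let v = defaultValue; // no event at or before t: the default applies
      for (const e of events) {
        if (e.time <= t) v = e.value;
        else break;
      }
      return v;
    },
  };
}

const gain = makeParam(1.0);
gain.setValueAtTime(0.5, 2.0);
gain.setValueAtTime(0.25, 4.0);
console.log(gain.valueAt(0));  // 1    (before any event)
console.log(gain.valueAt(3));  // 0.5  (after the first event)
console.log(gain.valueAt(10)); // 0.25 (after the second event)
```

Whatever interpretation is intended, it should be spelled out in this kind  
of functional, time-to-value form so that implementations can agree.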

> Do AudioBuffers created on one AudioContext work with other  
> AudioContexts?
> This needs to be specified.

I agree, can you enter this into  
<https://www.w3.org/2011/audio/track/issues/new>?

> AudioBuffer.getChannelData needs to specify that it returns the same  
> array
> every time and that modifying the array alters the buffer data. (Unless  
> it
> does something else, in which case that should be specified instead.)

We agree: https://www.w3.org/2011/audio/track/issues/48

Since the spec talks about direct access, it seemed to us that a getter  
serves no purpose and that the following interface would suffice:

interface AudioBuffer {
   readonly attribute float sampleRate;
   readonly attribute Float32Array[] data;
};
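The intended semantics can be modelled in plain script. This toy stand-in  
(the factory name and sample rate are invented for illustration) hands out  
the same Float32Array per channel every time, so writes through any  
reference are visible to every other holder:

```javascript
// Toy model of direct channel-data access: one Float32Array per channel,
// created once and shared, never copied on read.
function createBuffer(channels, length) {
  const data = [];
  for (let i = 0; i < channels; i++) data.push(new Float32Array(length));
  return { sampleRate: 44100, data };
}

const buffer = createBuffer(2, 4);
const left = buffer.data[0];
left[0] = 0.75;                       // write through one reference...
console.log(buffer.data[0][0]);       // 0.75: ...visible through the buffer
console.log(buffer.data[0] === left); // true: same array every time
```

With getChannelData, the spec would need to state explicitly whether the  
returned array is this same shared array or a copy; with a plain attribute,  
the shared-array behaviour falls out of the object model.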

-- 
Philip Jägenstedt
Core Developer
Opera Software
Received on Wednesday, 16 May 2012 15:19:56 GMT
