
Missing information in the Web Audio spec

From: Robert O'Callahan <robert@ocallahan.org>
Date: Thu, 10 May 2012 13:19:23 +1200
Message-ID: <CAOp6jLY8HA+pF_55ZAayNTzSCKsFBahoc6snKDgZxU4P7_O58w@mail.gmail.com>
To: public-audio@w3.org
Imprecision in the spec has been discussed a bit before but the issues
haven't been resolved so I want to itemize some details that need to be
clarified. This is based only on a quick skim of the spec; there are
probably many more issues like these.

The createBuffer(in ArrayBuffer buffer) method needs to match
decodeAudioData, which carries the text "Audio file data can be in any of
the formats supported by the audio element." And what happens in that
method if the resource is not in a supported format?

In AudioNode.connect:

> The output parameter is an index describing which output of the AudioNode
> from which to connect. An out-of-bound value throws an exception.
> The input parameter is an index describing which input of the destination
> AudioNode to connect to. An out-of-bound value throws an exception.

What does it mean to be "out of bounds"? (It can't mean that an input or
output already exists with that index.)
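The only reading that makes sense to me is that valid indices run from 0 to numberOfOutputs - 1 (respectively numberOfInputs - 1), but the spec should say so. A sketch of that interpretation — ModelNode here is a hypothetical stand-in, not the real AudioNode interface:

```javascript
// Hypothetical model of the index checks in AudioNode.connect, assuming
// "out-of-bound" means outside [0, numberOfOutputs) / [0, numberOfInputs).
class ModelNode {
  constructor(numberOfOutputs, numberOfInputs) {
    this.numberOfOutputs = numberOfOutputs;
    this.numberOfInputs = numberOfInputs;
    this.connections = [];
  }
  connect(destination, output = 0, input = 0) {
    if (output < 0 || output >= this.numberOfOutputs)
      throw new RangeError("output index out of bounds");
    if (input < 0 || input >= destination.numberOfInputs)
      throw new RangeError("input index out of bounds");
    this.connections.push({ destination, output, input });
  }
}

const source = new ModelNode(1, 0);
const dest = new ModelNode(1, 1);
source.connect(dest, 0, 0);    // fine: output 0 and input 0 both exist
// source.connect(dest, 1, 0); // would throw: source has only output 0
```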

It needs to be specified (or derivable) what happens when someone creates a
cycle in the graph.
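A delay-free feedback loop, for instance, has no obvious evaluation order; the spec should say whether such graphs are rejected, silently muted, or given an implicit delay. Detecting the situation is straightforward — a sketch over a hypothetical adjacency-list representation of the connection graph (not part of the API):

```javascript
// Hypothetical sketch: detect a cycle in a node graph represented as an
// adjacency list (node -> array of downstream nodes), via DFS coloring.
function hasCycle(graph) {
  const WHITE = 0, GRAY = 1, BLACK = 2;
  const color = new Map([...graph.keys()].map(n => [n, WHITE]));
  function visit(node) {
    color.set(node, GRAY);
    for (const next of graph.get(node) || []) {
      const c = color.get(next);
      if (c === GRAY) return true;               // back edge: cycle found
      if (c === WHITE && visit(next)) return true;
    }
    color.set(node, BLACK);
    return false;
  }
  for (const node of graph.keys())
    if (color.get(node) === WHITE && visit(node)) return true;
  return false;
}

// gain -> delay -> gain forms a cycle in this example graph.
const graph = new Map([
  ["source", ["gain"]],
  ["gain", ["delay", "destination"]],
  ["delay", ["gain"]],
  ["destination", []],
]);
// hasCycle(graph) === true
```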

In AudioParam, a lot of the processing model is unclear. I assume that
nominally an AudioParam is a function from time to floats. So, for
example, what does setValueAtTime actually do? Does it set the value for all
times >= 'time'? Does setting the 'value' attribute make the function
constant over all times? And what does it mean to be "relative to the
AudioContext currentTime"? Does that mean passing 0 changes the value at
the current time, or that 'time' and 'context.currentTime' are simply on
the same timeline? If the former, clarify by saying that 0 corresponds to
the context's currentTime.
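To illustrate the reading I'd expect: setValueAtTime(value, time) makes the function take that value for all times >= time, until the next scheduled event. A toy model of that interpretation — ModelParam is hypothetical, just a way to pin down the semantics:

```javascript
// Hypothetical model of an AudioParam as a function from time to floats,
// under the reading that setValueAtTime(value, time) holds `value` for
// all times >= `time` until the next scheduled event.
class ModelParam {
  constructor(defaultValue) {
    this.events = [{ time: -Infinity, value: defaultValue }];
  }
  setValueAtTime(value, time) {
    this.events.push({ time, value });
    this.events.sort((a, b) => a.time - b.time);
  }
  valueAt(time) {
    // The last event at or before `time` determines the value.
    let value = this.events[0].value;
    for (const e of this.events)
      if (e.time <= time) value = e.value;
    return value;
  }
}

const p = new ModelParam(1.0);
p.setValueAtTime(0.5, 2.0);
// p.valueAt(1.0) === 1.0; p.valueAt(2.0) === 0.5; p.valueAt(3.0) === 0.5
```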

The actual values computed by the AudioParam curves must be specified
mathematically. The current text is too vague to be implemented.
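For instance, for linearRampToValueAtTime and exponentialRampToValueAtTime I'd expect interpolation between a previous event (t0, v0) and the ramp's target (t1, v1) along these lines — but these formulas are my guess at the intent, which is exactly the kind of definition the text needs to state:

```javascript
// Hypothetical interpolation formulas for ramp events between a previous
// event (t0, v0) and the ramp's target (t1, v1). A guess at the intended
// math, not anything the current spec text pins down.
function linearRamp(t, t0, v0, t1, v1) {
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}
function exponentialRamp(t, t0, v0, t1, v1) {
  // Only well-defined when v0 and v1 are nonzero and share a sign --
  // another case the spec should call out explicitly.
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

// linearRamp(0.5, 0, 0, 1, 2) === 1
// exponentialRamp(0.5, 0, 1, 1, 4) === 2
```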

Do AudioBuffers created on one AudioContext work with other AudioContexts?
This needs to be specified.

AudioBuffer.getChannelData needs to specify that it returns the same array
every time and that modifying the array alters the buffer data. (Unless it
does something else, in which case that should be specified instead.)
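That is, the natural reading is that getChannelData returns a live view: the same Float32Array on every call, with writes visible to playback. A toy buffer illustrating that reading — ModelBuffer is hypothetical, not the real AudioBuffer:

```javascript
// Hypothetical model of the "live view" reading of getChannelData: the
// same Float32Array is returned on every call, and writing through it
// modifies the buffer's actual sample data.
class ModelBuffer {
  constructor(numberOfChannels, length) {
    this.channels = Array.from({ length: numberOfChannels },
                               () => new Float32Array(length));
  }
  getChannelData(channel) {
    if (channel < 0 || channel >= this.channels.length)
      throw new RangeError("channel index out of bounds");
    return this.channels[channel]; // same array every time, never a copy
  }
}

const buf = new ModelBuffer(2, 4);
const data = buf.getChannelData(0);
data[0] = 0.25;
// buf.getChannelData(0) === data, and buf.getChannelData(0)[0] === 0.25
```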

“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]
Received on Thursday, 10 May 2012 01:19:55 UTC
