
Questions about AudioParam

From: Ehsan Akhgari <ehsan.akhgari@gmail.com>
Date: Wed, 24 Oct 2012 18:45:13 -0400
Message-ID: <CANTur_6JTwxDWNLTtvWQxUCjf9QE-0smaGuVrUCMv1QhZO+dwQ@mail.gmail.com>
To: public-audio@w3.org
I'm implementing AudioParam in Gecko, and one of the things I'm missing
is the point of the minValue and maxValue properties.  The spec
explicitly says that those bounds should not be enforced when setting the
value property, and WebKit doesn't seem to enforce them anywhere at all.
Are they intended to be merely informational?  If so, should we consider
removing them?  Or are implementers supposed to enforce them in the
automation methods?
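To make the third option concrete, here is a minimal sketch of what "enforcing the bounds in the automation methods" could look like.  clampForAutomation is a hypothetical helper of mine, not anything in the spec or in WebKit:

```javascript
// Hypothetical helper: clamp a requested automation value into
// [minValue, maxValue] before recording the event.
function clampForAutomation(value, minValue, maxValue) {
  return Math.min(Math.max(value, minValue), maxValue);
}

// A setValueAtTime-style method could then do something like:
//   this._events.push({
//     time,
//     value: clampForAutomation(value, this.minValue, this.maxValue),
//   });
```

If the properties are merely informational, none of this is needed and the getters just report the nominal range.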

Another question that I have is about the semantics of the value setter.
I'm not sure how to reconcile the fact that the author can set
AudioParam.value on an AudioParam object which already has a number of
automation events set up.  Should that value override the value that the
scheduled events produce at any given time?  (The spec seems to suggest
so by saying "If the .value attribute is set after any automation events
have been scheduled, then these events will be removed.")  WebKit,
however, seems to use AudioParam.value to override defaultValue: it
initializes an m_value member to defaultValue when the AudioParam object
is created and uses m_value as the default value from then on.  This
effectively means that if the author sets the value property on an
AudioParam, that value is used wherever the default value is needed,
while AudioParam.defaultValue always returns the initial default value.
If that is the desired semantics, we should clarify the spec here.  I'm
also not sure whether the defaultValue getter would make much sense in
that case, since the default value would effectively be AudioParam.value
as it was when the AudioParam object was first created.
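To illustrate the WebKit-style interpretation I'm describing, here is a minimal sketch (SketchParam and its fields are illustrative names of mine, not the spec's): the value setter replaces the working default, while defaultValue keeps returning the construction-time value.

```javascript
// Minimal sketch of the WebKit behavior described above.
class SketchParam {
  constructor(defaultValue) {
    this._defaultValue = defaultValue; // frozen at construction time
    this._value = defaultValue;        // plays the role of WebKit's m_value
  }
  get defaultValue() { return this._defaultValue; }
  get value() { return this._value; }
  set value(v) { this._value = v; }    // replaces the working default
}

const gain = new SketchParam(1.0);
gain.value = 0.25;
// gain.value is now used wherever the default would be, but
// gain.defaultValue still reports the initial 1.0.
```

Under this reading, defaultValue only tells you what value was before the author first assigned to it, which is why I question its usefulness.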

The last question that I have is about the computedValue property.  It's
not entirely clear to me whether reading it should calculate the current
automation value based on the current time of the AudioContext.  That
would probably require some locking if audio is being processed on a
background thread, which would be a little unfortunate.  It's also not
clear to me what should be calculated as the second contributing
component when the AudioParam object is not connected to a node.
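My reading of the spec is that computedValue is the intrinsic value from the automation timeline plus the signal from any AudioNode output connected to the param.  A sketch of that, under the assumption that the second component is simply 0 when nothing is connected (the event shape and helper names here are mine):

```javascript
// Intrinsic value: the most recent setValueAtTime-style event at or
// before time t (events assumed sorted by time), else the fallback.
function intrinsicValueAt(events, fallbackValue, t) {
  let v = fallbackValue;
  for (const e of events) {
    if (e.time <= t) v = e.value;
  }
  return v;
}

// computedValue = intrinsic value + connected input signal, where the
// input contributes 0 when the param has no connection.
function computedValueAt(events, fallbackValue, inputSample, t) {
  const input = inputSample === undefined ? 0 : inputSample;
  return intrinsicValueAt(events, fallbackValue, t) + input;
}
```

If that zero-when-unconnected assumption is what's intended, it would be good to state it explicitly in the spec.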

Received on Wednesday, 24 October 2012 22:53:08 UTC
