- From: Yury Delendik <async.processingjs@yahoo.com>
- Date: Fri, 16 Jul 2010 20:52:14 -0700 (PDT)
- To: Chris Rogers <crogers@google.com>
- Cc: public-xg-audio@w3.org
I am reviewing the proposal for the Web Audio API at the moment. The ideas expressed in the specification are straightforward and simple.
The directed-graph presentation of the audio processing nodes makes it simple to
visualize the signal flow.
My feedback/questions:
1) It took some time to gather all the missing pieces of information from the
examples, the SVN change log, and the public-xg-audio list. I had trouble
understanding why the examples use AudioMixerNode when there is no such node in
the specification (this node type was present in previous versions). To make the
learning experience better, can a change log section be included in the body
of the proposal/specification?
2) Since the primary subject of the specification is the AudioNode-based classes,
it would be beneficial to see the possible values and details of their primary
attributes: numberOfInputs and numberOfOutputs, e.g.
AudioBufferSourceNode
=====================
numberOfInputs  = 0
numberOfOutputs = 1
Output #0 - audio with the same number of channels and the same sampleRate as
            specified in the AudioBuffer object
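For illustration only, here is roughly the kind of check I have in mind (I am
guessing the constructor and factory names from the draft examples, so treat
this as a sketch):

    // Sketch; the AudioContext/createBufferSource names are taken from the
    // draft examples and may not match the final specification.
    var context = new AudioContext();
    var source = context.createBufferSource();
    // The specification could state explicitly that:
    //   source.numberOfInputs  == 0
    //   source.numberOfOutputs == 1
    source.connect(context.destination);   // using the single output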
3) It looks like the RealtimeAnalyzerNode has a special status: it does not output
any audio data. What does it really output: does it pass the data through
unchanged, only change the signal gain (somebody recommended adding a “gain”
attribute to AudioNode), or have no outputs at all? Can the RealtimeAnalyzerNode
be used without connecting it to the destination node?
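To make the question concrete, this is the routing I have in mind (the
createAnalyser() factory name is my assumption):

    var context = new AudioContext();
    var source = context.createBufferSource();
    var analyser = context.createAnalyser();   // RealtimeAnalyzerNode
    source.connect(analyser);
    // Is this second connection required for the analyser to receive data,
    // and does the analyser pass the signal through unchanged?
    analyser.connect(context.destination);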
4) According to section 16, it looks like the only object that can be used without
a context is the AudioElementSourceNode, which can be retrieved via the audioSource
property. Is that correct?
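In other words, something like the following (audioSource comes from section 16;
the other names are my assumption):

    var context = new AudioContext();
    var audio = document.getElementsByTagName("audio")[0];
    // AudioElementSourceNode obtained directly from the element,
    // without going through the context:
    var source = audio.audioSource;
    source.connect(context.destination);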
5) If the audio element is playing streaming data, will the sound also
be “played” in the connected audio context?
6) How many AudioContext instances can be instantiated and run on a single
web page?
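For example, is the following allowed, and does each context get its own
AudioDestinationNode?

    // The constructor name is my assumption, as used in the draft examples.
    var contextA = new AudioContext();
    var contextB = new AudioContext();
    // Can sources in contextA and contextB play simultaneously,
    // each through its own destination?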
7) JavaScript was chosen as the client-side scripting language to control objects
that are implemented in high-performance languages (typically C/C++). One
characteristic of JavaScript objects is that they contain members that help to
discover metadata about the object. I noticed that the AudioBuffer interface
contains a “length” attribute whose meaning differs from the usual JavaScript
“length” property (which normally specifies the number of members in the object).
It is recommended to select names for methods and attributes that do not conflict
with, or change the meaning of, the standard identifiers of the target scripting
language.
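To illustrate the potential confusion (the createBuffer() and getChannelData()
signatures below are my reading of the draft, so this is only a sketch):

    var context = new AudioContext();
    var buffer = context.createBuffer(2, 44100, 44100); // 2 channels, 1 second
    // For an array-like object, "length" means "number of members":
    var sampleCount = buffer.getChannelData(0).length;  // 44100 samples
    // For AudioBuffer, "length" apparently means the length in sample-frames
    // instead, which is a different meaning of the same identifier:
    var frames = buffer.length;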
8) Some of the class definitions that would really help in understanding the
specification are missing from it: AudioSourceNode, AudioListenerNode, AudioSource,
AudioBufferSource, AudioCallbackSource, etc.
9) The Modular Routing section states that “the developer doesn't have to worry
about low-level stream format details when two objects are connected together;
the right thing just happens. For example, if a mono audio stream is connected
to a stereo input it should just mix to left and right channels appropriately.”
There are many ways/algorithms to change the number of channels, the sample rate,
etc. I think the web developer should know what they will receive as a result:
only the left channel, or a mix of all channels from a 5.1 source stream. Could
you document how “the right thing” will happen?
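For example, the documentation could spell out the mixing rules explicitly. One
common convention for the 5.1-to-stereo case (not necessarily what the
implementation does; this is only an illustration) would be:

    // Per-sample down-mix, 5.1 -> stereo, with the LFE channel discarded.
    function downmix51ToStereo(L, R, C, Ls, Rs, LFE) {
      return {
        left:  L + 0.7071 * C + 0.7071 * Ls,
        right: R + 0.7071 * C + 0.7071 * Rs
      };
    }

Even a single sentence per case (mono to stereo, stereo to mono, 5.1 to stereo,
and so on) would remove the ambiguity.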
10) How is the sampleRate attribute value defined/chosen for the non-source
nodes, e.g. AudioDestinationNode or AudioGainNode? And what happens in the case
when multiple outputs are mixed into one input?
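For instance (createGainNode() is my guess at the factory name):

    var context = new AudioContext();
    var gain = context.createGainNode();   // AudioGainNode
    var a = context.createBufferSource();  // suppose its buffer is 44.1 kHz
    var b = context.createBufferSource();  // suppose its buffer is 48 kHz
    a.connect(gain);
    b.connect(gain);
    gain.connect(context.destination);
    // What does gain.sampleRate report here, and which stream gets resampled?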
Thank you,
Yury Delendik