Re: Adding Web Audio API Spec to W3C Repository

On Sat, Jun 11, 2011 at 1:41 PM, Ian Hickson <> wrote:

> One major difference between AudioNode and Stream is that Stream can have
> multiple audio and video tracks, each with their own set of audio
> channels, whereas AudioNode is specifically about one set of audio
> channels. As I understand it, this distinction is quite important.

For effects on Stream data, I would treat incoming streams as having a
single audio and/or video track by selecting (or mixing) the currently
active tracks, to keep things simple in the common case where those are the
only tracks the author cares about. (At some point we could introduce APIs
to allow extraction of particular tracks into new Streams and combination of
multiple Streams into a single Stream with multiple tracks, if use cases
require it.)
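To make "mixing the currently active tracks" concrete at the sample level, here is a minimal sketch in plain JavaScript; this is my own illustration, not part of any proposed API. It assumes tracks expose their samples as Float32Arrays of values in [-1, 1], as Web Audio API buffers do:

```javascript
// Hypothetical down-mix of two mono audio tracks into one buffer.
// Samples are floats in [-1, 1], matching Web Audio API conventions.
function mixTracks(a, b) {
  const length = Math.min(a.length, b.length);
  const out = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    // Simple additive mix, clamped so summing loud tracks cannot overflow the range.
    out[i] = Math.max(-1, Math.min(1, a[i] + b[i]));
  }
  return out;
}
```

A real implementation would of course need a policy for differing channel counts and sample rates; the point is only that a "mix the active tracks" default keeps the simple case simple.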

The fact that Streams can contain video as well as audio, and could be
extended with other timed media types, is an advantage IMHO, since we will
want to process video as well as audio. Mozilla people are interested in
capturing canvas contents to a video Stream; then all we need is for Stream
mixing to composite video streams together, and authors can do some simple
but useful real-time video processing, such as adding overlays to a streamed
or recorded video. It would also make sense for a Worker-based Stream
processing API to be able to manipulate video frames as well as audio
buffers, although to make that useful we'd have to expose APIs like canvas
to Workers, so it won't happen anytime soon.
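For "compositing video streams together", the natural default is the source-over operator that canvas already uses for overlays. As a rough sketch (again my own illustration, with pixels represented as [r, g, b, a] arrays in [0, 255] like canvas ImageData):

```javascript
// Hypothetical source-over compositing of one RGBA pixel onto another.
// Channels are integers in [0, 255], as in canvas ImageData.
function compositePixel(src, dst) {
  const sa = src[3] / 255;
  const da = dst[3] / 255;
  // Porter-Duff source-over: result alpha, then alpha-weighted color blend.
  const outA = sa + da * (1 - sa);
  if (outA === 0) return [0, 0, 0, 0];
  const blend = (s, d) => Math.round((s * sa + d * da * (1 - sa)) / outA);
  return [
    blend(src[0], dst[0]),
    blend(src[1], dst[1]),
    blend(src[2], dst[2]),
    Math.round(outA * 255),
  ];
}
```

Applying this per pixel across two video frames is exactly the "overlay on a streamed or recorded video" case above.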

"Now the Bereans were of more noble character than the Thessalonians, for
they received the message with great eagerness and examined the Scriptures
every day to see if what Paul said was true." [Acts 17:11]

Received on Saturday, 11 June 2011 13:06:25 UTC