Re: Requirements for Web audio APIs

On Mon, May 23, 2011 at 1:27 AM, Robert O'Callahan <> wrote:

> On Mon, May 23, 2011 at 6:32 PM, Chris Rogers <> wrote:
>> On Sun, May 22, 2011 at 9:10 PM, Robert O'Callahan <> wrote:
>>> The apparent redundancy is with AudioNode and Stream, not
>>> HTMLMediaElement.
>> For Stream, I assume you're talking about Ian's media capture and
>> peer-to-peer streaming proposal:
>> This is designed for peer-to-peer communication.  It's clearly not
>> designed for, and does not solve, the kinds of use cases and
>> audio-processing tasks the Web Audio API is meant to solve.  They're
>> different APIs, each with its own distinct focus and orientation, so I
>> think it's a considerable stretch to call them redundant.
> The redundancy is that both APIs introduce an abstraction representing
> streams of audio data which connect sources and sinks.
> The Stream proposal supports
> -- capturing local audio and video to use as Stream sources
> -- using HTML media elements as Stream sinks for output
> -- compressing Stream output and capturing the results as a binary Blob
> -- using a real-time peer-to-peer connection as a Stream sink
> Your AudioNode proposal supports
> -- using HTML media elements as sources
> -- mixing audio streams with various effects
> -- playback of audio output
> IMHO everything you can do with a Stream you should be able to do with an
> AudioNode and vice versa. I suspect the fact that these specs don't overlap
> in functionality (yet) is sheer luck, and the fact that these specs have
> grown up separately is an accident. But it's not too late to fix that. We
> could perhaps work around it by introducing two-way bridging between Streams
> and AudioNodes, but that seems far less elegant than simply unifying Streams
> and AudioNodes into a single kind of object.
> Rob

Rob, I'm afraid I'll just have to strongly disagree.  The Stream API is
designed specifically for peer-to-peer communication.  Superficially it has
sources and sinks of data like the Web Audio API, but read both specs
carefully and their design goals are quite distinct.  The Web Audio API was
designed with a great deal of care to address a variety of use cases in
client-side games, music, and interactive applications; the Stream API does
not replace that functionality.  Does it have intermediate processing
modules, routing graphs, implicit unity-gain summing junctions, fan-out
support, channel mix-up and mix-down, or sample-accurate timing, among many
other design features?  No, it does not.

The Web Audio API has implementations, working examples, and demos showing
it to be viable for the problems it was designed to solve.  The Stream API
is at a much earlier stage, and although I believe it will succeed at what
it was designed for, it's extremely unlikely that we'll see it running the
same kinds of audio applications as the Web Audio API.
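[Editor's note: a minimal sketch, not part of the original email, of what
"implicit unity-gain summing junction" means in the Web Audio API's routing
model: when several sources fan in to one node's input, their samples are
simply added at unit gain (no attenuation) before the node processes them.
The function name and sample values are illustrative, not from either spec.]

```javascript
// Sketch of a unity-gain summing junction: every connected input buffer
// is added sample-by-sample, with no per-input gain applied.
function sumInputs(inputs) {
  // inputs: an array of equal-length sample buffers feeding one junction
  const out = new Float32Array(inputs[0].length);
  for (const buf of inputs) {
    for (let i = 0; i < buf.length; i++) {
      out[i] += buf[i]; // unity gain: samples are just summed
    }
  }
  return out;
}

// Two sources fan in to the same junction:
const a = [0.25, 0.5, -0.5];
const b = [0.25, 0.25, 0.5];
const mixed = sumInputs([a, b]); // → [0.5, 0.75, 0]
```

In the real API this summing happens implicitly whenever multiple
`AudioNode.connect()` calls target the same input; an author who wants
per-source levels inserts explicit gain nodes instead.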


Received on Monday, 23 May 2011 17:42:36 UTC