Re: Requirements for Web audio APIs

On Sun, May 22, 2011 at 7:18 PM, Robert O'Callahan <> wrote:

> On Mon, May 23, 2011 at 1:55 PM, Chris Rogers <> wrote:
>> On Sun, May 22, 2011 at 6:11 PM, Robert O'Callahan <> wrote:
>>> On Mon, May 23, 2011 at 12:11 PM, Chris Rogers <> wrote:
>>>> On Sun, May 22, 2011 at 2:39 PM, Robert O'Callahan <> wrote:
>>>>> On Sat, May 21, 2011 at 7:17 AM, Chris Rogers <> wrote:
>>>>>> On Thu, May 19, 2011 at 2:58 AM, Robert O'Callahan <> wrote:
>>>>>>> My concern is that having multiple abstractions representing streams
>>>>>>> of media data --- AudioNodes and Streams --- would be redundant.
>>>>>> Agreed, there's a need to look at this carefully.  It might be
>>>>>> workable if there were appropriate ways to easily use them together even if
>>>>>> they remain separate types of objects.  In graphics, for example, there are
>>>>>> different objects such as Image, ImageData, and WebGL textures which have
>>>>>> different relationships with each other.  I don't know what the right answer
>>>>>> is, but there are probably various reasonable ways to approach the problem.
>>>>> There are reasons why we need to have different kinds of image objects.
>>>>> For example, a WebGL texture has to live in VRAM so couldn't have its pixel
>>>>> data manipulated by JS the way an ImageData object can. Are there
>>>>> fundamental reasons why AudioNodes and Streams have to be different ... why
>>>>> we couldn't express the functionality of AudioNodes using Streams?
>>>> I didn't say they *have* to be different.  I'm just saying that there
>>>> might be reasonable ways to have AudioNodes and Streams work together. I
>>>> could also turn the question around and ask if we could express the
>>>> functionality of Streams using AudioNodes?
>>> Indeed! One answer to that would be that Streams contain video so
>>> "AudioNode" isn't a great name for them :-).
>>> If they don't have to be different, then they should be unified into a
>>> single abstraction. Otherwise APIs that work on media streams would have to
>>> come in an AudioNode version and a Stream version, or authors would have to
>>> create explicit bridges.
>> For connecting an audio source from an HTMLMediaElement into an audio
>> processing graph using the Web Audio API, I've suggested adding an
>> .audioSource attribute.  A code example with diagram is here in my proposal:
>> I'm fairly confident that this type of approach will work well for
>> HTMLMediaElement.  Basically, it's a "has-a" design instead of an "is-a" one.
> Can you use that node as a sink, or just a source?

Just a source.
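To make the "has-a" shape concrete, here's a toy sketch (plain JS, not the proposed API; all names are illustrative stand-ins): the media-element-like object *has an* audio source node, rather than *being* a node itself.

```javascript
// Minimal node stand-in: tracks which nodes it feeds into.
function makeNode(name) {
  return {
    name,
    outputs: [],
    connect(dest) { this.outputs.push(dest); }, // data flows this -> dest
  };
}

// Stand-in for an HTMLMediaElement: it owns element-level concerns
// (network state, buffering) and exposes its audio output as a
// separate node-like object via a hypothetical .audioSource attribute.
const mediaElement = {
  src: "song.ogg",
  networkState: "NETWORK_LOADING",             // element-level concern
  audioSource: makeNode("mediaElementSource"), // graph-level concern
};

const filter = makeNode("filter");
const destination = makeNode("destination");

// The element's audio feeds the processing graph; the element itself
// never appears in the graph.
mediaElement.audioSource.connect(filter);
filter.connect(destination);
```

The point of the sketch is only the separation of concerns: graph topology lives on the node object, everything else stays on the element.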

> Aside: the "connect" method doesn't make it clear which way the data flows,
> so it's hard to read your example without referring to the spec. Have you
> considered flipping it around and calling it "addSource" instead, or
> something like that?

Think of it this way: it's like plugging a cable into a jack.  For example,
if a guitarist plugs a guitar into a guitar amplifier, it would be like this
(node names are illustrative):

guitar.connect(amp);

Of course, in this case, the guitar is the source of audio.

If there's a distortion pedal in between the guitar and amplifier, then it
would be:

guitar.connect(distortion);
distortion.connect(amp);

So it reads left to right: source connects (to) destination.

Some AudioNodes are sources (like the guitar in the example), some are
intermediate processing boxes (like the distortion box).  Some are final
destinations for audio (like the guitar amplifier).  When seen in this way,
I think that "connect()" is a reasonable way to approach it.
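A runnable toy model of that reading (plain JS, not the Web Audio implementation; names are illustrative) shows how connect() builds a left-to-right chain from source, through processors, to a final destination:

```javascript
// Toy node: remembers which nodes it connects into.
function makeNode(name) {
  return {
    name,
    outputs: [],
    connect(dest) { this.outputs.push(dest); }, // data flows this -> dest
  };
}

const guitar = makeNode("guitar");          // source
const distortion = makeNode("distortion");  // intermediate processor
const amp = makeNode("amp");                // final destination

guitar.connect(distortion);
distortion.connect(amp);

// Walk the graph from the source to print the signal path.
function signalPath(node) {
  const path = [node.name];
  let cur = node;
  while (cur.outputs.length > 0) {
    cur = cur.outputs[0]; // follow the first connection in this simple chain
    path.push(cur.name);
  }
  return path.join(" -> ");
}

console.log(signalPath(guitar)); // guitar -> distortion -> amp
```

Each connect() call is one cable; the chain reads in the same order the signal flows.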

>> Similarly, for Streams I think the same type of approach could be
>> considered.  I haven't looked very closely at the proposed media stream API
>> yet, but would like to explore that in more detail.  If we adopt the "has-a"
>> (instead of "is-a") design then the problem of AudioNode not being a good
>> name for Stream disappears.
> Yes, but the problem of having two objects where one would do remains.

I see it as more of an advantage than a problem.  In object-oriented design,
it's often better to separate distinct concepts into different types of
objects with "has-a" relationships than to lump them together into
monolithic classes.  As an example of two quite different types of objects,
consider HTMLMediaElement, which is very much concerned with network state
and buffering, as is evident in its API, and AudioNode, which deals with
concepts particular to audio: connections with processing nodes in a graph,
and so on.  So I see it as an advantage to keep them separate.


Received on Monday, 23 May 2011 02:58:35 UTC