
Re: MediaStream integration

From: Robert O'Callahan <robert@ocallahan.org>
Date: Thu, 10 May 2012 14:46:52 +1200
Message-ID: <CAOp6jLb4sy=eLrKY=RzUj=OzAuGHJBvTrJ9g7ZNiJwtTnwOhrg@mail.gmail.com>
To: Chris Rogers <crogers@google.com>
Cc: public-audio@w3.org
On Thu, May 10, 2012 at 1:04 PM, Chris Rogers <crogers@google.com> wrote:

> So for the multiple output case, your .stream attribute could be a
> MediaStream with multiple MediaStreamTracks (one per output).

That's a good idea :-).

>  But I think it would be more flexible for the API to allow finer-grained
> control, tapping into individual outputs, which is possible if there is an
> explicit destination node representing a MediaStream.

But that would be possible, just by creating a node (say a DelayNode),
connecting the output to it, and taking its stream. Or on the MediaStream
side, extracting the desired track and constructing a new MediaStream with
that track.
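For concreteness, the track-extraction route could be sketched like this. This is a toy model only, not the real MediaStream DOM API: a multi-output node's stream is modeled as a plain object holding one track per output, and `makeStream`/`extractTrack` are hypothetical helper names for illustration.

```javascript
// Toy model -- NOT the real MediaStream API.  A stream is just an
// object holding an array of tracks, one per AudioNode output.
function makeStream(tracks) {
  return { tracks: tracks.slice() };
}

// "Extracting the desired track and constructing a new MediaStream
// with that track": pull one track out and wrap it in a fresh
// single-track stream.
function extractTrack(stream, index) {
  return makeStream([stream.tracks[index]]);
}

const multi = makeStream(["output-0", "output-1", "output-2"]);
const tap = extractTrack(multi, 1);
console.log(tap.tracks); // the single tapped track
```

So finer-grained access to one output doesn't seem to require an explicit destination node; it falls out of ordinary stream/track composition.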

> I think it's important to maintain the ideas of "nodes" and "connection" so
> that in the simplest case the diagram is a "source" node and a
> "destination" node with a single connection shown as two boxes:
> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ModularRouting
> In the case of a stream sending to a remote peer, then it seems to make
> sense to have a specific node representing that destination.

But this node doesn't represent an actual destination. It's just a
placeholder object that doesn't correspond to anything in the author's
mental model. It's not a PeerConnection, or a media element, or anything
that consumes a MediaStream. It's just glue.

I think it's logical to think of the MediaStream as the output arrow(s) so
that you have an AudioNode (source node), its output MediaStream (arrow),
and say a PeerConnection (destination node). And it's simpler.
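To illustrate the "stream as arrow" model, here is a toy sketch, not the real Web Audio or WebRTC API: the source node owns its output stream directly, and a consumer such as a PeerConnection takes that stream with no glue node in between. All names here are hypothetical.

```javascript
// Toy model of "MediaStream as the output arrow" -- NOT the real API.
// The source node exposes its output stream as a property; the
// destination consumes the stream object directly.
function makeSourceNode(name) {
  const stream = { source: name, samples: [] };
  return {
    name,
    stream,                                   // the "arrow" out of this node
    emit(sample) { stream.samples.push(sample); },
  };
}

function makePeerConnection() {
  const streams = [];
  return {
    streams,
    addStream(stream) { streams.push(stream); }, // destination consumes the arrow
  };
}

const src = makeSourceNode("source");
src.emit(0.25);

const pc = makePeerConnection();
pc.addStream(src.stream);                     // source -> arrow -> destination
```

The destination ends up holding the very same stream object the source produced, which is the point: the stream is the connection, not a node in its own right.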

>>>> I think authors will expect createMediaStreamSource(stream).stream (or
>>>> the equivalent trivial graph using context.createMediaStreamDestination())
>>>> to be semantically a no-op (except for the output having only one audio
>>>> track). I think we should make sure it is.
>>> I think so, if I understand you correctly.
>> Let me clarify what you'd be agreeing to :-). It means, for example, that
>> if MediaStreams get the ability to pause without dropping samples,
>> AudioNodes must be able to as well.
> That's not quite how I think about it.  Currently the HTMLMediaElement and
> the MediaElementAudioSourceNode are distinct types.  One represents a
> high-level "media player" API, with networking and buffering state,
> seeking, etc.  The other is an AudioNode, implementing the semantics of
> being a node (being able to connect() with other nodes, and having specific
> numberOfInputs, numberOfOutputs).  I believe that it's a good principle of
> software design to separate these two concepts - think "loose coupling and
> high cohesion"
> I think we need to keep this distinction in mind with MediaStream as well.

I don't understand how that's related to my point.
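For concreteness, the "semantic no-op" expectation quoted above amounts to requiring the trivial source-to-destination round trip to act as the identity on the audio data. A toy sketch, not the real createMediaStreamSource/createMediaStreamDestination API; `roundTrip` is a hypothetical name:

```javascript
// Toy sketch of the "semantic no-op" requirement -- NOT the real API.
// A stream is modeled as an array of samples; routing it through a
// trivial graph must return the samples unchanged (modulo collapsing
// to a single audio track).
function roundTrip(samples) {
  const sourceNode = { output: samples.slice() };     // stand-in for createMediaStreamSource(stream)
  const destination = { samples: sourceNode.output }; // stand-in for createMediaStreamDestination()
  return destination.samples;
}

const input = [0.1, -0.2, 0.3];
console.log(roundTrip(input)); // same samples as the input
```

Whatever capabilities MediaStreams gain (e.g. pausing without dropping samples), this identity can only hold if the intervening AudioNodes can preserve them too.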

“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]
Received on Thursday, 10 May 2012 02:47:28 UTC
