Re: MediaStream integration

On Thu, May 10, 2012 at 11:51 AM, Chris Rogers <crogers@google.com> wrote:

> Hi Robert, sorry if there was any confusion about this.  I haven't written
> up any explanation for this API yet, but hope to add it to the main Web
> Audio API editor's draft soon:
> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
>
> The intention is that context.createMediaStreamDestination() will create a
> destination node that a sub-graph can connect to.  So it's not capturing
> the output of the entire context (context.destination).  In theory, it
> should be possible to call context.createMediaStreamDestination() multiple
> times, each of which sends out to a different remote peer with different
> processing.
>

OK, I see now. That sounds better. But why not just skip
createMediaStreamDestination and provide a 'stream' attribute on every
AudioNode?
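
To make sure I'm reading the proposal correctly, here's roughly what I
imagine authors writing. This is only a sketch: the node names follow your
description and the current draft, and peerConnectionA/peerConnectionB are
stand-ins for whatever WebRTC objects end up receiving the streams.

  var context = new AudioContext();
  // Assumes an <audio> element on the page to use as the source.
  var source = context.createMediaElementSource(document.querySelector('audio'));

  // Sub-graph 1: low-pass filtered mix sent to peer A.
  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';
  var destA = context.createMediaStreamDestination();
  source.connect(filter);
  filter.connect(destA);

  // Sub-graph 2: attenuated mix sent to peer B, processed independently.
  var gain = context.createGain();
  gain.gain.value = 0.5;
  var destB = context.createMediaStreamDestination();
  source.connect(gain);
  gain.connect(destB);

  // Neither destination captures context.destination; each one only hears
  // the sub-graph explicitly connected to it.
  peerConnectionA.addStream(destA.stream);
  peerConnectionB.addStream(destB.stream);

If that's the model, then each call to createMediaStreamDestination() just
creates another independent sink, which seems fine.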


>
>
>> I think authors will expect createMediaStreamSource(stream).stream (or
>> the equivalent trivial graph using context.createMediaStreamDestination())
>> to be semantically a no-op (except for the output having only one audio
>> track). I think we should make sure it is.
>>
>
> I think so, if I understand you correctly.
>

Let me clarify what you'd be agreeing to :-). It means, for example, that
if MediaStreams get the ability to pause without dropping samples,
AudioNodes must be able to do so as well.
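
Concretely, the trivial round-trip I have in mind looks something like the
following (again just a sketch using the proposed names; inputStream stands
in for a stream obtained elsewhere, e.g. from getUserMedia):

  var context = new AudioContext();

  // Route an incoming MediaStream into the context and straight back out.
  var source = context.createMediaStreamSource(inputStream);
  var destination = context.createMediaStreamDestination();
  source.connect(destination);

  // Authors will expect destination.stream to carry the same audio as
  // inputStream, unchanged except for being reduced to a single audio
  // track: same samples, same timing.
  var outputStream = destination.stream;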

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]
