
Re: [WebAudioAPI] connecting AudioNodes from different AudioContexts

From: Robert O'Callahan <robert@ocallahan.org>
Date: Wed, 16 May 2012 09:56:57 +1200
Message-ID: <CAOp6jLZyUObk7_9vc_Yn5WL9+vKrZZv=V9ABCi75NObyK-9h4Q@mail.gmail.com>
To: Chris Rogers <crogers@google.com>
Cc: olli@pettay.fi, "public-audio@w3.org" <public-audio@w3.org>
On Wed, May 16, 2012 at 9:54 AM, Chris Rogers <crogers@google.com> wrote:

> I'm not sure that I understand the use case here.  If we find one where this
> is absolutely necessary, then maybe we could relax this restriction.

Making things easier for authors when one library uses one AudioContext,
another library uses another, and the author decides they want to connect
the two.
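If the restriction stays, authors in that situation would have to bridge contexts themselves. A minimal sketch of one possible workaround, assuming a browser where cross-context connect() throws and using the standard MediaStream nodes (the helper name `bridgeContexts` is hypothetical):

```javascript
// Sketch: route audio from a node in one AudioContext into another
// AudioContext by round-tripping through a MediaStream.
// Assumes a browser Web Audio implementation; a direct connect()
// across contexts is expected to throw.
function bridgeContexts(srcNode, dstContext) {
  // The capture node must live in the source node's own context.
  const tap = srcNode.context.createMediaStreamDestination();
  srcNode.connect(tap);
  // The destination context pulls the stream back in as a source node.
  return dstContext.createMediaStreamSource(tap.stream);
}
```

Usage would look like `bridgeContexts(libANode, libBContext).connect(libBContext.destination);`, at the cost of the extra latency a MediaStream hop introduces.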

>> We'll have to figure something out for media stream and media element
>> integration with OfflineAudioContext anyway.
> Yes, there are unique challenges if you're running faster (or slower) than
> real-time.  Some things won't make sense such as generating a real-time
> stream to be sent to a remote peer.  But that's ok.  The whole point of
> processing faster than real-time in an offline context is to render a mix
> or "bake" a complex processing effect/sequence to an AudioBuffer.

Sure, but the spec needs to say exactly what happens in those "cases that
don't make sense".
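For reference, the "bake to an AudioBuffer" case Chris describes corresponds to what OfflineAudioContext later standardized. A sketch under that assumption (the promise-based startRendering() postdates this 2012 thread, and `bakeGain` is a hypothetical helper):

```javascript
// Sketch: render a processed copy of a buffer faster than real time,
// "baking" the effect chain into a plain AudioBuffer.
// Assumes a browser implementing the modern OfflineAudioContext API.
async function bakeGain(inputBuffer, gainValue) {
  const offline = new OfflineAudioContext(
    inputBuffer.numberOfChannels,
    inputBuffer.length,
    inputBuffer.sampleRate
  );
  const src = offline.createBufferSource();
  src.buffer = inputBuffer;
  const gain = offline.createGain();
  gain.gain.value = gainValue;
  src.connect(gain).connect(offline.destination);
  src.start();
  // Resolves with the fully rendered AudioBuffer once the graph
  // has been processed end to end.
  return offline.startRendering();
}
```

Real-time-only nodes (e.g. a stream headed to a remote peer) have no obvious meaning in such a graph, which is exactly the underspecified corner being discussed.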

“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]
Received on Tuesday, 15 May 2012 21:57:31 UTC
