Re: Multiple destinations in a single AudioContext

On Mon, Aug 13, 2012 at 10:03 PM, Chris Rogers <crogers@google.com> wrote:

>
>
> On Mon, Aug 13, 2012 at 10:46 AM, Jussi Kalliokoski <
> jussi.kalliokoski@gmail.com> wrote:
>
>> Hey Chris,
>>
>> Fantastic news! createMediaStreamDestination() sounds like it would be a
>> fix for one of the problems I pointed out earlier [1].
>>
>> A few questions:
>>
>>  * Does the created node act as a pull, i.e. can you connect to just the
>> media stream destination without having to connect to an audible output?
>>
>
> Yes, I would expect it to work without connecting to an audible output.
>
>
>>  * Can you use it to connect AudioContexts, e.g. have a media stream
>> destination in one context and use that stream for a media stream source in
>> another?
>>
>
> As you describe it, this is not something in the current design.  But I
> don't expect it to be a real-world limitation because I think we should be
> able to handle just about any use case with a single context.
>
>
>
>>  * If so, does it add latency?
>>
>> If it doesn't add latency, the API would by itself provide a meaningful
>> basis for a DAW plugin architecture. Excluding MIDI, one could just (at
>> setup) provide the plugin with a stream and the plugin would provide a
>> stream in return.
>>
>
> I know a bit about plugins, having been one of the main developers of the
> Audio Units (AU) plugin model, and I'm excited to see that people are
> already starting to think about how to architect something which can work
> on the web.
>

Actually, I just realized that since my previous idea somewhat assumed that
the plugin can share references with the host, it might be better to do it
all in a single graph. If the host (at setup) provides the plugin with the
AudioContext and two gain nodes, one for input and one for output, the host
gets gain/mix control over the plugin for free.
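
Roughly, something like this sketch (all names here are just placeholders
for illustration, not from the spec; note that earlier drafts spell the
factory methods createGainNode() and createDelayNode()):

// The host owns the context and wraps the plugin between two gain nodes.
function installPlugin(context, createPlugin) {
  var input = context.createGain();   // host -> plugin
  var output = context.createGain();  // plugin -> host

  // The plugin builds its own processing subgraph between the two nodes.
  createPlugin(context, input, output);

  // The host keeps mix control: fading or muting the plugin is just
  // output.gain, with no cooperation needed from the plugin itself.
  output.connect(context.destination);
  return { input: input, output: output };
}

// A trivial example "plugin": a fixed delay between input and output.
function delayPlugin(context, input, output) {
  var delay = context.createDelay();
  delay.delayTime.value = 0.25;
  input.connect(delay);
  delay.connect(output);
}

// Usage: the host decides what feeds the plugin and where its output goes.
// var plug = installPlugin(context, delayPlugin);
// someSource.connect(plug.input);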

Cheers,
Jussi


>
>
>>
>> Cheers,
>> Jussi
>>
>> [1] http://lists.w3.org/Archives/Public/public-audio/2012JulSep/0551.html
>>
>>
>> On Mon, Aug 13, 2012 at 8:25 PM, Chris Rogers <crogers@google.com> wrote:
>>
>>>
>>>
>>> On Mon, Aug 13, 2012 at 8:53 AM, olivier Thereaux <
>>> olivier.thereaux@bbc.co.uk> wrote:
>>>
>>>> Hello,
>>>>
>>>> As you know, Joe and I have been working on our Use Cases and
>>>> Requirements document, detailing the specific requirements illustrated
>>>> by each of our scenarios.
>>>>
>>>> One of the scenarios we added a few months ago was that of an "online
>>>> DJ set".
>>>> The current text of the scenario can be read at:
>>>>
>>>> http://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html#connected-dj-booth
>>>>
>>>> One interesting aspect of the scenario is that it illustrates the need
>>>> for audio to be sent to two different destinations (headphones +
>>>> streaming). Even more interestingly, the application needs to switch a
>>>> given context from one destination to another gradually and seamlessly.
>>>>
>>>> I cannot quite figure out how that could be done with the current draft
>>>> of the Web Audio API. Ideally, it would look like this:
>>>> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/DJ.png but given the
>>>> API's constraint on the number of destinations a context can have, that is
>>>> not possible.
>>>>
>>>> This might just work:
>>>> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/DJ2.png
>>>> if a given AudioContext is allowed to have two AudioDestinationNodes in
>>>> its graph, but only one is connected as the destination at any given
>>>> time. This is a much weaker solution, because the DJ cannot continue
>>>> listening to headphones as she fades the second track in, but it might
>>>> just be doable.
>>>>
>>>> Am I missing an obvious alternative which would make this scenario a
>>>> possibility? Any thoughts on how else you'd do it? Note that I am not
>>>> saying that the API *MUST* enable this scenario - but it is interesting
>>>> food for thought.
>>>>
>>>> Olivier
>>>
>>>
>>> Hi Olivier, you may have noticed the addition of
>>> createMediaStreamSource() to the specification.  It represents a source
>>> from a MediaStream, and work is underway in WebKit to support this now,
>>> including live audio input.  There will be a matching
>>> createMediaStreamDestination() for sending to remote peers.
>>>
>>> For the online DJ case, where headphone output is required in addition
>>> to the "main" audio output, multi-channel devices can be supported as
>>> described so far in the spec (.numberOfChannels and .maxNumberOfChannels
>>> of AudioContext).  We're working hard on implementing that part right now.
>>>
>>> p.s. The W3C server hosting the Web Audio spec seems to be down right
>>> now...
>>>
>>>
>>
>

Received on Monday, 13 August 2012 19:33:34 UTC