
Re: Multiple destinations in a single AudioContext

From: Chris Rogers <crogers@google.com>
Date: Mon, 13 Aug 2012 10:25:56 -0700
Message-ID: <CA+EzO0meYE-X8jLc+hdfZSx-G+1xLJv43-6=_dqwH_fTQbt9Sg@mail.gmail.com>
To: olivier Thereaux <olivier.thereaux@bbc.co.uk>
Cc: Audio Working Group <public-audio@w3.org>
On Mon, Aug 13, 2012 at 8:53 AM, olivier Thereaux <
olivier.thereaux@bbc.co.uk> wrote:

> Hello,
> As you know, Joe and I have been working on our Use Cases and Requirements
> document, detailing the specific requirements illustrated by each of our
> scenarios.
> One of the scenarios we added a few months ago was that of an "online DJ
> set".
> The current text of the scenario can be read at:
> http://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html#connected-dj-booth
> One interesting aspect to the scenario is that it illustrates the need for
> audio to be sent to two different destinations (headphones + streaming).
> Even more interestingly, the application needs to switch a given context
> from one destination to another gradually and seamlessly.
> I cannot quite figure out how that could be done with the current draft of
> the Web Audio API. Ideally, it would look like this:
> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/DJ.png but given the API's
> constraint on the number of destinations a context can have, that is not
> possible.
> This might just work:
> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/DJ2.png
> if a given AudioContext is allowed to have two AudioDestinationNodes in
> its graph, but only one is connected as destination at any given time. This
> is a much lesser solution, because the DJ cannot continue listening to
> headphones as she fades the second track in, but it might just be doable.
> Am I missing an obvious alternative that would make this scenario
> possible? Any thoughts on how else you'd do it? Note that I am not saying
> that the API *MUST* enable this scenario - but it is interesting food for
> thought.
> Olivier

Hi Olivier, you may have noticed the addition of createMediaStreamSource()
to the specification.  It represents a source of audio from a MediaStream,
and work is underway in WebKit to support it now, including live audio
input.  There will be a matching createMediaStreamDestination() for sending
audio to remote peers.
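To illustrate, here is a minimal sketch of how those two methods could be
used together: wrap an incoming MediaStream as a source node, process it,
and expose the result as a new MediaStream for a remote peer. The helper
name routeStreamThroughContext and the gain stage are my own illustrative
choices, and the exact method names may still change as the spec evolves.

```javascript
// Sketch: route a live MediaStream (e.g. microphone input) through a
// Web Audio graph and back out as a MediaStream suitable for sending
// to a remote peer (e.g. via an RTCPeerConnection).
function routeStreamThroughContext(context, inputStream) {
  // Wrap the incoming MediaStream as an audio source node.
  const source = context.createMediaStreamSource(inputStream);

  // Optional processing stage; a simple gain node here as a placeholder.
  const gain = context.createGain();
  source.connect(gain);

  // Expose the processed audio as a MediaStream.
  const destination = context.createMediaStreamDestination();
  gain.connect(destination);

  // destination.stream can then be handed to the peer-connection layer.
  return destination.stream;
}
```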

For the online DJ case, where headphone output is required in addition to
the "main" audio output, multi-channel devices can be supported as
described so far in the spec (.numberOfChannels and .maxNumberOfChannels on
AudioContext).  We're working hard on implementing that part right now.
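As a rough sketch of that multi-channel approach, a four-channel sound
card could expose channels 0-1 as the house mix and channels 2-3 as the
headphone cue, using splitter and merger nodes to map each stereo feed
onto specific output channels. The function name and channel layout here
are illustrative assumptions, not part of the spec, and the attribute
names quoted above may change before the spec is final.

```javascript
// Sketch: send a stereo house mix to output channels 0/1 and a stereo
// headphone cue to channels 2/3 of a 4-channel audio device.
function setupDJOutputs(context, houseSource, cueSource) {
  // Merge two stereo feeds into one 4-channel signal for the destination.
  const merger = context.createChannelMerger(4);

  // Split each stereo feed so left/right can be mapped individually.
  const houseSplit = context.createChannelSplitter(2);
  const cueSplit = context.createChannelSplitter(2);
  houseSource.connect(houseSplit);
  cueSource.connect(cueSplit);

  // House mix -> channels 0 and 1.
  houseSplit.connect(merger, 0, 0);
  houseSplit.connect(merger, 1, 1);

  // Headphone cue -> channels 2 and 3.
  cueSplit.connect(merger, 0, 2);
  cueSplit.connect(merger, 1, 3);

  merger.connect(context.destination);
}
```

With this graph the DJ keeps hearing the cue in the headphones while
crossfading the house mix, which is the seamless behaviour the scenario
asks for.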

p.s. The W3C server hosting the Web Audio spec seems to be down right now...
Received on Monday, 13 August 2012 17:26:25 UTC
