"Layering Considerations" issue

Let me propose some answers to your questions in this section --- partly
because I just implemented these features in Gecko!

> Can a media element be connected to multiple AudioContexts at the same time?
>

Yes.


> Does ctx.createMediaElementSource(n) disconnect the output from the
> default context?
>

In Gecko, it actually does, because that seems to be what people will
usually want. However, nothing in the spec says it should, so I think that
should be a spec change. It's easy to fix our behavior if we decide not to
take that spec change.
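
As a rough sketch of the Gecko behavior described above (not something the
current spec text requires), the element's direct output goes silent once it
is captured, so you only hear it again when the node reaches a destination:

  var ctx = new AudioContext();
  var el = document.querySelector("audio");
  var source = ctx.createMediaElementSource(el);
  // In Gecko the element's regular output is now disconnected; audio is
  // heard again once the source node is routed somewhere audible:
  source.connect(ctx.destination);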

> If a second context calls ctx2.createMediaElementSource(n) on the same
> media element, is it disconnected from the first?
>

No.
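
For instance (where el is the media element and ctx1/ctx2 are two separate
AudioContexts), both sources keep working:

  var node1 = ctx1.createMediaElementSource(el);
  var node2 = ctx2.createMediaElementSource(el);
  // Creating node2 does not detach node1; both receive the element's audio.
  node1.connect(ctx1.destination);
  node2.connect(ctx2.destination);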


> Assuming it's possible to connect a media element to two contexts,
> effectively "wiring up" the output from one bit of processing to the other,
> is it possible to wire up the output of one context to another?
>

Yes, by connecting a MediaStreamAudioDestinationNode from one context to a
MediaStreamAudioSourceNode in another context.
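
Something like the following (a sketch only; sourceNodeInCtx1 stands in for
whatever graph you have built in the first context):

  var bridge = ctx1.createMediaStreamDestination();
  sourceNodeInCtx1.connect(bridge);
  var input = ctx2.createMediaStreamSource(bridge.stream);
  input.connect(ctx2.destination);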


> Why are there both MediaStreamAudioSourceNode and
> MediaElementAudioSourceNode in the spec? What makes them different,
> particularly given that neither appears to have properties or methods and
> does nothing but inherit from AudioNode?
>

> All of this seems to indicate some confusion in, at a minimum, the types
> used in the design. For instance, we could answer a few of the questions if
> we:
>
> Eliminate MediaElementAudioSourceNode and instead re-cast media elements
> as possessing MediaStream audioStream attributes which can be connected to
> AudioContexts
>

This is close to what we implement in Gecko. We have extended media
elements with mozCaptureStream and mozCaptureStreamUntilEnded methods
(sorry about the prefixes, they predate the new policy) which return new
MediaStreams. (I think returning new streams is a more robust design than
having one intrinsic stream that all users of an element must share; also,
as discussed above, obtaining a MediaStream disables regular audio output,
which as a side-effecting operation is not suitable for an attribute
getter.) MediaElementAudioSourceNode is just a thin wrapper around
MediaStreamAudioSourceNode taking the result of mozCaptureStream.
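
Roughly, the equivalence looks like this (element and ctx are placeholders):

  // ctx.createMediaElementSource(element) behaves much like:
  var stream = element.mozCaptureStream(); // prefixed Gecko method
  var source = ctx.createMediaStreamSource(stream);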

> That leaves a few open issues for which we don't currently have suggestions
> but believe the WG should address:
>
> What AudioContext do media elements use by default?
>

I don't think defining a "default AudioContext" buys us much, since it has
no real effect.


> Is that context available to script? Is there such a thing as a "default
> context"?
>

No, and no.


> What does it mean to have multiple AudioContext instances for the same
> hardware device?

> Chris Wilson advises that they are simply summed, but how is that described?
>
> By what mechanism is an AudioContext attached to hardware? If I have
> multiple contexts corresponding to independent bits of hardware...how does
> that even happen? AudioContext doesn't seem to support any parameters and
> there aren't any statics defined for "default" audio contexts corresponding
> to attached hardware (or methods for getting them).
>

I think the manner in which media element output and AudioContexts' output
are mixed and played is outside the scope of Web Audio. Some of that should
probably be implementation-dependent (e.g. a UA could allow the user to
direct output of particular pages, elements, or AudioContexts to particular
output devices, or let users control the volume of those outputs
independently). Some of that will be affected by proposed specs for "output
channels" (e.g.
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-March/039202.html).

Rob
