Re: "Layering Considerations" issue

On Fri, Jul 26, 2013 at 1:35 PM, Alex Russell <slightlyoff@google.com> wrote:

> On Friday, July 26, 2013, Robert O'Callahan wrote:
>
>> Let me propose some answers to your questions in this section --- partly
>> because I just implemented these features in Gecko!
>>
>> Can a media element be connected to multiple AudioContexts at the same
>>> time?
>>>
>>
>> Yes.
>>
>
> The reason for asking this question (and many others that you respond to
> below) wasn't to simply understand the spec -- I know now (having tried it
> out) that such connections are possible. What the question was meant to
> uncover was *by which mechanism* that has become true. It's not exposed,
> so the question presents itself without an obvious answer. *That* is the
> issue. Not connection (or not) to multiple contexts.
>

I'm sympathetic to your general approach, but I don't think this case lacks
an obvious answer. A media element can be connected to multiple
AudioContexts at the same time because no spec imposes restrictions to the
contrary.
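
To make that concrete, here's roughly what I'd expect to work today (a
sketch only; "music.mp3" is a placeholder, and whether a given engine
rejects the second call is exactly the kind of thing the spec currently
leaves open):

    const media = new Audio("music.mp3");   // the same element feeds both graphs

    const ctxA = new AudioContext();
    const ctxB = new AudioContext();

    // Each context wraps the element in its own MediaElementAudioSourceNode;
    // nothing in the current spec text forbids the second call.
    const srcA = ctxA.createMediaElementSource(media);
    const srcB = ctxB.createMediaElementSource(media);

    srcA.connect(ctxA.destination);
    srcB.connect(ctxB.destination);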


>
>>
>>
>>> Does ctx.createMediaElementSource(n) disconnect the output from the
>>> default context?
>>>
>>
>> In Gecko, it actually does, because that seems to be what people will
>> usually want. However, nothing in the spec says it should, so I think that
>> should be a spec change. It's easy to fix our behavior if we decide not to
>> take that spec change.
>>
>
> Yes, and that's the case in Blink as well.
>
> What this discussion was about isn't "does this work?" but "how does that
> happen?".
>

I think the spec change to describe this behavior would define an internal
flag on media elements that enables/disables audio output, and specify that
createMediaElementSource sets this flag to "disabled". We also need to
define what is produced in more detail, e.g. that the output of all the
element's currently enabled audio tracks is mixed together, and how tracks
with different channel counts are mixed. But maybe you're asking for more
than that?
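
Sketched in code (with "music.mp3" as a placeholder URL), the behavior
Gecko and Blink currently ship, which that spec change would formalize,
would look like this to authors:

    const media = new Audio("music.mp3");
    media.play();                        // audible via the element's default output

    const ctx = new AudioContext();
    const source = ctx.createMediaElementSource(media);
    // At this point the element's default output would be disabled (the
    // internal flag proposed above flips to "disabled")...

    const gain = ctx.createGain();
    gain.gain.value = 0.5;
    source.connect(gain);
    gain.connect(ctx.destination);
    // ...so the audio is heard only because the graph routes it to
    // ctx.destination.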

I'm asking for this WG to explain much more clearly to the rest of the
> world *how* the interactions it creates are plumbed through the platform.
>

You mean we should define "the audio output of a media element" in the
HTML spec somewhere, and refer to that definition from the Web Audio spec?
Or something more than that?


>
>> Assuming it's possible to connect a media element to two contexts,
>>> effectively "wiring up" the output from one bit of processing to the other,
>>> is it possible to wire up the output of one context to another?
>>>
>>
>> Yes, by connecting a MediaStreamAudioDestinationNode from one context to
>> a MediaStreamAudioSourceNode in another context.
>>
>
> Again, I accept that answer (and, of course, discovered it myself before
> writing it down here).
>
> It is meant to get _you_ asking "how does that work?". That it does is no
> mean feat.
>

Unfortunately that depends on the exact processing model of MediaStreams,
which is rather underdefined. You may want to turn the TAG's attention
there next :-).
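
For reference, the wiring itself is straightforward (an oscillator stands
in for arbitrary source material here; how and when samples cross the
bridge is the under-defined part):

    const ctxA = new AudioContext();
    const ctxB = new AudioContext();

    // ctxA end of the bridge: a MediaStreamAudioDestinationNode exposes
    // whatever is connected to it as a MediaStream.
    const osc = ctxA.createOscillator();
    const bridgeOut = ctxA.createMediaStreamDestination();
    osc.connect(bridgeOut);
    osc.start();

    // ctxB end: a MediaStreamAudioSourceNode pulls that stream into the
    // second context's graph.
    const bridgeIn = ctxB.createMediaStreamSource(bridgeOut.stream);
    bridgeIn.connect(ctxB.destination);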

FWIW, I had a proposal about 18 months ago which defined processing of
MediaStreams in some detail (and defined a simplified audio processing
model on top of MediaStreams). My nice spec lost out to shipping code (and
a more complete API). It will be difficult to find contributors with the
appetite for spec work that doesn't directly affect shipping
implementations.

> It's observable in implementations that there is a default context.
>

How so? I only observe that media elements can produce audio output. This
doesn't mean an AudioContext has to be involved.

Rob
