
Re: Rationalizing new/start/end/mute/unmute/enabled/disabled

From: Martin Thomson <martin.thomson@gmail.com>
Date: Tue, 26 Mar 2013 09:07:37 -0700
Message-ID: <CABkgnnVcGCs1QbRr21Bfke_juUYHNEuGR1=HvfXv=hcPzsgm_A@mail.gmail.com>
To: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
Cc: "public-media-capture@w3.org" <public-media-capture@w3.org>
On 26 March 2013 02:38, Stefan Håkansson LK
<stefan.lk.hakansson@ericsson.com> wrote:
> (Commenting on the entire "Rendering" section)
>
> I agree that we need to describe in more detail how MediaStreams
> interact with media (audio and video) elements. Jim took a stab at it
> last year, but it is time for an update. Audio and video also differ in
> that a media element renders only one video track but mixes all of the
> audio tracks.
>
> However, I want to point out that the "resource fetch algorithm" in the
> HTML5 Candidate Recommendation
> (http://www.w3.org/TR/html5/embedded-content-0.html#concept-media-load-resource)
> already describes this in considerable detail. I think we might get by
> with referring to that, while clarifying how certain things apply to
> MediaStreams.

I read through that section, and it doesn't really say anything about
what I was getting at: which tracks affect what is rendered.  Making
certain that any work we do is consistent with HTML5 is one thing, but
we can't rely on HTML to describe this particular characteristic.
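
To illustrate, here is a minimal sketch of the behaviour Stefan
describes, written against the current getUserMedia and srcObject APIs
(both postdate this thread; renderStream is a name made up for the
example):

async function renderStream(video: HTMLVideoElement): Promise<void> {
  // Request one camera track and one microphone track.
  const stream: MediaStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });

  // Attach the whole stream: the element picks a single video track to
  // render, while every enabled audio track is mixed into the output.
  video.srcObject = stream;
  await video.play();

  // A stream carrying several video tracks still shows only one of
  // them, whereas all of its audio tracks contribute to the mix.
  console.log(`video tracks: ${stream.getVideoTracks().length}`);
  console.log(`audio tracks: ${stream.getAudioTracks().length}`);
}
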
Received on Tuesday, 26 March 2013 16:08:05 UTC
