[whatwg] Apple Proposal for Timed Media Elements

On 3/21/07, Maciej Stachowiak <mjs at apple.com> wrote:
>
> On Mar 21, 2007, at 6:16 PM, Ian Hickson wrote:
>
> >   Starting with simple features, and adding features based on demand
> >   rather than just checking off features for parity with other
> > development
> >   environments leads to a more streamlined API that is easier to use.
> >
> >   How should we approach this?

My two cents: we should put off events and other API pieces that
address editing applications. It is possible to write web versions of
things like iMovie and SoundEdit in Flash right now, but I don't think
it is realistic to capture that stuff in a first effort. We should
focus on playback and consumption for v1. So my question for any
proposal right now would be: "why is the feature needed for something
analogous to a VCR or YouTube screen?"

>
> >   For <audio> in general, there's been very little demand for <audio>
> >   other than from people suggesting that it makes abstract logical
> > sense

I disagree. It's been pointed out by multiple people that <video> will
be used for audio. That is quite likely if the page author wants to
send Ogg Vorbis audio.
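To illustrate (a sketch; the filename is made up), nothing stops an
author from pointing <video> at an audio-only file:

  <!-- audio-only Ogg Vorbis file; the element simply renders no picture -->
  <video src="interview.ogg"></video>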

>
> > * What's the use case for hasAudio or hasVideo? Wouldn't the author
> > know
> >   ahead of time whether the content has audio or video?
>
> That depends. If you are displaying one fixed piece of media, then
> sure. If you are displaying general user-selectable content...

This reasoning seems sound to me. In general, I am wary of proposals
that require control over both sides of the wire to be effective.
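For instance, a player page loading arbitrary user-selected media
could adjust its controls once the file is known. A rough sketch,
assuming the proposal's hasAudio/hasVideo booleans and a "load" event
(both are assumptions here, as are the ids and filename):

  <video id="player" src="whatever-the-user-picked"></video>
  <script>
    var player = document.getElementById("player");
    player.addEventListener("load", function () {
      // hasAudio/hasVideo as proposed: hide the volume control for
      // silent clips, collapse the picture for audio-only files.
      document.getElementById("volume").style.display =
          player.hasAudio ? "" : "none";
      document.getElementById("picture").style.display =
          player.hasVideo ? "" : "none";
    }, false);
  </script>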

> We have included a mechanism for static fallback based on container
> type and codec, so that it's possible to choose the best video format
> for a client even if user agent codec support varies.
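For concreteness, I read that as static source lists along these
lines (the filenames and MIME types below are only illustrative; the
codecs string is the usual RFC 4281 example):

  <video>
    <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
    <source src="clip.ogg" type="application/ogg">
    Text for user agents that play neither format.
  </video>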

What existing markup leads us to believe this will be an effective
method for content negotiation?

--

Robert Sayre

Received on Wednesday, 21 March 2007 20:06:37 UTC