timing model of the media resource in HTML5

Hi all,

I'd like to start a discussion about accessibility in media elements
for HTML5 by going all the way back and answering the fundamental
question that Dick Bulterman posed at the recent (well, not so recent
any more) Video Accessibility workshop. He stated that HTML5 has no
timing model for the media elements and that we need to have a
discussion about that timing model.

To start off this discussion, I have written a blog post that explains
where I think things are at. It has turned out rather long, so I'd
rather not copy and paste it into the discussion here. You can read
it at:
http://blog.gingertech.net/2009/11/23/model-of-a-time-linear-media-resource/

If you disagree/agree/want to discuss any of the things I stated
there, please copy the relevant paragraph and quote it into this
thread, so we can all know what we are discussing. (I guess Google
Wave would come in handy here...)

As a three-sentence summary:
Basically, I believe that the 90% use case for the Web is that of a
time-linear media resource. Any other, more complex needs that
require multiple timelines can be realised using JavaScript and the
audio and video APIs that we still need to define, which will expose
companion tracks to the Web page and therefore to JavaScript. I don't
believe that there will be many use cases that such a combination
cannot satisfy, but if there are, one can always fall back on the
"object" tag and external plugins to render an Adobe Flash,
Silverlight or SMIL experience.
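
To make the JavaScript approach concrete, here is a minimal, untested
sketch of how two timelines - say, a main video and a sign language
video - could be slaved together using nothing but the existing
HTMLMediaElement API (the element ids and the drift tolerance are
made up for illustration):

  var master = document.getElementById("mainVideo");
  var slave = document.getElementById("signLanguageVideo");

  // Keep the slave's playback position locked to the master's.
  master.addEventListener("timeupdate", function() {
    // Only re-seek when the drift exceeds a tolerance, since
    // timeupdate fires at a coarse, implementation-defined rate.
    if (Math.abs(slave.currentTime - master.currentTime) > 0.25) {
      slave.currentTime = master.currentTime;
    }
  }, false);

  // Mirror the master's play/pause state on the slave.
  master.addEventListener("play", function() { slave.play(); }, false);
  master.addEventListener("pause", function() { slave.pause(); }, false);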

BTW: talking about SMIL - I would be very curious to find out if
somebody has tried implementing SMIL in HTML5 and JavaScript yet. I
think much of what a SMIL file defines should now be presentable in a
Web browser using existing HTML5 and JavaScript constructs. It would
be an interesting exercise, and I'd be curious to hear where people
found the limitations.
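
For instance, a SMIL <seq> time container - clips playing back to
back - might be approximated like this (again just a sketch with
made-up element ids, relying only on the media elements' "ended"
event; switching which element is visible is left out):

  // Play a list of video elements one after the other, emulating
  // a SMIL <seq> time container.
  var clips = [
    document.getElementById("clip1"),
    document.getElementById("clip2"),
    document.getElementById("clip3")
  ];

  function playSequence(index) {
    if (index >= clips.length) {
      return; // the sequence has finished
    }
    clips[index].addEventListener("ended", function() {
      playSequence(index + 1);
    }, false);
    clips[index].play();
  }

  playSequence(0);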

Best Regards,
Silvia.
