
Re: [media] issue-152: documents for further discussion

From: Ian Hickson <ian@hixie.ch>
Date: Mon, 11 Apr 2011 22:17:37 +0000 (UTC)
To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
cc: HTML Accessibility Task Force <public-html-a11y@w3.org>
Message-ID: <Pine.LNX.4.64.1104112200560.25791@ps20323.dreamhostps.com>
On Mon, 11 Apr 2011, Silvia Pfeiffer wrote:
> http://www.w3.org/WAI/PF/HTML/wiki/Media_Multitrack_Change_Proposals_Summary 

Here are some notes on your notes. :-)

| videoTracks are present both in audio and video elements ... maybe 
| should be reduced to just video element?

You can specify a video file in an <audio> element, so I figured it would 
make sense to expose them. Selecting one doesn't do much good though. I 
could see an argument for not exposing this information, or for exposing 
it using an object that didn't allow you to select a track; if that's 
something people have use cases for or against, I'm happy to change it. 
The current design was just easier to spec.

| the TrackList only includes name and language attributes - in analogy to 
| TextTrack it should probably rather include (name, label, language, 
| kind)

I'm fine with exposing more data, but I don't know what data in-band 
tracks typically have. What do in-band tracks in popular video formats 
expose? Is there any documentation on this?

Note that for media-element-level data, you can already use data-* 
attributes to get anything you want, so the out-of-band case is already 
fully handled as far as I can tell.
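For illustration, a minimal sketch of that out-of-band case, assuming the page author puts the metadata in data-* attributes (the attribute names here are invented, not from any spec). In a browser, data-label="..." on an element surfaces as el.dataset.label; a plain object with the same shape keeps the sketch self-contained:

```javascript
// Hypothetical helper: read track-style metadata from an element's
// data-* attributes. `dataset` is the browser's DOMStringMap; any
// plain object with the same keys works for the sketch.
function trackMetadata(dataset) {
  return {
    name: dataset.name || "",
    label: dataset.label || "",
    language: dataset.language || "",
    kind: dataset.kind || "main",
  };
}
```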

| a group should be able to loop over the full multitrack rather than a 
| single slave

Not sure what this means.

| some attributes of HTMLMediaElement are missing in the MediaController 
| that might make sense to collect state from the slaves: error, 

Errors only occur as part of loading, which is a per-media-element issue, 
so I don't really know what it would mean for the controller to have it.

| networkState

This only makes sense at a per-media-resource level (this is one reason I 
don't think the master/slave media element design works). What would it 
mean at the controller level?

| readyState

I could expose a readyState that returns the lowest value of all the 
readyState values of the slaved media elements; would that be useful? It 
would be helpful to see a sample script that would make use of this; I 
don't really understand why someone would care about doing this at the 
controller level rather than the individual track level.
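For concreteness, the "lowest value of all the slaves' readyState values" aggregate could be sketched as below; `slaves` stands in for the slaved media elements, and any object with a numeric readyState works for the sketch:

```javascript
// Sketch: a controller-level readyState as the minimum of the slaved
// elements' readyState values (0 = HAVE_NOTHING through
// 4 = HAVE_ENOUGH_DATA on HTMLMediaElement).
function controllerReadyState(slaves) {
  return slaves.reduce((min, el) => Math.min(min, el.readyState), 4);
}
```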

| (this one is particularly important for onmetadatavailable events)

The events are independent of the attributes. What events would you want 
on a MediaController, and why? Again, sample code would really help 
clarify the use cases you have in mind.

| seeking

How would this work? Return true if any of the slaved media elements are 
seeking, and false otherwise? I don't really understand the use case.
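The "true if any slave is seeking" reading, as a sketch (again with `slaves` standing in for the slaved media elements):

```javascript
// Sketch: controller-level seeking is true while any slaved media
// element is in the middle of a seek.
function controllerSeeking(slaves) {
  return slaves.some(el => el.seeking);
}
```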

| TimeRanges played

Would this return the union or the intersection of the slaves'?
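The two semantics genuinely differ, which a sketch makes clear. Real TimeRanges objects expose start(i)/end(i); plain [start, end] pairs keep the sketch self-contained:

```javascript
// Union: merge every slave's played ranges into one sorted,
// coalesced list -- time that *any* slave has played.
function unionRanges(lists) {
  const all = lists.flat().sort((a, b) => a[0] - b[0]);
  const out = [];
  for (const [s, e] of all) {
    const last = out[out.length - 1];
    if (last && s <= last[1]) last[1] = Math.max(last[1], e);
    else out.push([s, e]);
  }
  return out;
}

// Intersection: keep only the time that *every* slave has played.
function intersectRanges(lists) {
  return lists.reduce((a, b) => {
    const out = [];
    for (const [s1, e1] of a)
      for (const [s2, e2] of b) {
        const s = Math.max(s1, s2), e = Math.min(e1, e2);
        if (s < e) out.push([s, e]);
      }
    return out;
  });
}
```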

| ended

Since tracks can vary in length, this doesn't make much sense at the media 
controller level. You can tell if you're at the end by looking at 
currentTime and duration, but with infinite streams and no buffering the 
underlying slaves might keep moving things along (actively playing) with 
currentTime and duration both equal to zero the whole time. So I'm not 
sure how to really expose 'ended' on the media controller.
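The currentTime/duration check described above could be sketched like this; note the guard it needs for exactly the infinite-stream case:

```javascript
// Sketch: "at the end" as currentTime having reached a known duration.
// Without the duration > 0 guard, an unbuffered infinite stream whose
// currentTime and duration both sit at 0 would wrongly report ended
// even while its slaves are actively playing.
function looksEnded(controller) {
  return controller.duration > 0 &&
         controller.currentTime === controller.duration;
}
```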

| and autoplay.

How would this work? Autoplay doesn't really make sense as an IDL 
attribute, it's the content attribute that matters. And we already have 
that set up to work with media controllers.

| Multiple video and audio active in parallel: the solution must allow 
| multiple video tracks active in parallel, just like audio, even for 
| in-band

I looked at doing this (with just a single <video> and no explicit 
controller, like for audio) but I couldn't find a good solution to the 
problem of how to also support only one track being enabled.

Take this situation:

   Media resource A has three video tracks A1, A2, and A3.
   A1's dimensions are 1600x900 and fill the frame.
   A2's dimensions are 160x90 in the top left.
   A3's dimensions are 160x90 in the bottom right.

   You play the video, with just A1 enabled. Then you turn on A2 and A3. 
   The A1 video plays at 1600x900 with the A2 and A3 videos playing in 
   their respective corner, all small. The <video> element's intrinsic 
   dimensions are 1600x900.

   Now you turn off A1. What happens to the intrinsic dimensions? The
   rendering?

   Now you turn off A2. What happens to the intrinsic dimensions? The
   rendering?

   Now you create another <video> and load A1 into it, and you sync the 
   two <video>s using a controller or the master/slave relationship, so 
   that you can have A1 and A3 side by side. What happens to the intrinsic 
   dimensions? The rendering?

I couldn't find intuitive answers to these questions, which is why I 
decided to only support one video per media element and not support the 
in-band dimensions at all.

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Monday, 11 April 2011 22:18:01 UTC
