
[Bug 10941] Media elements need control-independent "pause" for presenting lengthy descriptions/captions

From: <bugzilla@jessica.w3.org>
Date: Mon, 11 Oct 2010 02:00:11 +0000
To: public-html-a11y@w3.org
Message-Id: <E1P57gR-0008NV-8y@jessica.w3.org>
http://www.w3.org/Bugs/Public/show_bug.cgi?id=10941

--- Comment #10 from Silvia Pfeiffer <silviapfeiffer1@gmail.com> 2010-10-11 02:00:06 UTC ---
(In reply to comment #7)
> There are two cases here:
> 
> 1. A browser that supports audio descriptions natively, and needs to stop
> playback of the video and primary audio tracks while a secondary audio
> description track plays back content.
> 
> I'm not familiar with any content like this (I've rarely seen audio
> descriptions on real content at all, but the few times I've seen it, e.g. on
> Digital TV in the UK, the video was never stopped in this fashion), so it's
> hard to comment on this, but if there are user agents that want to implement
> this, I could make the spec handle it by changing this requirement:
> 
> "When a media element is potentially playing and its Document is a fully active
> Document, its current playback position must increase monotonically at
> playbackRate units of media time per unit time of wall clock time."
> 
> ...to cover this case, e.g. by adding " and is not in a description-pause
> state" after "potentially playing", and then define "description-pause state"
> as being for the aforementioned case.
> 
> Are there user agents interested in implementing this? (Are there any media
> formats that support audio tracks having content with non-zero length to be
> played at an instant in time on the primary timeline?)

I think it is this case for now, with a particular focus on text provided
through WebSRT or an in-band text track. It is thus possible to provide this
in existing containers, and the pausing of playback would need to be managed
by the media player in collaboration with the screen reader.
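A minimal, DOM-free sketch of that player/screen-reader coordination (the `player` and `reader` objects here are hypothetical stand-ins, not real browser APIs; in a real user agent the reader side would go through the platform accessibility API):

```javascript
// Hypothetical model: pause playback while a description cue is spoken,
// resume when the screen reader signals it has finished the cue text.
function makePlayer() {
  return {
    currentTime: 0,
    paused: false,
    pause() { this.paused = true; },
    play() { this.paused = false; },
    // Advance media time by dt seconds unless paused.
    tick(dt) { if (!this.paused) this.currentTime += dt; },
  };
}

function speakCueWithPause(player, cue, reader) {
  player.pause();                              // control-independent pause
  reader.speak(cue.text, () => player.play()); // resume once spoken
}

// Usage: a fake reader that "speaks" instantly.
const player = makePlayer();
const cue = { startTime: 3, text: 'A door opens slowly.' };
const reader = { speak(text, onend) { onend(); } };

player.tick(3); // reach the cue's start time
speakCueWithPause(player, cue, reader);
console.log(player.paused, player.currentTime); // false 3
```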

I think user agents will solve the caption case first before turning to audio
descriptions, and to extended audio descriptions only after that. Extended
audio descriptions seem to require close interaction between the screen reader
and the video player - I am not sure this is something that has been required
of accessibility APIs before.
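The spec change proposed in comment #7 could be sketched as a playback clock that advances at playbackRate per unit of wall-clock time except while in the proposed "description-pause state" (a sketch under those assumptions, not spec text):

```javascript
// Model of the quoted requirement: the current playback position
// increases monotonically at playbackRate units of media time per unit
// of wall-clock time - unless the element is in a hypothetical
// "description-pause state" (name taken from comment #7).
class PlaybackClock {
  constructor(playbackRate = 1) {
    this.playbackRate = playbackRate;
    this.position = 0;           // current playback position (media time)
    this.descriptionPaused = false;
  }
  // Advance by dt seconds of wall-clock time.
  tick(dt) {
    if (!this.descriptionPaused) {
      this.position += this.playbackRate * dt;
    }
    return this.position;
  }
}

const clock = new PlaybackClock(1);
clock.tick(5);                   // play 5s of media
clock.descriptionPaused = true;  // extended description starts
clock.tick(12);                  // wall clock advances, media time frozen
clock.descriptionPaused = false; // description finished
clock.tick(3);                   // resume
console.log(clock.position);     // 8
```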


> 2. A future time where we have multiple media elements all synchronised on one
> timeline, coordinated by some object that can start and stop individual tracks.
> 
> If this is the case being discussed here, then we should make sure to take this
> use case into account when designing the controller API, but it is probably
> premature to make changes to the spec at this point for that case, since
> there's no way to sync tracks so far.

If we focus not only on text but on an audio description provided as audio, it
indeed becomes more complex. I think the example at
http://web.mac.com/eric.carlson/w3c/NCAM/extended-audio.html shows a media
resource where the main audio/video has been authored to pause as long as
necessary for the recorded audio description cues to finish. However, the
description is actually merged with the main audio track, which I believe is
the only way to realize it in current container formats.

When synchronizing multiple media elements, we can make this dynamic again,
and we should probably introduce a controller API for it at that point; this
is related to Bug 9452. But you are right - it is a bit premature for that.
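A rough sketch of what such a controller could look like (an entirely hypothetical API, related to Bug 9452 - nothing like this exists in the spec yet): pausing the shared timeline freezes every attached track, which is exactly the behavior an extended description needs.

```javascript
// Hypothetical controller keeping several media elements on one shared
// timeline. Pausing the controller freezes media time for all of them.
class TimelineController {
  constructor() {
    this.slaves = [];
    this.time = 0;       // shared timeline position (seconds)
    this.paused = false;
  }
  add(el) { this.slaves.push(el); }
  // Advance the shared timeline by dt seconds of wall-clock time.
  tick(dt) {
    if (this.paused) return;
    this.time += dt;
    this.slaves.forEach(el => { el.currentTime = this.time; });
  }
  pause() { this.paused = true; }
  play() { this.paused = false; }
}

const controller = new TimelineController();
const mainVideo = { currentTime: 0 };        // stand-ins for media elements
const descriptionAudio = { currentTime: 0 };
controller.add(mainVideo);
controller.add(descriptionAudio);

controller.tick(2);  // both tracks advance together
controller.pause();  // extended description: freeze the whole timeline
controller.tick(5);  // wall clock advances, media time does not
controller.play();
controller.tick(1);
console.log(mainVideo.currentTime, descriptionAudio.currentTime); // 3 3
```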

Received on Monday, 11 October 2010 02:00:12 GMT
