- From: Dick Bulterman <Dick.Bulterman@cwi.nl>
- Date: Thu, 22 Apr 2010 16:25:56 +0200
- To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- CC: Sean Hayes <Sean.Hayes@microsoft.com>, HTML Accessibility Task Force <public-html-a11y@w3.org>
Hi Silvia and Sean,

Here is the bottom line on the proposal that I made:

a) we should have a well-defined selection mechanism for alternative content that is scalable.
b) the processing should not include special-purpose processing for historical reasons, if possible.
c) it should be possible to reference embedded and external text tracks (a rough sketch follows below).

I think that it would be productive to talk through these issues in a teleconference, rather than converse via text, since text introduces assumptions that may not be valid. I'm happy to do this in a large or small group.

One of the things that has been bugging me most in thinking about this is that the addition of text associations into the <audio>/<video> object (whichever way we go) is actually a kludge: it is a way of avoiding temporal composition of text along with other media. Cramming this into the video element is not the right way to go: it is a stop-gap measure that has no real chance of extensibility. I would much rather look at a real solution to the composition problem, since this would much better meet the needs of accessibility (and other) use cases.

-d.
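For illustration, one possible shape for the markup in point (c) is sketched below. It is purely an illustrative sketch, not anything agreed in this thread: the <track> child element, its kind/srclang attributes, the file names, and the use of a Media Fragment URI to select an in-band track are all assumptions made for the example.

  <video src="talk.webm">
    <!-- external text track: a separate caption file fetched alongside the video -->
    <track kind="captions" srclang="en" src="talk-captions.en.srt">
    <!-- embedded (in-band) text track: selected out of the media resource itself,
         here via the Media Fragments 'track' dimension in the URI -->
    <track kind="subtitles" srclang="nl" src="talk.webm#track=subtitles-nl">
  </video>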
Received on Thursday, 22 April 2010 14:34:17 UTC