- From: Geoff Freed <geoff_freed@wgbh.org>
- Date: Mon, 15 Mar 2010 05:25:26 -0400
- To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>, HTML Accessibility Task Force <public-html-a11y@w3.org>
I'm on a rather tight deadline and so may not be able to fully address everything below for a day or two. One comment inline for the time being.

Geoff/wgbh

________________________________________
From: public-html-a11y-request@w3.org [public-html-a11y-request@w3.org] On Behalf Of Silvia Pfeiffer [silviapfeiffer1@gmail.com]
Sent: Sunday, March 14, 2010 8:08 PM
To: HTML Accessibility Task Force
Subject: Requirements for external text alternatives for audio/video

Hi all,

Looking at the recent survey on caption formats and its results, see http://www.w3.org/2002/09/wbs/44061/media-text-format/results, it seems that what is currently written in the change proposal at http://www.w3.org/WAI/PF/HTML/wiki/Media_TextAssociations#File_Formats got confirmed:

"A brief discussion at the TPAC in November 2009 seemed to indicate that the W3C Timed Text Format DFXP should be the first choice. As an alternate, simple format the SubRip srt format in its simplest form should also be supported by browsers. Since srt can be regarded as a simple subpart of DFXP, creating support for srt will be simple."

We have 15 voices for SRT and 14 for DFXP. However, looking at the detailed replies, I can see that we basically have two camps: one that says "let's just start simple" and the other that says "we need something that is extensible, incorporates styling and markup".

What this tells me is that we never really looked at what our requirements are for synchronised text alternatives, and in particular for caption formats, here. I'd like us to collect these requirements so we can make a better recommendation as a group.

We should look at these requirements from several viewpoints, some of which may be:
* a legal POV (what do a11y laws require us to do),
* a WCAG requirements POV,
* an a11y user's usability POV,
* an international user's POV,
and anything you can think of that I forgot.

So, let me pose the key question: why do we need more than unformatted text, a start time and an end time to provide subtitles/captions for users?

Or let me be a bit more of a devil's advocate: What functionality is required on top of SRT, and who needs it? Seeing as, e.g., YouTube supports only a start time, an end time and unformatted text and gets very far with it, why would we need to support more than that?

GF: While I understand this is just a DA point of view, we should definitely not be using one entity's approach to text display, or caption/subtitle generation, as an example of ideal practice. At this moment, Google/YouTube supports a string of text with a begin time and an end time, but this doesn't mean they won't support other features, including styling, in the future. What they alone are doing at this point in time should not govern our decision. After all, we're serving an audience *in part* of deaf/hard-of-hearing users, not just software engineers.

Please help us keep this a debate on facts and on real requirements and not turn it into a religious debate.

Best Regards,
Silvia.
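[Editorial note, not part of the original message: for readers unfamiliar with the formats under discussion, a minimal SRT cue, the "simplest form" referred to above, consists only of a sequence number, a start and end timestamp, and one or more lines of unformatted text. The cue below is an illustrative example, not taken from the thread:

    1
    00:00:12,500 --> 00:00:15,000
    Hello, and welcome to the programme.

DFXP (W3C Timed Text), by contrast, is an XML vocabulary that can additionally carry styling, layout, language and metadata information, which is the extra functionality the second camp is arguing for.]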
Received on Monday, 15 March 2010 09:25:59 UTC