- From: Madeleine Rothberg <Madeleine_Rothberg@wgbh.org>
- Date: 24 Nov 1999 10:54:13 -0500
- To: w3c-wai-ua@w3.org
- Cc: ehansen@ets.org
Reply to: RE>>Issues: Part 2 - #16 through #43

Here are some comments on a variety of issues raised by Eric Hansen, snipped
from Ian's recent reply.

EH:
> Issue #16.
> c. A "video track that shows sign language" is cited as an example of a
> continuous equivalent track, but it is not necessarily "equivalent" to
> anything (except itself) since it could be part of the "primary" content.
> It needs to be cited as equivalent of (or to or for) something else. Unless
> we want to nudge WCAG to requiring non-text equivalents (other than
> auditory descriptions), then I suggest removing reference to "video of sign
> language" in checkpoint 2.5. The Priority 1 rating of this implies that it
> must be done, yet there is no WCAG requirement for non-text equivalents for
> "primary content". The checkpoints themselves should only mention what we
> are asserting is required.

IJ: I don't agree. I think that the text clearly states that this is an
example of a continuous equivalent track.

MR: I agree with Ian. While it is true that a sign language video might be
primary content, it might also be a continuous equivalent. The same is true
for text: a text track might be primary content or it might be a closed
caption track. That is why SMIL has a system attribute for indicating when a
text track is a closed captions track. SMIL does not yet have an attribute
for indicating when video is a continuous equivalent, such as a sign
language translation, so we are left with the decision of whether we can
(Priority 1) require that UAs allow users to turn on and off tracks that
SMIL does not yet allow to be delineated as continuous equivalents. This
also applies to Eric's Issue #17.

On a related note, the new SMIL public working draft [1] does propose a
system attribute for audio descriptions, so I will be submitting revised
techniques that tell UAs how to use that attribute to provide improved
accessibility for blind and visually impaired users. If the UA WG feels we
can encourage UA developers to stretch their features past the existing or
recommended SMIL features, I would personally be in favor of encouraging
implementation of ways to choose continuous equivalent video, such as sign
language, in UAs.

[1] http://www.w3.org/TR/smil-boston/
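For illustration, here is a minimal sketch of how an author can mark a
closed caption text track with SMIL 1.0's system-captions test attribute so
that a player can show or hide it from a user preference (the file names
here are only placeholders):

  <smil>
    <body>
      <par>
        <video src="movie.mpg"/>
        <audio src="soundtrack.aiff"/>
        <!-- rendered only when the user's captions preference is "on" -->
        <textstream src="captions.rt" system-captions="on"/>
      </par>
    </body>
  </smil>

A UA that exposes that preference effectively lets the user turn the caption
track on and off; the open question above is how to do the same for a sign
language video track, which SMIL cannot yet label as a continuous equivalent.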
EH:
> Issue #37. Reconsider the use of the term "visual impairment".
>
> In our organization, the term is considered insensitive (unfair). Use
> "visual disability". The preferred terms can change, but keeping up with
> the preferred terms is important.

IJ: Should we drop "impairment" everywhere?

MR: My organization does use the term "visually impaired," but if others
feel it is out of date, WAI could decide to drop it.

MR: I can't locate the mail where this came up, and I can't remember in
which direction the current document comes down, but braille is not a
language. It is a way of writing English. I believe in the past we have
talked about style sheets for printed and refreshable braille as equivalents
to style sheets that differ for screen versus ink print versus small screen
displays -- in all cases, still in English. ASL, however, is a natural
language of its own.

EH:
> Issue #39. Reconsider the contrast between "braille and haptic".
>
> Is braille also haptic? This may be OK as is. See "Definition of "Documents,
> Elements, and Attributes"

MR: The sentence under discussion here is as follows: "Rendering is not
limited to graphical displays alone, but also includes audio (speech and
sound) and tactile displays (braille and haptic displays)."

Here's my take: braille is tactile (static, in two dimensions), while haptic
displays are either not static (a force feedback mouse) or are in three
dimensions (virtual reality instruments). Haptic devices have great
potential for providing access features for blind users. We may want to
check with Jutta Treviranus' group in Toronto to find out whether they
currently propose concrete recommendations for the use of haptics on the
web, if we want to include this category in our guidelines.

EH:
> Issue #21. Refine definition of "text transcript."
>
> This definition needs to be clarified and corrected. I think that the
> reference to "continuous equivalent track" is erroneous and misleading
> because a text transcript, as currently defined, does not include
> synchronization information in the sense that captions do.
>
> Old:
>
> "Text transcript"
> "A text transcript is a text equivalent of audio information that includes
> spoken words and non-spoken sounds such as sound effects. Refer also to
> continuous equivalent track."
>
> New:
>
> "Text transcript"
> "In the context of this document, a text transcript is a text equivalent of
> an audio information (e.g., audio clip). It provides text for both speech
> and non-speech sounds."

IJ: Why only "in the context of this document"? How about: "A text
transcript is a text equivalent of auditory information (e.g., an audio
clip). A text transcript documents both speech and non-spoken sounds such as
sound effects."

MR: I think the reason for the phrase "Refer also to continuous equivalent
track." was as a contrast, not because the text transcript is a continuous
equivalent. Perhaps change it to: "A text transcript may contain content
that is similar or identical to the content of closed captions, but it is
generally available in a single time-independent document rather than as a
continuous equivalent."

MR: I can't locate the first mention of the "captions vs. closed captions"
issue. I'd like to weigh in that I think the term "closed captions" is
useful for distinguishing between information intended to replace audio
tracks (typically intended for use by people who are deaf) and any other
thing called a caption, such as a photo caption or a table caption. Because
closed captioning is quite well known, I think it is helpful to continue
using that term in that context. And since other types of captions can get
confused, I think it is helpful to always use the extra word to indicate the
type of caption being discussed. For example, when we discuss the usefulness
of table captions for web surfers who are blind, we don't want people
saying, "Why do blind people need captions?"

Madeleine Rothberg
The CPB/WGBH National Center for Accessible Media
Received on Wednesday, 24 November 1999 10:52:17 UTC