Media-equiv summary

Overview

Issues that changes in the 19 November 2004 WD should address; reviewer verification required

Issues 171 and 438 - exception for accessible rebroadcasts (pending)

In previous drafts, there was an exception for content that was rebroadcast. However, it was worded in such a way that readers interpreted it to mean that if content was rebroadcast it was exempt from captions, i.e., that it did not need captions to be accessible. To address this issue, the exception became a separate success criterion (refer to the 19 November 2004 WD).

Previous wording: "Exception: if content is rebroadcast from another medium or resource that complies to broadcast requirements for accessibility (independent of these guidelines), the rebroadcast satisfies the checkpoint if it complies with the other guidelines."

19 November 2004 text: If multimedia content is rebroadcast from another medium, the accessibility features required by policy for that medium are intact.

Propose that we close the issue. Verify with reviewers.

Issue 792 - Level 1 success criteria ordering and rewording proposal (pending)

This issue contains several proposals, questions, and comments. Here is a summary of responses (more details are available from "Proposal for combined Guideline 1.1 and 1.2", summary of issues for 1.2):

We believe that all of the issues are covered by the 19 November 2004 WD. Propose that we close the issue and verify with reviewers.

Issue 793 - Clarify the Level 2 success criteria (pending)

In the 11 March 2004 WD, the following multi-part exception applies to the Level 1 criteria for captioning and audio description: "if the content is real-time and the content is audio-only and the content is not time-sensitive and the content is not interactive, then a text transcript or other non-audio equivalent does not need to be synchronized with the multimedia content." Also in this draft is a Level 2 criterion for real-time broadcasts with the editorial note: "There are questions about what is possible and what should be required for real-time audio description since there is no way to know when there will be gaps in audio (when descriptions could be read) and other issues with describing real-time events."

Of the Level 2 requirement, the reviewer asks, "What is the additional requirement here? The editorial note does not seem to apply to anything at this level. The note is talking about audio descriptions but the success criteria is about captions."

In the 19 November 2004 WD, the addition of the terms "prerecorded" and "real-time", as well as a variety of other rewrites, attempts to clarify the difference.

Propose that we close the issue. Verify with reviewer.

Issue 980 - Making live broadcast/time-dependent content more accessible (pending)

The reviewer comments on this exception from the 24 June 2003 draft:

When adding audio description to existing materials, the amount of information conveyed through audio description is constrained by the amount of space available in the existing audio track unless the audio/video program is periodically frozen to insert audio description. However, it is often impossible or inappropriate to freeze the audio/visual program to insert additional audio description.

The reviewer's comment:

The note for the first required success criteria highlights a difficulty but does not present a solution. We would like to see this document place more emphasis on content providers to think about how they can make live broadcast/time-dependent content more accessible to deaf and hard of hearing people. For example, in modern subtitling, computer programs are used where the stenographer simply has to press one button to print a particularly common phrase, such as a description of a common pattern of play in sports commentary. Such solutions should be encouraged in the Best Practice of this guideline, without necessarily making them a condition of conformance.

Believe that several changes in the 19 November 2004 WD should address these concerns.

  1. The note in question describes what can be accomplished in "extended audio description." A Level 3 success criterion was created to highlight that this technology exists, but that it is not yet appropriate for all sites: "Extended audio descriptions are provided for prerecorded multimedia."
  2. Definitions were moved to the glossary and are not included as part of the success criteria.

Need to verify with the reviewer that these steps address the issue and if so, close the issue.

Issue 1028 - Collated text transcripts, realtime captioning, and describing (pending)

The reviewer describes the difficulty and rarity of creating collated transcripts. Despite the cost of and lack of support for these techniques, we hope that they will be more readily achievable in the future. Collated text transcripts are a Level 3 criterion (of Guideline 1.1).

Guideline 1.1, Level 3, #1: For multimedia content, a combined transcript of audio descriptions and captions is provided.

Propose that we close the issue. Verify with the reviewer.

Requires further action or discussion

Issue 952 - Ease of access

The reviewer writes, "An audio script is recommended here and 'ease of access' to this script should be stressed."

For Guideline 1.1, we said "ease of access" is a user agent issue. However, since there is not always a means to programmatically associate a transcript with an audio clip, this cannot be left solely to the user agent. It seems to depend on the definition of "explicitly associated" that we are expecting from the 13 January 2004 telecon. It could be a combination of a user agent issue and a markup language issue. Perhaps a repair technique in the meantime? Research needed.
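As an interim illustration of the problem, here is a minimal markup sketch of the kind of "explicit association" authors can manage today: placing a transcript link immediately after the embedded clip. The file names and link text are hypothetical, and this is an authoring workaround rather than a true programmatic association.

    <!-- Hypothetical example: there is no standard attribute that binds a
         transcript to an audio clip, so the transcript link is placed
         immediately after the embedded audio where users can find it. -->
    <object data="interview.mp3" type="audio/mpeg">
      Audio interview with the project lead (3 minutes).
    </object>
    <p><a href="interview-transcript.html">Read the transcript of this interview</a></p>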

Issue 982 - Simultaneous reading and watching required

There is a Note in the 24 June 2003 WD that says, "the presentation does not require the user to read captions and the visual presentation simultaneously in order to understand the content."

The reviewer says:

This point should be a Required Success Criteria. Captions that need to be read at the same time as watching action on the screen do not provide an equivalent user experience.

However, another reviewer (comments not available online) says that this is what watching captions is all about and that the Note should therefore be dropped.

Propose that something be said in the General Techniques or the multimedia-specific techniques.

Issue 983 - Holes in media-equiv

This issue contains several comments.

The definition of “media equivalents” given here is not sufficiently generic. No mention is made, for example, of sign language avatars (this definition is repeated in the Glossary).

The phrase "media equivalents" is no longer used in the normative part of this guideline. "Media alternatives" is used in the benefits section, and in such a way that it could be removed.

Further, the reviewer says:

1. Where subtitles are displayed, the designer should ensure sufficient contrast between foreground text and the background behind it (ideally, the user should be given the option to display a caption box behind the subtitles which has a colour that sufficiently contrasts the colour of the text).

2. A minimum size and recommended font for subtitles should be provided (the Royal National Institute of the Blind recommends a minimum of 16 point Helvetica or Arial font).

3. A minimum audio quality requirement should be specified for all audio description.

4. If a sign language interpreter is to be displayed on-screen, either as streamed video of a human interpreter or in the form of an avatar showing a virtual human, then the layout of the site should allow for this without the avatar window overlapping in such a way that essential functionality or information is being hidden. Based on RNID research, we would recommend that an on-screen interpreter should, at minimum, be displayed in the Common Intermediate Format (CIF) of 352x288 pixels and 25 frames per second.

There may also be different recommendations for closed versus open captions. Note that other reviewers have said to use any font except Arial.
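If these suggestions end up in General Techniques, a minimal open-caption sketch along the lines of the reviewer's first two points might look like the following. The class name, colours, and 16 point sans-serif size are illustrative assumptions, not proposed requirements.

    <!-- Illustrative only: an open caption rendered as HTML text below the
         video, with a contrasting caption box behind it as the reviewer
         suggests. -->
    <div class="caption-box"
         style="background-color: #000000; color: #ffffff;
                font-family: Helvetica, sans-serif; font-size: 16pt;
                padding: 0.25em;">
      Commentator: And that is another corner kick for the home side.
    </div>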

There is concern by another reviewer that requiring sign language starts us down the slippery slope of requiring translations into every language, and that "sign languages are by definition not 'in the language of the dialog[ue]'. There is no dialogue in sign language."

Propose that we close the issue and include the Best Practice suggestions in General Techniques (i.e., close this issue for WCAG 2.0 and open an issue for General Techniques).

Issues 1027, 1154, 1155: providing alternatives for live audio-only and video-only content

Issue 1027 - "equivalents" for multimedia (pending)

From the June 2003 WD:

If the web content is real-time and audio-only and not time-sensitive and not interactive a transcript or other non-audio equivalent is sufficient. [...]

If the web content is real-time non-interactive video (e.g., a webcam of ambient conditions), either provide an equivalent... (e.g., an ongoing update of weather conditions) or link to an equivalent... (e.g., a link to a weather website).

The reviewer writes:

This guideline concerns captioning of web multimedia. Its plain reading requires a transcript of all real-time audio broadcasts. That is, every single Internet radio station would require transcription.

Meanwhile, if you have any kind of webcam at all, you need to scrounge up some other site you can link to that is somehow the “equivalent” of the webcam’s image.

To address issue 1027, a proposal from September 2004 suggests two Level 1 success criteria:

4. A text alternative is provided for live audio-only content by following Guideline 1.1. (Editorial note: an internet radio stream would only need to provide a description of the intent/character of the station, *not* every song they play)

5. A text alternative is provided for live video-only content by following Guideline 1.1. (Editorial note: webcams would only need a text alternative associated with the concept that the cam is pointing at, *not* every image that is captured)

A reviewer writes:

  1. I suppose you mean dialogue-only audio. This essentially requires real-time captioning. Sometimes a post-facto transcript will do, however there is not a standard for providing real-time captions. [Issue 1154 - Real-time captioning of live audio-only content (pending)]
  2. I think this is going to need a much better formulation. Aren't we requiring captioning and, in some cases, description? [Issue 1155 - Alternatives for live video-only content (pending)]

The 19 November 2004 draft attempts to clarify that neither captioning nor a transcript is required, only a description.

Guideline 1.1, Level 1, #6: For live audio-only or live video-only content, such as internet radio or Web cameras, text alternatives describe the purpose of the presentation or a link is provided to alternative real-time content, such as traffic reports for a traffic Web camera

Note: real-time content does not imply real-time captions.
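A minimal markup sketch of what this criterion asks for (the URLs, alt text, and link text are hypothetical): a text alternative that describes the purpose of the Web camera, plus a link to alternative real-time content, rather than captions or a transcript of every captured frame.

    <!-- Hypothetical example for a traffic Web camera: the alt text describes
         what the camera is pointed at, and the adjacent link points to
         alternative real-time content. -->
    <img src="highway-cam.jpg"
         alt="Live view of Highway 101 at the Main Street interchange" />
    <p><a href="http://example.org/traffic-reports">Current traffic reports for
       Highway 101</a></p>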

Propose that we close these issues. Verify with the reviewer.

[However, would David Poehlman agree (issue 1332)? Depends on definition of text alternative.]

Issue 1182 - need better phrases for "synchronized media equivalents" and "time-dependent presentations" (pending)

The reviewer says:

We recommend using other phrases for "synchronized media equivalents" and "time-dependent presentations."

In the 19 November 2004 draft we use "synchronized alternatives" and "multimedia". Propose that we close the issue and notify the reviewer.

[However, the definition of multimedia still needs work and "synchronized alternatives" is not in the glossary.]

Elephants

Issue 1085 - "Respond interactively" not defined.

This is related to the definition of "non-text content" and to the issue of when Guideline 4.2 applies versus when 1.1 or 1.2 apply. We still haven't heard or seen a good example.

Editorial note in 19 November 2004 WD:

How should we address presentations that contain only audio or only video and require users to respond interactively at specific times during the presentation? Since it is not multimedia, a criterion could be added to guideline 1.1. However, the need is for synchronized alternatives, therefore a criterion could be added to this guideline. Refer to Issue 1272.

Related: Issue 1272 - Synchronized alternatives for monomedia

The proposed criterion is, "if there is any time-based interaction with audio or video presentation, alternatives have to be synchronized."

However, the current split between Guidelines 1.1 and 1.2 is that 1.1 addresses text alternatives and 1.2 addresses synchronized alternatives for multimedia. Next steps: find real-world examples and determine requirements, then determine whether we need to expand 1.2 to something like "synchronized alternatives for non-text content" or whether we can fit it into 1.1.

The following are notes and related references.

Based on these examples, I do not propose any changes to Guidelines 1.1, 1.2, or 4.2. The only possible change may be to Guideline 1.4.

Issue 1151 - Scoping requirements, relation to policy

There are many questions about when captions and audio descriptions are required. This issue attempts to consolidate the variety of questions asked about *when* to provide captions and audio descriptions, and the cases where they may not be necessary.

The reviewer suggests that we will need scoping requirements for the following:

The following Editorial Note is in the 19 November 2004 draft:

Even though there are instances where captions and audio descriptions are not required, this version of Guideline 1.2 does not attempt to address the variations. Instead, it assumes more detail is included in the techniques documents and that policy makers will clarify when captions and audio descriptions are required.

Comment from the September 2004 proposal for the media-equiv guideline:

An example of such a phase-in is the Telecommunications Act of 1996, which mandates the number of broadcast hours that must be captioned. The requirement increases to 100% by 1 January 2006 (the 2-6 a.m. period is not included, so 20 of 24 hours counts as 100%). 30% of programs aired before 1 January 1998 must be captioned by 1 January 2003, and 75% by 1 January 2008.

Propose that WCAG 2.0 not attempt to create a phase-in schedule. Instead, we should look at a scoping mechanism that would allow developers to exclude multimedia that hasn't been captioned or described, and leave phase-in schedules to policy makers. However, there is a possibility that scoping could be used to ignore accessibility requirements, and it doesn't make sense to me for someone to claim their site is accessible when it is not. We should stick to what we know: technology. Focus only on creating technology requirements in WCAG 2.0 and leave policy to policy makers. Until we have a scoping mechanism for conformance and several real-world examples showing how to use it, this issue remains open.

Leaving the details to policy was discussed at the 30 September 2004 telecon, and scoping was discussed at the 23 September 2004 telecon. At the July face-to-face, we discussed a policy "guide" for policy makers (12 July 2004 irc log, 13 July 2004 irc log). We currently have a paragraph on "Scoping of Conformance Claims," but a detailed model or example is needed to clarify what is and is not allowed. There are, therefore, a variety of loose ends related to this issue.

Related issues: