Copyright © 2003 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
Authors may, on their own initiative, take actions that result in accessibility problems. The authoring tool should include features that provide support and guidance to the author in these situations, so that accessible authoring practices can be followed and accessible web content can be produced.
This support includes prompting and assisting the author to create accessible web content (Checkpoint 3.1), especially for information that cannot be generated automatically, checking for accessibility problems (Checkpoint 3.2), and assisting in the repair of accessibility problems (Checkpoint 3.3). In performing these functions, the authoring tool must avoid including automatically generated equivalent alternatives or previously authored equivalent alternatives without author consent (Checkpoint 3.4). The authoring tool may also provide automated means for managing equivalent alternatives (Checkpoint 3.5) and provide accessibility status summaries (Checkpoint 3.6).
Accessibility-related documentation provides support and guidance to the author. The documentation must accommodate the various levels of author familiarity with web content accessibility issues. The checkpoint requirements include documenting accessible content promoting features (Checkpoint 3.7), and ensuring that documentation demonstrates authoring practices and workflow processes that result in accessible content (Checkpoint 3.8).
Rationale: Appropriate assistance should increase the likelihood that typical authors will create WCAG-conformant content. Different tool developers will accomplish this goal in ways that are appropriate to their products, processes and authors.
In some authoring situations it may be necessary to prompt (see clarification) or assist (e.g. task automation, entry storage, etc.) authors to follow accessible authoring practices. This is especially true of accessibility problems that require human judgment to remedy, such as adding descriptions to images. In general, it is preferable to begin guiding the author towards the production of accessible content before accessibility problems have actually been introduced. Postponing checking (checkpoint 3.2) and correcting (checkpoint 3.3) may leave the author uninformed of accessibility problems for so long that when the author is finally informed, the full weight of the accumulated problems may be overwhelming.
When information is required of the author, it is crucial that the information be correct and complete. This is most likely when the author has been convinced to provide the information voluntarily. Therefore, overly restrictive mechanisms are not recommended for meeting this checkpoint.
The term prompt in this checkpoint should not be interpreted as necessarily implying intrusive prompts, such as pop-up dialog boxes. Instead, ATAG 2.0 uses prompt in a wider sense, to mean any tool-initiated process of eliciting author input (see the definition of prompting for more information).
The checkpoints in guideline 4 place additional requirements on how prompting is implemented.
Technique 3.1.1: Use an appropriate prompting and assisting mechanism.
3.1.1(1): Prompting and assisting for short text labels (e.g. alternate text, titles, short text metadata fields, rubies for ideograms):
Example 3.1.1(1a): This illustration shows an authoring interface for description reuse. It consists of a drop-down list that is shown with several short labels for the same image. Notice that one of the labels in the list is in a different language (French). The author must be able to create a new label if the stored strings are not appropriate. (Source: mockup by AUWG)
Example 3.1.1(1b): This illustration shows a code-based authoring interface for short text label prompting. The author has just typed quotation marks (") to close the …
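The reuse interface in Example 3.1.1(1a) can be sketched in code. The following TypeScript fragment is purely illustrative: the LabelStore and promptForAltText names are hypothetical, not part of ATAG. The essential behaviors are that stored labels are offered only as suggestions, the author can always create a new label, and nothing is inserted without author confirmation (compare Checkpoint 3.4).

    // Illustrative sketch of a short text label prompt with reuse
    // (Example 3.1.1(1a)). All names here are hypothetical.
    interface LabelStore {
      // Previously authored labels, keyed by image identity
      // (e.g. a file path or content hash).
      suggestionsFor(imageId: string): string[];
      remember(imageId: string, label: string): void;
    }

    // Elicit a label: offer stored suggestions, but let the author type a
    // new one, and never insert a label without author confirmation.
    async function promptForAltText(
      imageId: string,
      store: LabelStore,
      ask: (suggestions: string[]) => Promise<string | null>
    ): Promise<string | null> {
      const suggestions = store.suggestionsFor(imageId);
      const label = await ask(suggestions); // null means the author declined
      if (label !== null && label.trim() !== "") {
        store.remember(imageId, label);
      }
      return label;
    }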
3.1.1(2): Prompting and assisting for multiple text labels (e.g. image map area labels):
Example 3.1.1(2): This illustration shows an authoring interface for image map area text label prompting. It consists of a list with two columns. The right-hand column shows the URL for each image map area, which the author can use as a hint when filling in the label text entry in the left-hand column. A checkbox at the bottom provides the option of using this label text to create a set of text links below the image map. (Source: mockup by AUWG)
3.1.1(3): Prompting and assisting for long text descriptions (e.g. longdesc text, table summaries, site information, long text metadata fields):
Example 3.1.1(3): This illustration shows an authoring interface for long text description prompting. A "description required" checkbox controls whether the rest of the interface is available. If a description is required, the author then has the choice of opening an existing description file or writing (and saving) a new one. (Source: mockup by AUWG)
3.1.1(4): Prompting and assisting for form field labels:
Example 3.1.1(4): This illustration shows a form properties list that allows the author to simultaneously decide the field labels, tab order, form field placeholders and accesskeys. (Source: mockup by AUWG)
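One way to picture how the properties in Example 3.1.1(4) translate into markup is sketched below. This TypeScript fragment is an assumption-laden illustration (the FieldProperties shape and renderLabelledField function are invented for this sketch): it emits an explicit label-control association plus the author's chosen tab order and accesskey.

    // Illustrative only: turn the author's choices from a form properties
    // list into labelled, keyboard-friendly markup.
    interface FieldProperties {
      id: string;         // form control id
      label: string;      // label text chosen by the author
      tabIndex: number;   // author-chosen tab order
      accessKey?: string; // optional keyboard shortcut
    }

    function renderLabelledField(p: FieldProperties): string {
      const accesskey = p.accessKey ? ` accesskey="${p.accessKey}"` : "";
      return (
        `<label for="${p.id}">${p.label}</label>\n` +
        `<input id="${p.id}" name="${p.id}" tabindex="${p.tabIndex}"${accesskey}>`
      );
    }

    // renderLabelledField({ id: "email", label: "Email address",
    //                       tabIndex: 1, accessKey: "e" })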
3.1.1(5): Prompting and assisting for form field placeholders:
3.1.1(6): Prompting and assisting for TAB order sequence:
3.1.1(7): Prompting and assisting for navigational shortcuts (e.g. keyboard shortcuts, skip links, voice commands, etc.):
Example 3.1.1(7b): This illustration shows a code-based authoring interface suggesting accesskey values. Notice that "m" is the first suggestion, as it is the first letter of the link text, "moon". "c" does not appear in the list as it is already used elsewhere in the document. (Source: mockup by AUWG)
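The suggestion behavior described in Example 3.1.1(7b) reduces to a small algorithm: walk the link text, propose letters in order of appearance, and skip any key already assigned elsewhere in the document. A minimal TypeScript sketch (the function name is invented here):

    // Propose accesskey values drawn from the link text, skipping keys
    // already used in the document (so "c" would be omitted, as in the
    // example) and duplicate letters.
    function suggestAccessKeys(linkText: string, usedKeys: Set<string>): string[] {
      const seen = new Set<string>();
      const suggestions: string[] = [];
      for (const ch of linkText.toLowerCase()) {
        if (ch < "a" || ch > "z") continue;              // letters only
        if (usedKeys.has(ch) || seen.has(ch)) continue;  // taken or duplicate
        seen.add(ch);
        suggestions.push(ch);                            // "m" first for "moon"
      }
      return suggestions;
    }

    // suggestAccessKeys("moon", new Set(["c"])) -> ["m", "o", "n"]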
3.1.1(8): Prompting and assisting for contrasting colors:
Example 3.1.1(8): This illustration shows an authoring interface for choosing a text color. The palette has been filtered so that sufficient contrast between the text and the current background color is assured. (Source: mockup by AUWG)
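The palette filtering in Example 3.1.1(8) presupposes a numeric definition of "sufficient contrast". ATAG itself does not fix a formula; the sketch below uses the relative luminance and contrast ratio definitions later standardized in WCAG 2.x, which is one reasonable choice.

    type RGB = [number, number, number]; // each channel 0-255

    // Relative luminance with sRGB linearization (WCAG 2.x definition,
    // used here as an illustrative choice).
    function luminance([r, g, b]: RGB): number {
      const lin = (c: number) => {
        const s = c / 255;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      };
      return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
    }

    // Contrast ratio (lighter + 0.05) / (darker + 0.05), range 1-21.
    function contrastRatio(a: RGB, b: RGB): number {
      const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Keep only palette entries with sufficient contrast against the
    // current background color, as in the filtered palette of the example.
    function filterPalette(palette: RGB[], background: RGB, min = 4.5): RGB[] {
      return palette.filter((c) => contrastRatio(c, background) >= min);
    }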
3.1.1(9): Prompting and assisting for alternative resources for multimedia (transcripts, captions, video transcripts, audio descriptions, signed translations, still images, etc.):
Example 3.1.1(9): This illustration shows an authoring interface for embedding a video. The tool automatically detects whether captions, a video transcript, described audio, or a signed translation are present. For some items, links to utilities to create them are available. (Source: mockup by AUWG)
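The detection step in Example 3.1.1(9) can be sketched as follows. The MediaResource shape is an assumption made for this illustration; a real tool would inspect the media format or look for companion files (caption tracks, transcript files, etc.).

    // Hypothetical record of which equivalent alternatives accompany a
    // piece of multimedia, however the tool detects them.
    interface MediaResource {
      captions: boolean;
      transcript: boolean;
      audioDescription: boolean;
      signedTranslation: boolean;
    }

    // Report the missing alternatives; the tool can link each item to a
    // utility for creating it, as in the example.
    function missingAlternatives(media: MediaResource): string[] {
      const missing: string[] = [];
      if (!media.captions) missing.push("captions");
      if (!media.transcript) missing.push("transcript");
      if (!media.audioDescription) missing.push("audio description");
      if (!media.signedTranslation) missing.push("signed translation");
      return missing;
    }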
3.1.1(10): Prompting and assisting for metadata:
3.1.1(11): Prompting and assisting for document structure:
Example 3.1.1(11): This illustration shows a tool that prompts for structural information. (Source: mockup by AUWG)
3.1.1(12): Prompting and assisting for tabular structure:
Example 3.1.1(12): This illustration shows a tool that prompts the author as to whether the top row of a table is a row of table headers. (Source: mockup by AUWG)
3.1.1(13): Prompting and assisting for style sheets:
3.1.1(14): Prompting and assisting for clearly written text:
Example 3.1.1(14): This illustration shows an authoring interface for indicating the reading level of a page and whether it exceeds a limit determined by the author's preference settings. (Source: mockup by AUWG)
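A reading-level indicator like the one in Example 3.1.1(14) needs some readability metric. ATAG does not prescribe one; the sketch below uses the Flesch-Kincaid grade formula purely as an example, with a very rough syllable count, and all names invented for this sketch.

    // Very rough syllable estimate: count vowel groups, minimum one.
    function countSyllables(word: string): number {
      const groups = word.toLowerCase().match(/[aeiouy]+/g);
      return Math.max(1, groups ? groups.length : 1);
    }

    // Flesch-Kincaid grade level, used here only as an example metric.
    function fleschKincaidGrade(text: string): number {
      const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
      const words = text.split(/\s+/).filter((w) => w.length > 0);
      if (words.length === 0) return 0;
      const syllables = words.reduce((n, w) => n + countSyllables(w), 0);
      return 0.39 * (words.length / sentences)
           + 11.8 * (syllables / words.length)
           - 15.59;
    }

    // Compare against the limit from the author's preference settings.
    function exceedsPreferredLevel(text: string, maxGrade: number): boolean {
      return fleschKincaidGrade(text) > maxGrade;
    }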
3.1.1(15): Prompting and assisting for device independent handlers: (example needed)
3.1.1(16): Prompting and assisting for non-text supplements to text:
Example 3.1.1(16): This illustration shows an authoring interface that prompts the author as to whether a number-rich paragraph might be made clearer with the addition of a chart or graph. (Source: mockup by AUWG)
3.1.1(17): Prompting and assisting for other types of accessibility information:
3.1.1(18): Prompting and assisting the author to make use of up-to-date formats (i.e. use technologies according to specification): This is likely to be handled by the choices made by the tool developers. General-purpose text editors (e.g. emacs) would need to make technology selection recommendations.
Technique 3.1.2: The tool can provide multiple preview modes and a warning to authors that there are many other less predictable ways in which a page may be presented (aurally, text-only, text with pictures displayed separately, on a small screen, on a large screen, etc.). One possible set of document views is shown in the example below.
Example 3.1.2: This illustration shows a WYSIWYG authoring interface with a list of rendering options displayed. The options include "All" (i.e. render as in a generic browser), "text-only" (i.e. non-text items replaced by textual equivalents), "no styles", "no frames" and "grayscale" (used to check for sufficient contrast). (Source: mockup by AUWG)
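One of the preview modes listed in Example 3.1.2, "text-only", can be approximated with a simple transformation. The sketch below is illustrative (a real tool would re-render from its internal model rather than rewrite markup): it replaces each image with its text equivalent and makes a missing equivalent visible as a problem.

    // "Text-only" preview: replace each <img> with its alt text, or with
    // a visible placeholder when alt is missing (which itself signals an
    // accessibility problem to the author).
    function textOnlyPreview(html: string): string {
      return html.replace(/<img\b[^>]*>/gi, (tag) => {
        const alt = /alt\s*=\s*"([^"]*)"/i.exec(tag);
        return alt ? `[${alt[1]}]` : "[IMAGE: no text equivalent]";
      });
    }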
Despite prompting assistance from the tool (see Checkpoint 3.1), accessibility problems may still be introduced. For example, the author may cause accessibility problems by hand coding or by opening content with existing accessibility problems for editing. In these cases, the prompting and assistance mechanisms that operate when markup is added or edited (i.e. insertion dialogs and property windows) must be backed up by a more general checking system that can detect and alert the author to problems anywhere within the content (e.g. attribute, element, programmatic object, etc.). It is preferable that this checking mechanism be well integrated with correction mechanisms (see Checkpoint 3.3), so that when the checking system detects a problem and informs the author, the tool can immediately offer assistance.
The checkpoints in guideline 4 place additional requirements on how checking is implemented.
Technique 3.2.1: Automate as much checking as possible. Where necessary, provide semi-automated checking. Where neither of these options is reliable, provide manual checking.
1. Automated: In automated checking, the tool is able to check for accessibility problems automatically, with no human intervention required. This type of check is usually appropriate for checks of a syntactic nature, such as the use of deprecated elements or a missing attribute, in which the meaning of text or images does not play a role.
Example 3.2.1(a): This illustration shows a summary interface for a code-based authoring tool that displays the results of an automated check. (Source: mockup by AUWG)
Example 3.2.1(b): This illustration shows an interface that displays the results of an automated check in a WYSIWYG authoring view, using blue squiggly highlighting around or under rendered elements to identify accessibility problems for the author to correct. (Source: mockup by AUWG)
Example 3.2.1(c): This illustration shows an authoring interface of an automated check in a code-level authoring view. In this view, the text of elements with accessibility problems is shown in a blue font, instead of the default black font. (Source: mockup by AUWG)
2. Semi-Automated: In semi-automated checking, the tool is able to identify potential problems, but still requires human judgment by the author to make a final decision on whether an actual problem exists. Semi-automated checks are usually most appropriate for problems that are semantic in nature, such as descriptions of non-text objects, as opposed to purely syntactic problems, such as missing attributes, that lend themselves more readily to full automation.
Example 3.2.1(d): This illustration shows a dialog box that appears once the tool has detected an image without a description attribute. However, since not all images require a description, the author is prompted to make the final decision. The author can confirm that this is indeed an accessibility problem and move on to the repair stage by choosing "Yes". (Source: mockup by AUWG)
3. Manual: In manual checking, the tool provides the author with instructions for detecting a problem, but does not automate the task of detecting the problem in any meaningful way. As a result, the author must decide on their own whether or not a problem exists. Manual checks are discouraged because they are prone to human error, especially when the type of problem in question may be easily detected by a more automated utility, such as an element missing a particular attribute.
Example 3.2.1(e): This illustration shows a dialog box that reminds the author to check if there are any words in other languages in the document. The author can move on to the repair stage by pressing "Yes". (Source: mockup by AUWG)
Technique 3.2.2: The Techniques For Accessibility Evaluation And Repair Tools [AERT] Public Working Draft document can be consulted for evaluation and repair algorithms related to WCAG 1.0.
Once a problem has been detected by the author or, preferably, by the tool (see Checkpoint 3.2), the tool may assist the author to correct the problem. As with accessibility checking, the extent to which accessibility correction can be automated depends on the nature of the particular problems. Some repairs are easily automated, whereas others that require human judgment may be semi-automated at best.
The checkpoints in guideline 4 place additional requirements on how correcting is implemented.
Technique 3.3.1: Automate as much repairing as possible. Where necessary, provide semi-automated repairing. Where neither of these options is reliable, provide manual repairing.
1. Automated: In automated repairing, the tool is able to make repairs automatically, with no author input required. For example, a tool may be capable of automatically adding a document type to the header of a file that lacks this information. In these cases, very little, if any, author notification is required. This type of repair is usually appropriate for corrections of a syntactic or repetitive nature.
Example 3.3.1(a): This illustration shows a sample of an announcement that an automated repair has been completed. An "undo" button is provided in case the author wishes to reverse the operation. In some cases, automated repairs might be completed with no author notification at all. (Source: mockup by AUWG)
2. Semi-Automated: In semi-automated repairing, the tool can provide some automated assistance to the author in performing corrections, but the author's input is still required before the repair can be complete. For example, the tool may prompt the author for a plain text string, but then be capable of handling all the markup required to add the text string to the content. In other cases, the tool may be able to narrow the choice of repair options, but still rely on the author to make the final selection. This type of repair is usually appropriate for corrections of a semantic nature.
Example 3.3.1(b): This illustration shows a sample of a semi-automated repair in a WYSIWYG editor. The author has right-clicked on an image highlighted by the automated checker system. The author must then decide whether the label text that the tool suggests is appropriate. Whichever option the author chooses, the tool will handle the details of updating the content. (Source: mockup by AUWG)
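The division of labor in Example 3.3.1(b) — the author judges, the tool edits — can be sketched like this. confirmWithAuthor stands in for whatever confirmation UI the tool provides; the names and shapes are invented for this illustration.

    // Semi-automated repair: the author confirms or edits the suggested
    // label text; the tool handles all markup details, including escaping.
    async function repairMissingAlt(
      imgTag: string,
      suggested: string,
      confirmWithAuthor: (suggestion: string) => Promise<string | null>
    ): Promise<string> {
      const text = await confirmWithAuthor(suggested); // author edits or rejects
      if (text === null) return imgTag; // author declined; leave content as-is
      const escaped = text.replace(/"/g, "&quot;");
      return imgTag.replace(/\s*\/?>$/, (end) => ` alt="${escaped}"${end}`);
    }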
3. Manual: In manual repairing, the tool provides the author with instructions for making the necessary correction, but does not automate the task in any substantial way. For example, the tool may move the cursor to the start of the problem, but since this is not a substantial automation, the repair would still be considered "manual". Manual correction tools leave it up to the author to follow the instructions and make the repair by themselves. This is the most time consuming option for authors and allows the most opportunity for human error.
Example 3.3.1(c): This illustration shows a sample manual repair. The problems have already been detected in the checking step and the selected offending elements in a code view have been highlighted. However, when it comes to repairing the problem, the only assistance that the tool provides is a context sensitive hint. The author is left to make sense of the hint and perform the repair without any automated assistance. (Source: mockup by AUWG)
Technique 3.3.2: Consider implementing a special-purpose correcting interface. When problems require some human judgment, the simplest solution is often to display the property editing mechanism for the offending element. This has the advantage that the author is already somewhat familiar with the interface. However, this practice suffers from the drawback that it does not necessarily focus the author's attention on the dialog control(s) that are relevant to the required correction. Another option is to display a special-purpose correction utility that includes only the input field(s) for the information currently required. A further advantage of this approach is that additional information and tips that the author may require in order to properly provide the requested information can be easily added. Notice that in the figure, a drop-down edit box has been used for the short text label field. This technique might be used to allow the author to select from text strings used previously for the alt-text of this image (see ATAG Checkpoint 3.5 for more).
Example 3.3.2: This illustration shows a sample of a special-purpose correction interface. The tool supports the author's repair task by providing a description of the problem, a preview (in this case of the image missing a label), tips for performing the repair, possible repair options (archived from previous repairs) and other information (in this case the name of the image file). (Source: mockup by AUWG)
Technique 3.3.3: Checks can be automatically sequenced. In cases where there are likely to be many accessibility problems, it may be useful to implement a checking utility that presents accessibility problems and repair options in a sequential manner. This may take a form similar to a configuration wizard or a spell checker (see Figure 3.3.5). In the case of a wizard, a complex interaction is broken down into a series of simple sequential steps that the author can complete one at a time. The later steps can then be updated "on-the-fly" to take into account the information provided by the author in earlier steps. A checker is a special case of a wizard in which the number of detected errors determines the number of steps. For example, word processors have checkers that display all the spelling problems one at a time in a standard template with places for the misspelled word, a list of suggested words, and a "change to" word. The author also has correcting options, some of which can store responses to affect how the same situation is handled later. In an accessibility problem checker, sequential prompting is an efficient way of correcting problems. However, because of the wide range of problems the checker needs to handle (i.e. missing text, missing structural information, improper use of color, etc.), the interface template will need to be even more flexible than that of a spell checker. Nevertheless, the template is still likely to include areas for identifying the problem (WYSIWYG or code-based, according to the tool), suggesting multiple solutions, and choosing between or creating new solutions. In addition, the dialog may include context-sensitive instructive text to help the author with the current correction. The author should also be able to tell what criteria the tool uses to sequence the checks.
Example 3.3.3: This illustration shows an example of a sequential accessibility checker. The special-purpose correction interface from Example 3.3.2 is supplemented with navigational controls for moving backwards and forwards through the list of repair tasks. (Source: mockup by AUWG)
Technique 3.3.4: When authoring tools produce content in real time, it is usually no longer possible to delay addressing accessibility problems until an arbitrary point in the future. At the same time, due to the time pressure, authors in real-time environments tend to be less receptive to intrusive prompts. Nevertheless, tools that allow this kind of authoring (see Figure 3.3.6) should still take accessibility issues into account. If it has been determined that the author must provide real-time supplements, but no preparation time or assistant author is available, then in addition to allowing the author control of the nature and timing of prompting, the authoring tool can facilitate the inclusion of supplements.
Example 3.3.4: This illustration shows a real-time presentation in a whiteboard/chat environment. Notice the functionality by which the presenter or an assistant/peer author can describe the events on the whiteboard even as the dialog continues. (Source: mockup by AUWG)
Technique 3.3.5: Where a tool is able to detect site-wide errors, allow the author to make site-wide corrections. This should not be used for equivalent alternatives when the function is not known with certainty (see ATAG Checkpoint 3.4).
Technique 3.3.6: Provide a mechanism for authors to navigate sequentially among uncorrected accessibility errors. This allows the author to quickly scan accessibility problems in context.
Technique 3.3.7: The Techniques For Accessibility Evaluation And Repair Tools [AERT] Public Working Draft document can be consulted for evaluation and repair algorithms related to WCAG 1.0.
Technique 3.4.2: If human-authored equivalent alternatives are available for an object (for example, through management functionality (ATAG checkpoint 3.5) and/or equivalent alternatives bundled with pre-authored content (ATAG checkpoint 2.6)), then the equivalent alternatives can be used in both semi-automated repair processes and automated repair processes, as long as the function of the object is known with certainty. The function of an instance of an object can only be considered to be known with certainty under limited conditions (for example, when the author has stored semantic role information for that instance; see Technique 3.4.3).
Technique 3.4.3: Allow the author to store semantic role information for instances of objects.
Technique 3.4.4: If human-authored equivalent alternatives are available for an object and that object is used for a function that is not known with certainty, tools can offer the equivalent alternatives to the author as defaults in semi-automated repair processes, but not in fully automated repair processes.
Technique 3.4.5: Where an object has already been used in a document, the tool can offer the alternative information that was supplied for the first or most recent use as a default.
Technique 3.4.6: If the author changes the alternative content, the tool can ask the author whether all instances of the object with the same known function should have their alternative content updated with the new value.
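Techniques 3.4.2 through 3.4.6 amount to a single decision rule: automated reuse only when the object's function is known with certainty, otherwise offer the stored text as an editable default. A hedged TypeScript sketch of that rule, with shapes invented here:

    // Hypothetical record of a stored, human-authored alternative.
    interface StoredAlternative {
      text: string;
      functionKnownWithCertainty: boolean; // e.g. from stored role information
    }

    // Decide between automated reuse, a semi-automated default, and a
    // plain prompt, per Techniques 3.4.2, 3.4.4 and 3.4.5.
    function applyAlternative(
      stored: StoredAlternative | undefined,
      askAuthor: (defaultText: string) => Promise<string | null>
    ): Promise<string | null> {
      if (!stored) return askAuthor("");        // nothing stored: plain prompt
      if (stored.functionKnownWithCertainty) {
        return Promise.resolve(stored.text);    // automated reuse is acceptable
      }
      return askAuthor(stored.text);            // default offered, not automatic
    }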
Note: This checkpoint is priority 3 and is, therefore, not required to be implemented in order for a tool to conform to ATAG 2.0 at the single-A and double-A levels. However, implementing this checkpoint has the potential to simplify the satisfaction of several higher priority checkpoints (ATAG checkpoint 3.1, ATAG checkpoint 3.2, and ATAG checkpoint 3.3) and improve the usability of the tool.
Technique 3.5.1: A registry can be maintained that associates object identity information with alternative information (this could be done with the Resource Description Framework (RDF) [RDF10]). Whenever an object is used and an equivalent alternative is collected (see ATAG Checkpoint 3.1), the object (or identifying information) and the alternative information can be added to the registry. In the case of a text equivalent, the alternate information can be stored in the document source. For more substantial information (such as video captions or audio descriptions), the information can be stored externally and linked from the document source. Several different versions of alternative information can be associated with a single object.
Example 3.5.1: This illustration shows a text equivalents registry viewer that a tool can include to allow the author to query and edit the various text equivalents stored in the registry. For maximum flexibility, the design takes into account multiple non-text objects of the same name, multiple types of text equivalents for each non-text object, and multiple versions of each text equivalent type. (Source: mockup by AUWG)
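The registry of Technique 3.5.1 can be pictured as a small data structure. ATAG suggests RDF as one possible serialization; the in-memory sketch below is an illustrative stand-in that, like the viewer in Example 3.5.1, supports several alternative types per object and several versions of each.

    type AlternativeType = "alt" | "longdesc" | "caption" | "audio-description";

    // Illustrative in-memory registry; a real tool might serialize this
    // as RDF [RDF10] or store larger alternatives externally.
    class AlternativesRegistry {
      // objectId (e.g. filename or content hash) -> type -> versions
      private store = new Map<string, Map<AlternativeType, string[]>>();

      add(objectId: string, type: AlternativeType, value: string): void {
        const byType =
          this.store.get(objectId) ?? new Map<AlternativeType, string[]>();
        const versions = byType.get(type) ?? [];
        versions.push(value);
        byType.set(type, versions);
        this.store.set(objectId, byType);
      }

      lookup(objectId: string, type: AlternativeType): string[] {
        return this.store.get(objectId)?.get(type) ?? [];
      }
    }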
Technique 3.5.2: Stored alternative information can be presented to the author as default text in the appropriate field, whenever one of the associated files is inserted into the author's document. This satisfies ATAG Checkpoint 3.4 because the equivalent alternatives are not automatically generated and they are only reused with author confirmation.
Technique 3.5.3: If no stored association is found in the registry, the field can be left empty.
Technique 3.5.4: The stored alternative information required for pre-authored content (see ATAG Checkpoint 2.6) may be part of the management system, allowing the alternative equivalents to be retrieved whenever the pre-authored content is inserted.
Technique 3.5.5: Tools may allow authors to make keyword searches of a description database (to simplify the task of finding relevant images, sound files, etc.). A paper describing a method to create searchable databases for video and audio files is available (refer to [SEARCHABLE]).
Technique 3.6.1: A list of all accessibility errors found in the content (e.g. selection, document, site, etc.) can be provided.
Technique 3.6.2: A summary of accessibility problems remaining, by type and/or by number, can be provided.
Technique 3.6.3: Evaluation and Repair Language [EARL] can be used to store accessibility status information in an interoperable form.
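Techniques 3.6.1 through 3.6.3 can be sketched together: counting remaining problems by type, and exporting a result in an EARL-flavored form. The object emitted below is a loose illustration of EARL's assertion/subject/test/result vocabulary, not a faithful serialization of the EARL schema; the Finding shape is invented for this sketch.

    interface Finding {
      type: string;     // e.g. "missing-alt", "deprecated-element"
      location: string; // where in the content the problem was found
    }

    // Summary by type (Technique 3.6.2).
    function summarizeByType(findings: Finding[]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const f of findings) {
        counts.set(f.type, (counts.get(f.type) ?? 0) + 1);
      }
      return counts;
    }

    // Loose, EARL-like export (Technique 3.6.3); illustrative shape only.
    function toEarlLikeAssertion(f: Finding, subject: string): object {
      return {
        "@type": "Assertion",
        subject,
        test: f.type,
        result: { outcome: "failed", pointer: f.location },
      };
    }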
The checkpoints in guideline 4 place additional requirements on how accessibility-related documentation is provided.
Technique 3.8.1: Include relevant accessible authoring practices in examples. [STRONGLY SUGGESTED]
|
Example 3.8.1: This illustration shows documentation
for the |
|
Technique 3.8.2: In the documentation, ensure that all code examples pass the tool's own accessibility checking mechanism (see Checkpoint 3.2).
Technique 3.8.3: In the documentation, provide at least one model of each accessibility practice in the relevant WCAG techniques document for each language supported by the tool. Include all levels of accessibility practices.
Technique 3.8.4: Plug-ins that update accessibility features of a tool should also update the documentation examples.
Technique 3.8.5: Implement context-sensitive help for accessibility terms as well as tasks related to accessibility.
Technique 3.8.6: Provide a tutorial on checking for and correcting accessibility problems.
Technique 3.8.7: Include pointers to more information on accessible Web authoring, such as WCAG and other accessibility-related resources.
Technique 3.8.8: Include current versions of, or links to, relevant language specifications in the documentation. This is particularly relevant for languages that are easily hand-edited, such as most XML languages.
Technique 3.8.9: Provide links from within the help text to relevant automated correction utilities.
Technique 3.9.1: Document the sequence of steps that the author should take, using the tool, in order to increase the likelihood of producing accessible content. This should take account of any idiosyncrasies of the tool.
Technique 3.9.2: Explain the importance of accessibility for a wide range of content consumers, from those with disabilities to those with alternative viewers. Consider emphasizing points in "Auxiliary Benefits of Accessibility Features", a W3C-WAI resource.
Technique 3.9.3: Avoid referring to accessibility features as being exclusively for particular groups (e.g. "for blind authors").
Technique 3.9.4: In addition to including accessibility information throughout the documentation, provide a dedicated accessibility section.
Note: Meeting this success criterion will not suffice to meet the checkpoints related to the missing accessibility-related feature.
Technique 3.9.5: Tools that lack an accessibility checking and/or repair feature may point to the relevant WCAG Techniques document.