Contents | Guideline 1 | Guideline 2 | Guideline 3 | Guideline 4 | References
Copyright © 2003 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
Actions may be taken at the author's initiative that may result in accessibility problems. The authoring tool should include features that provide support and guidance to the author in these situations, so that accessible authoring practices can be followed and accessible web content can be produced.
This support includes prompting and assisting the author to create accessible web content (Checkpoint 3.1), especially for information that cannot be generated automatically, checking for accessibility problems (Checkpoint 3.2), and assisting in the repair of accessibility problems (Checkpoint 3.3). In performing these functions, the authoring tool must avoid including automatically generated equivalent alternatives or previously authored equivalent alternatives without author consent (Checkpoint 3.4). The authoring tool may also provide automated means for managing equivalent alternatives (Checkpoint 3.5) and provide accessibility status summaries (Checkpoint 3.6).
Accessibility-related documentation provides support and guidance to the author. The documentation must accommodate the varying levels of author familiarity with web content accessibility issues. The checkpoint requirements include documenting the features of the tool that promote accessible content (Checkpoint 3.7) and ensuring that documentation demonstrates authoring practices and workflow processes that result in accessible content (Checkpoint 3.8).
@@BF: prompting for accessibility localization@@
@@we must ensure accessibility of examples (i.e. that they meet GL1)@@
@@more help designing well - e.g. alternatives to image maps@@
Rationale: Appropriate assistance should increase the likelihood that typical authors will create WCAG-conformant content. Different tool developers will accomplish this goal in ways that are appropriate to their products, processes and authors.
In some authoring situations it may be necessary to prompt (see clarification) or assist (e.g. task automation, entry storage, etc.) authors to follow accessible authoring practices. This is especially true of accessibility problems that require human judgment to remedy, such as adding descriptions to images. In general, it is preferable to begin guiding the author towards the production of accessible content before accessibility problems have actually been introduced. Postponing checking (checkpoint 3.2) and correcting (checkpoint 3.3) may leave the author uninformed of accessibility problems for so long that when the author is finally informed, the full weight of the accumulated problems may be overwhelming.
When information is required of the author, it is crucial that the information be correct and complete. This is most likely to occur if the author has been convinced to provide the information voluntarily. Therefore, overly restrictive mechanisms are not recommended for meeting this checkpoint.
The term prompt in this checkpoint should not be interpreted as necessarily implying intrusive prompts, such as pop-up dialog boxes. Instead, ATAG 2.0 uses prompt in a wider sense, to mean any tool-initiated process of eliciting author input (see the definition of prompting for more information).
The checkpoints in guideline 4 require that implementations of prompting be:
Technique 3.1.1: Use an appropriate prompting and assisting mechanism

3.1.1(1): Prompting and assisting for short text labels (e.g. alternate text, titles, short text metadata fields, rubies for ideograms):
Example 3.1.1(1a): This illustration shows an authoring interface for description reuse. It comprises a drop-down list showing several short labels for the same image. Notice that one of the labels in the list is in a different language (i.e. French). The author must be able to create a new label if the stored strings are not appropriate. (Source: mockup by AUWG)
Example 3.1.1(1b): This illustration shows a code-based authoring interface for short text label prompting. The author has just typed quotation marks (") to close the
3.1.1(2): Prompting and assisting for multiple text labels (e.g. image map area labels):
Example 3.1.1(2): This illustration shows an authoring interface for image map area text label prompting. It comprises a list with two columns. In the right-hand column is the URL for each image map area. This can be used as a hint by the author as they fill in a label text entry in the left-hand column. A checkbox at the bottom provides the option of using this label text to create a set of text links below the image map. (Source: mockup by AUWG)
3.1.1(3): Prompting and assisting for long text descriptions (e.g. longdesc text, table summaries, site information, long text metadata fields):
Example 3.1.1(3): This illustration shows an authoring interface for long text description prompting. A "description required" checkbox controls whether the rest of the interface is available. If a description is required, the author then has the choice of opening an existing description file or writing (and saving) a new one. (Source: mockup by AUWG)
3.1.1(4): Prompting and assisting for form field labels:
Example 3.1.1(4): This illustration shows a form properties list that allows the
3.1.1(5): Prompting and assisting for form field place-holders:
3.1.1(6): Prompting and assisting for TAB order sequence:
@@needs its own example@@
3.1.1(7): Prompting and assisting for navigational shortcuts (e.g. keyboard shortcuts, skip links, voice commands, etc.):
Example 3.1.1(7b): This illustration shows a code-based authoring interface suggesting accesskey values. Notice that "m" is the first suggestion, as it is the first letter of the link text, "moon". "c" does not appear in the list as it is already used elsewhere in the document. (Source: mockup by AUWG)
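The suggestion behaviour in this example can be sketched as follows. This is an illustrative sketch only: the function name and the candidate-ordering policy (letters from the link text first, then the rest of the alphabet) are assumptions, not part of ATAG.

```python
import string

def suggest_accesskeys(link_text, used_keys):
    """Suggest accesskey candidates for a link. Letters drawn from the
    link text come first (so "moon" yields "m" before anything else),
    followed by the rest of the alphabet; keys already assigned
    elsewhere in the document are excluded entirely."""
    used = {k.lower() for k in used_keys}
    suggestions = []
    for ch in link_text.lower() + string.ascii_lowercase:
        if ch.isalpha() and ch not in used and ch not in suggestions:
            suggestions.append(ch)
    return suggestions

# For the link text "moon" with "c" already assigned in the document,
# "m" is the first suggestion and "c" never appears.
keys = suggest_accesskeys("moon", ["c"])
```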
3.1.1(8): Prompting and assisting for contrasting colors:
Example 3.1.1(8): This illustration shows an authoring interface for choosing a text color. The palette has been filtered so that sufficient contrast between the text and the current background color is assured. (Source: mockup by AUWG)
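The filtering behind such a palette can be sketched with the WCAG contrast-ratio formula (relative luminance of each colour, then the ratio of the lighter to the darker). The function names and the 4.5:1 default threshold for normal text are assumptions made for illustration.

```python
def _luminance(rgb):
    # WCAG relative luminance of an sRGB colour given as (r, g, b) in 0-255.
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of the lighter luminance to the darker, offset by 0.05 each.
    l1, l2 = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def filter_palette(palette, background, minimum=4.5):
    # Keep only palette colours with sufficient contrast against the background.
    return [c for c in palette if contrast_ratio(c, background) >= minimum]

# Black passes against a white background; a light gray does not.
visible = filter_palette([(0, 0, 0), (200, 200, 200)], background=(255, 255, 255))
```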
3.1.1(9): Prompting and assisting for alternative resources for multimedia (transcripts, captions, video transcripts, audio descriptions, signed translations, still images, etc.):
Example 3.1.1(9): This illustration shows an authoring interface for embedding a video. The tool automatically detects whether captions, video transcript, described audio, or signed translation are present. For some items, links to utilities to create them are available. (Source: mockup by AUWG)
3.1.1(10): Metadata: [@@changed, metadata is actually just a special case of all the other things in this list@@] [@@OPEN WORK ITEM: LEFT HERE AS A PLACEHOLDER@@]
3.1.1(11): Prompting and assisting for document structure: [@@OPEN WORK ITEM: this could be greatly expanded to take into account highly structured (e.g. XSLT) authoring @@]
Example 3.1.1(11): This illustration shows a tool that prompts for structural information. (Source: mockup by AUWG)
3.1.1(12): Prompting and assisting for tabular structure:
Example 3.1.1(12): This illustration shows a tool that prompts the author as to whether the top row of a table is a row of table headers. (Source: mockup by AUWG)
@@OPEN WORK ITEM: 3.1.1(13): Prompting and assisting for style sheets: BUCKET OF LOW DETAIL STUFF@@
3.1.1(14): Prompting and assisting for clearly written text: [@@changed@@]
Example 3.1.1(14): This illustration shows an authoring interface for indicating the reading level of a page and whether it exceeds a limit determined by the author's preference settings. (Source: mockup by AUWG)
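One common way a tool might estimate a page's reading level is the Flesch-Kincaid grade formula. The sketch below is illustrative: the syllable count is a deliberately crude vowel-group heuristic (real checkers use pronunciation dictionaries), and the function names are assumptions.

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllable_total = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllable_total / len(words)) - 15.59)

def exceeds_reading_limit(text, limit):
    # Compare against an author-configured preference, as in the mockup.
    return flesch_kincaid_grade(text) > limit
```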
3.1.1(15): Prompting and assisting for device independent handlers: [@@OPEN ISSUE - needs to be strong to cover us in the programming lang area@@] Example needed.
3.1.1(16): Prompting and assisting for non-text supplements to text:
Example 3.1.1(16): This illustration shows an authoring interface for prompting the author as to whether a number-rich paragraph might be made clearer with the addition of a chart or graph. (Source: mockup by AUWG)
3.1.1(17): Prompting and assisting for other types of accessibility information: [@@want to keep but not sure where@@]
3.1.1(18): Prompting and assisting the author to make use of up-to-date formats: Use technologies according to specification. This is likely to be handled by the choices made by the tool developers. General-purpose text editors (e.g. emacs, etc.) would need to make technology selection recommendations. @@BF: maybe add section to 3.1.1 about guiding author towards use of XML/XSLT (i.e. most up to date formats)@@
Technique 3.1.2: The tool can provide multiple preview modes and a warning to authors that there are many other less predictable ways in which a page may be presented (aurally, text-only, text with pictures separately, on a small screen, on a large screen, etc.). Some possible document views include:
Example 3.1.2: This illustration shows a WYSIWYG authoring interface with a list of rendering options displayed. The options include "All" (i.e. render as in a generic browser), "text-only" (i.e. non-text items replaced by textual equivalents), "no styles", "no frames" and "grayscale" (used to check for sufficient contrast). (Source: mockup by AUWG)
[@@Techniques needed@@] If prompting is delayed until after content containing an accessibility problem has been inserted, checking becomes involved. For this reason, consider the timing techniques for checking (3.2.2).
Despite prompting assistance from the tool (see Checkpoint 3.1), accessibility problems may still be introduced. For example, the author may cause accessibility problems by hand coding or by opening content with existing accessibility problems for editing. In these cases, the prompting and assistance mechanisms that operate when markup is added or edited (i.e. insertion dialogs and property windows) must be backed up by a more general checking system that can detect and alert the author to problems anywhere within the content (e.g. attribute, element, programmatic object, etc.). It is preferable that these checking mechanisms be well integrated with correction mechanisms (see Checkpoint 3.3), so that when the checking system detects a problem and informs the author, the tool immediately offers assistance to the author.
The checkpoints in guideline 4 require that implementations of checking be:
Technique 3.2.1: Automate as much checking as possible. Where necessary, provide semi-automated checking. Where neither of these options is reliable, provide manual checking.
1. Automated: In automated checking, the tool is able to check for accessibility problems automatically, with no human intervention required. This type of check is usually appropriate for checks of a syntactic nature, such as the use of deprecated elements or a missing attribute, in which the meaning of text or images does not play a role.
Example 3.2.1(c): This illustration shows a summary interface for a code-based authoring tool that displays the results of an automated check. (Source: mockup by AUWG)
Example 3.2.1(d): This illustration shows an interface that displays the results of an automated check in a WYSIWYG authoring view using blue squiggly highlighting around or under rendered elements, identifying accessibility problems for the author to correct. (Source: mockup by AUWG)
Example 3.2.1(e): This illustration shows an authoring interface of an automated check in a code-level authoring view. In this view, the text of elements with accessibility problems is shown in a blue font, instead of the default black font. (Source: mockup by AUWG)
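Checks of this purely syntactic kind can be sketched with a standard HTML parser. The deprecated-element list below is a small illustrative subset, and the report strings are assumptions; the point is that no human judgement is needed to detect either problem.

```python
from html.parser import HTMLParser

# Illustrative subset of elements deprecated in HTML 4.01.
DEPRECATED = {"font", "center", "applet"}

class AccessibilityChecker(HTMLParser):
    """Collects purely syntactic problems: deprecated elements and
    <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag in DEPRECATED:
            self.problems.append(f"deprecated element: <{tag}>")
        if tag == "img" and "alt" not in dict(attrs):
            self.problems.append("<img> missing alt attribute")

def check(html):
    checker = AccessibilityChecker()
    checker.feed(html)
    return checker.problems

problems = check('<center><img src="moon.png"></center>')
```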
2. Semi-Automated: In semi-automated checking, the tool is able to identify potential problems, but still requires human judgment by the author to make a final decision on whether an actual problem exists. Semi-automated checks are usually most appropriate for problems that are semantic in nature, such as descriptions of non-text objects, as opposed to purely syntactic problems, such as missing attributes, that lend themselves more readily to full automation.
Example 3.2.1(b): This illustration shows a dialog box that appears once the tool has detected an image without a description attribute. However, since not all images require descriptions, the author is prompted to make the final decision. The author can confirm that this is indeed an accessibility problem and move on to the repair stage by choosing "Yes". (Source: mockup by AUWG)
3. Manual: In manual checking, the tool provides the author with instructions for detecting a problem, but does not automate the task of detecting the problem in any meaningful way. As a result, the author must decide on their own whether or not a problem exists. Manual checks are discouraged because they are prone to human error, especially when the type of problem in question may be easily detected by a more automated utility, such as an element missing a particular attribute.
Example 3.2.1(a): This illustration shows a dialog box that reminds the author to check if there are any words in other languages in the document. The author can move on to the repair stage by pressing "Yes". (Source: mockup by AUWG)
Technique 3.2.2: Consider the timing options to be used for informing the author of the results of the check. Options include: Immediate Interruption, Negotiated Interruption and Scheduled Interruption. [@@new@@]
1. Immediate Interruption: An immediate interruption is the most intrusive timing option because the attention of the author is actively diverted from the current editing task by the notification of some issue. This might be achieved, for instance, by an alert dialog. This type of alert presents multiple usability problems and should be used sparingly because it interferes with the normal design workflow. Intrusive warnings are probably only appropriate when the window of opportunity for correcting a serious accessibility problem is about to close, such as when an author decides to publish the content in question. In general, we recommend using the less disruptive timing options.
Example 3.2.2(a): This illustration shows an example of an immediate interruption of the author's workflow. The author must press the "OK" button on the dialog box to continue. (Source: mockup by AUWG)
2. Negotiated Interruption (Preferred): A negotiated interruption is caused by interface mechanisms (icons, line or color highlighting of the element, audio feedback, etc.) that alert the author to a problem, but remain flexible enough to allow the author to decide whether to take immediate action or address the issue at a later time. Since negotiated interruptions are less intrusive than immediate interruptions, they can often be better integrated into the design workflow and have the added benefit of informing the author about the distribution of errors within the document. Although some authors may choose to ignore the alerts completely, it is not recommended that authors be forced to fix problems. Instead, it is recommended that, at some major editing event (e.g., when publishing), the tool should remind the author of the continuing unresolved accessibility issue.
Example 3.2.2(b): This illustration shows an example of a negotiated interruption. The author is made aware of problems detected automatically by means of a blue squiggly line around or under rendered elements with accessibility problems. The author can still decide to address the problems at a later time. (Source: mockup by AUWG)
3. Scheduled Interruption: A scheduled interruption is one in which the author has set the tool to alert them of accessibility issues on a configurable schedule. One option for the schedule might be to have prompts associated with the interface mechanisms for significant authoring events, such as saving, exiting, publishing, or page generation. At the significant authoring event, the author would be informed of the problem, while at the same time they would not be prevented from saving, publishing, printing, etc. For example, a "save as" dialog could display an accessibility warning and an option to launch a correction utility after saving (see Figure 3.2.7). A potential downside of this type of prompting is that by the time the prompt is displayed (publishing, etc.), the author may not have sufficient time to make the required changes, especially if they are extensive.
Example 3.2.2(c): This illustration shows a "Save As" dialog box that is an example of a scheduling mechanism for a scheduled interruption. The author has the option of turning on or off a checking session immediately following the save operation. The author's preference should be retained for the next save operation. (Source: mockup by AUWG)
Technique 3.2.3: The Techniques For Accessibility Evaluation And Repair Tools [WAI-ER] Public Working Draft document can be consulted for evaluation and repair algorithms related to WCAG 1.0.
Technique 3.2.4: Accessibility problems can be detected and immediately highlighted when documents are opened, when an editing or insertion action is completed, or while an author is editing. CSS classes can be used to indicate accessibility problems, enabling the author to easily configure the presentation of errors.
Technique 3.2.5: The author can be alerted to accessibility problems when saving.
New Technique: Accessibility alerts within the document can be linked to context-sensitive help. (See the Techniques for ATAG checkpoint 6.1)
Technique 3.2.8: Alerts for high priority WCAG checkpoints can be included in the default configuration.

Technique 3.2.10: Preference utilities can be designed to allow authors to choose different alert levels based on the priority of authoring accessibility recommendations.
Technique 3.2.11: If intrusive warnings are used, a means can be provided for the author to quickly set the warning to unobtrusive to avoid frustration.
Technique 3.2.12: The WAI Evaluation and Repair group has produced a Public Working Draft of techniques for evaluating and repairing HTML according to WCAG 1.0 [AERT]. @@duplicate of 3.2.3@@
Once a problem has been detected by the author or, preferably, the tool (see Checkpoint 3.2), the tool may assist the author to correct the problem. As with accessibility checking, the extent to which accessibility correction can be automated depends on the nature of the particular problems. Some repairs are easily automated, whereas others that require human judgment may be semi-automated at best.
The checkpoints in guideline 4 require that implementations of correcting be:
Technique 3.3.1: Consider the level of automation to be used for repairing errors. Options include: Manual, Semi-Automated and Automated. [@@new@@]
1. Manual: In manual repairing, the tool provides the author with instructions for making the necessary correction, but does not automate the task in any substantial way. For example, the tool may move the cursor to the start of the problem, but since this is not a substantial automation, the repair would still be considered "manual". Manual correction tools leave it up to the author to follow the instructions and make the repair by themselves. This is the most time consuming option for authors and allows the most opportunity for author error.
Example 3.3.1(a): This illustration shows a sample manual repair. The problems have already been detected in the checking step and the offending elements have been highlighted in a code view. However, when it comes to repairing the problem, the only assistance that the tool provides is a context-sensitive hint. The author is left to make sense of the hint and perform the repair without any automated assistance. (Source: mockup by AUWG)
2. Semi-Automated: In semi-automated repairing, the tool can provide some automated assistance to the author in performing corrections, but the author's input is still required before the repair can be complete. For example, the tool may prompt the author for a plain text string, but then be capable of handling all the markup required to add the text string to the content. In other cases, the tool may be able to narrow the choice of repair options, but still rely on the author to make the final selection. This type of repair is usually appropriate for corrections of a semantic nature.
Example 3.3.1(b): This illustration shows a sample of a semi-automated repair in a WYSIWYG editor. The author has right-clicked on an image highlighted by the automated checker system. The author must then decide whether the label text that the tool suggests is appropriate. Whichever option the author chooses, the tool will handle the details of updating the content. (Source: mockup by AUWG)
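The division of labour in such a repair (the author supplies only the plain text, the tool handles the markup) can be sketched as below. The regex-based approach and the function name are assumptions for illustration; a real tool would edit its internal document model rather than raw markup.

```python
import re

def repair_missing_alt(html, src, alt_text):
    """Add an author-supplied alt attribute to the <img> whose src
    matches. The author provides only the text; the tool handles
    escaping, quoting and placement of the attribute."""
    escaped = alt_text.replace("&", "&amp;").replace('"', "&quot;")
    pattern = re.compile(
        r'(<img\b[^>]*\bsrc="' + re.escape(src) + r'"[^>]*?)(\s*/?>)')
    return pattern.sub(
        lambda m: m.group(1) + ' alt="' + escaped + '"' + m.group(2), html)

fixed = repair_missing_alt('<p><img src="moon.png"></p>', "moon.png", "The moon")
```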
3. Automated: In automated repairing, the tool is able to make repairs automatically, with no author input required. For example, a tool may be capable of automatically adding a document type declaration to the header of a file that lacks this information. In these cases, very little, if any,
Example 3.3.1(c): This illustration shows a sample of an announcement that an automated repair has been completed. An "undo" button is provided in case the author wishes to reverse the operation. In some cases, automated repairs might be completed with no
Technique 3.3.2: Consider implementing a special-purpose correcting interface. [@@new@@] When problems require some human judgment, the simplest solution is often to display the property editing mechanism for the offending element. This has the advantage that the author is already somewhat familiar with the interface. However, this practice suffers from the drawback that it does not necessarily focus the author's attention on the dialog control(s) that are relevant to the required correction. Another option is to display a special-purpose correction utility that includes only the input field(s) for the information currently required. A further advantage of this approach is that additional information and tips that the author may require in order to properly provide the requested information can be easily added. Notice that in the figure, a drop-down edit box has been used for the short text label field. This technique might be used to allow the author to select from text strings used previously for the alt-text of this image (see ATAG Checkpoint 3.5 for more).
Example 3.3.2: This illustration shows a sample of a special-purpose correction interface. The tool supports the author's repair task by providing a description of the problem, a preview (in this case of the image missing a label), tips for performing the repair, possible repair options (archived from previous repairs) and other information (in this case the name of the image file). (Source: mockup by AUWG based on A-Prompt)
Technique 3.3.3: Checks can be automatically sequenced. @@changed@@
In cases where there are likely to be many accessibility problems, it may be useful to implement a checking utility that presents accessibility problems and repair options in a sequential manner. This may take a form similar to a configuration wizard or a spell checker (see Figure 3.3.5). In the case of a wizard, a complex interaction is broken down into a series of simple sequential steps that the author can complete one at a time. The later steps can then be updated "on-the-fly" to take into account the information provided by the author in earlier steps. A checker is a special case of a wizard in which the number of detected errors determines the number of steps. For example, word processors have checkers that display all the spelling problems one at a time in a standard template with places for the misspelled word, a list of suggested words, and a "change to" word. The author also has correcting options, some of which can store responses to affect how the same situation can be handled later. In an accessibility problem checker, sequential prompting is an efficient way of correcting problems. However, because of the wide range of problems the checker needs to handle (i.e. missing text, missing structural information, improper use of color, etc.), the interface template will need to be even more flexible than that of a spell checker. Nevertheless, the template is still likely to include areas for identifying the problem (WYSIWYG or code-based according to the tool), suggesting multiple solutions and choosing between them or creating new solutions. In addition, the dialog may include context-sensitive instructive text to help the author with the current correction.
Example 3.3.3: This illustration shows an example of a sequential accessibility checker. The special-purpose correction interface from Example 3.3.2 is supplemented with navigational controls for moving backwards and forwards through the list of repair tasks. (Source: mockup by AUWG based on A-Prompt)
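The wizard-style sequencing itself, separate from any particular repair interface, can be sketched as a small state machine over the list of detected problems. All names here are assumptions made for illustration.

```python
class SequentialChecker:
    """Walks the author through detected problems one at a time, like a
    spell checker, with backwards/forwards navigation as in Example 3.3.3."""
    def __init__(self, problems):
        # Each problem: {"description": ..., "suggestions": [...]}.
        self.problems = list(problems)
        self.index = 0
        self.repairs = []  # (description, accepted repair) pairs

    def current(self):
        # The problem now being reviewed, or None when the list is done.
        if self.index < len(self.problems):
            return self.problems[self.index]
        return None

    def accept(self, repair):
        # The author accepted a suggested (or newly written) repair.
        self.repairs.append((self.current()["description"], repair))
        self.index += 1

    def skip(self):
        self.index += 1  # address this problem later

    def back(self):
        self.index = max(0, self.index - 1)

checker = SequentialChecker([
    {"description": "image missing label", "suggestions": ["Photo of the moon"]},
    {"description": "table missing summary", "suggestions": []},
])
checker.accept(checker.current()["suggestions"][0])
checker.skip()
```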
Technique 3.3.4: If it has been determined that the author must provide real-time supplements, but no preparation time or assistant author is available, then in addition to allowing the author control of the nature and timing of prompting, the authoring tool can facilitate the inclusion of supplements by:
Example 3.3.4: This illustration shows a real-time presentation in a whiteboard/chat environment. Notice the functionality by which the presenter or a secondary author (describer) can describe the events on the whiteboard even as the dialog continues. (Source: mockup by AUWG, based on A-Communicator).
Technique 3.3.5:
Technique 3.3.6: Where a tool is able to detect site-wide errors, allow the author to make site-wide corrections. This should not be used to insert equivalent alternatives when the function is not known with certainty (see ATAG Checkpoint 3.4). [@@changed@@]
Technique 3.3.7:

Technique 3.3.8:

Technique 3.3.9:
Technique 3.3.10: The WAI Evaluation and Repair group [WAI-ER] has produced a Public Working Draft of techniques for evaluating and repairing HTML according to WCAG 1.0 [AERT]. @@see 3.2.3@@
Technique 3.4.1: If the author has not specified an alternative equivalent, default to leaving out the relevant content (e.g. attribute, element, etc.), rather than including the attribute with no value or with automatically-generated content. Leaving out the attribute will increase the probability that the problem will be detected by checking algorithms (see Techniques for ATAG checkpoint 5.1).
Technique 3.4.2: If human-authored equivalent alternatives are available for an object (for example, through Techniques for ATAG checkpoint 4.4 and/or Techniques for ATAG checkpoint 3.4), the equivalent alternatives can be used in both semi-automated repair processes and automated repair processes as long as the function of the object is known with certainty. The function of an instance of an object can be considered to be known with certainty when the tool totally controls its use (i.e. a generated tool bar) or the author has linked the current object instance to the same URI(s) as the object was linked to when the equivalent alternative was stored. @@BAF: suggest the author marking the content with a role/flag to insure certainty.@@
Technique 3.4.3: If human-authored equivalent alternatives are available for an object and that object is used for a function that is not known with certainty, tools can offer the equivalent alternatives to the author as defaults in a semi-automated repair process, but not in fully automated repair processes.
NEW(@@was part of 3.4.3@@) Technique 3.4.4: Where an object has already been used in a document, the tool can offer the alternative information that was supplied for the first or most recent use as a default.

NEW(@@was part of 3.4.3@@) Technique 3.4.5: If the author changes the alternative content, the tool can ask the author whether all instances of the object should have their alternative content updated with the new value.
Note: This checkpoint is priority 3 and is, therefore, not required to be implemented in order for a tool to conform to ATAG 2.0 at the single-A and double-AA levels. However, implementing this checkpoint has the potential to simplify the satisfaction of several higher priority checkpoints (ATAG checkpoint 3.1, ATAG checkpoint 3.2, and ATAG checkpoint 3.3) and improve the usability of the tool.
Technique 3.5.1: A registry can be maintained that associates object identity information with alternative information (this could be done with the Resource Description Framework (RDF) [RDF10]). Whenever an object is used and an equivalent alternative is collected (see ATAG Checkpoint 3.1), the object (or identifying information) and the alternative information can be added to the registry. In the case of a text equivalent, the alternate information can be stored in the document source. For more substantial information (such as video captions or audio descriptions), the information can be stored externally and linked from the document source. Several different versions of alternative information can be associated with a single object.
Example 3.5.1: This illustration shows a text equivalents registry viewer that a tool can include to allow the author to query and edit the various text equivalents stored in the registry. For maximum flexibility, the design takes into account multiple non-text objects of the same name, multiple types of text equivalents for each non-text object, and multiple versions of each text equivalent type. (Source: mockup by AUWG)
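The registry's core behaviour (several versions of several kinds of equivalent per object, offered only as editable defaults) can be sketched as below; an actual implementation might serialize this as RDF, as the technique suggests. The class and method names are assumptions for illustration.

```python
class EquivalentsRegistry:
    """Maps object identity (here a URI) to stored equivalent
    alternatives, keeping multiple kinds (alt text, longdesc, captions)
    and multiple versions of each kind. Values are only ever offered to
    the author as defaults, never inserted automatically (Checkpoint 3.4)."""
    def __init__(self):
        self._store = {}  # uri -> kind -> [versions, oldest first]

    def add(self, uri, kind, value):
        self._store.setdefault(uri, {}).setdefault(kind, []).append(value)

    def defaults(self, uri, kind):
        # Most recent version first, for display in a drop-down of defaults.
        return list(reversed(self._store.get(uri, {}).get(kind, [])))

registry = EquivalentsRegistry()
registry.add("images/moon.png", "alt", "The moon")
registry.add("images/moon.png", "alt", "Full moon over water")
```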
Technique 3.5.2: Stored alternative information can be presented to the author as default text in the appropriate field, whenever one of the associated files is inserted into the author's document. This satisfies ATAG Checkpoint 3.4 because the equivalent alternatives are not automatically generated and they are only reused with author confirmation.
Technique 3.5.3: If no stored association is found in the registry, the field can be left empty. No purely rule-generated alternative information is allowed.
Technique 3.5.4: The stored alternative information required for ATAG Checkpoint 3.4 might be part of the management system, allowing the alternative equivalents to be retrieved whenever the pre-packaged objects are inserted.
Technique 3.5.5: Tools might allow authors to make keyword searches of a description database (to simplify the task of finding relevant images, sound files, etc.). A paper describing a method to create searchable databases for video and audio files is available (refer to [SEARCHABLE]).
The checkpoints in guideline 4 require that implementations of documentation be:
Technique 3.8.1: In the documentation, ensure that all code examples pass the tool's own accessibility checking mechanism (required for checkpoint 3.2), regardless of what aspect of the code the example is meant to show.
Technique 3.8.2: In the documentation, provide at least one model of each accessibility practice in the relevant WCAG techniques document for each language supported by the tool. Note: This includes all levels of accessibility practices.
Technique 3.8.3: When the help files of a base tool do not meet this checkpoint, an accessibility plug-in that updates the files is acceptable.

Technique 3.8.4: When explaining the accessibility issues related to elements that have not been officially deprecated, try to emphasize the solutions rather than explicitly discouraging the use of the element.

Technique 3.8.6: For tools that include context-sensitive help, implement context-sensitive help for accessibility terms as well as tasks related to accessibility.

Technique 3.8.7: For tools that include tutorials, provide a tutorial on checking for and correcting accessibility problems.

Technique 3.8.8: Include pointers to more information on accessible web authoring, such as WCAG and other accessibility-related resources.

Technique 3.8.9: Include current versions of, or links to, relevant language specifications in the documentation. This is particularly relevant for languages that are easily hand edited, such as most XML languages.
Technique 3.9.1: Document the sequence of steps that the author should take, using the tool, in order to increase the likelihood of producing accessible content. This should take account of any idiosyncrasies of the tool.

Technique 3.9.2: The section could be prefaced by an introduction that explains the importance of accessibility for a wide range of content consumers, from those with disabilities to those with alternative viewers.

Technique 3.9.3: For tools that explain the reasons for accessibility, take a broad view. For example, do not refer to any particular accessibility feature as being "for blind authors" or label it with a "disability" icon. Instead, refer to it as being for "authors who are not viewing images". Consider emphasizing points in "Auxiliary Benefits of Accessibility Features", a W3C-WAI resource.

Technique 3.9.4: This documentation could be located in a dedicated section.

Technique 3.9.5: Tools that lack an accessibility checking and/or repair feature may point to the relevant WCAG Techniques document for the language. Note: this will not suffice to meet the checkpoints related to accessibility checking (ATAG Checkpoint 3.2) and repair (ATAG Checkpoint 3.3).