
Action Item: Techniques for guideline 1.

From: <schwer@us.ibm.com>
Date: Wed, 17 Nov 1999 11:28:43 -0600
To: w3c-wai-ua@w3.org
Message-ID: <8525682C.00604E6A.00@d54mta08.raleigh.ibm.com>

Here they are:

1.1 Ensure that every functionality offered through the user interface is
available through every input device API used by the user agent. User
agents are not required to re-implement low-level functionalities (e.g.,
for character input or pointer motion) that are inherently bound to a
particular API and most naturally accomplished with that API. [Priority 1]

   Note. The device-independence required by this checkpoint applies to
   functionalities described by the other checkpoints in this document
   unless otherwise stated by individual checkpoints. This checkpoint does
   not require user agents to use all operating system input device APIs,
   only to make the software accessible through those they do use.


   Operating system and application frameworks provide standard mechanisms
   for controlling application navigation with standard input devices. In
   the case of Windows, OS/2, the X Window System, and MacOS, the window
   manager provides GUI applications with this information through the
   message queue. For non-GUI applications on desktop operating systems,
   the compiler run-time libraries provide standard mechanisms for
   receiving keyboard input. If you use an application framework such as
   the Microsoft Foundation Classes or the Java Foundation Classes, the
   framework must support the same standard input mechanisms as the host
   operating system.
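The approach above can be sketched with the Java Foundation Classes, where key bindings are registered through the framework's standard InputMap/ActionMap facility rather than by reading the keyboard device directly. The key choice and action name below are illustrative:

```java
import javax.swing.AbstractAction;
import javax.swing.ActionMap;
import javax.swing.InputMap;
import javax.swing.JComponent;
import javax.swing.JPanel;
import javax.swing.KeyStroke;
import java.awt.event.ActionEvent;

public class StandardInputSketch {
    // Bind a key through the framework's standard input mechanism
    // (Swing InputMap/ActionMap) instead of reading the device directly.
    // "F6" and "nextPane" are illustrative choices, not toolkit defaults.
    public static JComponent buildPanel() {
        JPanel panel = new JPanel();
        InputMap im = panel.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW);
        ActionMap am = panel.getActionMap();
        im.put(KeyStroke.getKeyStroke("F6"), "nextPane");
        am.put("nextPane", new AbstractAction() {
            public void actionPerformed(ActionEvent e) {
                // Move focus to the next pane. Because the keystroke is
                // delivered through the standard system message queue,
                // assistive utilities that inject keystrokes reach it too.
            }
        });
        return panel;
    }
}
```

Because the binding lives in the framework's input tables, it is also discoverable and remappable by the platform's standard facilities.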

   When implementing custom GUI controls, do so using the standard input
   mechanisms described above. Examples of failing to use the standard
   input mechanisms are:

    Do not communicate directly with the input device. For instance, in
Windows, do not open the keyboard device driver directly; this may
circumvent system messaging. The windowing system often needs to change
how standard input is processed so that applications coexist properly
within the user interface framework.
    Do not implement your own input queue handler. Devices for mobility
access, such as those that use serial keys, use standard system facilities
for simulating keyboard and mouse input to all graphical applications.
Example facilities for generating these input device events are the Journal
Playback Hooks in both OS/2 and Windows. These hooks feed the standard
system message queues of their respective windowing systems. To the
application, the resulting keyboard and mouse input messages are
indistinguishable from those generated by the user's actions.
    If you implement an interface where the user selects text and then
issues a command related to it (e.g., select text, then create a link using
the selected text as content), all operations on the selection must be
possible in a device-independent way. In a desktop user agent, this means
the user must be able to perform these tasks with the keyboard alone and
with the mouse alone.
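A minimal sketch of this separation, with illustrative class and method names (not from any real toolkit): both input paths update one selection state, and the command itself never asks which device produced the selection.

```java
// Sketch: route selection-based commands through one device-independent
// entry point so keyboard and mouse paths behave identically.
public class SelectionCommands {
    private String selectedText = "";

    // Both input paths (Shift+arrow keys, mouse drag) call this with
    // whatever text the user selected.
    public void setSelection(String text) {
        selectedText = text;
    }

    // The command operates on the selection state only; it is unaware
    // of which device made the selection or invoked the command.
    public String createLinkFromSelection(String href) {
        if (selectedText.isEmpty()) {
            throw new IllegalStateException("nothing selected");
        }
        return "<a href=\"" + href + "\">" + selectedText + "</a>";
    }
}
```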

   1.2 Use the standard input and output device APIs of the operating
   system. [Priority 1] For example, do not directly manipulate the memory
   associated with information being rendered since screen review
   utilities, which monitor rendering through the standard APIs, will not
   work properly.


    When writing textual information in a GUI operating system, use the
standard text drawing APIs of that operating system. Text converted to
offscreen images or sequences of strokes cannot be intercepted as text
drawing calls at the graphics engine or display driver subsystem of a GUI.
Legacy screen reading solutions intercept these drawing calls before they
reach the display and use the drawn text to build a text model of what
appears on the screen. This "offscreen model" is what is spoken for GUI
text. Circumventing the standard text drawing calls therefore prevents
legacy screen reading systems from rendering the drawn text as speech or
braille. More information on this is provided in the techniques for
Checkpoint 1.5.
    Use operating system resources for rendering audio information.
Operating systems like Windows provide a set of standard audio sound
resources to support standard sounds such as alerts. These pre-set sounds
are used to trigger SoundSentry visual cues, which indicate to people who
are deaf or hard of hearing that a problem has occurred. The cues may be
manifested by flashing the desktop, the active caption bar, or the active
window. It is important to use the standard mechanisms to generate audio
feedback so that the operating system or assistive technologies can add
functionality for users who are deaf or hard of hearing.
    Enhance the functionality of standard system controls, where
accessibility is lacking, by responding to standard keyboard input
mechanisms. For example, provide keyboard navigation to menus and dialog
box controls in the Apple Macintosh operating system. Another example is
the Java Foundation Classes, where internal frames do not provide a
keyboard mechanism to give them focus. In this case you will need to add
keyboard activation through the standard keyboard activation facility for
Abstract Window Toolkit components.
    When using standard operating system resources for rendering audio
information, do not take exclusive control of system audio resources.
Doing so could prevent an assistive technology, such as a screen reader
using software text-to-speech, from speaking.
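The internal-frame example above can be sketched in the Java Foundation Classes by registering a key binding through the standard Swing input facilities. The Ctrl+F6 key choice and the action name are illustrative, not JFC defaults:

```java
import javax.swing.AbstractAction;
import javax.swing.JComponent;
import javax.swing.JInternalFrame;
import javax.swing.KeyStroke;
import java.awt.event.ActionEvent;

public class InternalFrameKeys {
    // Sketch: add a keyboard path for giving an internal frame focus,
    // since the JFC version described above provides none. The binding
    // goes through the standard InputMap/ActionMap tables.
    public static JInternalFrame makeAccessible(JInternalFrame frame) {
        KeyStroke selectKey = KeyStroke.getKeyStroke("control F6");
        frame.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
             .put(selectKey, "selectFrame");
        frame.getActionMap().put("selectFrame", new AbstractAction() {
            public void actionPerformed(ActionEvent e) {
                try {
                    frame.setSelected(true); // give the frame focus
                } catch (java.beans.PropertyVetoException ex) {
                    // selection was vetoed; leave focus unchanged
                }
            }
        });
        return frame;
    }
}
```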

   1.3 Ensure that the user can interact with all active elements in a
   device-independent manner. [Priority 1]

   For example, users who are blind or have motor impairments must be able
   to activate the links in a client-side image map without a pointing
   device. One technique for doing so is to render client-side image maps
   as text links. Note. This checkpoint is an important special case of
   checkpoint 1.1.


   Refer to checkpoint 1.1 and checkpoint 1.5.

   For client-side image maps:

    If alternative text ("alt" or "title" in HTML) is available and not
null for the element (like INPUT or IMG in HTML) that points to a
client-side map, then render some text indicating a map (like "Start of
map") plus the alternative text and the number of areas in the map. If alt
text is null, do not render the map or its areas.
    For each AREA in the map, if alternative text ("alt" or "title") is
available and not null, then render the alternative text as a link.
Otherwise, render some text like "Map area" plus part or all of the href as
a link. If the alt text is null for an AREA, do not render that AREA.
    When reading through the whole Web page, read the start of map
alternative text with the number of areas, but skip over the AREA links. To
read and activate the map areas, use keys that read and navigate link by
link or element by element.
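The rules above can be sketched as a small routine. The representation of AREAs as {alt, href} pairs and the exact link wording are illustrative; a missing alt attribute is passed as null, and a null (empty) alt value as "":

```java
import java.util.ArrayList;
import java.util.List;

public class ImageMapRenderer {
    // Sketch of the client-side image map rendering rules above.
    public static List<String> render(String mapAlt, String[][] areas) {
        List<String> links = new ArrayList<>();
        if (mapAlt == null || mapAlt.isEmpty()) {
            return links; // null alt on the map: render nothing
        }
        links.add("Start of map: " + mapAlt + " (" + areas.length + " areas)");
        for (String[] area : areas) {
            String alt = area[0], href = area[1];
            if (alt != null && !alt.isEmpty()) {
                links.add(alt);                  // alt text becomes the link
            } else if (alt == null) {
                links.add("Map area: " + href);  // no alt: fall back to href
            }
            // alt == "" (null alt text): skip this AREA entirely
        }
        return links;
    }
}
```

A reading-order traversal would then announce the "Start of map" line but skip the individual links until the user navigates link by link.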

   Use your DOM implementation to enable device-independent activation of
   active elements:

    When implementing the Document Object Model (DOM) in the user agent, it
is important to be able to programmatically activate DOM elements that are
active, whether they are links, links in an image map, or any DOM element
that can respond to an event causing a secondary action.
    In DOM 2, all elements are potentially active, and it is helpful to
allow an assistive technology to activate any DOM element. For example, a
DOM 2 focusin event may result in the construction of a pull-down menu by
an attached JavaScript function. Providing a programmatic mechanism for
activating the focusin handler will enable functions such as speech
navigation to control your user agent. Each DOM element may have more than
one activation mechanism, based on the DOM event received, and it is
helpful to enable an assistive technology to enumerate those functions by
description and activate them selectively. An example of this type of
functionality can be seen in the Java Accessibility API, which provides an
AccessibleAction Java interface giving a list of actions and descriptions
that can be used to describe and activate each function.
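A minimal sketch of the AccessibleAction approach described above: an element's event handlers (focusin, click, and so on) are exposed as enumerable, individually activatable actions. The wiring to a real DOM is omitted, and the class name is illustrative:

```java
import javax.accessibility.AccessibleAction;
import java.util.ArrayList;
import java.util.List;

public class ElementActions implements AccessibleAction {
    // Parallel lists: a human-readable description and a handler
    // for each action an element supports.
    private final List<String> descriptions = new ArrayList<>();
    private final List<Runnable> handlers = new ArrayList<>();

    public void addAction(String description, Runnable handler) {
        descriptions.add(description);
        handlers.add(handler);
    }

    // An assistive technology enumerates actions with these two methods...
    public int getAccessibleActionCount() {
        return handlers.size();
    }

    public String getAccessibleActionDescription(int i) {
        return descriptions.get(i);
    }

    // ...and activates a chosen one here, with no pointing device needed.
    public boolean doAccessibleAction(int i) {
        if (i < 0 || i >= handlers.size()) return false;
        handlers.get(i).run();
        return true;
    }
}
```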

   1.4 Ensure that every functionality offered through the user interface
   is available through the standard keyboard API. [Priority 1]

   The keystroke-only command protocol of the user interface should be
   efficient enough to support production use. Functionalities include
   being able to show, hide, resize and move graphical viewports created by
   the user agent. Note. This checkpoint is an important special case of
   checkpoint 1.1.


    Ensure that the user can trigger mouseover, mouseout, click, etc.
events from the keyboard consistently.
    Ensure that the user can use the keyboard TAB key to switch focus from
link to link in your document.
    Ensure that the user can use the graphical user interface menus from
the keyboard.
    Ensure that the user can select text using the keyboard standards for
the platform.
    Ensure that the keyboard can be used to control all cut, copy, paste,
and drag operations within your user agent.
    Allow the user to change the state of form controls using the
keyboard.
    In specialized user agents (e.g., touch screen kiosks, portable
devices) that have only one input technique, provide an accessible
alternative (for example, EasyAccess, or IR links to assistive
technologies).
    Allow the user to activate the events associated with an element using
the keyboard, including events that imply device dependence, such as
onMouseOver, onClick, etc.
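The first and last items above can be sketched as routing the keyboard to the same handlers a pointer would trigger. The element and event representation below is illustrative, not a real DOM:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyEventBridge {
    // Sketch: a minimal element with named event handlers. Keyboard
    // focus and Enter reach the same handlers that pointer motion and
    // clicks would, so scripted behavior stays keyboard-reachable.
    private final Map<String, Runnable> handlers = new HashMap<>();

    public void on(String event, Runnable handler) {
        handlers.put(event, handler);
    }

    private void fire(String event) {
        Runnable h = handlers.get(event);
        if (h != null) h.run();
    }

    // Pointer path.
    public void pointerEnter() { fire("mouseover"); }
    public void pointerClick() { fire("click"); }

    // Keyboard path: TAB focus implies mouseover, Enter implies click.
    public void keyboardFocus() { fire("mouseover"); }
    public void keyboardEnter() { fire("click"); }
}
```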

   1.5 Ensure that all messages to the user (e.g., informational messages,
   warnings, errors, etc.) are available through all output device APIs
   used by the user agent. Do not bypass the standard output APIs when
   rendering information (e.g., for reasons of speed, efficiency, etc.).
   [Priority 1] For instance, ensure that information about how much
   content has been viewed is available through output device APIs.
   Proportional navigation bars may provide this information graphically,
   but the information must be available (e.g., as text) to users relying
   on synthesized speech or braille output.


   Operating system and application frameworks provide standard mechanisms
   for using standard output devices. Common desktop operating systems
   such as Windows, OS/2, and MacOS provide standard APIs for writing to
   the display and the multimedia subsystems.

   It is also important to support standard output notification of sounds,
   such as the notifications found in the Windows control panel for
   sounds. Windows maps accessibility features to the events caused by
   generation of these specific system sounds. Accessibility features such
   as SoundSentry flash the screen, as appropriate, in response to events
   that would cause these sounds to play. This enables users who are deaf
   to use the application in the absence of sound.
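One way to keep system sounds interceptable is to route every audible alert through the platform's predefined sound aliases rather than playing custom audio buffers. A sketch, with illustrative event and alias names (the aliases stand in for names registered in the Windows Sounds control panel):

```java
import java.util.Map;

public class SystemSounds {
    // Sketch: map application events to predefined system sound aliases.
    // Because the system generates the sound, features like SoundSentry
    // can substitute a visual cue for deaf or hard-of-hearing users.
    private static final Map<String, String> EVENT_TO_ALIAS = Map.of(
            "error", "SystemHand",
            "warning", "SystemExclamation",
            "newMail", "MailBeep");

    public static String soundAliasFor(String event) {
        // Unknown events fall back to the default system sound alias.
        return EVENT_TO_ALIAS.getOrDefault(event, "SystemDefault");
    }
}
```

The resolved alias would then be handed to the platform's sound-playing API instead of a raw audio buffer.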

   When implementing standard output, do not:

    Bypass standard text drawing calls for rendering text. Screen readers
intercept text drawing calls to create a text representation of the screen,
called an offscreen model, which is read to the user. Common operating
system 2D graphics engines and drawing libraries provide functions for
drawing text to the screen. Examples are the Graphics Device Interface
(GDI) for Windows, the Graphics Programming Interface (GPI) for OS/2, and,
for the X Window System or Motif, the X library (Xlib). More detail on
this is provided in the techniques for Checkpoint 1.2.
    Provide your own mechanism for generating pre-defined system sounds.
More detail on this is provided in the techniques for Checkpoint 1.2.
    Use a device driver directly. In the case of display drivers, screen
readers are designed to monitor what is drawn on the screen by hooking
drawing calls at different points in the drawing process. By calling the
display driver directly you may be drawing to the display below the point
at which a screen reader for the blind intercepts the drawing calls.
    Draw directly to the video frame buffer. This circumvents the
interception point at which a screen reader hooks the display calls.
    Forget to provide text alternatives to voiced messages. Make sure an
auditory message also has a redundant visual text message. For example, in
AOL, "You have mail" should also be presented visually.
    Preclude text presentation when providing auditory tutorials.
Tutorials that use speech to guide a user through the operation of the
user agent should also be available at the same time as graphically
displayed text.


Rich Schwerdtfeger
Lead Architect, IBM Special Needs Systems
EMail/web: schwer@us.ibm.com http://www.austin.ibm.com/sns/rich.htm

"Two roads diverged in a wood, and I -
I took the one less traveled by, and that has made all the difference.",
Received on Wednesday, 17 November 1999 12:32:29 UTC
