Re: PROPOSAL: Assistive Technology Checkpoints in the Guidelines

Jon Gunderson wrote:
> 
> Based on feedback from the group I think our current checkpoints related to
> assistive technology compatibility need to be reconsidered for the
> following reasons:
> 
> 1. The current checkpoints for compatibility read more like techniques than
> statements of assistive technology needs.

That may be true. However, I don't think that treating interprocess
communication as an implementation issue will serve one important goal
of the guidelines - the promotion of interoperability. With
interoperability in mind, I think it is important to include at least
one or two checkpoints about communication between browsers and
dependent user agents. I realize that when we say "how" we generally
should be talking about the techniques document. But this is a case
where I think a checkpoint is necessary for compatibility.
 
> 2. DOM can only be considered as part of a solution for Desktop user agents
> for the following reasons:
> 
> a. DOM does not provide any information or the emulation of controls for
> the other parts of the user interface (i.e. controls, menus, status lines,
> dialog boxes).  This information needs to come from a non-DOM source.  DOM
> will never provide information or control about these parts of the user
> interface.

I don't believe it lies within the scope of these guidelines to
address programmatic control of the user interface (I don't think
it's appropriate to lump document content in with user interface.)
In fact, I think that the guidelines
shouldn't discuss the accessibility of the user interface at
all, with some exceptions:
 
 - We should talk about device-independent operation of the UI.
 - We should talk about the selection and the focus.
 - We should talk about accessible formats for documentation.
 - We should talk about accessible communication between software.

We should not, in my opinion, say much or anything about
menus and buttons and the graphical user interface.

> b. DOM does not have a defined interoperable interface for use by external
> programs.  Some group members say this is not a major issue (including
> myself at times), but it is potentially a weak link if user agents running
> on the same platform use different methods to expose DOM.  Assistive
> technology would then need to "know" where to look.  Also, DOM does not
> have any conventions for simultaneous access to the DOM.
> How would DOM resolve manipulation requests from both the user agent and
> the assistive technology?
> How would the user agent tell the AT that it changed something?

Most days, I consider these to be implementation issues. I agree that
they are important, but the DOM seems to be the "most underlying" 
platform-independent open standard around - we don't have anything
else to propose that will promote interoperability.
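
To make the notification question concrete, here is a rough sketch (in
Java, against the W3C DOM bindings) of what change notification could
look like if user agents exposed an event model along the lines of what
is being drafted for DOM Level 2. The interface and event names are
assumptions taken from that draft, not settled guidelines material; and
how the assistive technology obtains the browser's Document in the
first place is exactly the interoperable interface that is missing
today.

  import org.w3c.dom.Document;
  import org.w3c.dom.events.Event;
  import org.w3c.dom.events.EventListener;
  import org.w3c.dom.events.EventTarget;

  // Hypothetical sketch: a dependent user agent asks to be told when the
  // browser changes the document. How the AT obtains "doc" is undefined.
  public class DocumentChangeMonitor implements EventListener {

      public void watch(Document doc) {
          // In the DOM 2 draft, nodes can act as event targets.
          EventTarget target = (EventTarget) doc.getDocumentElement();
          // Delivered after modifications in the subtree below the target.
          target.addEventListener("DOMSubtreeModified", this, false);
      }

      public void handleEvent(Event evt) {
          // Here the AT would re-read the affected region and update its
          // own rendering (speech, braille, magnified view, etc.).
          System.out.println("Document changed: " + evt.getType());
      }
  }

Whether two such clients - the browser's own user interface and the
assistive technology - can manipulate the tree at the same time without
stepping on each other is, as Jon says, still an open question.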
 
> c. The use of DOM would require assistive technology to subclass the user
> agent as a special technology, and some assistive technology companies may
> find this requirement too restraining as the primary mechanism for
> accessibility, especially on MS-Windows platforms that have accessibility
> models based on Active Accessibility.  Denis Anson made a good point: if we
> push this type of technique, it means that users with disabilities will need
> to wait for AT developers to provide access to new implementations of DOM.
> More general techniques, like Active Accessibility, offer improved
> timeliness for new releases of user agents.

It may be true that MSAA provides more functionality today, and possibly
in the near future, and that it is more general than providing access
to HTML or XML documents. But it is not an open standard and, as such,
is difficult for W3C to promote.
 
> So I would like to suggest five checkpoints for people to think about,
> criticize, modify and/or comment:

> ** The following checkpoints are based on the assistive technology's point
> of view **
> 
> Checkpoint 6.2.1 [Priority 1] Allow assistive technology to access
> information about the current user interface controls (windows, menus,
> toolbars, status bars, dialog boxes).
> 
> Primary techniques: Accessibility APIs or use of operating system standard
> controls.

As I suggest above, I think that 6.2.1 lies outside the scope of these
guidelines as it concerns programmatic control of the user interface
and we should limit ourselves to access/control of document
content where possible.
 
> Checkpoint 6.2.2 [Priority 1] Allow assistive technology to simulate the
> selection and activation of user interface and document controls (windows,
> menus, toolbars, status bars, dialog boxes).
> 
> Primary techniques: Accessibility APIs or use of operating system standard
> controls.

I don't think document controls are in the scope of these guidelines.
I do think it is helpful if dependent user agents can set the
selection/focus programmatically, and can also set the style of the
selection/focus programmatically (or through style sheets, though CSS2
doesn't allow this).
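
As a sketch of what I mean, and nothing more: with the DOM Level 1 HTML
binding a dependent user agent could at least move the focus to links
and form controls, even though there is as yet no interface for setting
or styling the selection.

  import org.w3c.dom.Node;
  import org.w3c.dom.html.HTMLAnchorElement;
  import org.w3c.dom.html.HTMLInputElement;

  // Sketch only: an assistive technology moves the keyboard focus through
  // the DOM Level 1 HTML interfaces. DOM 1 offers no generic focus() and
  // nothing for the selection or for styling the focus indicator.
  public class FocusDriver {

      public static void moveFocusTo(Node target) {
          if (target instanceof HTMLAnchorElement) {
              ((HTMLAnchorElement) target).focus();
          } else if (target instanceof HTMLInputElement) {
              ((HTMLInputElement) target).focus();
          }
          // Other element types would need an interface we don't have yet.
      }
  }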
 
> Checkpoint 6.2.3 [Priority 1] Allow assistive technologies to access the
> information currently being rendered by the user agent.
> 
> Primary techniques: Accessibility APIs that provide information on document
> rendering and/or DOM.

I think this needs to be strengthened by mentioning the
W3C DOM explicitly. 
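
For what it's worth, here is a small sketch of 6.2.3 from the assistive
technology's side, assuming the user agent exposes its W3C DOM tree:
the AT walks the tree and collects the text a screen reader would
speak, substituting "alt" text for images. Again, how the AT gets hold
of the tree in the first place is the unanswered part.

  import org.w3c.dom.Element;
  import org.w3c.dom.Node;
  import org.w3c.dom.NodeList;

  // Sketch: gather speakable text from the document tree exposed by the
  // user agent, using only DOM Level 1 Core interfaces.
  public class ContentReader {

      public static String collectText(Node node) {
          StringBuffer out = new StringBuffer();
          if (node.getNodeType() == Node.TEXT_NODE) {
              out.append(node.getNodeValue());
          } else if (node.getNodeType() == Node.ELEMENT_NODE
                     && "IMG".equalsIgnoreCase(node.getNodeName())) {
              // Use the author-supplied alternative text for images.
              out.append(((Element) node).getAttribute("alt"));
          }
          NodeList children = node.getChildNodes();
          for (int i = 0; i < children.getLength(); i++) {
              out.append(collectText(children.item(i)));
          }
          return out.toString();
      }
  }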
 
> Checkpoint 6.2.4 [Priority 1] Allow accessibility features (accessibility
> flags and interfaces) of the operating system to provide alternative
> rendering information and user interfaces for the user agent.

I don't understand this one. If it means that the user agent should
respect OS accessibility flags and interfaces, OK.
 
> Checkpoint 6.2.5 [Priority 2] Allow assistive technology to change the
> rendering of document information on the user agent.
> 
> Rationale: In some cases it may be useful for the assistive technology to
> change the rendering of a document.  For example, for a person with certain
> types of visual learning disabilities it may be important to simplify the
> rendering of the document and allow the person to use the mouse to point at
> objects and have the contents of the object spoken to them.  It could also
> be used for table linearization if the assistive technology felt that was
> the best way for it to provide access to table information.

I agree with this checkpoint, but it sounds a lot like "Allow the
user to control the style of a document" which was slammed 
a long time ago because of its generality. We split it into
fifteen or so individual checkpoints. Why shouldn't we do
the same when the actor is a dependent user agent rather than
the user?

I wish this checkpoint were covered by "Implement DOM 1", but
DOM 1 doesn't address style issues - that is the privilege of DOM 2.

This being the case, we can refer to other interfaces that
allow dependent user agents to control style information and
add that when a W3C Recommendation becomes available, it will
be the preferred interface. 
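
To illustrate - only as a sketch, since the DOM Level 2 style
interfaces are still a draft and the names below are assumptions - an
assistive technology could simplify the rendering roughly like this,
enlarging the text and hiding decorative images through the document's
inline style:

  import org.w3c.dom.Document;
  import org.w3c.dom.Element;
  import org.w3c.dom.NodeList;
  import org.w3c.dom.css.ElementCSSInlineStyle;

  // Sketch of checkpoint 6.2.5: an AT changes the rendering via the style
  // interfaces drafted for DOM Level 2 (names may change before Rec).
  public class RenderingSimplifier {

      public static void simplify(Document doc) {
          NodeList paragraphs = doc.getElementsByTagName("P");
          for (int i = 0; i < paragraphs.getLength(); i++) {
              Element p = (Element) paragraphs.item(i);
              ((ElementCSSInlineStyle) p).getStyle()
                  .setProperty("font-size", "150%", "");
          }
          NodeList images = doc.getElementsByTagName("IMG");
          for (int i = 0; i < images.getLength(); i++) {
              Element img = (Element) images.item(i);
              ((ElementCSSInlineStyle) img).getStyle()
                  .setProperty("display", "none", "");
          }
      }
  }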

That interfaces evolve is one argument for pushing everything
concerning inter-software communication into the techniques
document (and I have been tempted). But I think that for
the cause of interoperability, we should promote open interfaces
as checkpoints.

 - Ian

-- 
Ian Jacobs (jacobs@w3.org) 
Tel/Fax: (212) 684-1814 
http://www.w3.org/People/Jacobs

Received on Monday, 8 February 1999 17:20:42 UTC