- From: <schwer@us.ibm.com>
- Date: Mon, 8 Feb 1999 21:39:38 -0600
- To: Charles McCathieNevile <charles@w3.org>
- cc: Jon Gunderson <jongund@staff.uiuc.edu>, w3c-wai-ua@w3.org
Charles,
> 2. DOM can only be considered as part of a solution for Desktop user
> agents for the following reasons:
While I agree that DOM Level 2 cannot address specific interfaces related
to desktop components like menus, etc., I strongly disagree that it cannot
in the future. It is important that assistive technology be able to access
the chrome area as well as supporting browser dialogs, so that we ensure
that assistive technologies can access all web browser functionality in the
future. While Windows may provide access to a great deal of these features
through MSAA, other operating systems do not. We should not preclude users
from accessing browser functions as simple as changing the proxy server
just because the operating system does not have an object model or even an
offscreen model. In fact, for many systems this is unacceptable, depending
on the device you are operating.
>
> a. DOM does not provide any information or the emulation of controls for
> the other parts of the user interface (i.e. controls, menus, status
> lines, dialog boxes). This information needs to come from a non-DOM
> source. DOM will never provide information or control about these parts
> of the user interface.
>
While this is true today, it should not be true in the future. I see no
reason why we cannot create a DOM interface for system GUI components. The
way the web is going, it is going to be difficult to decide what is a GUI
component and what is not. Requiring assistive technologies to get
information from different non-DOM sources inhibits accessibility solutions
from being created on each platform. For example, constructing an offscreen
model to support a screen reader is a major effort. We should make an
effort, in future DOM releases, to ensure that web browsers provide access
to the whole browser and not just the client area.
> b. DOM does not have a defined interoperable interface for use by
> external programs. Some group members say this is not a major issue
> (including myself at times), but it is potentially a weak link if user
> agents running on the same platform use different methods to expose DOM.
> Assistive technology would then need to "know" where to look. Also, DOM
> does not have any conventions for simultaneous access to the DOM.
> How would DOM resolve manipulation requests from both the user agent and
> the assistive technology?
> How would the user agent tell the AT that it changed something?
>
One of my recommendations for DOM access is to ensure that access to the
DOM be reentrant. In fact, I am going to bring this issue up at the DOM
working group tomorrow. This can be accomplished internally by semaphore
protection in a shared library, or through a DOM manager that services
requests, etc.
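As an illustration only (the class and method names below are hypothetical,
not part of any existing API), such a manager could funnel every read or
mutation through one synchronized entry point, so that requests from the
user agent and from an assistive technology are serialized against the
shared tree:

    // Hypothetical sketch of a DOM manager that serializes shared access.
    public class DomManager {
        private final Object documentRoot;   // stand-in for the shared DOM tree

        public DomManager(Object documentRoot) {
            this.documentRoot = documentRoot;
        }

        // Both the user agent and an assistive technology submit requests
        // here; the intrinsic lock acts as the semaphore that serializes them.
        public synchronized Object service(DomRequest request) {
            return request.run(documentRoot);
        }

        // A request is any operation against the tree, e.g. a query or an edit.
        public interface DomRequest {
            Object run(Object root);
        }
    }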
A user agent would tell an AT that something changed by providing a
listening mechanism. We did this when defining the Java Accessibility API.
In that case, each object provided property change notification to
registered listeners. In the current object model terminology, we would
need to determine which nodes are designated as capture nodes for specific
events, so that we do not attach listeners to children that are not allowed
to fire these event notifications.
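In Java terms, a minimal sketch of that listener pattern might look like
the following (the AccessibleNode class and the "textContent" property name
are made up here for illustration; only the java.beans property change
machinery is real):

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    // Hypothetical node that notifies registered listeners when it changes.
    public class AccessibleNode {
        private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
        private String textContent = "";

        // An assistive technology registers here to be told about changes.
        public void addPropertyChangeListener(PropertyChangeListener listener) {
            changes.addPropertyChangeListener(listener);
        }

        // The user agent calls this when it updates the node; listeners are
        // notified with the old and new values.
        public void setTextContent(String newText) {
            String oldText = this.textContent;
            this.textContent = newText;
            changes.firePropertyChange("textContent", oldText, newText);
        }
    }

An AT would then register only on nodes designated to capture a given
event, e.g. node.addPropertyChangeListener(evt -> speak(evt.getNewValue())),
where speak() is whatever the AT does with the notification.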
> c. The use of DOM would require assistive technology to subclass the
> user agent as a special technology, and some assistive technology
> companies may find this requirement too restraining as the primary
> mechanism for accessibility, especially on MS-Windows platforms that
> have accessibility models based on Active Accessibility. Denis Anson
> made a good point. If we push this type of technique, it means that
> users with disabilities will need to wait for AT developers to provide
> access to new implementations of DOM. More general techniques, like
> Active Accessibility, offer improved timeliness with new releases of
> user agents.
First, this is not true, because users will use whatever means they use
today to access the information, or continue to do without it. If we keep
relying on reverse-engineered solutions, or solutions that only suit a
particular platform, disabled users will continue to be left behind in
favor of a short-term fix. This issue needs to be reconsidered.
I will be at the DOM working group meeting for the next two days, but I
would like the User Agent group to think hard on this.
Rich
Rich Schwerdtfeger
Lead Architect, IBM Special Needs Systems
EMail/web: schwer@us.ibm.com http://www.austin.ibm.com/sns/rich.htm
"Two roads diverged in a wood, and I -
I took the one less traveled by, and that has made all the difference.",
Frost
Charles McCathieNevile <charles@w3.org> on 02/08/99 01:39:46 PM
To: Jon Gunderson <jongund@staff.uiuc.edu>
cc: w3c-wai-ua@w3.org (bcc: Richard Schwerdtfeger/Austin/IBM)
Subject: Re: PROPOSAL: Assistive Technology Checkpoints in the Guidelines
As a preliminary comment, most of these things are covered under the
general principle 'provide device-independent access to all functionality
of the user agent' (which seems a lot like the current checkpoint 3.1.1).
The difference is that this proposal is splitting out particular functions
and requiring them (of particular browsers, in the current incarnation).
The issue of whether to implement the W3C recommendation for DOM is
separate.
Charles McCathieNevile
On Mon, 8 Feb 1999, Jon Gunderson wrote:
Based on feedback from the group, I think our current checkpoints related
to assistive technology compatibility need to be reconsidered for the
following reasons:
1. The current checkpoints for compatibility read more like techniques
than statements of assistive technology needs.
2. DOM can only be considered as part of a solution for Desktop user
agents for the following reasons:
a. DOM does not provide any information or the emulation of controls for
the other parts of the user interface (i.e. controls, menus, status lines,
dialog boxes). This information needs to come from a non-DOM source. DOM
will never provide information or control about these parts of the user
interface.
b. DOM does not have a defined interoperable interface for use by external
programs. Some group members say this is not a major issue (including
myself at times), but it is potentially a weak link if user agents running
on the same platform use different methods to expose DOM. Assistive
technology would then need to "know" where to look. Also, DOM does not
have any conventions for simultaneous access to the DOM.
How would DOM resolve manipulation requests from both the user agent and
the assistive technology?
How would the user agent tell the AT that it changed something?
c. The use of DOM would require assistive technology to subclass the user
agent as a special technology, and some assistive technology companies may
find this requirement too restraining as the primary mechanism for
accessibility, especially on MS-Windows platforms that have accessibility
models based on Active Accessibility. Denis Anson made a good point. If we
push this type of technique, it means that users with disabilities will
need to wait for AT developers to provide access to new implementations of
DOM. More general techniques, like Active Accessibility, offer improved
timeliness with new releases of user agents.
So I would like to suggest five checkpoints for people to think about,
criticize, modify and/or comment on:
** The following checkpoints are based on the assistive technology point
of view **
Checkpoint 6.2.1 [Priority 1] Allow assistive technology to access
information about the current user interface controls (windows, menus,
toolbars, status bars, dialog boxes).
Primary techniques: Accessibility APIs or use of operating system standard
controls.
Checkpoint 6.2.2 [Priority 1] Allow assistive technology to simulate the
selection and activation of user interface and document controls (windows,
menus, toolbars, status bars, dialog boxes).
Primary techniques: Accessibility APIs or use of operating system standard
controls.
Checkpoint 6.2.3 [Priority 1] Allow assistive technologies to access
information about the content currently being rendered by the user agent.
Primary techniques: Accessibility APIs that provide information on document
rendering and/or the DOM.
Checkpoint 6.2.4 [Priority 1] Allow accessibility features (accessibility
flags and interfaces) of the operating system to provide alternative
rendering information and user interfaces for the user agent.
Checkpoint 6.2.5 [Priority 2] Allow assistive technology to change the
rendering of document information in the user agent.
Rationale: In some cases it may be useful for the assistive technology to
change the rendering of a document. For example, for a person with certain
types of visual learning disabilities it may be important to simplify the
rendering of the document and allow the person to use the mouse to point
at objects and have the contents of the object spoken to them. It could
also be used for table linearization, if the assistive technology felt
that was the best way to provide access to table information.
Jon Gunderson, Ph.D., ATP
Coordinator of Assistive Communication and Information Technology
Division of Rehabilitation - Education Services
University of Illinois at Urbana/Champaign
1207 S. Oak Street
Champaign, IL 61820
Voice: 217-244-5870
Fax: 217-333-0248
E-mail: jongund@uiuc.edu
WWW: http://www.staff.uiuc.edu/~jongund
http://www.als.uiuc.edu/InfoTechAccess
--
Charles McCathieNevile mailto:charles@w3.org
phone: +1 617 258 0992 http://purl.oclc.org/net/charles
W3C Web Accessibility Initiative http://www.w3.org/WAI
MIT/LCS - 545 Technology sq., Cambridge MA, 02139, USA