
Re: in defence of listener discovery (ISSUE-32, ACTION-84)

From: Al Gilman <Alfred.S.Gilman@IEEE.org>
Date: Wed, 29 Mar 2006 13:01:35 -0500
Message-Id: <p06110402c05056178d30@[]>
To: WebAPI WG <public-webapi@w3.org>
Cc: wai-liaison@w3.org


Thank you all for your questions.

** general

* Why in the DOM?

[Jonas, Maciej]
Actually, my main question is not why these functions are needed,
but why they are needed in the DOM API.

Per the User Agent Accessibility Guidelines, a compliant user agent
is one that supports the W3C Document Object Model. The reason is
that this is the one API which the W3C controls for which we can
prescribe interoperability with W3C content. From this API the user
agent can map a secondary API to the DOM, or the AT can access the
DOM directly for interoperability. Home Page Reader, Freedom
Scientific's JAWS, and the Fire Vox talking web browser all interact
with the DOM to provide their assistive technology solution.

It is the DOM that is responsible for getting an event handled by the
right handler. So it is the DOM that should tell you what that
handler will be, so that [usability cardinal rule] "the response of
the system to user action is predictable" when the application is
operated through a changed profile of input and output modalities.

Anything else is unnecessary replication of effort and error-prone.
Like maintaining an off-screen model from events scraped from the
screen-driver communication.

* Why this functionality?

In order to provide device-independent access to a web application or
document, assistive technologies need to be able to enumerate the
event handlers to find out which ones are available for later
triggering. This will become even more important when we go to XML
Events version 2, where each event handler will have a purpose
associated with it.

With this function an assistive technology can list the event
handlers in a context menu and allow the user to trigger those
events. Persons with disabilities want to make a decision as to which
events they are triggering. When accompanied by the purpose of the
event handler, the user can select the event to be triggered and
execute it. In Java Accessibility and Gnome Accessibility we have the
ability to enumerate the accessible actions for a given object by
name and allow the user to activate them in a device-independent way.
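The pattern described above, in the spirit of Java Accessibility's named
accessible actions, could be sketched as follows. All names here
(`AccessibleActions`, `register`, `doAction`) are illustrative only; no such
DOM API is standardized, and this is a hand-rolled stand-in, not an existing
interface.

```javascript
// A minimal, hypothetical sketch: each object exposes named actions with
// a stated purpose, which an AT can enumerate into a context menu and
// invoke without knowing the original input device.
class AccessibleActions {
  constructor() {
    this.actions = new Map();
  }
  // The page or user agent registers a named behavior with its purpose.
  register(name, purpose, handler) {
    this.actions.set(name, { purpose, handler });
  }
  // AT-facing: list available actions with their purposes.
  list() {
    return [...this.actions].map(([name, a]) => ({ name, purpose: a.purpose }));
  }
  // AT-facing: trigger an action by name, independent of mouse or keyboard.
  doAction(name) {
    const a = this.actions.get(name);
    if (!a) throw new Error(`unknown action: ${name}`);
    a.handler();
  }
}

// Usage: behaviors are registered once; the AT enumerates and fires them.
const widget = new AccessibleActions();
let expanded = false;
widget.register('toggle', 'Expand or collapse the section', () => {
  expanded = !expanded;
});
console.log(widget.list()); // one entry: name 'toggle' with its purpose
widget.doAction('toggle');
console.log(expanded); // true
```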

As we go toward Rich Internet Applications (DHTML, AJAX, Comet, etc.)
we will want to enumerate functions for a given object and activate
them programmatically and device independently.

If the Web API group is going to operate in this space, it must
address accessibility, which includes device independence.

** Specific points:
Is the accessibility code intended to be run by javascript living in the
webpage? As in, is the author of the webpage supposed to use these APIs?
Or are they intended to be used by authors of accessibility tools like
screen reader plugins? Or are they intended for the browser itself?

It is required for third-party Assistive Technologies such as screen
readers and on-screen keyboards.

Similar functionality may also be implemented as scriptlets or browser
extensions.

Mostly this is for software the user selected and configured into their
client configuration to get information as to what the author has done,
not for the initial author to use.

Here's the UAAG language:


This is for customers of those interfaces, both from script and from
third-party programs.

I also think that knowing "will an event listener fire" is not the
most important thing for assistive software to know; more important
is some concept of "role". For instance, if an element has a
mousedown listener, is that because it is a button, a drag source, an
editable area, a toggle, or something that makes a sound on click in
a capturing listener but passes the click through? My experience with
Safari's accessibility implementation is that adding willTriggerNS,
hasListenerNS DOM APIs would not help much.

Absolutely, we want to know what the handler does, not just
what does it.

But we are looking to get this from the authors in the documents, not
from new functions in the DOM. That is in the roadmap for XML Events 2.
Descriptions of the handlers, as mentioned above.

Meanwhile, the ability to activate the behaviors is necessary, and
even more primitive than good information in advance on what they
will do. If you can't activate them, you are dead in the water. With
inadequate labels for what they will do, you can still try and see.
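The "try and see" activation is the simplest piece: the AT synthesizes the
event the handler listens for. A minimal sketch, using Node's built-in
EventTarget as a stand-in for a DOM element (in a browser this would be a
real element and, for mouse events, a MouseEvent):

```javascript
// Device-independent activation: no mouse required, just a synthetic event.
const button = new EventTarget();
let activated = false;
button.addEventListener('click', () => {
  activated = true;
});

// The AT fires the event programmatically instead of via a pointing device.
button.dispatchEvent(new Event('click'));
console.log(activated); // true
```

This is exactly why discovery matters: dispatching is trivial once you know
which events a node is listening for.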

And yes, we're working on getting 'role' information into the document,
even faster than the handler descriptions in XML Events 2.



>  Assistive technologies need to be able to discover these action
>  opportunities
>  in order to set up alternate actuation modalities for their users.
I don't understand why hasEventListener is required for this - an assistive
technology can just override the addEventListener method to catch all
registrations of events and store them - this is a lot more efficient on the
user agent than querying every element in the document for the few that have
events registered on them.

I'm not sure the idea is that the whole tree would be walked onLoad.
Perhaps the activity (event sensitivity) of the node would be checked
onFocus. IIRC we were talked out of DOM attributes (that would have
to be maintained everywhere) by the DOM WG. These methods were
inserted instead, because we could use them only when and where
needed.
Having the AT overwrite DOM core functions that all DOM
customers need sounds scary. Not sure WebAPI would really want
to go there.

Again, thanks for the good discussion.

Received on Wednesday, 29 March 2006 18:01:48 UTC
