Re: Next events meeting: 17 Jan 2002 @ 4pm ET

Hi Ray,

Thanks for sending this proposal. Fortunately, I have
Arnaud Le Hors at my side and he has explained some of the
subtleties of your proposal to me, which I will try to 
summarize below (with some additional comments).

 - Ian

Ray Whitmer wrote:
> 
> I offer the following as my preferred alternative to every proposal I
> have heard to change the markup language with respect to events to make
> pages more accessible.  It addresses short and long term needs:
> 
> Define two simple new events:
> 
> interface Action : Event {
>   readonly attribute DOMString name;
> };

Summary: This is a proposal for a device-independent, generic event
type. Each instance of an event of this type has a name, presumably
corresponding to some semantic event, such as "submit", "moreOptions",
"startOver", etc.

Comment: This seems like a good idea in principle. However, authors
may be reluctant to stray from the device-specific handlers they are
familiar with.
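
For concreteness, here is a minimal sketch of the pattern in plain
JavaScript. The Action interface does not exist today, so I model it
with an Event subclass; EventTarget stands in for a DOM node, and the
action name "moreOptions" is illustrative only.

```javascript
// Sketch only: ActionEvent is my stand-in for the proposed Action
// interface, carrying the semantic action name.
class ActionEvent extends Event {
  constructor(name) {
    super("action");
    this.name = name; // the semantic action name from the proposal
  }
}

const node = new EventTarget(); // stands in for a DOM node
const handled = [];

// A semantic handler dispatches on the action's name, not on the
// device event (click, keypress, ...) that produced it:
node.addEventListener("action", (ev) => {
  if (ev.name === "moreOptions") handled.push(ev.name);
});

// Any input modality (mouse, keyboard, voice) can fire the same event:
node.dispatchEvent(new ActionEvent("moreOptions"));
console.log(handled); // → ["moreOptions"]
```

The point is that mouse, keyboard, and voice input can all funnel into
the same named semantic event, so the handler never needs to know which
device produced it.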

> interface ActionList : Event {
>   void addAction(in DOMString name, in DOMString description);
> };
> 
> When you want a list of actions on a target, you fire the ActionList
> event at the target, and listeners list the appropriate semantic actions.
>  Then you put them on the menu.  When the user selects one, you fire the
> Action event with the selected name, and listeners handle the semantic
> actions.

Summary:

 - The assistive technology provides an object that implements the
   interface "ActionList".
 - To the DOM implementation, this makes that object look like an
   event, which can therefore be fired on any node in the tree. In
   effect, this implements the query: the AT can fire an "ActionList"
   event at any node, and when the dispatch is done (i.e., after
   capture and bubble), the object contains the relevant information.
 - The relevant information is accumulated as follows: any event
   listener in the DOM tree that is paying attention to "ActionList"
   events is expected to call the method "addAction", providing the
   name of the action it handles (e.g., "moreOptions") and a
   description of what it does when a "moreOptions" Action event is
   fired.
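
The query can be sketched the same way (again plain JavaScript;
ActionListEvent and the action names are my own stand-ins, not part of
the proposal):

```javascript
// Sketch of the ActionList query: the AT constructs an event object
// that accumulates (name, description) pairs, fires it at a node,
// and reads the result once dispatch (capture and bubble) completes.
class ActionListEvent extends Event {
  constructor() {
    super("actionlist");
    this.actions = [];
  }
  addAction(name, description) {
    this.actions.push({ name, description });
  }
}

const node = new EventTarget(); // stands in for a DOM node

// A page script advertises the semantic actions it supports:
node.addEventListener("actionlist", (ev) => {
  ev.addAction("moreOptions", "Show additional options for this item");
  ev.addAction("startOver", "Reset the form to its initial state");
});

// The AT (or a context menu) performs the query; dispatch is
// synchronous, so the list is complete when dispatchEvent returns:
const query = new ActionListEvent();
node.dispatchEvent(query);
console.log(query.actions.map((a) => a.name)); // → ["moreOptions", "startOver"]
```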
 
> No more broken models, this is a design for the future.  The advantages
> are great:
> 
> It is compatible with many different models, such as VoiceBrowser,
> screen readers, and about any other UI I can contemplate, including
> standard web browsers -- I think it would be great to have access to the
> actions of a node from the right-click popup menu on all browsers, and
> the more browsers that display it, the more compelling it is to support
> it in web pages.  For users who are using hardware-generated events,
> there is at last a sensible strategy:  put the semantic part of all
> actions into the semantic handler, and then fire semantic events from
> them as quickly as possible.
> 
> It also permits the list of actions to adapt itself to the current state
> of the document.  It permits event handlers to be placed anywhere within
> the path of the current delivery, and when capturing or bubbling events
> at some higher place in the hierarchy, a handler right next to the one
> receiving the action decides which events should have a particular event
> made available to them.  This model is relatively perfect, compared to
> the alternatives.
> 
> This solves all the problems with no DOM interface changes except the
> events, that any interested implementation can add at any time.  The
> compatibility issues seem manageable, depending upon the details of how
> these are declared which needs to be worked out with the appropriate groups.
> 
> With a very good solution for the future in place, I respectfully
> suggest that the only short-term solution we should consider is a
> boolean test for listeners, wherever they may be declared, that receive
> events of a specified type originating on a specified node.  Otherwise,
> as requested previously, please give more specific use cases why more
> than this is essential.
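
Such a boolean test does not exist in the DOM today, but its intent can
be sketched by a target that records registrations (the class and the
"hasListener" method below are hypothetical):

```javascript
// Hypothetical sketch of the short-term proposal: a target that can
// answer "is anyone listening for events of type T here?".
class ObservableTarget extends EventTarget {
  #counts = new Map(); // event type -> number of registrations
  addEventListener(type, listener, options) {
    super.addEventListener(type, listener, options);
    this.#counts.set(type, (this.#counts.get(type) ?? 0) + 1);
  }
  hasListener(type) {
    return (this.#counts.get(type) ?? 0) > 0;
  }
}

const node = new ObservableTarget();
console.log(node.hasListener("click")); // → false
node.addEventListener("click", () => {});
console.log(node.hasListener("click")); // → true
```

(A real implementation would also have to track removeEventListener and
the deduplication of identical registrations; this sketch ignores both.)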

Here is a problem that has not been discussed yet: the UAAG 1.0
model assumes that the user can move the keyboard focus to any
"enabled element" in the document and trigger associated actions.
For HTML, enabled elements include (in the UAAG 1.0 CR) elements
with explicit event handlers. If the user agent does not know that
there are handlers on a given node, it will not be able to include
that node as a stopping point for the focus. In short, navigation
and interactivity are intimately bound in the UAAG 1.0 model.

True, one can find out from existing DOM calls whether a node has
an explicit event handler, by looking at its "on*" attributes.
However, that is not possible when handlers are attached by means
other than the document source (e.g., via addEventListener).

I think, therefore, that in order to have a reasonable number of
focus stops, knowing exactly where handlers appear may be
necessary.

I look forward to your comments. Thanks again, Ray, and to Arnaud
for his help in understanding the proposal.

 - Ian

-- 
Ian Jacobs (ij@w3.org)   http://www.w3.org/People/Jacobs
Tel:                     +1 718 260-9447

Received on Wednesday, 16 January 2002 22:43:20 UTC