- From: Ian B. Jacobs <ij@w3.org>
- Date: Thu, 20 Dec 2001 12:43:59 -0500
- To: w3c-wai-ua@w3.org, plh@w3.org, asgilman@iamdigex.net, jongund@uiuc.edu, rayw@netscape.com
- CC: www-dom@w3.org
Hello,

To prepare for today's meeting on DOM events and requirements for
accessibility, I've summarized where I think we are. Please let me
know what I've missed so that we are sure to address the key points
at the meeting. The two main points seem to be the following:

 1) What's the best way to ensure that assistive technologies can
    identify and trigger event handlers?

 2) What's the best place to describe the semantics of
    author-specified behaviors?

------------------
1) What's the best way to ensure that assistive technologies can
   identify and trigger event handlers?
------------------

Goal: When the author makes available a functionality that is
available with one input device, provide access to that functionality
via another input device.

Event types are generally defined by a format (e.g., onMouseDown,
onFocus, etc.). A given node may have zero or more listeners (i.e.,
programs) associated with a given event type.

From what I understand, DOM 2 allows programs to dispatch an event of
a given type to a node. DOM 2 does not allow per-listener activation,
but I don't think that's an accessibility requirement. So the
dispatch solution seems to suffice (see the first sketch appended
after my signature).

DOM 2 does not allow programs to query a node to know whether there
are event handlers of a given type associated with the node. The UAWG
initially asked that the DOM WG solve this by making the list of
listeners available. The DOM WG replied that there is a better
solution: a boolean function that returns true or false depending on
whether a node has handlers of a given type. So the query solution
seems to suffice (see the second sketch below).

Have we converged? What's missing in terms of query and activation?

------------------
2) What's the best place to describe the semantics of
   author-specified behaviors?
------------------

Point one is about device-independent access. Point two is about the
user interface.

Goal: Provide users with clues about expected behavior when they do
not have access to the usual clues of the "primary" output mode.

Use case: The mouse user drags the mouse over a piece of the screen,
and a visual-only cue suggests that, by clicking, the user will have
access to another page of information. What's a good way to let the
non-visual/non-mouse user know to "click here"?

There have been suggestions that formats should allow authors to
describe behavior in markup. Imagine the following use case (see the
third sketch below):

 * The user moves focus to an enabled piece of content.
 * The assistive technology queries the node and learns that there
   are onMouseOver and onMouseDown event handlers associated with it.
 * Either the user queries the content or the AT informs the user
   automatically that interaction is possible.
 * The user queries the content for the author-specified description
   of what is expected to happen on activation. Based on that
   information, the user makes a decision.

There have been discussions about whether the author should describe
behavior at the event level or at a higher semantic level. I think
this is a topic where more discussion is required: how should formats
let authors describe available behaviors? This is a format question
more than a DOM question.

 - Ian

--
Ian Jacobs (ij@w3.org)   http://www.w3.org/People/Jacobs
Tel:                     +1 718 260-9447
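P.S. To make the discussion concrete, here are three rough sketches,
written as TypeScript against the DOM bindings. They are
illustrations only, not proposals.

First, the dispatch solution. DOM 2 Events does provide the
createEvent/initEvent/dispatchEvent path used here; the "activate"
wrapper is just a name I made up:

    // Sketch 1: synthesize an event of a given type and dispatch it
    // to a node, triggering whatever listeners the author registered
    // for that type -- no mouse required.
    function activate(node: Element, type: string): boolean {
      const evt = document.createEvent("Event"); // DOM 2 factory
      evt.initEvent(type, /* bubbles */ true, /* cancelable */ true);
      // dispatchEvent returns false if a listener cancelled the
      // event's default action.
      return node.dispatchEvent(evt);
    }

    // e.g., activate(link, "click") gives a keyboard or speech user
    // access to functionality the author wired to mouse clicks.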
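Second, the query solution. This one is hypothetical: DOM 2 defines
no such function, and "hasEventListener" is only my guess at the
shape of the boolean function the DOM WG described:

    // Sketch 2: a boolean query -- true if the node has at least one
    // listener registered for the given event type. (Not in DOM 2.)
    interface QueryableEventTarget extends EventTarget {
      hasEventListener(type: string): boolean;
    }

    // How an AT might use it when focus lands on a node: report
    // which event types the author has wired up there.
    function wiredEventTypes(node: QueryableEventTarget): string[] {
      const candidates = ["click", "mouseover", "mousedown", "focus"];
      return candidates.filter((type) => node.hasEventListener(type));
    }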
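Third, the use case from point 2. Everything here is invented --
"behavior-description" is not an attribute of any format; it only
illustrates what "describe behavior in markup" might let an AT do:

    // Sketch 3: pair the query (sketch 2) with an author-specified
    // description of the expected behavior, so a non-visual user can
    // decide whether to activate.
    function announce(
      node: Element & QueryableEventTarget
    ): string | null {
      if (!node.hasEventListener("click")) return null; // nothing wired
      // Invented attribute name, purely for illustration.
      const description = node.getAttribute("behavior-description");
      return description ?? "This element responds to activation.";
    }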
Received on Thursday, 20 December 2001 12:44:04 UTC