- From: Al Gilman <asgilman@iamdigex.net>
- Date: Sun, 29 Aug 1999 14:16:55 -0400
- To: w3c-wai-pf@w3.org
- Cc: w3c-wai-ua@w3.org
At 07:55 AM 8/26/99 +0100, DPawson@rnib.org.uk wrote:

>Al wrote:
>> Only the user knows that the screen reader is talking; the web
>> browser doesn't. Same for the SMIL player.
>
>Is this something the SMIL group are aware of?

Yes, generally speaking. They have a system-captions flag. This is a channel for learning about user preferences concerning use of the user interface, from what the user has said globally to the operating system.

>Is it something we could do something about?

a) With some difficulty; it is largely in the hands of the operating system, and not the W3C, so far as I understand.

b) However, not to give up. Because it matters to accessibility, we should keep it on our watch list even if the circle of discussion has to go outside the W3C to arrive at a solution.

This is almost exactly what I have talked about in the past as a requirement: the way a Web document is handled, from DOM to display device, needs to be aware of the whole complement of devices and what they are supposed to do in the user's plan for how to employ their capabilities. The people at CWI developed this idea pre-SMIL as a bundle of channels. That is a congenial metaphor to me. Channel assignments are instances of [specialized or subclassed] device capabilities. Interaction capabilities of the document are associated with interaction capabilities of the user+devices via UI profiling. See the many discussions of keystroke profiling in the UA group.

Note: The critical connection here is the mouse and events. I don't think that we are going to eliminate mouse orientation from the design of user interactions. I expect that if we can do it right, the mouse-unaware abstraction in the DOM will be delivered to application authors with the mouse-aware application of that abstraction layered over it, in the macros that authors use in their page authoring environment.
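For the archive, the system-captions flag referred to above looks roughly like this in SMIL 1.0 markup. The file names here are illustrative only; the point is that the player consults the user's system-wide captions preference to decide whether the text stream is rendered:

```xml
<!-- SMIL 1.0 sketch: the captions track plays only when the user's
     operating-system preference for captions is set. Media file
     names are made up for illustration. -->
<smil>
  <body>
    <par>
      <video src="lecture.mpg"/>
      <textstream src="captions.rt" system-captions="on"/>
    </par>
  </body>
</smil>
```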
We need to model the interaction world, where the cursor interacts with the layout geometry, to come up with accurate models for the onMouseOver event and the like. It is not enough for the WAI to concern ourselves with a pure abstract document in the DOM. We need a model of the interaction environment as it is actually used, and we need to ensure that there is a way to uncouple that from specific devices. The author needs to be able to think "mouse" and the user to not think "mouse." Coming up with the mutual abstraction requires that we look hard both at the actual mousewise interaction and at our ideas of rhetorical abstractions.

In fact we care about the precise geometry of the GUI interaction for such things as people with fine motor control problems. [I have not reviewed the CSS DOM, so it may all be in there and I wouldn't know.] Fortunately, any touchscreen which people are supposed to operate with a finger and not a stylus has geometry-rule problems with the general run of web pages, so it is not just people with disabilities who care about the ability to check final geometry against geometry rules.

Al
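The decoupling argued for above can be sketched in script terms: the author binds behavior to an abstract intent, and an adapter maps whatever device events actually occur (mouse hover, keyboard focus) onto that intent, so the author thinks "mouse" while the user need not. All names here are illustrative, not drawn from any specification:

```javascript
// A minimal sketch of device-independent event dispatch.
// Raw device events are mapped onto abstract "intents" so the
// author's logic never mentions the mouse. Names are illustrative.
const intentMap = {
  mouseover: "point-at",  // pointing device hovers an element
  focus: "point-at",      // keyboard or screen reader reaches it
  click: "activate",      // pointing device activates it
  keyEnter: "activate",   // keyboard activates it
};

function makeDispatcher() {
  const handlers = {};    // intent name -> list of callbacks
  return {
    // The author registers behavior against the abstract intent only.
    on(intent, fn) {
      (handlers[intent] = handlers[intent] || []).push(fn);
    },
    // The adapter feeds in whatever raw device event occurred.
    dispatch(rawEvent, target) {
      const intent = intentMap[rawEvent];
      for (const fn of handlers[intent] || []) fn(target);
    },
  };
}

// Usage: the same description fires whether the user mouses over the
// link or tabs to it -- the interaction is authored once, mouse-free.
const ui = makeDispatcher();
const log = [];
ui.on("point-at", (t) => log.push("describe " + t));
ui.dispatch("mouseover", "link1"); // mouse user
ui.dispatch("focus", "link1");     // keyboard user
console.log(log);                  // [ 'describe link1', 'describe link1' ]
```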
Received on Sunday, 29 August 1999 14:09:39 UTC