- From: Charles McCathieNevile <charles@w3.org>
- Date: Thu, 2 May 2002 20:01:56 -0400 (EDT)
- To: Gregg Vanderheiden <GV@TRACE.WISC.EDU>
- cc: <w3c-wai-gl@w3.org>
On Thu, 2 May 2002, Gregg Vanderheiden wrote:

> Some of the approaches from that:
>
> 1. Operable with device independent handlers
>    - does this include text input?
>    - what are these?
>    - do they apply to all technologies?
>    - Would they all be operable from keyboard? I think this is important.

There has been some discussion in the User Agent group about how to use DOM 2
to provide device-independent handlers - essentially the user agent decides
what triggers to offer, and can be configured. Character input is one approach
to triggering them; mouse/keyboard commands and menus are another. (A sketch
of this approach appears after this message.)

> 2. Operable from Keyboard
>    - assumes all devices have a keyboard?
>    - what keys on keyboard? (function keys too)?

This is too general (what keys? is there a keydown/keyup function? etc.) and
too specific - not all devices have a keyboard, and many have only a very
small one.

> 3. All functionality is operable using event handlers that can be activated
>    with commands.
>    - can text be input with commands?
>    - What does commands mean?

This just seems a bit vague about what it really means.

> 4. All functionality is operable via text input plus command operable events

This starts to make sense, but I don't like it - it should be possible in most
cases to drive everything with just a mouse, yet that doesn't seem to be
supported here.

> 5. All functionality operable via text input plus tab, up, down, left,
>    right, and enter. (these are the text and command keys that can be
>    ensured would be on all "keyboards" (real or virtual).)

No, they are not: one of my two keyboards doesn't have these keys, and
speech-based systems don't have up, down, left, and right as ways of relating
things. This is too specific to visual environments.

Just my 2c worth,

chaals
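A minimal sketch of the device-independent-handler idea discussed above, assuming a DOM Level 2 user agent in a browser context; the "save" element id and the saveDocument function are hypothetical names, not anything from the message itself.

```ts
// Bind behaviour to the abstract DOM Level 2 "DOMActivate" event rather than
// to a mouse- or keyboard-specific event. The user agent then decides which
// physical actions (mouse click, Enter key, voice command, menu entry)
// trigger the handler.

const saveButton = document.getElementById("save"); // hypothetical element id

function saveDocument(): void {
  // hypothetical application logic
  console.log("document saved");
}

if (saveButton) {
  // Device-independent activation: fires however the user activates the element.
  saveButton.addEventListener("DOMActivate", saveDocument, false);

  // Device-specific alternatives tie the same behaviour to a single device:
  // saveButton.addEventListener("click", saveDocument, false);   // mouse only
  // saveButton.addEventListener("keydown", saveDocument, false); // keyboard only
}
```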
Received on Thursday, 2 May 2002 20:01:57 UTC