- From: Ian Jacobs <ij@w3.org>
- Date: Wed, 29 Sep 1999 11:33:10 -0400
- To: Marja-Riitta Koivunen <marja@w3.org>
- CC: ian@w3.org, w3c-wai-ua@w3.org
Marja-Riitta Koivunen wrote:
>
> While thinking about conformance I was looking at the guidelines and
> checkpoints again. I still don't like the word "keyboard" in Guideline 2. I
> also think keyboard access is not what we want to say in many checkpoints,
> e.g. in
>
> 2.1 By default and without additional customization, ensure that all
> functionalities offered by the user agent are accessible using the keyboard.
>
> So you could use the keyboard arrow keys to point and some other key to
> select and still conform? Or what about my laptop keyboard with a finger
> mouse built into it?
>
> I think we want to say something about offering a direct mapping from input
> device keys to the functionalities, as opposed to a spatial mapping with
> pointing and graphical objects. In the first case we usually have many keys
> or key combinations that the user needs to remember, but no need to point or
> see. In the latter case we need to remember just a few keys and some way to
> point in 2D (or 3D). If we can present the activation of functionalities
> with graphical objects or by using force feedback, it often helps memory, but
> it is slower to get to the functions.
>
> I think both mappings are important. The point-and-click UI with an explorable
> memory aid (e.g., graphical objects, a sound map, a force feedback map) helps
> cognitively disabled users (and everyone with human memory); the direct mapping
> helps motorically disabled users because some key, Morse code, etc. can be
> mapped directly to the function without the need to go through the spatially
> located object.

There is an analogy with serial access to links (which provides context
as you go) and direct access (which is faster, but requires more experience).
Explaining the utility of both for access to UA functionality would be
useful (we already do so for navigation).

However, I think your abstraction overlooks the need that motivated this
Guideline: assistive technology today relies on software using the standard
OS keyboard API (please correct me if I'm wrong). This Guideline is less
abstract than others since it addresses today's technology and today's
requirements. Device independence captures the principle, while talking
about the keyboard API captures today's need. I am not yet convinced (but
still open!) that presenting the Guideline as requiring direct v. serial
access to UA functionality will adequately address the requirement of
today's technology.

Please let me know if my comments reflect an understanding of your
suggestion.

Thank you,

 - Ian

> A separate thing is then how to present all this. If the user can see, she
> can have a memory aid on the screen (or even on paper) also for directly
> mapped keyboard events; if she cannot, she needs to rely more on memory. On
> the other hand, she may use spatial mapping and exhaustive spatial search
> with sound or force feedback to help her memory. The graphical object model
> provides the memory aid naturally, but it can also be badly designed.
>
> Marja

--
Ian Jacobs (jacobs@w3.org)   http://www.w3.org/People/Jacobs
Tel/Fax:                     +1 212 684-1814
Cell:                        +1 917 450-8783
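[Editor's note: the sketch below is not part of the original exchange. It is a minimal, hypothetical illustration of the "direct mapping" Marja describes and the keyboard-API point Ian raises: each user-agent function is bound straight to a key chord, so software driving the standard keyboard input path can reach it without locating any on-screen object. The command names and bindings are invented for illustration.]

```typescript
// Hypothetical direct mapping: key chord -> user-agent function.
type Command = { name: string; run: () => void };

// Illustrative commands only; a real user agent exposes many more.
const keyBindings: Record<string, Command> = {
  "Ctrl+R": { name: "reload",         run: () => console.log("reload page") },
  "Ctrl+L": { name: "focus-location", run: () => console.log("focus location bar") },
  "Tab":    { name: "next-link",      run: () => console.log("move to next link") },
};

// Because the chord maps straight to the function, assistive technology that
// generates standard keyboard input can trigger it directly. A spatial mapping
// would instead require hit-testing a pointer position against graphical
// objects before the same function could run.
function handleKeyChord(chord: string): void {
  keyBindings[chord]?.run();
}

handleKeyChord("Ctrl+R"); // -> "reload page"
```

The only point of the sketch is that the mapping is key-to-function rather than key-to-pointer-to-object; the binding set and dispatch mechanism are assumptions, not anything specified by the guidelines.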
Received on Wednesday, 29 September 1999 11:33:30 UTC