direct and spatial mapping to functionalities

While thinking about conformance I was looking at the guidelines and
checkpoints again. I still don't like the word keyboard in guideline 2. I
also think keyboard access is not what we want to say in many checkpoints,
e.g. in

2.1 By default and without additional customization, ensure that all
functionalities offered by the user agent are accessible using the keyboard.

So you could use the keyboard arrow keys to point and some other key to
select, and still conform? Or what about my laptop keyboard with a finger
mouse built into it?

I think we want to say something about offering a direct mapping from input
device keys to the functionalities, as opposed to a spatial mapping with
pointing and graphical objects. In the first case we usually have many keys
or key combinations that the user needs to remember, but no need to point or
see. In the latter case we need to remember just a few keys and some way to
point in 2D (or 3D). If we can present the activation of functionalities
with graphical objects or with force feedback, it often helps memory, but
it is slower to get to the functions.

I think both mappings are important. The point&click UI with an explorable
memory aid (e.g. graphical objects, a sound map, a force feedback map) helps
cognitively disabled users (and everyone with a human memory); the direct
mapping helps motorically disabled users, because a key, Morse code, etc.
can be mapped directly to the function without the need to go through the
spatially located object.

A separate thing is then how to present all this. If the user can see, she
can have a memory aid on the screen (or even on paper) also for directly
mapped keyboard events; if she cannot, she needs to rely more on memory. On
the other hand, she may use spatial mapping and exhaustive spatial search
with sound or force feedback to help her memory. The graphical object model
provides the memory aid naturally, but it can also be badly designed.

Marja

Received on Wednesday, 29 September 1999 11:06:46 UTC