Guideline 2 & device independence

Sorry, but I still think guideline 2 is too device-specific when it talks
about keyboard access.

To understand it better, I will first explain how I think the system works
and then what I think we are trying to say at a higher level.

An input device has any number of buttons, and maybe location info, a
microphone, etc. The computer has a device driver that converts pushing a
button, saying a word, using Morse code, etc. into a set of events that the
user agent can understand. When the UA gets the events, it can activate
functions.
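
As a rough sketch (the type and function names below are something I
made up for illustration, not taken from any real driver API), I am
thinking of roughly this kind of layering, written here in TypeScript:

    // Whatever the device itself produces: a button push, a spoken
    // word, a Morse sequence, a pointer movement with coordinates, ...
    type RawInput =
      | { kind: "button"; id: string }
      | { kind: "word"; text: string }
      | { kind: "morse"; sequence: string }
      | { kind: "pointer"; x: number; y: number };

    // What the user agent understands: a named event, optionally
    // carrying location info when the device can provide it.
    interface UAEvent {
      name: string;                        // e.g. "control X"
      location?: { x: number; y: number };
    }

    // The device driver's job, abstractly: raw input in, UA events out.
    interface DeviceDriver {
      translate(input: RawInput): UAEvent[];
    }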

Some of the events activate a user-level function directly. These are
shortcuts to the functions, and often the event names are related to the
keyboard, e.g. "control X".
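
In sketch form (again, the names are only mine, for illustration), the
shortcut level is nothing more than a table from event names to
functions, with no location info involved:

    // Two user-level functions of the UA (bodies left out here).
    function cutSelection(): void { /* ... */ }
    function pasteClipboard(): void { /* ... */ }

    // Direct shortcuts: an event name activates a function with no
    // pointing information. The names happen to look like keyboard
    // chords, but nothing here requires a keyboard.
    const shortcuts: Record<string, () => void> = {
      "control X": cutSelection,
      "control V": pasteClipboard,
    };

    function handleShortcutEvent(eventName: string): void {
      const fn = shortcuts[eventName];
      if (fn) fn();
    }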

Often, in a graphical UI, events consist of button pushes and pointer
movements. The location info from a pointing device is used to decide which
graphical object should handle the events and activate the functions, and
the object may again use the location info internally to decide which
function is activated.
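
The pointer-driven path, by contrast, might look roughly like this
(still only an illustrative sketch):

    interface GraphicalObject {
      // true if the point (x, y) falls inside this object
      contains(x: number, y: number): boolean;
      // the object may use the location internally to pick the function
      activateAt(x: number, y: number): void;
    }

    // A button push at a pointer location: find the object under the
    // pointer and let it decide which function to activate.
    function dispatchPointerEvent(
      objects: GraphicalObject[], x: number, y: number
    ): void {
      for (const obj of objects) {
        if (obj.contains(x, y)) {
          obj.activateAt(x, y);
          return;
        }
      }
    }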

So I guess what we want here is to be able to activate functions directly,
without needing the pointing information, which may be hard to produce in
the device driver for certain non-pointing devices. In other words, we want
direct shortcuts to the functionality so that non-pointing devices can
easily provide them. The fact that the names at the event level often come
from the keyboard world does not mean we only want the keyboard. For
instance, the "control X" event could be created by the device driver of a
speech device when the user says "delete", or when the user produces the
Morse code sequence "-..".
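
As a sketch, two very different drivers can then end up producing
exactly the same event (the mappings below are only examples I made up):

    // A speech driver and a Morse driver, each mapping its own raw
    // input to the same device-independent event name.
    const speechWords: Record<string, string> = { "delete": "control X" };
    const morseSequences: Record<string, string> = { "-..": "control X" };

    function speechDriver(word: string): string | undefined {
      return speechWords[word];
    }

    function morseDriver(sequence: string): string | undefined {
      return morseSequences[sequence];
    }

    // Either way the user agent just sees "control X" and can activate
    // the corresponding function directly, with no pointing involved.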

So could we state GL 2 as something like "Provide direct shortcuts to the
functionality of the user interface (that can be activated by non-pointing
devices)"?

Then the checkpoints probably need to be rephrased a little, but the
keyboard can be used as an example.

What do you think?

Marja
