Re: Guideline 2 & device independence

At 03:23 PM 9/22/99 -0700, Jon Gunderson wrote:
>Responses in JRG:
>At 10:47 AM 9/22/99 -0400, Marja-Riitta Koivunen wrote:
>>Sorry, but I still think guideline 2 is too device specific when it talks
>>about keyboard access.
>
>JRG: We included the guideline to specifically highlight the problem of
>keyboard support.  The group felt developers need to see this clearly in
>the guidelines.  It does not mean that only keyboard functionality should
>be provided, but that all functionality should be available through the
>keyboard.  And if your user agent does not support a keyboard, the
>checkpoints would not apply to your user agent.
>
>>
>>To understand it better, I will first explain how I think the system works
>>and then what I think we are trying to say at a higher level.
>>
>>An input device has any number of buttons, maybe location information, a
>>microphone, etc. The computer has a device driver that converts pressing a
>>button, saying a word, keying Morse code, etc. into a set of events that
>>the user agent can understand. When the UA receives the events, it can
>>activate functions.
>>
>>Some of the events activate a user-level function directly. These are
>>shortcuts to the functions, and the event names are often related to the
>>keyboard, e.g. "control X".
>>
>>Often in a graphical UI, events consist of button presses and pointer
>>movements. The location information from a pointing device is used to
>>decide which graphical object should handle the events and activate the
>>functions, and the object may in turn use the location information
>>internally to decide which function is activated.
>>
>>So I guess what we want here is to be able to activate functions directly,
>>without needing the pointing information, which may be hard to generate in
>>the device driver for certain non-pointing devices. In other words, we want
>>direct shortcuts to the functionality so that non-pointing devices can
>>easily provide access to it. The fact that the names at the event level
>>often come from the keyboard world does not mean we only want a keyboard.
>>For instance, the "control X" event could be created by the device driver
>>of a speech device when the user says "delete" or keys the Morse code
>>sequence "-..".
>>
>>So could we state GL 2 as something like "Provide direct shortcuts to the
>>functionality of the user interface (that can be activated by non-pointing
>>devices)"?
>
>JRG: We want all functionality to be accessible to the user, not just
>what the developer thinks will be important or frequently used.
>
>>
>>Then the checkpoints would probably need to be rephrased a little, but the
>>keyboard can be used as an example.
>>
>>What do you think?
>
>JRG: I think this is old territory for the group.  I don't think there is
>any new information here that we have not already discussed.

There are many other territories that can be discussed on the list when
people have concerns. I don't understand why this one should be any
different.

When I compare it to the other guidelines, this one stands out as being on a
totally different level, and I tried to see if that could be helped. Sorry
about that.
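
To make concrete what I mean by direct, device-independent shortcuts, here
is a rough sketch (in TypeScript, with purely illustrative names; it is not
taken from any real user agent). Each driver maps its own raw input onto the
same named command, so the pointing information is never needed:

  type Command = () => void;

  // The user agent's command table: every piece of functionality is
  // reachable by name, with no pointer coordinates required.
  const commands: { [name: string]: Command } = {
    "delete-selection": () => console.log("selection deleted"),
    "activate-link": () => console.log("link activated"),
  };

  // Each device driver translates its own raw input into a command name.
  const keyboardBindings: { [input: string]: string } =
    { "control X": "delete-selection" };
  const speechBindings: { [input: string]: string } =
    { "delete": "delete-selection" };
  const morseBindings: { [input: string]: string } =
    { "-..": "delete-selection" };

  // The UA dispatches by command name, so any device whose driver can
  // produce that name can activate the function directly, without any
  // pointing information.
  function dispatch(bindings: { [input: string]: string },
                    input: string): void {
    const name = bindings[input];
    if (name !== undefined && commands[name] !== undefined) {
      commands[name]();
    }
  }

  dispatch(keyboardBindings, "control X"); // keyboard user
  dispatch(speechBindings, "delete");      // speech user
  dispatch(morseBindings, "-..");          // Morse code switch user

Whether the mapping lives in the driver or in the UA is a detail; the point
is that every function is reachable by name, and the keyboard is just one
device that can produce those names.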

Marja

>>
>>Marja
>
>Jon Gunderson, Ph.D., ATP
>Coordinator of Assistive Communication and Information Technology
>Chair, W3C WAI User Agent Working Group
>Division of Rehabilitation - Education Services
>University of Illinois at Urbana/Champaign
>1207 S. Oak Street
>Champaign, IL 61820
>
>Voice: 217-244-5870
>Fax: 217-333-0248
>E-mail: jongund@uiuc.edu
>WWW:	http://www.staff.uiuc.edu/~jongund
>		http://www.w3.org/wai/ua
>		http://www.als.uiuc.edu/InfoTechAccess

Received on Wednesday, 22 September 1999 17:00:15 UTC