RE: User Agent Accessibility Guidelines

The issue isn't really the keyboard; it's the availability of a "discrete
event" method, as opposed to a graphical method, of performing tasks.  The
visual interface using point-level control just isn't practical for those
who are not visual or for those who do not have point-level control.  Drag
and drop with a mouse isn't possible for a blind person (even with a haptic
interface, it isn't reasonable), and isn't very usable for a person using
voice input, though there are some close approximations.  But the same
issues arise for a telephone-based browser, or any "eyes-free, hands-free"
interface.

The only reason that "keyboard control" is mentioned at all, I think, is
that we think of it as the prototype for discrete event control: key
presses.  Most alternative input systems (other than mouse emulators)
generate keyboard characters at some level.  Keyboard codes are the
lingua franca of alternative input, so allowing keyboard control also allows
switch control, voice control, and so on.
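
To make that concrete, here is a minimal sketch (TypeScript, with invented
command names) of why key codes work as a common currency: once every
action hangs off a table of discrete events, any device that can emit
those codes gets the same control.

    // A command table keyed on discrete event codes. The codes and
    // handlers here are invented, purely for illustration.
    type Command = () => void;

    const commands = new Map<string, Command>([
      ["Enter", () => console.log("activate the focused element")],
      ["Tab",   () => console.log("move focus forward")],
      ["F6",    () => console.log("cycle between panes")],
    ]);

    // Every device funnels into the same entry point, so a switch
    // scanner or a voice engine only has to produce key codes.
    function onDiscreteEvent(code: string): void {
      commands.get(code)?.();
    }

    onDiscreteEvent("Tab"); // same effect whether typed, spoken, or switched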

Denis Anson, MS, OTR
Assistant Professor
College Misericordia
301 Lake St.
Dallas, PA 18612

Member since 1989:
RESNA: An International Association of Assistive Technology Professionals
Website: http://www.resna.org
RESNA ANNUAL CONFERENCE -- "RESNA 2000"
ORLANDO, FL, JUNE 28 -- JULY 2, 2000

-----Original Message-----
From: w3c-wai-ua-request@w3.org [mailto:w3c-wai-ua-request@w3.org] On
Behalf Of Charles McCathieNevile
Sent: Friday, October 01, 1999 7:47 AM
To: Kasper Peeters
Cc: Ian Jacobs; disc@mnemonic.org; w3c-wai-ua@w3.org
Subject: Re: User Agent Accessibility Guidelines

Although there are cases where keyboard control isn't that handy (think of a
speech-driven palmtop, or even more a pen-driven one), there are a lot of
cases where people cannot use a mouse effectively. (The obvious one is
people who are blind.)

For such people, all functionalities need to be available through the
keyboard. Some features already are, using a different metaphor, and I
cannot think of anything that cannot be sensibly and usefully implemented.

To take the drag and drop example:

Imagine the ability to select an object, grab it, and then go to another
object and ask the second one to do something to "whatever has been marked".
This describes, pretty clearly, drag and drop. It also describes the
keyboard technique used in Windows of select, copy, select, paste, using
application icons. For people without a useful spatial model (for example,
those who are using speech output and the completely linear navigation
available via the tab key), that is much more sensible than trying to drive
a mouse around and hoping to hit the things they are after.
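
As a sketch of that two-step metaphor (the key choice and object names
here are invented, not any toolkit's actual bindings):

    // Two-step "grab, then drop": the first press marks the focused
    // object, the second asks the newly focused object to act on it.
    let grabbed: string | null = null;

    function onSelectKey(focused: string): void {
      if (grabbed === null) {
        grabbed = focused;  // step 1: mark "whatever has been marked"
        console.log(`grabbed ${grabbed}`);
      } else {
        console.log(`${focused} receives ${grabbed}`);  // step 2: drop
        grabbed = null;
      }
    }

    onSelectKey("report.txt"); // grab the file under focus
    onSelectKey("trash");      // tab over to the trash and drop it there

Note that no pointer position appears anywhere: the whole interaction
works over the linear navigation the tab key already provides.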

The essential point is to abstract the user interface sufficiently that it
doesn't depend on a particular input or output device. Then it is possible
for people to use your software with the device they need, be it a full
combination of keyboard, force-feedback mouse, dataglove, voice I/O and a
24" monitor, or a head switch and a Morse code buzzer, or anywhere in
between.
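
In code, that abstraction might look something like this sketch (the
interfaces are invented for illustration): the application publishes named
commands, and each device driver maps its own events onto them.

    // The application knows nothing about devices; it only runs
    // named commands.
    interface CommandSet {
      run(name: string): void;
    }

    class Browser implements CommandSet {
      run(name: string): void {
        console.log(`browser executes: ${name}`);
      }
    }

    // Each driver translates its own device's events into the same
    // command vocabulary.
    class KeyboardDriver {
      constructor(private app: CommandSet) {}
      onKey(code: string): void {
        if (code === "Enter") this.app.run("activate");
      }
    }

    class MorseBuzzerDriver {
      constructor(private app: CommandSet) {}
      onPattern(pattern: string): void {
        if (pattern === ".-") this.app.run("activate"); // ".-" is Morse "A"
      }
    }

    const app = new Browser();
    new KeyboardDriver(app).onKey("Enter");
    new MorseBuzzerDriver(app).onPattern(".-");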

Keep up the feedback.

Charles

On Fri, 1 Oct 1999, Kasper Peeters wrote:


  There are two issues here: 1. do all mouse-driven manipulations have a
  useful keyboard equivalent and 2. is it a good idea to drive software
  by simulating keyboard events. For the first one, I think that there
  are definitely things that don't make much sense when done through the
  keyboard (drag and drop, for instance). As far as the second point is
  concerned, I think that the proper way to drive software by external
  means is to expose an API to the outside world. Granted, you list that
  somewhere else too (`make the browser scriptable', or something along
  those lines).
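
To illustrate what "expose an API" could mean in practice (a hypothetical
sketch, not any real browser's interface): an external tool calls typed
functions directly instead of synthesizing keystrokes.

    // A hypothetical scripting surface: external tools drive the
    // browser through typed calls rather than simulated key events.
    interface UserAgentAPI {
      navigate(url: string): void;
      focusNext(): void;
      activate(): void;
    }

    function followFirstLink(ua: UserAgentAPI): void {
      ua.navigate("http://www.w3.org/WAI/");
      ua.focusNext(); // deterministic: no timing-sensitive key simulation
      ua.activate();
    }

    // A stub implementation so the sketch runs end to end.
    const stub: UserAgentAPI = {
      navigate: (url) => console.log(`go to ${url}`),
      focusNext: () => console.log("focus next element"),
      activate: () => console.log("activate it"),
    };
    followFirstLink(stub);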
