- From: Al Gilman <asgilman@access.digex.net>
- Date: Tue, 10 Nov 1998 21:42:48 -0500 (EST)
- To: charlesn@srl.rmit.EDU.AU (Charles McCathieNevile)
- Cc: w3c-wai-ua@w3.org
to follow up on what Charles McCathieNevile said:

> I don't think the answer (especially in a general set of
> guidelines) to interface problems is to allow keyboard control
> specifically. I think the answer is to allow all controls to be
> exposed to third party technology. However, for any UA which
> has a keyboard interface it seems like a priority 1 that the
> keyboard be able to operate all controls.

> (ie it is more important that the controls are exposed in
> general, but only just. Keyboard is the single most important
> specific device)

Sorry for possibly muddying what seems like a simple question, but I am just beginning to realize that what we are looking at in the command area is not just the ability to bind a set of commands to different tokens that go with different devices, but that there needs to be a sliding scale of _protocols_ available for commanding.

Over the last few days I have had the pleasure of watching Gregg demonstrate the EZ Access modes for touch-screen kiosks that have been developed at Trace. One of the modes is adaptive for people with poor fine motor control or a tremor. This is a typing emulator with a two-step transaction for each symbol recorded (see the sketch at the end of this message). The semi-controlled hand waves past the touch screen until a sensitive area is brushed and a letter gains the focus because a touch is sensed. Then, if this letter is intended, it is committed by pressing the big green diamond button. One of the traits of the latter is a generous guardband around it: you don't activate anything else by mistake when going for it. The touch screen can be used by people whose motor control is not as fine as the grid of sensitive spots in the alphabet typing screen requires, because of the two-phase commit, where a letter is not committed by a screen touch alone.

The conventional mouse command modality is a stateful protocol as well, because mouse motion is relative and is accumulated by the screen cursor. That is what breaks when the user is blind: not the ability to manipulate the mouse, but the ability to process the state feedback or orientation given by the cursor.

Al
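[Editor's note: a minimal sketch of the two-step transaction Al describes, written as a small state machine. The names (TwoPhaseTyping, touch, confirm) are illustrative assumptions, not the actual Trace/EZ Access implementation.]

    # Two-phase commit for typing: a touch only moves focus; a
    # separate confirm press is what actually records a letter.
    class TwoPhaseTyping:
        def __init__(self):
            self.focused = None   # letter currently highlighted, if any
            self.committed = []   # letters actually recorded

        def touch(self, letter):
            # Phase 1: brushing a sensitive area merely moves the
            # focus; a stray touch has no lasting effect.
            self.focused = letter

        def confirm(self):
            # Phase 2: the big, well-guarded confirm button records
            # the focused letter and clears the focus.
            if self.focused is not None:
                self.committed.append(self.focused)
                self.focused = None

    typing = TwoPhaseTyping()
    typing.touch("c")    # brushed by accident; nothing is recorded
    typing.touch("a")    # focus settles on the intended letter
    typing.confirm()     # green-diamond press commits "a"

The point of the split is that the precision demanded of the user drops: focusing tolerates wild motion, and only the commit button, with its generous guardband, needs a deliberate press.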
Received on Tuesday, 10 November 1998 21:42:17 UTC