RE: 2.1 thoughts

On Mon, 6 May 2002, Gregg Vanderheiden wrote:

  Chaals wrote

  this starts to make sense, but I don't like it - it should be possible in
  most cases to just use a mouse to drive everything, yet that doesn't seem
  to be supported here.

  Question,
  How would you enter text with a mouse?
  (Using an on screen keyboard doesn't count since that is a keyboard as
  far as the application is concerned).

response:
Yes, I mean that it should be possible to use an on-screen keyboard (as is
done in some kiosk-type environments). The point is that most users now prefer
to use a mouse, and in some situations (such as the EIAD browser designed for
people with brain injuries) they rely on a touch screen.

Using the keyboard to move around requires an abstraction of navigation.
Point and click doesn't, for folks who can use it. We need to support both
cases, I think.
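
To make that concrete, here is a rough TypeScript/DOM sketch - purely
illustrative, the ids and handler names are invented, and none of this is
proposed guideline wording. Pointer users act on the target directly; keyboard
(or voice) users first need a notion of "current item" and a way to step it
along before they can activate anything:

  // Sketch only: ids, roles and handler wiring invented for illustration.
  const items = Array.from(
    document.querySelectorAll<HTMLElement>('#menu [role="menuitem"]')
  );
  let current = 0; // the navigation abstraction: "where am I now?"

  function activate(item: HTMLElement): void {
    item.click(); // stand-in for whatever the item actually does
  }

  // Pointer users act on the target directly - no navigation state needed.
  items.forEach(item => {
    item.tabIndex = -1; // make each item programmatically focusable
    item.addEventListener('click', () => activate(item));
  });

  // Keyboard users first move the "current" marker, then activate it.
  document.addEventListener('keydown', e => {
    if (e.key === 'ArrowDown') current = (current + 1) % items.length;
    else if (e.key === 'ArrowUp')
      current = (current + items.length - 1) % items.length;
    else if (e.key === 'Enter') activate(items[current]);
    else return;
    items[current].focus();
  });

(The arrow keys here are just one binding; the same stepping could equally be
driven by "next"/"previous" voice commands.)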

  ALSO in response to

  5. All functionality operable via text input plus tab, up, down, left,
  right, and enter.

  (these are the text and command keys that we can be sure will be on all
  "keyboards" (real or virtual).)

  Chaals wrote

  No, they are not. One of my two keyboards doesn't have this. And
  speech-based systems don't have up, down, left, right as ways of relating
  things. This is too specific to visual environments.

  3 Questions
  1 - Which keys were missing? The arrow keys?

Yep, my phone doesn't have them, nor a tab key. And text entry isn't too
efficient either.

  2 - What speech input system doesn't provide a way to operate keyboard
  keys?  (One on a system without a keyboard?)

One which isn't simply a speech-input interface to a desktop computer model -
for example, a VoiceXML application or similar system. Again, my phone has
voice control of many functions, but not voice simulation of keyboard use.

  3 - How about
  --- Text input plus "step to next" (TAB) and "Activate" (ENTER).
  The arrow keys can be optional, but all functionality needs to be operable
  with text and the two functions.

Well, I prefer to start from "device-independent mechanisms, including direct
activation where available (e.g. voice, point and click) and navigation among
options (e.g. "next", "previous", "activate" using voice commands or keyboard
input)".
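
To sketch what that could look like (the class and method names here are mine;
only "next", "previous" and "activate" come from the wording above, so treat
it as an illustration rather than a proposal):

  // Sketch only: the three abstract commands are the real content here.
  interface UiTarget { label: string; activate(): void; }

  class Navigator {
    private index = 0;
    constructor(private targets: UiTarget[]) {}
    next() { this.index = (this.index + 1) % this.targets.length; }
    previous() {
      const n = this.targets.length;
      this.index = (this.index + n - 1) % n;
    }
    activate() { this.targets[this.index].activate(); }
    // Direct activation (pointer, touch, or speaking the label itself)
    // skips the stepping and goes straight to the target.
    activateDirect(label: string) {
      const hit = this.targets.find(t => t.label === label);
      if (hit) hit.activate();
    }
  }

  // Any input device maps onto the same commands:
  //   keyboard: Tab -> next(), Shift+Tab -> previous(), Enter -> activate()
  //   speech:   "next" -> next(), "activate" -> activate(), or say a label
  //             for direct activation
  //   pointer:  click or touch -> activateDirect(label under the pointer)
  const nav = new Navigator([
    { label: 'Search', activate: () => console.log('searching') },
    { label: 'Home',   activate: () => console.log('going home') },
  ]);
  nav.next();                   // step to "Home"
  nav.activate();               // runs the Home action
  nav.activateDirect('Search'); // direct activation, no stepping needed

The point of separating the commands from the bindings is that a phone, a
VoiceXML dialogue or a touch screen can each drive the same functionality
without having to pretend to be a keyboard.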

By the way, this is the kind of problem that the User Agent group has dealt
with fairly extensively - it would be worth asking for their thoughts, in my
humble opinion.

cheers

Chaals

Received on Monday, 6 May 2002 15:26:36 UTC