Re: proposed changes by Ian 1, 2, and 11

Proposed 1.2 should clearly note that both input and output device
independence is required - if I can use the keyboard to move the cursor
around, but then have to guess where on a vast screen devoid of landmarks
the links in the document might be, then the requirement has not been met.
This is what Marja has been talking about for the last month or two in
regard to spatial, sequential and direct access to functions of the User
Agent (which include interacting with active components on a page).

Proposed 1.4 may as well point out that it only applies where a keyboard
interface is supported at all. (EIAD is an example of an extremely good,
disability-specific desktop graphical User Agent which has no keyboard
interface).

Some responses to comments below:

On Tue, 26 Oct 1999 thatch@us.ibm.com wrote:
[snip]  
  quote. 11.2 Provide information to the user about the current input
  configuration for the keyboard, graphical user interface, voice
  commands, etc. [Priority 1]
  The current configuration is the result of a cascade of author-specified
  user interface information (e.g., "accesskey" or "tabindex" in HTML),
  browser defaults, and user-modified settings. endquote.
  
  Here is a P1 requirement based on what everybody agreed was a
  broken requirement of access key. There is *** No *** reason to
  document tab index. This is another checkpoint from left field.
  
CMN: The requirement is that the user can find out how the system they are
using actually works. As Tim Lacey, Gregory, myself and others pointed out
at Redmond, the user neither knows nor (in general) cares how the controls
were assigned - what they need to know is what the controls are. As well as
tabindex and accesskeys (however they are implemented), the author provides
controls through applets, scripts and forms, and of course a series of links.
The User Agent is what determines what the final set of available controls
is, and what those controls do. From keyboard shortcuts (which many programs
do allow to be changed, a number of them dynamically) to basic menu
functions, from a button on the page to changing desktops (many X systems
allow multiple "desktops", and even command-line Linux systems now provide
multiple virtual consoles - like several different DOS screens running
concurrently, with a way to swap between them), what is critical to the user
is "how do I make X happen?" and "what happens if I click button Y?". Where
this has been determined by the User Agent, it is the responsibility of the
User Agent to let the user know.
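
For illustration - the fragment below is hypothetical, though the attributes
are the ones HTML 4 actually defines - the author-supplied end of that
cascade looks like this:

    <!-- author hints: a suggested shortcut key and a suggested tab order -->
    <a href="search.html" accesskey="s" tabindex="1">Search</a>
    <a href="help.html" accesskey="h" tabindex="2">Help</a>

Whether "s" ends up meaning Alt+S, Ctrl+S or something else entirely, and
where those links fall in the real tab order, is decided by the User Agent -
which is exactly why the User Agent is the place that has to be able to tell
the user the final configuration.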

JT:
  quote. 11.3 Allow the user to control the input configuration for standard
  input devices, including the keyboard, graphical user interface, voice
  commands, etc. "One stroke" access should be possible, for example
  a single key stroke, voice command, or button to activate an important
  functionality. [Priority 2]  endquote.
  
  Talk about mushrooms. Where in heavens name did this come from?
  If I understand it, it is very hard. No apps do it. How can you require P1
  that user agents do it. I think it means that a user can change Ctrl+P for
  print to Alt+F4. These are controls that should *not* be given to the UA
  user.

CMN  

Amaya (Unix and Windows versions), Lynx, Word, Enlightenment (a window
manager for X Window systems that is common on RedHat and Debian Linux
environments), and Allaire's HomeSite authoring tool are a few that leap to
mind. This is a requirement that has been in the guidelines at least since
December 1998, and is neither new nor, apparently, terribly difficult for
many apps.
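
To make that concrete: Lynx does it with plain KEYMAP directives in lynx.cfg.
(The syntax below is as I recall it - the exact function names should be
checked against the comments shipped in lynx.cfg itself.)

    # lynx.cfg excerpt: user-chosen single-key bindings for link navigation
    # 'j' moves to the next link, 'k' to the previous one, 'o' activates it
    KEYMAP:j:NEXT_LINK
    KEYMAP:k:PREV_LINK
    KEYMAP:o:ACTIVATE

So the "one stroke" access asked for in 11.3 is not exotic - it is a text
file edit in at least one shipping browser.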

JT
  Quote. 11.4 Use system conventions to provide information to the user
  about the current input configuration for the keyboard, graphical user
  interface, voice commands, etc. [Priority 2]
  For example, on some platforms, if a functionality is available from a
  menu, the letter of the key that will activate that functionality is
  underlined.
  endquote.
  
  The whole concept of "current input configuration," trying to generalize
  a broken concept, gives me great pain. I know we don't really care about
  my pain. But darn it, I thought at the face to face we agreed not
  to focus on accesskey.

CMN
It doesn't mention accesskey. Implementation of accesskey is almost
completely irrelevant (if it is implemented then it becomes part of the input
configuration and the user should naturally be able to work out what the tool
is going to do).

JT  
  Quote. 11.5 Avoid default keyboard, graphical user interface, voice,
  or other input configurations that interfere with or deviate from system
  conventions. [Priority 2]  endquote.
  
  This seems like a reasonable UI guideline, independent of accesskey, so
  why is it here?

CMN
I thought the guideline was about the User Interface. So the checkpoint seems
to belong here.

JT  
  Quote. 11.7 Provide default keyboard, graphical user interface, voice,
  and other input configurations for frequently performed operations.
  [Priority 3]   endquote.
  
  This seems gratuitous. And then again, I don't understand it. Why here.

CMN
I would hope it is gratuitous, and that developers always do it already. Then
again, I understand that they don't: often they have never considered that
something they do all the time through a particularly complex interaction
mode is a function that needs a keyboard equivalent, or a mouse equivalent.
So it seems a reasonable requirement (although it seems more than beneficial
- for frequently performed operations it seems important to have something
better than struggling around with MouseKeys to find links, and "important",
as I understood it, is the definition of a P2 requirement).

Charles McCN  
  

Received on Wednesday, 27 October 1999 00:56:12 UTC