Device independence and system conventions

Hello,

I received an action item at the 14 July teleconf [1] to
propose text for Guideline 1 about the relationship between
device independence and system conventions. The issue originally
raised by Harvey [2] concerned using the mouse to input text.
Do user agents that don't provide an on-screen keyboard fail
to satisfy checkpoint 1.1 (in [3])?

To summarize the discussion of [1]:

 - User agents should not all be required to implement on-screen
   keyboards.
 - The operating system should provide an on-screen keyboard
   that may be used by any software running on the system.
   Apparently, much software already exists to do this.
 - User agents should take input (mouse and keyboard events)
   through standard system interfaces. 

One proposal is to modify the wording of checkpoint 1.1 from:

   Ensure that all functionalities offered by the user 
   agent interface are available through all supported 
   input devices. 

to something like:

   Ensure that all functionalities offered by the user agent
   interface are available through standard interfaces for
   input devices supported by the operating system. 

Discussion at the meeting suggested that only two
interfaces are actually used: one for the pointing device
and one for the keyboard, and that other devices end up
using those two. (This is suggested at the MS developer
site by [4], the definition of "Event", which includes
only keyboard and mouse events.)

Jim Allan suggested that the term "device-independence" continue
to be used in various checkpoints and that we explain what
this is supposed to mean in the rationale of Guideline 1.

Here's a first draft for a new rationale section for Guideline 1:

    Since not all users make use of the same hardware for
    input or output, software must be designed to work
    with the widest possible range of devices. For instance,
    not all users have pointing devices, so software 
    must not rely on them for operation. Users must be 
    able to reach all functionalities offered by the user 
    agent interface with all input devices supported by
    the underlying system.

    The best way to make this possible is to design software
    that follows system conventions and uses standard APIs
    for user input and output.  When user agents use these 
    standard interfaces, assistive technologies and other software can
    programmatically trigger mouse or keyboard events. For
    instance, some users who cannot easily enter text
    through a standard keyboard can still use special
    devices or an on-screen keyboard to operate the user agent.

    Standard interfaces make it possible for users to use
    a variety of input and output devices (and for developers
    to create new ones), including pointing devices, keyboards,
    braille devices, head wands, microphones, touch
    screens, speech synthesizers, and more. Using standard
    interfaces also allows international users with
    very different keyboards to use software. [@@this sounds
    good. is it true? -editor]

    Please refer also to Guideline 12, which discusses
    the importance to accessibility of following operating 
    system conventions.


We could also add a definition of 
"device independence" to the glossary:

    Device Independence:

    The ability to make use of software via any input
    or output device supported by the operating system.
    User agents should follow system conventions and
    use standard APIs for device input and output.

Comments welcome,

 - Ian

[1] http://lists.w3.org/Archives/Public/w3c-wai-ua/1999JulSep/0018.html
[2] http://lists.w3.org/Archives/Public/w3c-wai-ua/1999AprJun/0204.html
[3] http://www.w3.org/WAI/UA/WAI-USERAGENT-19990709/
[4] http://msdn.microsoft.com/library/officedev/off2000/defEvent.htm
-- 
Ian Jacobs (jacobs@w3.org)   http://www.w3.org/People/Jacobs
Tel/Fax:                     +1 212 684-1814

Received on Thursday, 15 July 1999 11:44:18 UTC