Possible guideline about non-keyboard input devices

Hi! Jan's comment on the current survey about generalizing success criteria to address input devices beyond the keyboard suggested to me that we could add something like the following. Alternatively, it could be limited to just pointing devices with only minor changes. However, if people feel that it's too late to add them, I'd certainly understand.


    *2.12 Other input devices*


Summary: For all input devices supported by the platform, the user agent should let the user perform all functions aside from entering text (2.12.2), and enter text using any platform-provided text input features (2.12.1). Where possible, the user agent is also encouraged to let the user enter text even if the platform does not provide such a feature (2.12.3).


      *2.12.1 Support Platform Text Input Devices: If the platform supports text input using an input device, the user agent is compatible with this functionality. (Level A)*


Intent:

Some users rely entirely on pointing devices, or find them much more convenient than keyboards. These users can operate applications much more easily and efficiently if they can carry out most operations with the pointing device. It is not the intention of these guidelines to require every user agent to implement its own on-screen keyboard on systems that do not include one, but on systems where one is included it is vitally important that the user agent support this utility.

Examples:

    Ruth has extremely limited motor control and slurred speech, so operates her computer using a head pointer. Her desktop operating system includes a built-in on-screen keyboard utility, and even though the percentage of desktop users who use it is very small, she counts on new applications (including user agents) to be tested for compatibility with it so that she can enter text. When active, the on-screen keyboard reserves the bottom portion of the screen for its own use, so the user agent respects this and does not cover that area even in modes that would normally take up the full screen. It also avoids reading keyboard input through low-level system APIs that would miss simulated keystrokes from the on-screen keyboard.
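
For example (an illustrative, Windows-specific sketch only, not part of the proposed wording): a user agent could size its "full screen" presentation to the system work area rather than to the full monitor, on the assumption that a docked on-screen keyboard reserves its space as a desktop app bar and is therefore excluded from the work area.

    /* Minimal sketch, assuming a Win32 build and a docked on-screen
     * keyboard that reserves screen space as an app bar. The "full
     * screen" window is sized to the work area, so the reserved strip
     * at the bottom of the screen is never covered. */
    #include <windows.h>

    void enter_full_screen(HWND hwnd)
    {
        RECT work;

        /* The work area excludes docked desktop toolbars such as the
         * taskbar and, on most configurations, a docked on-screen
         * keyboard. */
        if (SystemParametersInfoW(SPI_GETWORKAREA, 0, &work, 0)) {
            SetWindowPos(hwnd, HWND_TOP,
                         work.left, work.top,
                         work.right - work.left,
                         work.bottom - work.top,
                         SWP_FRAMECHANGED | SWP_SHOWWINDOW);
        }
    }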


      *2.12.2 Operation With Any Device: If an input device is supported by the platform, all user agent functionality other than text input can be operated using that device. (Level AA)*


Intent: Some users rely entirely on pointing devices, or find them much more convenient than keyboards. These users can operate applications much more easily and efficiently if they can carry out most operations with the pointing device, and only fall back on a physical or on-screen keyboard as infrequently as possible. If the platform provides the ability to enter arbitrary text using a device (such as large-vocabulary speech recognition or an on-screen keyboard utility), the user agent is required to support it per 2.12.1 Support Platform Text Input Devices. If the platform does not provide such a feature, the browser is encouraged to provide its own, but because that is generally more difficult and resource-intensive than command and control, it is not required.
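
As an illustration only (hypothetical names, not part of the proposed wording), one way a browser can keep every function operable with a pointing device is to drive its menus, toolbars, and keyboard accelerators from a single command table, so that no function is reachable solely through a keyboard shortcut:

    /* Minimal sketch: one table describes each command once; menus and
     * toolbar buttons (pointer-operable) and keyboard accelerators are
     * all generated from it. */
    #include <stddef.h>

    typedef void (*command_fn)(void);

    struct command {
        const char *label;      /* menu item / toolbar button text */
        const char *shortcut;   /* optional accelerator, may be NULL */
        command_fn  invoke;
    };

    static void cmd_reload(void)      { /* ... */ }
    static void cmd_zoom_in(void)     { /* ... */ }
    static void cmd_view_source(void) { /* ... */ }

    static const struct command commands[] = {
        { "Reload",      "Ctrl+R", cmd_reload },
        { "Zoom In",     "Ctrl++", cmd_zoom_in },
        { "View Source", "Ctrl+U", cmd_view_source },
    };

    /* Every command gets a pointer-operable menu item, whether or not
     * it also has a keyboard shortcut. */
    void build_menu(void (*add_item)(const char *label, command_fn fn))
    {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
            add_item(commands[i].label, commands[i].invoke);
    }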

Examples:

    Ruth has extremely limited motor control and slurred speech, so operates her computer using a head pointer. The mouse pointer moves in response to the orientation of her head, and she clicks, double clicks, or drags using a sip-and-puff switch. It is much easier for her to point and click on a button or menu item than it is for her to simulate keyboard shortcuts using her on-screen keyboard. In fact, she prefers to customize her applications to make most functions available through toolbar buttons or menu items, even those that are by default available only through keyboard shortcuts.

    Randall has a web browser on his smart phone that allows him to perform most operations using speech commands. Unfortunately, a few features are only available through the touchscreen, which he can only operate by taking off his protective gloves. In the next version of the browser, the remaining features are given speech commands, and Randall finds the product safer and more convenient to use.


      *2.12.3 Text Input With Any Device: If an input device is supported by the platform, all user agent functionality including text input can be operated using that device. (Level AAA)*

Intent: Some users rely entirely on pointing devices, or find them much more convenient than keyboards. These users can operate applications much more easily and efficiently if they can carry out most operations with the pointing device, and only fall back on a physical or on-screen keyboard as infrequently as possible. If the platform provides the ability to enter arbitrary text using a device (such as large-vocabulary speech recognition or an on-screen keyboard utility), the user agent is required to support it per 2.12.1 Support Platform Text Input Devices. If the platform does not provide such a feature, the browser is encouraged to provide its own.
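
As an illustration only (a Windows-specific, hypothetical sketch, not part of the proposed wording), a browser-provided on-screen keyboard could inject characters through the platform's standard input-injection API, so the text reaches whatever control has focus just as simulated keystrokes from a platform utility would:

    /* Minimal sketch, assuming a Win32 build: inject one Unicode
     * character as a simulated key press and release. */
    #include <windows.h>

    void send_unicode_char(wchar_t ch)
    {
        INPUT in[2];
        ZeroMemory(in, sizeof in);

        in[0].type = INPUT_KEYBOARD;
        in[0].ki.wScan = ch;                  /* character, not a scan code */
        in[0].ki.dwFlags = KEYEVENTF_UNICODE;

        in[1] = in[0];
        in[1].ki.dwFlags = KEYEVENTF_UNICODE | KEYEVENTF_KEYUP;

        SendInput(2, in, sizeof(INPUT));
    }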

Examples:

    Ruth has extremely limited motor control and slurred speech, so operates her computer using a head pointer. The mouse pointer moves in response to the orientation of her head, and she clicks, double clicks, or drags using a sip-and-puff switch. The operating system does not provide an on-screen keyboard, but to be maximally accessible, her browser makes a small on-screen keyboard available as an add-on.

    Randall has a web browser on his smart phone that allows him to perform most operations using speech commands. By offloading the speech recognition to an Internet server, it is able to perform large-vocabulary speech recognition, so Randall can use his voice to compose email and fill in forms, as well as to control the browser itself.


     Thanks,
     Greg

Received on Thursday, 17 November 2011 07:29:03 UTC