
Re: Possible guideline about non-keyboard input devices

From: Jim Allan <jimallan@tsbvi.edu>
Date: Thu, 17 Nov 2011 11:21:00 -0600
Message-ID: <CA+=z1WniU-BH5OkfmBCGS=0CEuCJ8ziEwe04XtgNm9QEkGBbtw@mail.gmail.com>
To: Greg Lowney <gcl-0039@access-research.org>
Cc: WAI-UA list <w3c-wai-ua@w3.org>
This looks pretty good. I have a niggle that the items below are too
specific, but I can't tease it out at the moment. This would cover touch
interfaces, voice, mental (brain wave) input, etc.
 I was also wondering whether we want to say 'input device/method'. "Device"
sounds external, like a keyboard or a mouse. I know the capacitive screen on
my smartphone is the input device, but I just think of it as my phone (it
just happens to have a self-contained, non-replaceable touch screen). The
input method is inherent in the device; you can't separate them. However,
you can add an additional device (a Bluetooth or USB keyboard, for example).

Jim

On Thu, Nov 17, 2011 at 1:28 AM, Greg Lowney
<gcl-0039@access-research.org>wrote:

>
> Hi! Jan's comment on the current survey about generalizing success
> criteria to address input devices beyond keyboard access suggested to
> me that we could add something like the following. Alternatively, it could
> be limited to just pointing devices with only minor changes. However, if
> people feel that it's too late to add them, I'd certainly understand.
>
> *2.12 Other input devices*
> Summary: For all input devices supported by the platform, the user agent
> should let the user perform all functions other than entering text
> (2.12.2) and enter text with any platform-provided features (2.12.1). If
> possible, it is also encouraged to let the user enter text even if the
> platform does not provide such a feature (2.12.3).
>
> *2.12.1 Support Platform Text Input Devices: If the platform supports
> text input using an input device, the user agent is compatible with this
> functionality. (Level A)*
> Intent:
>
> Some users rely entirely on pointing devices, or find them much more
> convenient than keyboards. These users can operate applications much more
> easily and efficiently if they can carry out most operations with the
> pointing device. It is not the intention of these guidelines to require
> every user agent to implement its own on-screen keyboard on systems that do
> not include one, but on systems where one is included it is vitally
> important that the user agent support this utility.
>
> Examples:
>
>  Ruth has extremely limited motor control and slurred speech, so she
> operates her computer using a head pointer. Her desktop operating system
> includes a built-in on-screen keyboard utility, and even though the
> percentage of desktop users who use it is very small, she counts on new
> applications (including user agents) being tested for compatibility with it
> so that she can enter text. When active, the on-screen keyboard reserves
> the bottom portion of the screen for its own use, so the user agent
> respects this and does not cover that area even in modes that would
> normally take up the full screen. It also avoids reading keyboard input
> through low-level system APIs that would miss simulated keystrokes.
>
>
> *2.12.2 Operation With Any Device: If an input device is supported by
> the platform, all user agent functionality other than text input can be
> operated using that device. (Level AA)*
> Intent: Some users rely entirely on pointing devices, or find them much
> more convenient than keyboards. These users can operate applications much
> more easily and efficiently if they can carry out most operations with the
> pointing device, falling back on a physical or on-screen keyboard as
> infrequently as possible. If the platform provides the ability to enter
> arbitrary text using a device (such as large-vocabulary speech recognition
> or an on-screen keyboard utility), the user agent is required to support it
> per 2.12.1 Support Platform Text Input Devices. If the platform does not
> provide such a feature, the browser is encouraged to provide its own, but
> because that is generally more difficult and resource intensive than
> command and control, it is not required.
>
> Examples:
>
>  Ruth has extremely limited motor control and slurred speech, so she
> operates her computer using a head pointer. The mouse pointer moves in
> response to the orientation of her head, and she clicks, double-clicks, or
> drags using a sip-and-puff switch. It is much easier for her to point and
> click on a button or menu item than it is to simulate keyboard shortcuts
> using her on-screen keyboard. In fact, she prefers to customize her
> applications to make most functions available through toolbar buttons or
> menu items, even those that are by default available only through keyboard
> shortcuts.
>
> Randall has a web browser on his smart phone that allows him to perform
> most operations using speech commands. Unfortunately, a few features are
> only available through the touchscreen, which he can operate only by taking
> off his protective gloves. In the next version of the browser, the
> remaining features are given speech commands, and Randall finds the
> product safer and more convenient to use.
>
>
> *2.12.3 Text Input With Any Device: If an input device is supported by the
> platform, all user agent functionality including text input can be operated
> using that device. (Level AAA)*
> Intent: Some users rely entirely on pointing devices, or find them much
> more convenient than keyboards. These users can operate applications much
> more easily and efficiently if they can carry out most operations with the
> pointing device, falling back on a physical or on-screen keyboard as
> infrequently as possible. If the platform provides the ability to enter
> arbitrary text using a device (such as large-vocabulary speech recognition
> or an on-screen keyboard utility), the user agent is required to support it
> per 2.12.1 Support Platform Text Input Devices. If the platform does not
> provide such a feature, the browser is encouraged to provide its own.
>
> Examples:
>
> Ruth has extremely limited motor control and slurred speech, so she
> operates her computer using a head pointer. The mouse pointer moves in
> response to the orientation of her head, and she clicks, double-clicks, or
> drags using a sip-and-puff switch. The operating system does not provide an
> on-screen keyboard, but to be maximally accessible, her browser makes a
> small on-screen keyboard available as an add-on.
>
> Randall has a web browser on his smart phone that allows him to perform
> most operations using speech commands. By offloading the speech recognition
> to an Internet server, it can perform large-vocabulary speech recognition,
> so Randall can use his voice to compose email and fill in forms, as well as
> control the browser itself.
>
>
>     Thanks,
>     Greg
>
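One implementation note on the 2.12.1 example above: the "low-level system
API" pitfall is really about where the user agent reads key events. If it
consumes events from the platform's standard input queue, keystrokes injected
by an on-screen keyboard look identical to hardware ones; if it reads the
physical device directly, simulated input is lost. A minimal sketch of the
safe pattern (all names here are illustrative, not any real platform API):

```python
# Hypothetical sketch: a user agent input layer that consumes key events
# from a shared platform event queue, so keystrokes simulated by an
# on-screen keyboard are handled exactly like hardware keystrokes.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class KeyEvent:
    key: str
    simulated: bool = False  # set by utilities such as an on-screen keyboard


class InputLayer:
    def __init__(self) -> None:
        self.queue: List[KeyEvent] = []
        self.handlers: List[Callable[[KeyEvent], None]] = []

    def post(self, event: KeyEvent) -> None:
        # Both hardware drivers and assistive utilities post to this queue.
        self.queue.append(event)

    def pump(self) -> None:
        # The user agent drains the shared queue; it never inspects
        # event.simulated, so on-screen-keyboard input "just works".
        while self.queue:
            event = self.queue.pop(0)
            for handler in self.handlers:
                handler(event)


typed: List[str] = []
layer = InputLayer()
layer.handlers.append(lambda e: typed.append(e.key))
layer.post(KeyEvent("h"))                  # physical keyboard
layer.post(KeyEvent("i", simulated=True))  # on-screen keyboard
layer.pump()
print("".join(typed))  # hi
```

The point is only that the dispatch path is shared: a user agent that
instead polled the hardware device would receive "h" but never "i".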



-- 
Jim Allan, Accessibility Coordinator & Webmaster
Texas School for the Blind and Visually Impaired
1100 W. 45th St., Austin, Texas 78756
voice 512.206.9315    fax: 512.206.9264  http://www.tsbvi.edu/
"We shape our tools and thereafter our tools shape us." McLuhan, 1964
Received on Thursday, 17 November 2011 17:21:36 GMT
