
on-screen keyboards and Web page structure

From: Al Gilman <Alfred.S.Gilman@IEEE.org>
Date: Mon, 7 Apr 2008 11:48:52 -0400
Message-Id: <D96221B4-3FDF-49A2-A54D-02D578AAB688@IEEE.org>
Cc: info@ace-centre.org.uk, wai-xtech@w3.org
To: w3c-wai-ig@w3.org

* background:

I believe I have seen a demonstration of on-screen-keyboard
behavior where choices were organized into a menu tree of
at least two levels.  The submenus were displayed as rows
in a grid, with each row as a whole representing an upper-level choice.

The process of "tool animates focus, user selects one when
it is focused" was repeated at each level: first to select a row,
then to select an individual choice within that row.
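The two-level row/column scanning described above can be sketched in code. This is a hypothetical illustration, not taken from any real AT product: the scanner steps the highlight through rows, a switch press picks the focused row, scanning then steps through that row's cells, and a second press picks the item. The function and its parameter names are my own assumptions.

```typescript
// A grid of selectable choices: rows are upper-level choices,
// the cells within a row are its submenu.
type Grid<T> = T[][];

// stepsBeforeRowPress / stepsBeforeCellPress model how many scan
// steps elapse before the user hits the switch at each level.
// Scanning wraps around, so step counts are taken modulo the sizes.
function rowColumnScan<T>(
  grid: Grid<T>,
  stepsBeforeRowPress: number,
  stepsBeforeCellPress: number
): T {
  const row = grid[stepsBeforeRowPress % grid.length];
  return row[stepsBeforeCellPress % row.length];
}

// Example: a tiny two-level "keyboard" whose rows are letter groups.
const keyboard: Grid<string> = [
  ["a", "b", "c"],
  ["d", "e", "f"],
];
```

Selecting the second row after one scan step and then the third cell after two more steps yields "f"; pressing on the row that the wrap-around brings back to the top yields "a".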

This kind of hierarchical descent selection is re-affirmed
as helpful in Colvin and Lysley,

"Designing and using efficient interfaces for switch accessibility"

... but they are talking about designers consciously designing for switch
usability, not AT groping its way through the level of structure
that you find in the wild on the Web.  My question has to do
with actual practice applied to general Web pages.

* question:

My question is: is there any current practice where an on-screen
keyboard or other switch-user Assistive Technology uses the
nesting hierarchy of a Web page (elements inside other elements)
to construct such a hierarchical menu? That is, one in which the
user first chooses among the interactive items on the page by
page-region group, and eventually ends up with a group of individual
actions that they can activate or not as the scanning passes by?
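To make the question concrete, here is a minimal sketch of what such group-first selection could look like, assuming a simplified page model rather than the real DOM API: each element has a tag, an optional interactive flag, and children, and the element tree is pruned down to just those top-level regions that contain something activatable. All names here are illustrative assumptions, not an existing AT interface.

```typescript
// Simplified stand-in for a page element (hypothetical, not the DOM).
interface Elem {
  tag: string;
  interactive?: boolean;
  children?: Elem[];
}

// One scan group: the region the user picks first, then its actions.
interface ScanGroup {
  label: string;
  items: string[];
}

// Collect the tags of all interactive descendants of an element.
function interactiveItems(el: Elem): string[] {
  const own = el.interactive ? [el.tag] : [];
  return own.concat(...(el.children ?? []).map(interactiveItems));
}

// Build one scan group per top-level region that actually contains
// an activatable item; empty regions are skipped so scanning never
// stops on a dead branch.
function buildScanGroups(page: Elem): ScanGroup[] {
  return (page.children ?? [])
    .map((region) => ({ label: region.tag, items: interactiveItems(region) }))
    .filter((g) => g.items.length > 0);
}

// Example page: nav and main contain interactive items; footer does not.
const page: Elem = {
  tag: "body",
  children: [
    { tag: "nav", children: [
      { tag: "a#home", interactive: true },
      { tag: "a#about", interactive: true },
    ]},
    { tag: "main", children: [
      { tag: "p" },
      { tag: "button#search", interactive: true },
    ]},
    { tag: "footer", children: [{ tag: "p" }] },
  ],
};
```

On this example the user would first scan between the "nav" and "main" groups, then among the individual links or the button inside the chosen group; the footer never enters the scan.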

Received on Monday, 7 April 2008 15:51:10 UTC
