WAI-ARIA role "menu": State that it should not be used for list of autocomplete options

Hi there!

This is a proposal to enhance the description of the WAI-ARIA role
"menu" to state that it should not be used for lists of autocomplete
options, but only for real menu scenarios such as an item's context
menu, or the dropdown menu from a menu bar's menu item. I'm referring
specifically to:

Instead, a widget with the role "listbox" should be used for such scenarios.

Consequently, the actual autocomplete items should not be menuitem
elements, but option elements instead. The documentation for the role
"menuitem" should probably also reference this.
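To illustrate what I mean (this is my own sketch, not spec text; the
ids and the helper name are hypothetical), the popup would use
listbox/option roles rather than menu/menuitem, built here as a plain
string so the structure is easy to see:

```javascript
// Hypothetical sketch of the recommended markup: a text box plus a
// suggestion popup using role "listbox" with "option" children.
// Ids and attribute choices are illustrative, not prescribed anywhere.
function autocompleteMarkup(suggestions) {
  const options = suggestions
    .map((s, i) => `  <li role="option" id="ac-opt-${i}">${s}</li>`)
    .join("\n");
  return `<input type="text" aria-autocomplete="list" aria-owns="ac-list">
<ul role="listbox" id="ac-list">
${options}
</ul>`;
}
```

Note that nothing in this markup carries a menu-related role, so no
menu mode is ever triggered on the screen reader side.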

Rationale for this proposal:

The short version is: menus on Windows are a mess when it comes to
support by assistive technologies, because those make strong
assumptions about the state of menus and where the focus is. Avoiding
the roles "menu" and "menuitem" for autocomplete scenarios avoids
getting into that mess.

Long version: Win32 has always had a concept of a menu mode. This is a
specific state applications enter when a menu bar or menu is open. This
has been the case even on Windows 3.x, which was still Win16, and has
been inherited into Windows 95 and into Microsoft Active Accessibility
right from the start in 1996.
Screen readers like JAWS and Window-Eyes have certain assumptions about
how this menu mode should work. There are 4 events defined:
SYSTEM_MENUSTART, SYSTEM_MENUPOPUPSTART, SYSTEM_MENUPOPUPEND, and
SYSTEM_MENUEND. For a menu bar and its dropdown menus, the only valid
event sequence is that stated in the previous sentence. If an
application doesn't follow that sequence, screen readers get utterly
confused. For context menus, the sequence is SYSTEM_MENUPOPUPSTART and
SYSTEM_MENUPOPUPEND, omitting the other two. Even NVDA uses these,
treating SYSTEM_MENUPOPUPSTART as a focus event to recognize that a
context menu was opened. The moment such menus are active, focus
events on widgets outside the menus cause confusion or are ignored.
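To make the sequencing rule concrete, here is a small sketch of my own
(not an actual screen-reader implementation; the function name is
hypothetical) that checks an event stream against the two valid
sequences just described:

```javascript
// The only event sequences screen readers accept, per the description
// above: the full four-event sequence for a menu bar's dropdown, and
// just the popup pair for a context menu.
const VALID_SEQUENCES = {
  menubar: [
    "SYSTEM_MENUSTART",
    "SYSTEM_MENUPOPUPSTART",
    "SYSTEM_MENUPOPUPEND",
    "SYSTEM_MENUEND",
  ],
  contextmenu: ["SYSTEM_MENUPOPUPSTART", "SYSTEM_MENUPOPUPEND"],
};

// Returns true only if the observed events match the expected
// sequence for the given kind of menu, in order and in full.
function isValidSequence(kind, events) {
  const expected = VALID_SEQUENCES[kind];
  return expected.length === events.length &&
         expected.every((e, i) => e === events[i]);
}
```

An application that fires only the popup pair for a menu bar, or drops
SYSTEM_MENUEND, fails this check, which is exactly the situation where
screen readers get confused.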

If an author now uses ARIA menubar, menu, and menuitem roles, the
browsers have to translate those into the proper events so screen
readers can support them. Examples of menus can be seen in Google Docs,
or on the Freedom Scientific homepage. There, even keyboard navigation
clearly indicates the menu bar and dropdown menu interaction, and events
have to be fired correctly so screen readers can interact with those
properly on Windows.

In the case of an autocomplete, however, the situation is entirely
different. Yes, visually these tend to pop up just like context menus,
and are often even styled similarly. But the concept is different: it's
not a list of executable functions to choose from, it's merely a list
of suggested names or terms to fill in automatically for the user.

Say you have an AutoComplete widget consisting of a text box and a
container plus result items. If the typed text matches something in
the autocomplete list, a live region announces that results are
available, and the list appears. DownArrow selects the first item,
UpArrow and DownArrow move to other items, and Tab or Enter accepts
the current item, completes the text in the text box, and returns
focus to it.
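The interaction just described can be sketched as a small, DOM-free
state machine (the class and property names are mine, purely for
illustration of the intended behavior):

```javascript
// Minimal sketch of the keyboard interaction described above.
// active === -1 models "focus is still in the text box".
class AutoComplete {
  constructor(items) {
    this.items = items;  // current suggestion list
    this.active = -1;    // index of the selected option, or -1
    this.value = "";     // text currently in the text box
  }
  handleKey(key) {
    if (key === "ArrowDown") {
      // First press selects the first item; further presses move down.
      this.active = Math.min(this.active + 1, this.items.length - 1);
    } else if (key === "ArrowUp" && this.active > 0) {
      this.active -= 1;
    } else if ((key === "Enter" || key === "Tab") && this.active >= 0) {
      this.value = this.items[this.active]; // accept the current item
      this.active = -1;                     // focus returns to the text box
    }
  }
}
```

The important point is that nothing moves focus out of the text box
until the user explicitly presses DownArrow.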

What happens if a web author uses menu and menuitem roles on such an
autocomplete is this:

1. User types something.
2. The AutoComplete function finds a match, sends info to the live
region, and pops up the menu. Note that the menu only appears; it
doesn't actually get focus.
3. Because of 2, a SYSTEM_MENUPOPUPSTART event is fired. NVDA and
other screen readers think that a menu has popped up. For them, focus
is now no longer in the text field, but on the menu, without any item
selected. The result is that LeftArrow and RightArrow no longer read
the characters inside the edit field, because a menu is active.

4. On the other hand, if the menu were a listbox instead, and the
menuitems were options instead, the listbox and option items would
appear, and a live region text would tell the user that autocomplete
results are available, but focus would not be stolen from the text
box. Only if the user pressed DownArrow would focus move into the
listbox and select the first option. Pressing Enter or Tab would
accept the item, and focus would be returned to the text box.

An alternative solution to the focus and menu mode inconsistencies
could, in theory, be that browsers fire the menu start event only if an
item within such a menu gains focus, if such a menu is created by
WAI-ARIA markup. However, that way, authors would be forced to focus an
item within a menu once they pop it up, or screen readers would never
announce that a context menu, for example, had opened.

So, to get the most consistent results across assistive technologies,
browsers, and operating systems, recommending the listbox and option
roles for autocomplete widgets could go a long way towards relieving
authors of doubt and uncertainty.



Marco Zehe
Accessibility QA engineer and evangelist
Mail: mzehe@mozilla.com
Blog: http://www.marcozehe.de
Twitter: http://twitter.com/MarcoInEnglish

Received on Thursday, 6 March 2014 12:55:04 UTC