Comment on WAI-ARIA Practices 1.1 - emphasis on keyboard interactions, which don't translate to touch devices

A large portion of the ARIA patterns and practices assume the presence 
of a keyboard, and codify the expected/suggested key combinations to 
interact with widgets. Authors would then write custom JS code to detect 
keystrokes and fire relevant events/changes accordingly.
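
For illustration, this is roughly the sort of keyboard-only wiring I 
mean - a deliberately minimal sketch, with a hypothetical element id 
and activation function:

    // Custom widget wired up purely for keyboard use - if the user
    // never generates key events, none of this code ever runs
    var widget = document.getElementById('mywidget');
    widget.addEventListener('keydown', function (e) {
        if (e.keyCode === 13 || e.keyCode === 32) { // ENTER or SPACE
            activateWidget(); // hypothetical activation routine
            e.preventDefault();
        }
    });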

However, this does not translate well (and in many cases, not at all) 
to the ever-growing number of touchscreen devices - be they pure 
touchscreen devices, such as a phone or tablet without any paired 
keyboard, or multi-input scenarios such as touch-enabled laptops, where 
the user may choose to interact purely via the touchscreen.

At the most basic level, assistive technology on touchscreens offers 
sequential access to move the focus - essentially equivalent to moving 
the accessible cursor right/left in document mode on a desktop/keyboard 
device. In addition, ATs on these devices can generally be switched to 
only navigate through certain types of content (jumping sequentially 
between headings, links, form controls, etc). These interactions do not 
fire any keydown/keypress events, but simply move the current focus (so 
focus/blur events are sent). And of course, they offer an interaction 
(generally a double-tap) to activate/trigger whatever currently has 
focus - the equivalent of ENTER/SPACE. Again, no keycode is sent (at 
least in my last test a while ago, if I recall correctly), but a click 
event is fired.
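
A quick way to verify this for oneself (a rough sketch, with a 
hypothetical element id): attach listeners for all the relevant event 
types and log what actually arrives while swiping/double-tapping with 
the AT running.

    // Log which events a touchscreen AT fires on a widget - in my
    // tests, focus/blur/click arrive, keydown/keypress never do
    var el = document.getElementById('mywidget');
    ['focus', 'blur', 'click', 'keydown', 'keypress'].forEach(function (type) {
        el.addEventListener(type, function (e) {
            console.log(e.type);
        });
    });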

Already in this basic scenario, any code that listens only for 
keyboard events, rather than focus/blur/click, will simply not work.

Beyond that, any functionality that (per the suggested ARIA patterns) 
relies on further keyboard interactions - for instance, cursor/arrow 
keys, SHIFT+arrow keys, etc - is also impossible to trigger without an 
actual keyboard.

As concrete examples: take the slider pattern 
http://www.w3.org/TR/wai-aria-practices-1.1/#slider - this relies on 
the left/right arrow keys to change the value, and would typically be 
implemented with keydown event listeners. These will never be triggered 
on a touchscreen device.
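
Sketched out (simplified - no aria-valuemin/valuemax clamping, and the 
element id is hypothetical), the suggested implementation would look 
something like this, and the whole listener is dead code on a pure 
touchscreen:

    // Typical keydown handling for an ARIA slider - touchscreen AT
    // interactions will never trigger this listener
    var slider = document.getElementById('myslider');
    slider.addEventListener('keydown', function (e) {
        var value = parseInt(slider.getAttribute('aria-valuenow'), 10);
        if (e.keyCode === 39) {        // right arrow
            value = value + 1;
        } else if (e.keyCode === 37) { // left arrow
            value = value - 1;
        } else {
            return;
        }
        slider.setAttribute('aria-valuenow', String(value));
        e.preventDefault();
    });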

In some cases, the original mouse-driven interaction (that was 
presumably coded first, and then ARIA-fied to support keyboard/AT 
users) can still make a pattern work, to an extent, on touchscreen 
devices. For instance, although the tabpanel 
http://www.w3.org/TR/wai-aria-practices-1.1/#tabpanel relies on the 
left/right arrow keys, the fact that the tabs presumably have some form 
of click event handler (to make the tab panel actually work with a 
mouse) will help touchscreen AT users, as THAT is what will actually be 
triggered.
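
Roughly sketched (the selector and helper functions are illustrative), 
the tab wiring looks like this - the click listener still does 
something useful for touchscreen AT users, while the keydown listener 
can only ever be reached with an actual keyboard:

    var tabs = document.querySelectorAll('[role="tab"]');
    Array.prototype.forEach.call(tabs, function (tab) {
        // a double-tap with a touchscreen AT fires this, as does a
        // mouse click
        tab.addEventListener('click', function () {
            selectTab(tab); // hypothetical - shows the associated panel
        });
        // keyboard only - never triggered by touchscreen AT
        tab.addEventListener('keydown', function (e) {
            if (e.keyCode === 37 || e.keyCode === 39) {
                moveTabSelection(tab, e.keyCode === 39 ? 1 : -1); // hypothetical
            }
        });
    });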

From what I gather through testing, screenreaders on touchscreen 
devices (e.g. VoiceOver on iOS) attempt to partially work around the 
issue for simple, relatively standard ARIA-fied widgets: based purely 
on the role attributes, they can at least announce what things are. But 
if any actual JS is only fired as a result of keyboard events (so not 
through focus/blur/click), nothing works in these scenarios - because 
the AT doesn't (yet, perhaps) attempt to provide a built-in interaction 
- such as "swipe up or down to change the value" - which would then 
fire faked keyboard events to rattle the correct JS listeners.

At this stage, I'm not quite sure if there is a clean solution to this 
problem...but I know it's a problem that will rear its head more and 
more as complex ARIA widgets get implemented and then fail for 
touchscreen AT users. As a first step, an acknowledgment and/or strong 
warning in the spec may be required.

P
-- 
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke
