Re: Quick (hopefully) question about WCAG 2 glossary definition of "keyboard interface"

On 01/07/2016 07:24, Gregg Vanderheiden RTF wrote:
> that is one reason why swiping cannot be considered equiv to keyboard
> access.
>
> Some cannot swipe accurately.  Some cannot swipe at all.

Once more: the swiping itself is handled by the AT (VoiceOver, TalkBack, 
Narrator) on the touchscreen device. So the accuracy or inaccuracy of 
triggering AT commands like "move to the next item", "move to the 
previous item", etc. is an issue for the AT, NOT for the web content.

> But I THINK the goal was not to have swipe be accepted in place of
> keyboard — but rather require it AS WELL.

More broadly: the goal is to ensure that the *end result* of swiping 
when VoiceOver, TalkBack, Narrator etc. are running is accepted as 
another way in which a user moves the accessibility focus sequentially. 
How that movement is triggered is secondary.

> The question is — if we require swipe — and swipe gestures are
> patented…..   we are requiring proprietary techniques.

But I'm not proposing that we require swipe. I'm proposing that we 
accept the outcome of the swipe as being equivalent to a keyboard 
interaction, by expanding the currently narrow glossary definition of 
"keyboard interface".

> Also— if we require AUTHORS to support swipe — what does that mean?
>  Does it mean they must add swipe gestured into their content?  or ???

And as I wrote in my previous email, no, it doesn't mean anything new 
for authors. This is all completely transparent to the author. It's the 
AT (VoiceOver, TalkBack, Narrator) which interprets the swipe gesture 
and translates it into commands to the browser: "move the focus to the 
next element", "move the focus to the previous element", "activate this 
element", etc. For the author, there's nothing new to do here (with some 
minor caveats: some of the keyboard SCs WILL need a look for some 
clarifying notes). All the sites that currently work for you if you're 
using, say, an iPhone + VoiceOver and navigating with the touchscreen 
gestures...those sites didn't have to do anything different from what 
they were already doing for mouse/keyboard.
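
To make that concrete, here's a minimal sketch (my own illustrative 
example, not anything from the spec or this thread; ".fake-button" and 
doAction are made-up names) of a custom control wired up for keyboard 
use in the standard way:

  // Standard keyboard support only; nothing here is swipe-specific.
  // VoiceOver/TalkBack drive this same focus/activation model: a swipe
  // moves the accessibility focus, a double-tap fires a click.
  const widget = document.querySelector<HTMLElement>('.fake-button');
  if (widget) {
    widget.setAttribute('role', 'button'); // expose it as a button to AT
    widget.tabIndex = 0;                   // focusable, in the tab order
    widget.addEventListener('click', doAction); // AT activation fires this
    widget.addEventListener('keydown', (e) => {
      if (e.key === 'Enter' || e.key === ' ') {
        e.preventDefault();
        doAction(e);
      }
    });
  }
  function doAction(e: Event): void {
    // the page's normal activation logic goes here
  }

The swipe itself never reaches the page: the AT moves the accessibility 
focus, and on activation it dispatches the same click event the page 
already handles for mouse and keyboard users.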

Let me explain this another way: this is very much like voice control. 
To make a site work with voice control, authors don't have to code 
anything specific or new, because it's software on the user's machine 
(Dragon NaturallySpeaking or whatever) which handles the tricky parts: 
listening to the user, understanding the commands, and translating 
those into actual actions in the browser (moving the focus, activating 
the element). It's the AT that handles this. Ditto for VoiceOver etc. 
and touchscreen commands.

Anyway, as said...I'm working on a way to address this with minimal 
impact on the glossary definition. If I can get it right, this would 
immediately give us coverage under existing SCs for the touchscreen 
WITH assistive technology scenario, rather than requiring what's 
effectively an unnecessary duplication.

P
-- 
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke
