Re: Proposal: expanding/modifying Guideline 2.1 and its SCs (2.1.1, 2.1.2, 2.1.3) to cover Touch+AT

On 04/07/2016 21:27, Gregg Vanderheiden RTF wrote:

>> On Jul 4, 2016, at 3:35 PM, Patrick H. Lauke <redux@splintered.co.uk
>> <mailto:redux@splintered.co.uk>> wrote:
>>
>> We're quite specifically talking about touch WITH assistive technology
>> (VoiceOver, TalkBack, Narrator, JAWS on a touchscreen laptop, NVDA on
>> a touchscreen laptop).
>>
>> Pure touch is a pointer input, not a non-pointer input.
>
> How does the author know what AT is on the computer?

They don't need to, it's transparent to the author.

> It sounds like this is an  AT Compatibility requirement?  (we have
> several of these)

>> The swipe in the touch + AT scenario is handled by the AT. The AT
>> interprets this (not the content/app created by the author), and sends
>> high-level "move the focus to this control, activate this control,
>> etc" commands via the OS to the content/app.
>
> Right.  So what is the requirement on the Author?

As with the existing WCAG 2.0 keyboard requirement as it stands, the 
author must NOT listen only to mouse-specific events 
(mouseover/mouseenter/mousedown/mouseup). Likewise, they must NOT 
listen only to touch-specific events (touchstart/touchmove/touchend). 
They must cater for keyboard and other non-pointer inputs.

Now, and here's the tricky part: an author can currently satisfy WCAG 
2.0 by explicitly listening to keydown/keyup/keypress events, which are 
specific to keyboards and keyboard-like interfaces (inputs that, in 
essence, simulate a keyboard to the OS/UA and send "faked" keyboard 
events). Because certain input methods such as touch+AT do NOT send 
faked key events, authors also canNOT rely simply on listening to 
keydown/keyup/keypress (in addition to whatever mouse/touch events 
they're listening to anyway). They need to go one abstraction level 
higher and listen to device/input-agnostic events such as 
focus/blur/click/contextmenu, as these are the only safe ones that are 
sent by all input modes (with some slight variations... in fact, in 
some scenarios even focus/blur are not fired by all non-pointer 
inputs). And in doing this, they also get content that works with 
current traditional keyboards.

> And is the author responsible for doing it if it is not on the device
> (swipe is not on many devices)

The author is responsible, as mentioned above, for catering to 
high-level, device-agnostic events, rather than simply using low-level 
keyboard events, which are specific to keyboard/keyboard-like inputs 
and not fired by other types of non-pointer input.

> So there is no requirement on the part of the author?
>
> if not — there is no need for an SC —   correct?

Correct, in the sense that no *new* SC may be needed: the requirement 
is functionally so close to the existing keyboard-specific requirements 
that my thinking is that, rather than duplicating them, their 
definitions should be expanded to include these requirements.

> Soooooo  this seems to say there is no need for a requirement beyond the
> keyboard interface requirement  (2.1)   Is that right?
> not what I thought you were saying so I must be misreading this.

See above. The crux of the problem is the current definition of 
keyboard, and the requirement for it to send "keystrokes". Plus, the 
wording may be misleading (since we're having to go round and round 
clarifying what counts as a keyboard, and which "not really a keyboard 
per se" inputs fall under it, I propose using a more generalised 
wording like non-pointer).

>>> There was a good description of the need for something for SWIPE support
>>> in other email — but I don’t think I have heard the actual problem well
>>> stated.   I’m not saying it wasn’t —   I have so many projects I don’t
>>> get to read all of the emails though I try to read all of them on a
>>> thread before responding.
>>>
>>> Can you/someone state clearly (or repost) the description of the problem
>>> we are talking about creating this SC to solve?
>>
>> Ok, let's try once more:
>>
>> - assume you're using a touchscreen on, say, a Microsoft Surface 3
>> touch-enabled laptop
>>
>> - you are running JAWS on that laptop, under Windows 10
>>
>> - you're using JAWS' touch gesture support
>>
>> - you're on a web page, with traditional links, buttons, etc
>>
>> - you swipe right on the touchscreen; JAWS recognises the gesture, and
>> signals (via the OS) to the browser that the focus needs to be moved
>> to the next element in the page
>>
>> - the browser moves the focus to the next element (say a link)
>>
>> - NO keypress/keydown event is sent via JavaScript. If the webpage
>> naively decided to handle its own keyboard control by listening to
>> keypress or keydown events, then checking if the keycode was "TAB" or
>> "ENTER" to decide whether to move its own internal focus or activate
>> some functionality, this will NOT work, as JAWS does not send faked
>> keyboard events
>
> OK  — so Jaws doesn’t need 2.1    but other things do.

Urgh, I'm not saying that JAWS needs anything. I'm saying that the 
author's content needs to be written in a way that supports this 
interaction from the user, mediated by JAWS. The content written by the 
author needs to follow 2.1, but currently 2.1 doesn't cover the way 
JAWS on a touchscreen actually moves focus, activates controls, etc. So 
I can currently write a website/app that passes 2.1 but won't work with 
JAWS+touchscreen, even though as an author I could easily make it work 
by simply listening to high-level, device-agnostic input events 
(focus/blur/click/contextmenu/etc.) instead of listening to mouse* and 
key* events.
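Here's a sketch of that exact failure mode, again using a bare `EventTarget` as a stand-in for the widget's DOM node (the element name and `activate()` are hypothetical):

```javascript
// Contrast sketch: a widget that passes 2.1.1 on paper but fails with
// touch+AT, vs. the input-agnostic fix.
const element = new EventTarget();  // stand-in for the widget's DOM node
let activations = 0;
function activate() { activations += 1; }

// Keyboard-specific handling: satisfies the letter of 2.1.1...
element.addEventListener('keydown', (e) => {
  if (e.key === 'Enter') activate();
});

// ...but JAWS on a touchscreen never sends faked key events; the
// activation it signals (via the OS and browser) surfaces to the page
// as a plain high-level 'click', so this handler is the one that works:
element.addEventListener('click', activate);

// Simulated JAWS double-tap: no keydown is fired, only 'click'.
element.dispatchEvent(new Event('click'));
console.log(activations); // → 1 (with only the keydown listener: 0)
```

The keydown-only version is exactly the "naive" pattern described above: it checks keycodes itself and therefore only works for inputs that send key events.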


>>
>> - IF the content instead listens for high-level focus/blur/click
>> events, it will react to this swipe, as interpreted by JAWS. the same
>> way that traditional keyboard will also work (in that scenario, the
>> keyboard, in addition to firing keydown/keypress, ALSO signals to the
>> browser that it needs to move the focus or activate the element that
>> currently has focus).
>>
>> So, functionally, this is exactly the same. Just that an author should
>> not rely on creating their own completely custom keystroke
>> intercept/interpretation, but instead rely on listening to high-level
>> focus/blur/click events.
>
> I was following you up to here.
>
> Suggesting that these high level should ALSO be supported for those AT
> that could use them.
>
> But then you said   INSTEAD THE AUTHOR —  which means that you think
> that authors should NOT do keyboard but INSTEAD do the high level.
>
> This would help those AT you talked about — but would eliminate all the
> people who rely on they keyboard interface.

NO IT DOESN'T. If you're using a keyboard interface, it already fires 
both keydown/keyup/keypress AND focus/blur/click/contextmenu/etc. 
events. So what I'm proposing is to specify that authors use the 
latter, which makes sure their app/content works both for "real" 
keyboards AND for touch+AT and similar scenarios.

> *Why not add the requirement for high level even support — but leave the
> Keyboard interface requirement in place? *

The base requirement of supporting high-level, device/input-agnostic 
events would automatically cover keyboard already. Authors are free to 
do extra low-level keydown/keyup/keypress handling in addition if they 
want to, but the high-level agnostic events cover more inputs, 
including the touch+AT scenario.
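A sketch of why the keyboard is covered "for free" (the `EventTarget` again stands in for a native button, whose real browser behaviour this simulates):

```javascript
// Why high-level events cover keyboard users automatically.
const button = new EventTarget();   // stand-in for a native <button>
const log = [];

// The author's single, input-agnostic activation handler:
button.addEventListener('click', () => log.push('activated'));

// Optional extra low-level handling (e.g. custom arrow-key support):
button.addEventListener('keydown', () => log.push('key seen'));

// Pressing ENTER on a real keyboard fires BOTH a key event AND,
// for native buttons/links, a high-level 'click':
button.dispatchEvent(new Event('keydown'));
button.dispatchEvent(new Event('click'));

// A touch+AT activation fires ONLY 'click' — still handled:
button.dispatchEvent(new Event('click'));

console.log(log); // → ['key seen', 'activated', 'activated']
```

The 'click' handler alone is enough for both input paths; the keydown listener is pure enhancement, never the only route to the functionality.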

>> And the reason why I think it would make sense to extend
> OK
>> (and at the same time tighten)
> By tighten you mean eliminate what some users need?

Do you really think I'd propose that?

Tightening what authors need to do.

>> the definition of 2.1/2.1.1/2.1.2/2.1.3 is that otherwise we'd create
>> SCs that are practically identical to the current 2.1.1, 2.1.2, 2.1.3,
>> but just with "touchscreen + AT" in place of "keyboard”.
>
> Again — what you are asking to add — does NOT COVER the user of keyboard
> interface. Only some.    So why cut them off to create a better user
> experience for others?

It does cover the user of keyboard interfaces. How does it not?

> Why not add what you think should be added?
>
>
>> Which seems overly specific and wasteful (and not very future-proof,
>> as by the time WCAG 2.1 comes out, there may well be new, slightly
>> different but functionally identical input methods...then you'd have
>> SCs that are, on paper, too specific to the current input methods and
>> that don't necessarily make it clear that they apply to these new
>> input methods).
>
> We have other SC’s that are similar but different.

We could pile lots more SCs into WCAG 2.1, then, that are almost 
exactly the same. I'm not sure that would help with making it easily 
understandable/digestible for authors...

P
-- 
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke

Received on Monday, 4 July 2016 20:48:40 UTC