Re: Proposal: expanding/modifying Guideline 2.1 and its SCs (2.1.1, 2.1.2, 2.1.3) to cover Touch+AT

On 04/07/2016 20:21, Gregg Vanderheiden RTF wrote:
>
>
> /gregg/
>
>> And here's where I think YOU are misunderstanding. Because when I use,
>> say, VoiceOver on iOS and I swipe left/right, the browser's focus is
>> moved BUT no "fake" TAB/SHIFT+TAB or fake CURSOR LEFT/RIGHT key even
>> is sent. It does not emulate a keyboard, it simply moves the focus.
>
> of course
>
> and if you touch the screen it doesnt tab to the item and hit spacebar.


We're quite specifically talking about touch WITH assistive technology 
(VoiceOver, TalkBack, Narrator, JAWS on a touchscreen laptop, NVDA on a 
touchscreen laptop).

Pure touch (with no AT running) is a pointer input, not a non-pointer input.

>> So, having said all this, you seem to be saying that as far as you're
>> concerned, the current WCAG 2.0 2.1, 2.1.1, 2.1.2, 2.1.3 already
>> implicitly cover touch+AT (swipes) etc? Well in that case, my change
>> should be fine since I'm only making it more explicit, no?
>
> Nope — not saying that at all.
>
> Just saying that the SWIPE provision is a completely different issue
> from the keyboard interface one.
>
>  SWIPE is not an alternative  solution to keyboard interface requirement.
> It is a different parallel issue.

The swipe in the touch + AT scenario is handled by the AT. The AT 
interprets it (not the content/app created by the author), and sends 
high-level "move the focus to this control", "activate this control", 
etc. commands via the OS to the content/app.


> We need Keyboard Interface requirement (for all the reasons mentioned)
>
> We may ALSO need a SEPARATE requirement to address an issue where SWIPE
> should work but doesnt

No, as this is handled by the AT. If the swipe itself doesn't work, it's 
the AT's fault for not interpreting it correctly. What the AT sends as a 
result of the swipe, though, is the standard OS-level signal of "move 
the focus, activate the control, etc".


> — for that user group - but SWIPE won’t address
> the problem of keyboard interface access.

It will: to make content/apps work in the touch+AT scenario, you do 
exactly what you'd do to make things work with a traditional keyboard: 
you listen for focus/blur/click events (if we're talking web content for 
a minute, rather than native apps, though the concept is the same) 
INSTEAD OF mousedown/mouseup/mouseover/mouseenter or the equivalent 
touchstart/touchmove/touchend.
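To make that concrete, here's a minimal sketch (mine, not from the 
original mail) of what that device-independent wiring might look like 
for a native button in web content. The state object and handler names 
are invented for illustration; the point is simply that the widget only 
listens for the high-level events:

```javascript
// Hypothetical widget state, kept as a plain object so the handler
// logic is separate from the DOM wiring.
function createWidgetState() {
  return { focused: false, activations: 0 };
}

// High-level, device-independent handlers: these fire the same way
// whether focus/activation came from a mouse, a keyboard, or an
// AT-interpreted swipe/double-tap.
function onFocus(state) { state.focused = true; }
function onBlur(state)  { state.focused = false; }
function onClick(state) { state.activations += 1; }

// DOM wiring (only runs in a browser context). Note there is no
// mousedown/mouseup/touchstart/touchend listener anywhere.
if (typeof document !== 'undefined') {
  const widget = document.querySelector('button');
  if (widget) {
    const state = createWidgetState();
    widget.addEventListener('focus', () => onFocus(state));
    widget.addEventListener('blur',  () => onBlur(state));
    // On a native <button>, 'click' fires for a mouse click, for
    // ENTER/SPACE on a real keyboard, and for the "activate" command
    // an AT sends after a swipe + double-tap.
    widget.addEventListener('click', () => onClick(state));
  }
}
```

Because `click` on a native button is an activation event rather than a 
mouse event, the one handler covers all three input scenarios.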

> There was a good description of the need for something for SWIPE support
> in other email — but I don’t think I have heard the actual problem well
> stated.   I’m not saying it wasn’t —   I have so many projects I don’t
> get to read all of the emails though I try to read all of them on a
> thread before responding.
>
> Can you/someone state clearly (or repost) the description of the problem
> we are talking about creating this SC to solve?

Ok, let's try once more:

- assume you're using a touchscreen on, say, a Microsoft Surface 3 
touch-enabled laptop

- you are running JAWS on that laptop, under Windows 10

- you're using JAWS' touch gesture support

- you're on a web page, with traditional links, buttons, etc

- you swipe right on the touchscreen; JAWS recognises the gesture, and 
signals (via the OS) to the browser that the focus needs to be moved to 
the next element in the page

- the browser moves the focus to the next element (say a link)

- NO keypress/keydown event is fired in JavaScript. If the webpage 
naively decided to handle its own keyboard control by listening for 
keypress or keydown events, then checking whether the keycode was TAB or 
ENTER to decide whether to move its own internal focus or activate some 
functionality, this will NOT work, as JAWS does not send faked keyboard 
events

- IF the content instead listens for high-level focus/blur/click events, 
it will react to the swipe as interpreted by JAWS, in the same way that 
a traditional keyboard also works (in that scenario, the keyboard, in 
addition to firing keydown/keypress, ALSO signals to the browser that it 
needs to move the focus or activate the element that currently has focus).

So, functionally, this is exactly the same. The only caveat is that an 
author should not build their own completely custom keystroke 
interception/interpretation, but should instead listen for high-level 
focus/blur/click events.
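The contrast in the walkthrough above can be sketched as follows. This 
is my own illustration, not code from the mail: the event names follow 
the DOM, but the two event streams are simulated objects standing in for 
what a real keyboard and a JAWS double-tap would deliver:

```javascript
// Anti-pattern: only reacts if raw key events with specific keys arrive.
// Touch+AT never delivers those, so this handler silently does nothing.
function naiveActivate(events) {
  let activated = false;
  for (const e of events) {
    if (e.type === 'keydown' && (e.key === 'Enter' || e.key === ' ')) {
      activated = true;
    }
  }
  return activated;
}

// Device-independent pattern: reacts to the high-level 'click'
// activation event, however it was produced.
function robustActivate(events) {
  return events.some((e) => e.type === 'click');
}

// What a real keyboard produces on a native control: a keydown AND a
// browser-synthesized click (activation).
const keyboardEvents = [{ type: 'keydown', key: 'Enter' }, { type: 'click' }];

// What a JAWS/VoiceOver swipe + double-tap produces: ONLY the
// high-level click — no faked key events.
const touchATEvents = [{ type: 'click' }];

naiveActivate(keyboardEvents);  // works for keyboard users...
naiveActivate(touchATEvents);   // ...but fails for touch+AT users
robustActivate(touchATEvents);  // the high-level handler works for both
```

The naive version appears to "support the keyboard" in testing with a 
physical keyboard, which is exactly why the failure with touch+AT goes 
unnoticed.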

And the reason why I think it would make sense to extend (and at the 
same time tighten) the definition of 2.1/2.1.1/2.1.2/2.1.3 is that 
otherwise we'd create SCs that are practically identical to the current 
2.1.1, 2.1.2, 2.1.3, just with "touchscreen + AT" in place of 
"keyboard". That seems overly specific and wasteful, and not very 
future-proof: by the time WCAG 2.1 comes out, there may well be new, 
slightly different but functionally identical input methods, and you'd 
then have SCs that are, on paper, too specific to today's input methods 
and that don't make it clear that they also apply to the new ones.

P
-- 
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke

Received on Monday, 4 July 2016 19:36:02 UTC