Re: Proposal: expanding/modifying Guideline 2.1 and its SCs (2.1.1, 2.1.2, 2.1.3) to cover Touch+AT

Hi Patrick,

I think there is a fundamental misunderstanding of the Keyboard Interface provision. 

Also, note the importance of OR vs. AND.

Let me see if I can make it clearer below.


Hint:

The key bit from below is:


It is not about “non-pointer”. That is the trap this all falls into. It is about being non-input-technique-specific.

Pointing is technique-specific.
So is swiping.
So are keyboards.

But keyboard interfaces are not technique-specific.
All of the following are supported by a keyboard interface:
sip and puff
brain wave
eye gaze
keyboard of any size
speech recognition 
all of the things you are trying to add above (which are all possible if the device supports keyboard interface access) 
(and swiping is too, if all swipes have a keyboard equivalent)




gregg

> On Jul 3, 2016, at 4:27 PM, Patrick H. Lauke <redux@splintered.co.uk> wrote:
> 
> As threatened/promised, here's what I've got so far. Fair warning: this is potentially a long read, and will likely require a lot more discussion (brace yourselves for another epic mailing list thread).
> 
> I'm limiting the initial discussion to the MATF list (though I've sought feedback outside of the list separately, which helped shape some of the language below).
> 
> I would request that you give this proposal some serious consideration (I'd add that the thinking behind this proposal is essentially how we at The Paciello Group are currently interpreting WCAG 2.0 when doing mobile/tablet audits of web content, in the absence of any current provision for touch+AT scenarios/issues in WCAG 2.0 - so this interpretation has been roughly "road-tested", though by no means I'd claim it's exhaustive or covers all possible edge cases).
> 
> ##########
> # Proposal:
> ##########
> 
> Expanding the current Guideline 2.1 and SCs 2.1.1, 2.1.2 and 2.1.3 to cover not only "keyboards", but inputs that provide a functionally similar interaction mode (in particular in the touchscreen+AT scenario). The primary reason is to avoid duplicating various SCs purely for "touch+AT" when the concept is already expressed in the same way in the current 2.1/2.1.1/2.1.2/2.1.3 but, due to the language used in those, "touch+AT" is not covered.


None of the items mentioned are functionally similar. 

2.1 is NOT about KEYBOARDS, and this is a critical and common misunderstanding.

It is about the “KEYBOARD INTERFACE”, which is an interface that takes codes for keystrokes.
This is important because the keyboard interface has the following characteristics
(and anything that is to be considered similar has to have these characteristics as well):

1) It is completely input-modality independent.

2) It does not require any particular movement ability on the part of the user; in fact, it does not require any movement at all.

3) It is something the author can do, independent of any knowledge of the user agent or device being used.

4) All of the following types of input can be used, as long as the author follows the SC and makes the content fully navigable and usable via the keyboard interface functions of the web technologies (see the sketch just after this list):
sip and puff
brain wave
eye gaze
keyboard of any size
speech recognition 
all of the things you are trying to add above (which are all possible if the device supports keyboard interface access) 
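
As a rough sketch of what this looks like for the author (the element id and handler names below are invented purely for illustration; they are not taken from Patrick's proposal or from WCAG), the control simply listens for key events arriving through the keyboard interface. It never knows, or needs to know, what actually produced those events.

    // Illustrative sketch only: "#custom-toggle" is a made-up element id.
    // The author wires the control to the keyboard interface (key events).
    // Whether those events come from a physical keyboard, an on-screen
    // keyboard, a switch, speech recognition, or eye gaze is decided by the
    // OS and user agent, not by this code.
    const toggle = document.querySelector<HTMLElement>('#custom-toggle');

    if (toggle) {
      toggle.tabIndex = 0;                          // reachable in the focus order
      toggle.setAttribute('role', 'button');
      toggle.setAttribute('aria-pressed', 'false');

      const activate = (): void => {
        const pressed = toggle.getAttribute('aria-pressed') === 'true';
        toggle.setAttribute('aria-pressed', String(!pressed));
      };

      // Keyboard interface: Enter and Space activate, as on a native button.
      toggle.addEventListener('keydown', (event: KeyboardEvent) => {
        if (event.key === 'Enter' || event.key === ' ') {
          event.preventDefault();                   // don't also scroll the page on Space
          activate();
        }
      });

      // Pointer input (mouse, touch, stylus) can still be offered in addition.
      toggle.addEventListener('click', activate);
    }

Nothing in that sketch depends on which of the inputs in the list above is actually in use; that is the sense in which the keyboard interface is modality independent.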


Calling anything else that does not support these characteristics “equivalent” is inaccurate.

Creating an SC that says Keyboard Interface OR xxxxxx no longer requires a keyboard interface, and it loses all of the above.


Creating an SC which says Keyboard Interface AND xxxxxx is not one SC but two SCs (two requirements), and it should be written up as two. We have never combined two requirements in one provision; it is not good standards design.



I am not arguing against another SC; I am arguing against confusing what this SC is about, and against putting two SCs in one.


> 
> ## Considerations:
> 
> - please review these against the current WCAG 2.0 2.1/2.1.1/2.1.2/2.1.3
> 
> - does the revised language still adequately cover the traditional "keyboard" scenario? we don't want to make the new guideline/SCs "looser". We want to expand their applicability, without providing any new loopholes/wiggle room for authors NOT to support traditional keyboard

If it is done as an OR, or if it implies that other approaches (that do not meet the above) are ‘equivalent’, then it would indeed loosen and weaken SC 2.1.

> 
> - does the revised language adequately cover the touch+AT scenario and our intended requirements (that users of touch+AT must be able to navigate/operate stuff, and that they don't get trapped)?

I don’t think it does, because the keyboard interface is still an option if it is an OR, so nothing new would be required.

> 
> - a reminder that this is specifically about touch+AT - in this scenario, it's the AT (VoiceOver, TalkBack, Narrator, JAWS on a touchscreen Win 10 laptop, NVDA on a touchscreen Win 10 laptop) that is interpreting the gestures/swipes/taps and translating these into instructions to the OS/browser (move focus to next element, to previous element, activate); scenarios where an author OVERRIDES the AT (which were mentioned in some of our calls, but I feel violate the non-interference requirement), and scenarios where we're looking at touch WITHOUT AT (e.g. where authors implemented their own gesture-based interface, for instance) are NOT covered by my suggestions below, and these WILL need new SCs (so to be clear, I'm not saying "if we just do the below, we can go home early folks..."); these simply cover one particular aspect (which we can then set aside and concentrate on the touch w/out AT stuff, stylus, fancy stylus with tilt/rotation/etc, device motion sensor, light sensors, etc).

We should be making NO requirements of anyone except the Web Page Author. 

So what exactly are you requiring of the author (other than that the content be keyboard operable, so that all of the navigation techniques above would work)?


> 
> Note: in many respects, the touch+AT scenario is actually more limited than traditional keyboard, since it does not generally allow for arbitrary keys (like cursor keys, any letters, ESC, etc) to be triggered unless an on-screen keyboard is provided by the OS/AT (and, in the case of VoiceOver/iOS, TalkBack/Android, Narrator/Win10Mobile) this only happens when a user explicitly sets their focus on an input (like a text input in a form). It is functionally similar to the most basic keyboard TAB/SHIFT+TAB/ENTER/SPACE interactions (though it does NOT fire "fake" keyboard events, like a fake TAB for instance). This actually makes it potentially more involved for authors to satisfy this new/modified guideline/SC, meaning that if the below were to be included in 2.1, it would tighten (not loosen) the requirement on authors. If this is felt too limiting, one possibility could be to add some form of additional exception to the modified 2.1.1 to keep it as loose as current WCAG 2.0, and rely solely on 2.1.3 and its "no exceptions" (but 2.1.3 would then, I'd say, need to be promoted to AA instead of AAA).

So how would someone with a keyboard access the content, if access is more limited than what is now required (and the provision would allow use of another approach instead of the keyboard)?

(Again: if it is to require keyboard interface access AND something else, the something else should be an additional SC.)
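
As a rough illustration of how an author can cover both readings (the element ids below are invented; this is my sketch, not part of the proposal): the panel can always be left through a real, focusable Close control, which works for TAB/ENTER and, as I understand Patrick's description, for touch+AT swiping and activation as well. ESC is then offered in addition for keyboard users, not as the only way out.

    // Illustrative sketch only: element ids are made up.
    const panel = document.querySelector<HTMLElement>('#details-panel');
    const closeButton = document.querySelector<HTMLButtonElement>('#details-close');
    const opener = document.querySelector<HTMLButtonElement>('#details-open');

    function closePanel(): void {
      panel?.setAttribute('hidden', '');
      opener?.focus();                      // focus can always move away (cf. 2.1.2)
    }

    // A real, focusable button: activation works from the keyboard interface
    // and from inputs that cannot produce arbitrary keys such as ESC.
    closeButton?.addEventListener('click', closePanel);

    // ESC as an additional convenience for keyboard users, not the only exit.
    panel?.addEventListener('keydown', (event: KeyboardEvent) => {
      if (event.key === 'Escape') {
        closePanel();
      }
    });

If ESC were the only way to leave the panel, keyboard users would be fine but, per Patrick's note above, touch+AT users would not; a focusable close control in the content avoids relying on either assumption.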

> 
> Of course, beyond the below, there'd be a need to review all relevant "understanding", "how to meet", and related failure/success examples/techniques. But for now, I feel we need to nail this part, as it's quite fundamental to any further input-specific work we want to carry out under MATF.
> 
> 
> 
> ######################
> # Modified Guideline/SCs:
> ######################
> 
> # Guideline 2.1 Non-pointer Accessible: Make all functionality available from non-pointer input.
> 
> ## Understanding Guideline 2.1
> 
> ### 2.1.1 Non-pointer: All functionality of the content is operable through accessibility supported non-pointer input interfaces (such as keyboards), without requiring specific timings for individual interactions, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints. (Level A)

No, it is not about “non-pointer”. That is the trap this all falls into. It is about being non-input-technique-specific.

Pointing is technique-specific.
So is swiping.
So are keyboards.

But keyboard interfaces are not technique-specific.
All of the following are supported by a keyboard interface:
sip and puff
brain wave
eye gaze
keyboard of any size
speech recognition 
all of the things you are trying to add above (which are all possible if the device supports keyboard interface access) 
(and swiping is too, if all swipes have a keyboard equivalent)


> 
> Note 1: non-pointer inputs include (but are not limited to) physical keyboards, on-screen keyboards, single-switch and two-switch interfaces, assistive technologies such as speech input (which translate spoken commands into simulated keystrokes and user agent interactions) and screen readers on a touchscreen device (which translate touchscreen swipes and other gestures into user agent interactions). [ED: this is pretty much what should go in the glossary, but I'd say the note can reiterate it here for clarity?]
> 
> Note 2: The exception relates to the underlying function, not the input technique. For example, if using handwriting to enter text, or gestures on a touchscreen device running gesture-controlled assistive technology to move the current focus, the input technique (handwriting, touchscreen gesture) usually requires path-dependent input, but the underlying function (text input, moving the focus) does not.
> 
> Note 3: This does not forbid and should not discourage authors from providing pointer input (such as mouse, touch or stylus) in addition to non-pointer operation.
> 

Again: this is not about “non-pointer”.
It is about being input-modality independent.

Only the keyboard interface qualifies for this.

We COULD try to create a new standard for “modality-independent input”, but then we would need to connect it to keyboards, and we would end up right back where we are…



> 
> ### 2.1.2 No Focus Trap: If focus can be moved to a component of the page using accessibility supported non-pointer input interfaces, then focus can be moved away from that component using only non-pointer input interfaces. If it requires more than unmodified exit method (such as arrow keys, tab keys, or other standard exit methods), the user is advised of the method for moving focus away. (Level A)
> 
> Note: Since any content that does not meet this success criterion can interfere with a user's ability to use the whole page, all content on the Web page (whether it is used to meet other success criteria or not) must meet this success criterion. See Conformance Requirement 5: Non-Interference.
> How to Meet 2.1.2 | Understanding 2.1.2
> 
> ### 2.1.3 Non-pointer (No Exception): All functionality of the content is operable through accessibility supported non-pointer input interfaces (such as keyboards) without requiring specific timings for individual interactions. (Level AAA)
> 
> 
> 
> # Additions to glossary:
> 
> ## pointer input
> an input device that can target a specific coordinate (or set of coordinates) on a screen, such as a mouse, pen, or touch contact. (cross-reference https://w3c.github.io/pointerevents/#dfn-pointer) [ED: i'd also be happy to modify the glossary definition in the Pointer Events Level 2 spec (which is about to go to FPWD) to talk about "pointer input" rather than "pointer", to make it match this proposed wording]
> 
> ## non-pointer input
> compared to a pointer input (cross-reference previous glossary entry) - which allows user to target a specific coordinate (or set of coordinates) on a screen - non-pointer inputs generally provide an indirect way for users to move their focus and activate controls/functionality. Non-pointer inputs include (but are not limited to) physical keyboards, on-screen keyboards, single-switch and two-switch interfaces, assistive technologies such as speech input (which translate spoken commands into simulated keystrokes and user agent interactions) and screen readers on a touchscreen device (which translate touchscreen swipes and other gestures into user agent interactions).
> 
> 
> -- 
> Patrick H. Lauke
> 
> www.splintered.co.uk | https://github.com/patrickhlauke
> http://flickr.com/photos/redux/ | http://redux.deviantart.com
> twitter: @patrick_h_lauke | skype: patrick_h_lauke
> 

Received on Monday, 4 July 2016 17:21:43 UTC