- From: <josh@interaccess.ie>
- Date: Thu, 30 Jun 2016 10:36:59 +0000
- To: "Detlev Fischer" <detlev.fischer@testkreis.de>, gregg@raisingthefloor.org, redux@splintered.co.uk
- Cc: public-mobile-a11y-tf@w3.org, team-wcag-editors@w3.org
Doh! Never say 'Quick' anything - the thread is jinxed *grin...

------ Original Message ------
From: "Detlev Fischer" <detlev.fischer@testkreis.de>
To: gregg@raisingthefloor.org; redux@splintered.co.uk
Cc: public-mobile-a11y-tf@w3.org; team-wcag-editors@w3.org
Sent: 30/06/2016 11:31:26
Subject: Re: Quick (hopefully) question about WCAG 2 glossary definition of "keyboard interface"

> One difference between sequential navigation via swiping and keyboard
> sequential navigation (tabbing, arrowing) is predictability. In several
> user tests we have seen issues of focus loss or accidental focus jumps,
> even with users who know the swipe gestures and can in other
> circumstances apply them correctly. Swiping (especially with Android /
> TalkBack) often resets the focus to whatever happens to be at the
> position where the finger came down. This is partly due to problems of
> the touch implementation (especially compared with the better
> implementation under iOS/VoiceOver), partly due to processing speed
> (test devices were a Moto E, Galaxy Note 3, Nexus 4, etc., so not the
> latest and fastest hardware). Compared to that, tabbing is a fairly
> predictable action.
>
> This may still not be a general reason to keep keyboard and touch
> sequential navigation apart - just saying...
>
> Detlev
>
>> And I'm not saying we can ignore keyboard. What I AM saying is that
>> it's irrelevant that "swipe" is a "gesture", the same way that, for
>> instance, you don't explicitly call out "voice control" or similar...
>> because regardless of the actual mechanism, the end result is what
>> counts: the user controls the focus (using a non-pointer device, so
>> NOT a mouse or touch or pen) and can activate controls etc.
>> In essence the OS/UA themselves handle the peculiarities of
>> recognising gesture, voice, etc., and translate it at a high level
>> into "move the focus, trigger the click action, etc.".
>>
>>> With keyboard access I can use an alternate keyboard, a sip and puff
>>> alternate keyboard, a speech control alternate keyboard, a miniature
>>> keyboard, a large keyboard and many other input devices to control
>>> my iPhone (or any mobile device). All of these and much more are cut
>>> off if you say "swipes can replace keyboard access".
>>
>> And I never said "swipes can replace keyboard access". I'm saying
>> that for users who can use the touchscreen (but may be blind/visually
>> impaired), using VoiceOver and swipe gestures is equivalent to using
>> an external keyboard, or switch control, or any other peripheral, in
>> that the OS translates that gesture for them into a "move the focus"
>> action. Thus, at a functional level, it is equivalent to "keyboard"
>> access (just that it does not fire an actual keystroke event).
>>
>>> Again - because the iPhone provides a keyboard equivalent for each
>>> navigation gesture — the iPhone is completely keyboard operable and
>>> meets the WCAG 2.0 keyboard SC. If it did not - and swipes were the
>>> only way - then it would fail.
>>
>> I'm not talking about "does the iPhone pass/fail WCAG 2.0's SCs". I'm
>> saying that unless you want to see all keyboard SCs being exactly
>> duplicated unnecessarily to also cover sequential navigation
>> mechanisms which are exactly the same, from a functional standpoint,
>> as sequential navigation using an actual input device that fires
>> keystrokes, the definition of "keyboard" can be extended to allow for
>> input mechanisms like VoiceOver+swipe gestures. This does not remove
>> or weaken the requirement that content needs to also work with an
>> actual Bluetooth keyboard.
>> In fact, it tightens the requirement: without a Bluetooth keyboard,
>> VO users can't simply press arbitrary keys, like a particular letter
>> or an arrow key, meaning that content needs to be made to work at a
>> minimum with just reacting to focus/click/blur events (because,
>> paradoxically, things that currently quite happily pass the letter of
>> WCAG 2.0 because they are "keyboard accessible" don't work in
>> situations where a user is navigating using VoiceOver/TalkBack on a
>> touchscreen, because they can't press specific keys - see
>> https://w3c.github.io/aria-in-html/#aria-touch).
>>
>>> Here we are talking about web content though — and it needs to be
>>> accessible on both PCs and mobile. So you need keyboard access for
>>> all the places and mobile devices that don't have swipe nav.
>>
>> You seem to be misunderstanding how content is made to work with
>> VoiceOver + swipe (and note, once more, I'm explicitly talking about
>> swipe WHEN AT IS ENABLED, as the AT is the one which then handles the
>> gesture and translates it into high-level "move focus, activate
>> element, etc." instructions that are passed on to the browser): you
>> don't code special swipe nav detection into your site. You take care
>> to use simple focus/blur/click handlers... the same way you would to
>> make it "keyboard accessible" on a PC.
>>
>>> IF YOU ARE TALKING ABOUT ADDING A NEW requirement (that all content
>>> be navigable BY BOTH the keyboard AND gestures) — then you have a
>>> different problem. The gesture control is not from the web content
>>> but from the iPhone, and the author has no control of that…
>>
>> Exactly, the author has no control over that, as the AT will handle
>> the gestures in the case of AT+touch. Which is my point: functionally,
>> the author has no control, but the author also doesn't need to do
>> anything special that they're not already doing to handle keyboard.
>>
>>> Help me here.
>>> We are going back and forth here, but I'm not sure I really know
>>> what you are proposing to add or subtract. Can you tell me
>>> specifically the wording you are proposing (or the final effect you
>>> want the wording to have)?
>>
>> At this point, I wanted to understand why "keystrokes" was explicitly
>> added to the definition. Now that I know, I'll be proposing a slight
>> rewording in the coming week or so. That should give a hopefully
>> clearer idea of where I'm heading with this.
>>
>>> It is always good to include more options. But dropping the basic
>>> one is not.
>>
>> Again, I'm not dropping anything.
>>
>>> Again, including other built-in access approaches is always good.
>>
>> That's the one. I'm proposing to carefully expand the definition of
>> "keyboard" to include the AT+touch scenario, where the AT is handling
>> the gesture recognition and then translates that into high-level
>> "move focus, activate the element, ..." signals to the browser.
>>
>>> REQUIRING them — means everyone has to do all of them on everything.
>>> Be sure that is possible.
>>
>> As the work necessary to make content work in the AT+touch scenario
>> is functionally the same as that of making things work with a regular
>> keyboard (with some clarification needed about which specific events
>> need to be targeted, which is something that needs to happen in the
>> non-normative/understanding/how-to-meet docs), it is possible.
>>
>>> ELIMINATING the basic one for one that is usable by fewer people
>>> would be a significant reduction in access span.
>>
>> Not eliminating anything here.
>>
>>> Again - can you say what IT is?
>>>
>>> Adding a new requirement in addition to keyboard interface access?
>>>
>>> IT above (saying that navigation doesn't have to all be possible
>>> from just the keyboard) (that you said doesn't weaken it) would
>>> eliminate access for all the people using the approaches I
>>> enumerated above.
>>>
>>> Or are you saying something different?
>>
>> Yes, see above.
>>
>>>>> I DO think it is GREAT that gestures also be possible. Just like
>>>>> it is great that there are mouse ways to do things done by
>>>>> keyboard. Both should be available to users. But keyboard access
>>>>> should always be one option.
>>>>
>>>> And keyboard access would still be one of the options, as it's the
>>>> most common non-pointer/sequential navigation interface in
>>>> circulation. Just that on devices which are primarily touchscreen
>>>> driven, the most common non-pointer navigation paradigm is
>>>> functionally the same, but does not send "keystrokes" - the effect
>>>> is the same though, in that it moves the focus.
>>>
>>> When you say OPTIONS — do you mean:
>>>
>>> 1) OPTION for the user (and therefore a requirement of the author?)
>>>
>>> Then we are on the same page. You are requiring BOTH the keyboard
>>> AND gesture? My questions then are:
>>>
>>> * how does the author provide gesture access to his content?
>>> * how does the author provide gesture access to his content on PCs?
>>
>> Yes, this is the option. And as outlined above, functionally this is
>> completely transparent to the author, as it's the AT (in the AT+touch
>> scenario I'm talking about here) that interprets the gestures and
>> sends high-level commands to the UA, which have the same effect that
>> "actual physical keyboard" navigation has.
>>
>>> 2) If you mean OPTION of the author — then the author would be able
>>> to have some navigation accessible via swipe and not keyboard, and
>>> that would cut out all the people above again.
>>>
>>> Then it is not an option for the user — the author decides how the
>>> user must be able to access the content.
>>
>> No, that's not what I'm proposing.
>> In short: now that you've clarified the presence of "keystroke"
>> (which is the one fundamental stumbling block I saw for even
>> attempting to propose a change), I'll work on an actual proposal for
>> your/Mobile TF/WCAG GL consideration.
>>
>> Thanks,
>>
>> P
>> --
>> Patrick H. Lauke
>>
>> www.splintered.co.uk | https://github.com/patrickhlauke
>> http://flickr.com/photos/redux/ | http://redux.deviantart.com
>> twitter: @patrick_h_lauke | skype: patrick_h_lauke
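The "simple focus/blur/click handlers" point in the message above can be sketched in code. The snippet below is a hypothetical illustration (the widget, function names and toggle behaviour are invented for this example, not taken from the thread): a custom control wired with an ordinary click handler works for mouse, touch, hardware keyboard, and VoiceOver/TalkBack swipe navigation alike, because the AT moves focus via swipes and fires a regular click event on double-tap; no swipe-specific code is needed.

```javascript
// Pure toggle logic, kept separate from the DOM wiring so it is easy to test.
function activate(state) {
  return { ...state, expanded: !state.expanded };
}

// Wire up a custom disclosure "button". Note there is no gesture detection:
// - tabindex="0" makes the element reachable via Tab AND via AT swipe
//   navigation (the AT handles the gesture and just moves the focus);
// - VoiceOver/TalkBack translate a double-tap into an ordinary click event,
//   so the single click handler covers mouse, touch and AT activation;
// - the keydown handler only adds Enter/Space support for hardware
//   keyboards, routing through the same click path.
function wireUpCustomButton(el, onChange) {
  let state = { expanded: false };
  el.setAttribute('role', 'button');
  el.setAttribute('tabindex', '0');
  el.setAttribute('aria-expanded', 'false');

  el.addEventListener('click', () => {
    state = activate(state);
    el.setAttribute('aria-expanded', String(state.expanded));
    onChange(state);
  });

  el.addEventListener('keydown', (e) => {
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault();
      el.click();
    }
  });
}
```

The design point mirrors the thread: the author codes for generic focus and activation events once, and the same wiring satisfies both "actual physical keyboard" users and AT+touch users.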
Received on Thursday, 30 June 2016 10:35:08 UTC