- From: Detlev Fischer <detlev.fischer@testkreis.de>
- Date: Tue, 28 Jul 2015 17:04:37 +0200 (CEST)
- To: public-mobile-a11y-tf@w3.org
Hi Patrick,

I have dithered over whether I should reply between the lines, as the discussion may get a bit unwieldy, or rather attempt a write-up that tries to sift through the advantages and disadvantages of changing core WCAG vs. the seemingly preferred option of WCAG extensions (I haven't been on the WG calls where this came out as the preferred option). I'll try the first option, for now.

>> Hi Patrick, I didn't intend this first draft to be restricted to
>> touch only devices - just capturing that input mode. It's certainly
>> good to capture input commonalities where they exist (e.g., activate
>> elements on touchend/mouseup)
>
> Or, even better, just relying on the high-level focus/blur/click ones
> (though even for focus/blur, most touch AT don't fire them when you'd
> expect them - see
> http://patrickhlauke.github.io/touch/tests/results/#mobile-tablet-touchscreen-assistive-technology-events
> and particularly
> http://patrickhlauke.github.io/touch/tests/results/#desktop-touchscreen-assistive-technology-events
> where none of the tested touchscreen AT trigger a focus when moving to a
> control)

Thanks for pointing again to this great resource! Looking at the tables, I wasn't quite sure what "1st activation" vs. "2nd activation" stands for - for screen readers on mobile, I assume that 1st activation = swiping left/right, and 2nd activation = a double-tap to activate? And on the first table, I wasn't sure how 1st tap and 2nd tap would be spaced - would that be a quick double tap (below 300 ms) to trigger browser zoom? This may be obvious to others, but I am not sure.

>> - but then there are touch-specific
>> things, not just touch target size as mentioned by Alan, but also
>> touch gestures without mouse equivalent. Swiping - split-tapping -
>> long presses - rotate gestures - cursed L-shaped gestures, etc.
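To make the "rely on the high-level events" point concrete, here is a minimal sketch. The helper names are hypothetical, not from any spec or from the test pages linked above:

```javascript
// Sketch: wire up activation via the high-level "click" event only.
// Browsers (and touchscreen AT) synthesize "click" for mouse presses,
// taps and AT double-taps alike, so one listener covers all of them -
// unlike touch-specific handlers, which touchscreen AT typically do
// not trigger. "attachActivation" is a hypothetical helper name.
function attachActivation(element, action) {
  element.addEventListener('click', action);
}

// By contrast, input-specific wiring like this is invisible to most
// touchscreen AT users, and never fires for mouse or keyboard input:
function attachTouchOnlyActivation(element, action) {
  element.addEventListener('touchend', action);
}
```

The first helper is the pattern Patrick describes; the second is what the quoted draft warns about capturing as a commonality.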
> It's probably worth being careful about distinguishing between gestures
> that the *system / AT* provides, and which are then translated into
> high-level events (e.g. swiping left/right, which a mobile AT will
> interpret itself and move the focus accordingly), and gestures that are
> directly handled via JavaScript (with touch- and pointer-event-specific
> code) - also keeping in mind that the latter can't be done by default
> when using a touchscreen AT unless the user explicitly triggers some
> form of gesture passthrough.

That's a good point. Thinking from the perspective of an AT user carrying out an accessibility test, or even any non-programmer carrying out a heuristic accessibility evaluation using browser toolbars and tools like Firebug, I wonder what is implied in making that distinction, and how it might be reflected in documented test procedures. Are we getting to the point where it becomes impossible to carry out accessibility tests without investigating in detail the chain of events fired?

> For the former, the fact that the focus is moved sequentially using a
> swipe left/right rather than TAB/SHIFT+TAB does not cause any new
> issues not covered, IMHO, by the existing keyboard-specific SCs if,
> instead of keyboard, they talked in more input-agnostic terms. Same for
> not trapping focus etc.

One important difference is that swiping on mobile also reaches non-focusable elements. While a script may keep keyboard focus safely inside a pop-up window, a SR user may swipe beyond that pop-up unawares (unless the page background has been given the aria-hidden treatment, and that may not work everywhere as intended). Also, it may be easier to reset focus on a touch interface (e.g. a 4-finger tap on iOS) than to get out of a keyboard trap if a keyboard is all you can use to interact.
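The "aria-hidden treatment" mentioned above, as a minimal sketch. The function and element names are illustrative only; a real dialog also needs focus management, `role="dialog"`, and similar plumbing:

```javascript
// Sketch: when a modal pop-up opens, hide the page background from AT
// so a screen-reader user swiping past the last control in the dialog
// does not land on background content. Names are hypothetical; a real
// implementation also needs focus management and role="dialog".
function openDialog(background, dialog) {
  background.setAttribute('aria-hidden', 'true'); // background removed from AT tree
  dialog.removeAttribute('hidden');               // dialog exposed
}

function closeDialog(background, dialog) {
  background.removeAttribute('aria-hidden');
  dialog.setAttribute('hidden', '');
}
```

As noted, AT support for this pattern has historically been uneven, which is exactly why swipe navigation can still escape a scripted keyboard trap.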
> For the latter, though, I agree that this would be touch- (not mobile-)
> specific...and advice should be given that custom gestures may be
> difficult/impossible to even trigger for certain users (even for
> single-touch gestures, and even more so for multitouch ones).

Assuming a non-expert perspective (say, a product manager or company strategist), when looking at Principle 2 "Operable" it would be quite intelligible to talk about:

- 2.1 Keyboard Accessible
- 2.5 Touch Accessible
- 2.6 Pointer Accessible (it's not just Windows and Android with KB; Blackberry has a pointer too)
- 2.7 Voice Accessible

While the input modes touch and pointer share many aspects and (as you show) touch events are actually mapped onto mouse events, there might be enough differences to warrant different Guidelines. For example, you are right that there is no reason why target size and clearance should not also be defined for pointer input, but the actual values would probably be slightly lower in a "Pointer Accessible" Guideline: a pointer is (a) more pointed (sigh) and therefore more precise, and (b) does not obliterate its target in the same way a finger tip does. Another example: an SC for touch might address multi-touch gestures; the mouse has no swipe gesture. SCs under "Touch Accessible" may also cover two input modes: the default (direct interaction) and the two-phase indirect interaction of focusing, then activating, when the screen reader is turned on.

Of course it might be more elegant to just make Guideline 2.1 input-mode agnostic, but I wonder whether the resulting abstraction would be intelligible to designers and testers. I think it would be worthwhile to take a stab at *just drafting* an input-agnostic Guideline 2.1 "Operable in any mode" with draft SC below it, to get a feel for what tweaking core WCAG might look like, and how success criteria and techniques down the line may play out.
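For what the touch-vs.-pointer distinction could mean in practice, the CSS `pointer` media feature already lets a page adapt target sizing to coarse (finger) vs. fine (mouse/stylus) input. A rough sketch; the pixel values here are purely illustrative and are not taken from WCAG or any draft extension:

```javascript
// Sketch: choose a minimum target size depending on pointer precision,
// using the CSS "pointer" media feature. The values (44px coarse,
// 24px fine) are illustrative only, not from any standard.
function minTargetSizePx(mediaQueryList) {
  // In a browser: minTargetSizePx(window.matchMedia('(pointer: coarse)'))
  return mediaQueryList.matches ? 44 : 24;
}
```

This is the kind of place where a "Touch Accessible" and a "Pointer Accessible" Guideline could legitimately set different normative values.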
>> As changing core WCAG is not on the table ATM, I think it makes sense
>> drafting extensions and seeing whether they can make sense.
>
> In my view, that's unfortunate...and something that should be fed back
> up the chain, as the risk here is that instead of smoothing out the
> problems caused by having input-specific advice enshrined in the core
> spec (i.e. the emphasis on "keyboard"), we'd end up simply piling extra
> input modalities on top (including the commonalities that don't require
> any input-specific consideration beyond using words other than "keyboard").

I'd really be curious what that 'smoothing out' of input-specific advice would actually look like in a standard that is phrased in a generally understandable and technology-agnostic manner. Having modular extensions might be handy when testers make conformance checks against particular defined accessibility baselines. For example, some old-fashioned company has an intranet app used only on Windows 7 (8, 10) with IE/Edge and JAWS; it picks core WCAG for its conformance check. Another company has field reps using an intranet app on tablets as well; here, WCAG conformance may be checked against core WCAG 2.0 plus the Touch and Voice extensions.

> Sorry, my barging in here and delivering my Joyce-ian streams of
> consciousness may come across a bit confrontational. It's not intended.
> Just want to ensure that we don't end up building a whole new silo of
> extended SCs that are nominally "for touch/mobile", when in fact a lot
> of it would be better fixed at source.

In my view, your intervention is most welcome and doesn't come across as confrontational at all!

A final observation (rant): interfaces catering for both mouse and touch input often lead to horrible, abject usability. Watch low-vision touch users swear at Windows 8 (Metro) built-in magnification via indirect input on sidebars (an abomination probably introduced because mice don't know how to pinch-zoom).
Watch Narrator users struggle when swipe gestures get too close to the edge and unintentionally reveal the charms bar or those bottom and top slide-in bars in apps. Similar things happen when Blackberry screen reader users unintentionally trigger the common swipes from the edges, which BB thought should be retained even with the screen reader on. And finally, watch mouse users despair as they cannot locate a close button in a Metro view because it is only revealed when they move the mouse right to the top edge of the screen.

So much for now -
Detlev

> P
> --
> Patrick H. Lauke
>
> www.splintered.co.uk | https://github.com/patrickhlauke
> http://flickr.com/photos/redux/ | http://redux.deviantart.com
> twitter: @patrick_h_lauke | skype: patrick_h_lauke
Received on Tuesday, 28 July 2015 15:05:09 UTC