Re: Rough draft of some success criteria for an extension guideline "Touch accessible"

On 18/08/2015 16:17, Jonathan Avila wrote:
> [Gregg wrote]
>
> then the screen reader could use that method for achieving the function
> - rather than needing to worry about knowing or being able to perform
> the new gesture
>
> In practice how would this work though?  Mobile screen readers don’t
> provide ways to map keystrokes to gestures (not accessibility supported)
> and it is unreasonable to require screen reader users to carry around
> a keyboard to have comparable access to people who don’t use screen
> readers.

This leads to that other discussion about keyboard and keyboard-like 
interfaces, and about touch-AT sequential access, which is similar to, 
but different from, keyboard interaction on desktop. In that discussion, 
I seem to recall arguing that beyond double-tap/click we can't rely on 
anything either, so I assume what Gregg (and certainly I myself) meant 
was: there must be a focusable element/button that can be 
clicked/activated to achieve the same function.
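
For example (just a rough sketch - the element IDs and the deleteItem() 
function are made up for illustration, not taken from any particular 
spec or library): the same action is wired to both a custom swipe 
gesture and an ordinary focusable button, so a screen reader user who 
can't perform the swipe can still reach it with sequential focus plus 
double-tap/click:

    // Hypothetical example: the swipe gesture and a visible,
    // focusable button both trigger the same underlying action.
    function deleteItem(id: string): void {
      document.getElementById(id)?.remove();
    }

    const item = document.getElementById('message-1')!;

    // Custom gesture path - likely unavailable when a screen reader
    // is running, since it intercepts swipes for its own navigation.
    let startX = 0;
    item.addEventListener('touchstart', (e: TouchEvent) => {
      startX = e.touches[0].clientX;
    });
    item.addEventListener('touchend', (e: TouchEvent) => {
      if (e.changedTouches[0].clientX - startX < -100) {
        deleteItem('message-1');
      }
    });

    // Equivalent path: a real <button> inside the item that simply
    // activates on click/tap/Enter, which AT can always drive.
    const deleteButton = document.getElementById('delete-message-1')!;
    deleteButton.addEventListener('click', () => deleteItem('message-1'));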

As mentioned, I could envisage this being a user setting on a 
website/app (so users could have a "clean" gestural interface, or an 
interface that actually provides buttons/controls).
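
A minimal sketch of how such a preference might work (the storage key 
and class name below are purely illustrative):

    // Hypothetical user preference: when enabled, the page exposes
    // visible buttons/controls alongside (or instead of) the gestures.
    const SHOW_CONTROLS_KEY = 'prefers-visible-controls';

    function applyControlPreference(): void {
      const show = localStorage.getItem(SHOW_CONTROLS_KEY) === 'true';
      document.body.classList.toggle('show-controls', show);
    }

    function setControlPreference(show: boolean): void {
      localStorage.setItem(SHOW_CONTROLS_KEY, String(show));
      applyControlPreference();
    }

    // Apply whatever the user last chose when the page loads.
    applyControlPreference();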

P
-- 
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke

Received on Tuesday, 18 August 2015 15:27:13 UTC