Re: Rough draft of some success criteria for a extension guideline "Touch accessible"

This is spot on, Mike.

This note is great:
> -          Note 1: A keyboard interface allows users to provide keystroke input to programs even if the native technology does not contain a keyboard

I would add a note 2
- Note 2: Full control from a keyboard interface allows control from any input modality, since it is modality agnostic. It can allow control from speech, Morse code, sip-and-puff, eye gaze, gestures, an augmentative communication aid, or any other device or software program that can take any type of user input and convert it into keystrokes. Full control from a keyboard interface is to input what text is to output: text can be presented in any sensory modality, and keyboard interface input can be produced by software using any input modality.
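The "converted into keystrokes" idea in Note 2 can be sketched in a few lines. This is a hypothetical illustration, not any real AT API: a toy Morse-code decoder (standing in for sip-and-puff input) and ordinary typing both feed the same keystroke handler, so the application never knows or cares which modality produced the keys.

```javascript
// Hypothetical sketch: two very different input modalities (plain typing
// and a Morse-code decoder) both end up emitting the same keystrokes,
// so an app that listens only for keystrokes works with both.
const MORSE = { '.-': 'a', '-...': 'b', '-.-.': 'c', '...': 's', '---': 'o' };

// Convert a sequence of Morse tokens, e.g. ['...', '---', '...'],
// into ordinary keystrokes.
function morseToKeystrokes(sequence) {
  return sequence.map(token => MORSE[token] || '?');
}

// The application layer: its only input surface is press(key).
function makeApp() {
  let buffer = '';
  return {
    press(key) { buffer += key; },  // the "keyboard interface"
    text() { return buffer; }
  };
}

const app = makeApp();
// Typing path:
['h', 'i'].forEach(k => app.press(k));
// Morse (e.g. sip-and-puff) path, converted to keystrokes first:
morseToKeystrokes(['...', '---', '...']).forEach(k => app.press(k));
console.log(app.text()); // "hisos"
```

Any further modality (speech, eye gaze, scanning) would just be another producer feeding `press()`; the application code never changes.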

RE “character input interface”
- we thought of that, but you need more than the characters on the keyboard. You also need arrow keys, Return, Escape, etc.
- we thought of “encoded input” (but that is Greek), ASCII (but that is not international), and Unicode (but that is undefined and really geeky)
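The "more than characters" point can be illustrated with a small hypothetical handler, modeled loosely on the DOM `KeyboardEvent.key` convention, where named keys such as `ArrowDown` and `Escape` travel alongside printable characters. A characters-only interface could not express the navigation keys at all.

```javascript
// Hypothetical sketch: a keystroke interface carries named keys
// (arrows, Escape) as well as printable characters. The state here
// is an imagined list widget: a cursor index, an open flag, and text.
function handleKeystroke(state, key) {
  switch (key) {
    case 'ArrowDown': return { ...state, index: state.index + 1 };
    case 'ArrowUp':   return { ...state, index: Math.max(0, state.index - 1) };
    case 'Escape':    return { ...state, open: false };
    default:
      // Single printable character: treat it as text input.
      return key.length === 1 ? { ...state, text: state.text + key } : state;
  }
}

let state = { index: 0, open: true, text: '' };
for (const key of ['a', 'b', 'ArrowDown', 'ArrowDown', 'ArrowUp', 'Escape']) {
  state = handleKeystroke(state, key);
}
console.log(state); // { index: 1, open: false, text: 'ab' }
```

Whatever term is chosen, the interface has to be able to carry these named keys, which is exactly what "characters" fails to suggest.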


gregg

----------------------------------
Gregg Vanderheiden
gregg@raisingthefloor.org




> On Aug 18, 2015, at 5:13 PM, Michael Pluke <Mike.Pluke@castle-consult.com> wrote:
> 
> Gregg et al
>  
> In the relatively short time that I have been involved with WCAG-related discussions I have been really amazed at how, almost without fail, when the term “keyboard interface” is used everyone immediately focusses on situations where a conventional keyboard is attached to whatever device we are talking about.
>  
> If this, quite predictable, assumption was only made by WCAG novices that would be understandable, but still a worry, as most people outside W3C who need to use WCAG are relative novices. But what I have observed is that even the most experienced WCAG experts lock in to assuming that when we start talking about “keyboard interface” we are talking about an external keyboard – and yet we should never let that assumption narrow our thinking.
>  
> Although we are stuck with this confusing term in WCAG 2.0, perhaps we could try using “keystroke input interface” in our discussions – simply to break that 1:1 association between “keyboard interface” and “keyboard”. I suggest trying this particular term as “keystroke input” is directly used in the “keyboard interface” definition and in the Note 1 (which should be hardwired into all of our brains!):
>  
> -          Note 1: A keyboard interface allows users to provide keystroke input to programs even if the native technology does not contain a keyboard
>  
> I’d personally prefer something like “character input interface” to further break the automatic assumption that we are talking about keyboards or other things with keys on them.
>  
> I know that my proposal to use an alternative to “keyboard interface” in our internal discussions is a wildly controversial one, but I really am amazed and very worried at how many times I have seen W3C discussions go off in the wrong direction because of the assumption that “keyboard interface” means use of a keyboard (and how many reminder emails Gregg needs to write to correct this misinterpretation and to drag the discussion back on track).
>  
> Best regards
>  
> Mike
>  
>  
>   <>
> From: Gregg Vanderheiden [mailto:gregg@raisingthefloor.org] 
> Sent: 18 August 2015 21:15
> To: David MacDonald <david100@sympatico.ca>
> Cc: Patrick H. Lauke <redux@splintered.co.uk>; public-mobile-a11y-tf@w3.org
> Subject: Re: Rough draft of some success criteria for a extension guideline "Touch accessible"
> 
> On Aug 18, 2015, at 11:44 AM, David MacDonald <david100@sympatico.ca <mailto:david100@sympatico.ca>> wrote:
>  
> I would be uncomfortable with punting the gesture issue and just relying on keyboard, just because it's hard to solve. 
>  
> I’m not sure I follow, David. How is it hard to solve? This seems much easier to solve than figuring out how to provide access, using gestures, to people who can’t make gestures. Or are we leaving them out of the discussion because they are covered some other way than keyboard interface? 
>  
>  
> I've tested several apps for large corporations where they worked fine with VO off, but with VO on large parts of the functionality didn't work. 
>  
> Again, I’m not following. Clearly if that is true, then the app is inaccessible to people who are blind, and to others who use VO with and without speech. 
>  
> So we are not saying that it should be declared accessible even if not usable by people who are blind and also people who can’t use gestures (or even those who can’t do some of the gestures needed)?
> 
> For me, it's not really "mobile" if it can't be done while on the move, if there is a requirement for a keyboard. You have to find a table, sit down, and treat the device like a laptop. 
>  
> Keyboard interface does not mean they are using a keyboard. A keyboard interface can allow access with sip-and-puff Morse code, with eye gaze, with speech, with any alternate keyboard plug-in, with an augmentative communication aid; the list goes on and on (oh, and also keyboards, pocket keyboards, back-of-the-wrist keyboards, etc.). 
>  
> 
> 
>  
> It's early days of mobile AT and it's the wild west. But let's try to put our heads together and find a solution that:
>  
> - allows users of mobile AT to get the information while standing, while on a moving bus, etc.
> - puts some requirements on authors, but not crazy ones
> - helps set in motion a convergence of AT for mobile on functionality, etc.
>  
> Yep, and the ONLY way that works across disabilities is keyboard interface. It allows any modality, from speech to gesture to eye gaze to Morse code, to be used. No other technique does. And gesture is outside the range of many users.
>  
> (Keyboard interface for input is kind of like TEXT for output. It is the universal method that supports all different physical/sensory input modalities, just as text provides the ability to present in any sensory modality.)
>  
>  
> Gestures should be thought of like mice: very powerful and fast for those who can use them, but they should never be the only way to control something. 
>  
> 
> Cheers,
> David MacDonald
>  
> CanAdapt Solutions Inc.
> Tel:  613.235.4902
> LinkedIn <http://www.linkedin.com/in/davidmacdonald100>
> www.Can-Adapt.com <http://www.can-adapt.com/>
>    
>   Adapting the web to all users
>             Including those with disabilities
>  
> If you are not the intended recipient, please review our privacy policy <http://www.davidmacd.com/disclaimer.html>
>  
> On Tue, Aug 18, 2015 at 11:51 AM, Patrick H. Lauke <redux@splintered.co.uk <mailto:redux@splintered.co.uk>> wrote:
> On 18/08/2015 16:41, Gregg Vanderheiden wrote:
> do they have a way to map screen reader gestures to colliding special gestures in apps?
> 
> Not to my knowledge, no.
> 
> iOS does have some form of gesture recording with Assistive Touch, but I can't seem to get it to play ball in combination with VoiceOver, and in the specific case of web content (though this may be my inexperience with this feature). On Android/Win Mobile side, I don't think there's anything comparable, so certainly no cross-platform, cross-AT mechanism.
> 
> this was not to replace use of gestures, but to provide a simple alternate way to get at them if you can’t make them (physically can’t, or can’t because of collisions)
>  
> -- 
> Patrick H. Lauke
> 
> www.splintered.co.uk <http://www.splintered.co.uk/> | https://github.com/patrickhlauke <https://github.com/patrickhlauke>
> http://flickr.com/photos/redux/ <http://flickr.com/photos/redux/> | http://redux.deviantart.com <http://redux.deviantart.com/>
> twitter: @patrick_h_lauke | skype: patrick_h_lauke
> 

Received on Wednesday, 19 August 2015 03:56:08 UTC