RE: Rough draft of some success criteria for an extension guideline "Touch accessible"

Hi David

I appreciate this very clear explanation of exactly what is being done.

When I was hearing “the keyboard interface on mobile is broken” I was a little unsure whether the plan was to assume that the keyboard interface will not work and propose something that ignores it.

So I now see that what is being done is a true extension – which is what I had assumed when I first started following this.

Despite all of this, I still believe that the term “keyboard interface” will continue to mislead too many people. But there is not much that can be done about this in practice – we must just hope that everyone using WCAG 2.0 reads the very clear definition of “keyboard interface” and its associated notes and doesn’t assume that they know what it means (PS/2, USB, Bluetooth, etc.).

Best regards

Mike

From: David MacDonald [mailto:david100@sympatico.ca]
Sent: 19 August 2015 13:35
To: Michael Pluke <Mike.Pluke@castle-consult.com>
Cc: Jonathan Avila <jon.avila@ssbbartgroup.com>; Gregg Vanderheiden <gregg@raisingthefloor.org>; Patrick H. Lauke <redux@splintered.co.uk>; public-mobile-a11y-tf@w3.org
Subject: Re: Rough draft of some success criteria for an extension guideline "Touch accessible"

Hi Mike

Perhaps I haven't explained myself well...

We are working on extensions to WCAG. WCAG already has a keyboard requirement. So we don't need to do that here. This extension will require 2.1.1 Keyboard.

Veteran team members know very well what "keyboard interface" means, and how wonderfully flexible it is. That is why we pushed so hard for it at Level A. Any survey of the WCAG mailing list between 2000 and 2005 will show an abundance of discussion on how important Keyboard accessibility is, not only because it works for keyboard but because it works for Voice and a bunch of other technologies.

Since 1996, I've not used a standard keyboard because of my disability. For 15 of those years I used an input device that plugged into the serial port and leveraged a program called SerialKeys, one of Gregg's early developments at Trace, which transformed my life in 1996. My device ran DOS; I used TELIX to send GUIDI codes to the serial port, and SerialKeys mapped those to the Windows keyboard mappings.

Currently, I use a capacitance interface that hooks into my desktop over wifi using LogMeIn, which intercepts my actions and maps them to keystrokes.

What I never envisioned in the years 2000-2008, when we were tying up WCAG, was people who are blind using a flat screen to operate a mobile device. I think it was a huge leap forward for our industry, and we need to foster their relationship to their devices and run with it. Keyboard requirements are in place, and they are not going away. Our job now is to look at the gaps and see if there is anything we can do to ensure these users can continue to use their flat screens, which have levelled the playing field for the blind, and to foster authoring that doesn't screw that up.


Cheers,

David MacDonald



CanAdapt Solutions Inc.

Tel:  613.235.4902

LinkedIn<http://www.linkedin.com/in/davidmacdonald100>

www.Can-Adapt.com<http://www.Can-Adapt.com>



  Adapting the web to all users
            Including those with disabilities

If you are not the intended recipient, please review our privacy policy<http://www.davidmacd.com/disclaimer.html>

On Wed, Aug 19, 2015 at 3:43 AM, Michael Pluke <Mike.Pluke@castle-consult.com> wrote:
Jonathan, please don’t get me wrong!
Jonathan, please don’t get me wrong!

I’m not in any way intending to suggest that keyboard interface is a solution to the many very valid concerns expressed in this thread. I’m definitely not very knowledgeable about the many detailed issues described in this thread – I have learnt an enormous amount from your and others’ contributions.

It just happened to be in this thread that I detected odd instances of the “keyboard interface”=connected keyboard assumption that immediately sparked off a response from Gregg to yet again explain that this assumption is incorrect. My email was directed at the confusing nature of the terminology – not at the arguments being made in this thread.

What your reply to my contribution has also highlighted is that just supporting “keyboard interface” functionality means that many other accessibility failures are being ignored.

Best regards

Mike


From: Jonathan Avila [mailto:jon.avila@ssbbartgroup.com]
Sent: 19 August 2015 03:39
To: Michael Pluke <Mike.Pluke@castle-consult.com>; Gregg Vanderheiden <gregg@raisingthefloor.org>; David MacDonald <david100@sympatico.ca>
Cc: Patrick H. Lauke <redux@splintered.co.uk>; public-mobile-a11y-tf@w3.org
Subject: RE: Rough draft of some success criteria for an extension guideline "Touch accessible"


>  But what I have observed is that even the most experienced WCAG experts lock-in to assuming that when we start talking about “keyboard interface” we are talking about an external keyboard – and yet we should never let that assumption narrow our thinking.

Mike, I am keenly aware of the flexibility of keyboard interfaces.  However, what I have not done a good job of is explaining why I feel we need a success criterion that is more specific than just requiring keyboard operation through a keyboard interface on mobile.

My primary concern is that the current SC 2.1.1 can be met by providing keyboard access to web interactions through shortcut keystrokes and listening for keyup and keydown events.  While in theory this keyboard support should be sufficient for an interface, as I have explained in my prior posts, to my knowledge there is currently no way for assistive technology on mobile devices – speech recognition, screen readers, etc. – to interact with such an interface.

Take for example a hypothetical knob on a webpage.  Without a screen reader I can turn that knob to specific settings.  As a developer I can implement keystrokes, let’s say control+1, control+2, etc., for the different settings.  I have met the letter of the success criterion by providing a keyboard interface through JavaScript shortcut keystroke listeners.  In practical reality, though, as a mobile screen reader user who does not carry around a keyboard, I have no way to trigger those keystrokes.  How, as a screen reader user, can I press control+1?  Sure, there could be other ways to solve this situation, such as providing increment and decrement buttons or an input field.  But the fact is the developer doesn’t have to provide those things, because he/she has created shortcut keystrokes which meet the success criterion.
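To make the gap concrete, here is a minimal sketch of the kind of shortcut-only handler described above. All names (knobValue, handleKnobKey) and the Ctrl+1..Ctrl+9 shortcuts are hypothetical, not from any real codebase:

```javascript
// Hypothetical "knob" widget whose only keyboard support is a set of
// shortcut keystrokes. A keydown handler like this technically provides
// keyboard-interface access to every setting.
let knobValue = 0;

function handleKnobKey(event) {
  // Ctrl+1 .. Ctrl+9 map directly to knob settings 1..9.
  if (event.ctrlKey && event.key >= '1' && event.key <= '9') {
    knobValue = Number(event.key);
    return true;  // keystroke handled
  }
  return false;   // not one of our shortcuts
}

// In a browser this would be wired up as:
//   document.getElementById('knob').addEventListener('keydown', handleKnobKey);
//
// The gap: a mobile screen reader user has no way to synthesize Ctrl+1,
// so every setting is unreachable for them even though the widget is
// fully operable through the keyboard interface on paper.
```

By contrast, visible increment/decrement buttons (or a native input) are activated through ordinary element activation, which mobile screen readers already know how to drive, so they remain interoperable without any special keystrokes.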

I’d be happy to discuss other real world examples which might help us to figure out the solution.


>  I know that my proposal to use an alternative to “keyboard interface” in our internal discussions is a wildly controversial one, but I really am amazed and very worried at how many times I have seen W3C discussions go off in the wrong direction because of the assumption that “keyboard interface” means use of a keyboard (and how many reminder emails Gregg needs to write to correct this misinterpretation and to drag the discussion back on track).

I don’t mean to be objectionable, but I don’t feel the discussion is going in the wrong direction.  If the keyboard interface language in the current WCAG were sufficient then we would not need any touch success criteria.  The fact is that I and others are saying that we need to raise the bar beyond just a keyboard interface, to require that the interface is interoperable with the current assistive technology on mobile platforms.  The keyboard interface on mobile is broken, and on some platforms locked down – we cannot ignore this fact, and we need to provide solutions and criteria that describe how things must be built to ensure access.

Best Regards,

Jonathan

--
Jonathan Avila
Chief Accessibility Officer
SSB BART Group
jon.avila@ssbbartgroup.com

703-637-8957 (o)
Follow us: Facebook<http://www.facebook.com/#%21/ssbbartgroup> | Twitter<http://twitter.com/#%21/SSBBARTGroup> | LinkedIn<http://www.linkedin.com/company/355266?trk=tyah> | Blog<http://www.ssbbartgroup.com/blog> | Newsletter<http://eepurl.com/O5DP>

From: Michael Pluke [mailto:Mike.Pluke@castle-consult.com]
Sent: Tuesday, August 18, 2015 6:13 PM
To: Gregg Vanderheiden; David MacDonald
Cc: Patrick H. Lauke; public-mobile-a11y-tf@w3.org
Subject: RE: Rough draft of some success criteria for an extension guideline "Touch accessible"

Gregg et al

In the relatively short time that I have been involved with WCAG-related discussions I have been really amazed at how, almost without fail, when the term “keyboard interface” is used everyone immediately focusses on situations where a conventional keyboard is attached to whatever device we are talking about.

If this quite predictable assumption were made only by WCAG novices, that would be understandable, but still a worry, as most people outside W3C who need to use WCAG are relative novices. But what I have observed is that even the most experienced WCAG experts lock in to assuming that when we start talking about “keyboard interface” we are talking about an external keyboard – and we should never let that assumption narrow our thinking.

Although we are stuck with this confusing term in WCAG 2.0, perhaps we could try using “keystroke input interface” in our discussions – simply to break that 1:1 association between “keyboard interface” and “keyboard”. I suggest trying this particular term because “keystroke input” is used directly in the “keyboard interface” definition and in Note 1 (which should be hardwired into all of our brains!):


- Note 1: A keyboard interface allows users to provide keystroke input to programs even if the native technology does not contain a keyboard

I’d personally prefer something like “character input interface” to further break the automatic assumption that we are talking about keyboards or other things with keys on them.

I know that my proposal to use an alternative to “keyboard interface” in our internal discussions is a wildly controversial one, but I really am amazed and very worried at how many times I have seen W3C discussions go off in the wrong direction because of the assumption that “keyboard interface” means use of a keyboard (and how many reminder emails Gregg needs to write to correct this misinterpretation and to drag the discussion back on track).

Best regards

Mike



From: Gregg Vanderheiden [mailto:gregg@raisingthefloor.org]
Sent: 18 August 2015 21:15
To: David MacDonald <david100@sympatico.ca>
Cc: Patrick H. Lauke <redux@splintered.co.uk>; public-mobile-a11y-tf@w3.org
Subject: Re: Rough draft of some success criteria for an extension guideline "Touch accessible"




On Aug 18, 2015, at 11:44 AM, David MacDonald <david100@sympatico.ca> wrote:

I would be uncomfortable with punting the gesture issue and just relying on keyboard simply because it's hard to solve.

I’m not sure I follow, David.  How is it hard to solve?  This seems much easier to solve than figuring out how to provide access, using gestures, to people who can’t make gestures.  Or are we leaving them out of the discussion because they are covered some other way than keyboard interface?


I've tested several apps for large corporations where they worked fine with VO off, but with VO on large parts of the functionality didn't work.

Again, I’m not following.  Clearly if that is true then the app is inaccessible to people who are blind – and to others who use VO, with and without speech.

So we are not saying that it should be declared accessible even if not usable by people who are blind and also people who can’t use gestures (or even those who can’t do some of the gestures needed)?



For me, it's not really "mobile" if it can't be done while on the move, if there is a requirement for a keyboard. You have to find a table, sit down, and treat the device like a laptop.

Keyboard interface does not mean they are using a keyboard.  Keyboard interface can allow access with sip-and-puff Morse code, with eye gaze, with speech, with any alternate keyboard plug-in, with an augmentative communication aid – the list goes on and on (oh, and also keyboards, pocket keyboards, back-of-the-wrist keyboards, etc.)



It's the early days of mobile AT and it's the wild west. But let's try to put our heads together and find a solution that:

- allows users of mobile AT to get the information while standing, while on a moving bus, etc.
- puts some requirements on authors, but not crazy ones
- helps set in motion a convergence of AT for mobile on functionality, etc.

Yep, and the ONLY way that works across disabilities is Keyboard Interface.  It allows any modality, from speech to gesture to eye gaze to Morse code, to be used.  No other technique does.  And gesture is outside the range of many users.

(Keyboard interface for input is kind of like TEXT for output.  It is the universal method that supports all the different physical/sensory input modalities – just as text provides the ability to present in any sensory modality.)


Gestures should be thought of like mice: very powerful and fast for those who can use them, but they should never be the only way to control something.







Cheers,
David MacDonald

CanAdapt Solutions Inc.
Tel:  613.235.4902
LinkedIn<http://www.linkedin.com/in/davidmacdonald100>
www.Can-Adapt.com<http://www.can-adapt.com/>

  Adapting the web to all users
            Including those with disabilities

If you are not the intended recipient, please review our privacy policy<http://www.davidmacd.com/disclaimer.html>

On Tue, Aug 18, 2015 at 11:51 AM, Patrick H. Lauke <redux@splintered.co.uk> wrote:
On 18/08/2015 16:41, Gregg Vanderheiden wrote:
do they have a way to map screen reader gestures to colliding special
gestures in apps?

Not to my knowledge, no.

iOS does have some form of gesture recording with Assistive Touch, but I can't seem to get it to play ball in combination with VoiceOver, and in the specific case of web content (though this may be my inexperience with this feature). On Android/Win Mobile side, I don't think there's anything comparable, so certainly no cross-platform, cross-AT mechanism.
this was not to replace use of gestures — but to provide a simple
alternate way to get at them if you can’t make them (physically can’t or
can’t because of collisions)

--
Patrick H. Lauke

www.splintered.co.uk<http://www.splintered.co.uk/> | https://github.com/patrickhlauke

http://flickr.com/photos/redux/ | http://redux.deviantart.com<http://redux.deviantart.com/>
twitter: @patrick_h_lauke | skype: patrick_h_lauke

Received on Wednesday, 19 August 2015 18:01:27 UTC