- From: Léonie Watson <lwatson@paciellogroup.com>
- Date: Fri, 11 Apr 2014 11:00:39 +0100
- To: "James Craig" <jcraig@apple.com>, "Matthew King" <mattking@us.ibm.com>
- Cc: "W3C WAI Protocols & Formats" <public-pfwg@w3.org>
James Craig wrote:
"I'm assuming you mean something like 'Press Control + J to perform Action X.' Is that right? What would this mean to a user who did not have a Control key, or did not have a keyboard for that matter? What about a user on a platform that had both, but expected the primary modifier to be something other than Control? Wouldn't it be better to convey programmatically the actions that could be performed, and let the AT or mainstream user interface perform these in a way that is independent of one specific physical interface?"

No, I wasn't thinking about specific commands. We're already starting to see ATs provide that kind of information; for example, JAWS announces "Press JAWSKey + Alt + M to move to controlled element" when it encounters aria-controls.

The problem is at a higher level than that. It's knowing what the widget itself is, and what interacting with it entails. Having the role identified isn't enough information, especially when someone can't pick up on the visual cues that make the expected interaction easier to understand. It's one thing to know that a command will move focus to a controlled element, but that isn't much help if you don't understand the nature of the widget itself.

Léonie.

--
Léonie Watson
Senior Accessibility Engineer, TPG.
@LeonieWatson @PacielloGroup
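For readers who haven't used aria-controls: it declares a programmatic relationship between a widget and the element it affects, which is the relationship JAWS acts on when it offers the "move to controlled element" command mentioned above. A minimal markup sketch of the pattern (the button, IDs, and log role here are illustrative, not taken from this thread):

    <!-- The button declares the element it affects via aria-controls. -->
    <button type="button" aria-controls="activity-log" aria-expanded="false">
      Show activity log
    </button>

    <!-- The controlled element; an AT can offer to move focus to it. -->
    <div id="activity-log" role="log" hidden>
      No activity yet.
    </div>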
Received on Friday, 11 April 2014 10:01:00 UTC