- From: Peter Korn <peter.korn@sun.com>
- Date: Mon, 31 Jan 2000 17:14:37 -0800
- To: Charles McCathieNevile <charles@w3.org>
- CC: schwer@us.ibm.com, thatch@us.ibm.com, Jon Gunderson <jongund@ux1.cso.uiuc.edu>, mark novak <menovak@facstaff.wisc.edu>, w3c-wai-ua@w3.org
Hi Charles,

Please pardon my soap-box...

> while I agree with you that an Off Screen Model is often not the best way
> to engineer a product, particularly for cross-platform portability, I don't
> think there is an intrinsic reason why it is harmful. If a developer was
> working only on a single platform (and many do) and found that using an OSM
> was more effective than trying to get through a bizarre API or an
> undocumented one, then it may be a better solution.

There is a significant issue with the OSM approach: responsibility for accessibility problems is almost never clear. If one screen reader's OSM is able to capture information on the screen through some particularly tricky heuristics, that benefits the users of that particular screen reader, but the same trick may not work in other screen readers (to the detriment of those users). Then the question becomes who should change: the poorly behaved application putting that information on the screen, or the OSMs of the other screen readers?

If there is a standard way for applications to describe their contents via a programming interface (API), then it is much easier to figure out what is going wrong and fix it. The API may be insufficiently expressive, the application may not be implementing the API properly, or the assistive technology may not be using the API. Those three things, I claim, are easier to test and verify than the finger-pointing we get under the OSM model (see the sketch at the end of this message). When we have an API, assistive technologies can always go beyond it (as happens already today) if the API, or an application's implementation of it, does not meet their needs.

> I think a DOM which includes access to the chrome is a great benefit to
> accessibility, and using it is a very good way to meet the needs of
> users. However I am not sure that it is always a requirement.

By that reasoning, should we not require full keyboard access either? After all, one screen reader, outSPOKEN for Macintosh, provides features like Find that make keyboard access less critical (especially since the Macintosh OS offers little support for keyboard access to controls). An assistive technology could likewise layer its own keyboard access mechanism on top of ill-behaved applications, just as screen readers build up information in their OSMs that by rights should be exposed directly by applications.

I think going forward we need to require that applications provide *all* of their semantic information directly to assistive technologies via a clear and easy-to-use API. Assistive technologies have a long, proud, and painful history of hacking around operating systems and applications to provide their users with access. The engineering staffs of these companies have tremendous expertise in reverse-engineering application behavior, and these techniques have given tens of thousands of users workable access solutions, and thereby employment and general access to information. But it is time we stopped relying on this expertise, and stopped leaving users with a patchwork of access quality to applications that supposedly comply with a new set of guidelines on how to be compatible with assistive technologies. We can do better than that, and require better than that, of the next generation of applications.

Peter Korn
Sun Accessibility team
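
P.S. To make the API model concrete, here is a minimal sketch against Sun's Java Accessibility API (javax.accessibility). The class name and traversal are illustrative only, not code from any shipping assistive technology:

    import javax.accessibility.Accessible;
    import javax.accessibility.AccessibleContext;

    public class AccessibleTreeDump {
        // Walk the semantic tree an application exposes through the
        // accessibility API, printing each node's role and name.
        // These are the same queries an assistive technology makes
        // in place of reconstructing the screen in an off-screen model.
        public static void dump(Accessible node, int depth) {
            AccessibleContext ctx = node.getAccessibleContext();
            StringBuffer indent = new StringBuffer();
            for (int i = 0; i < depth; i++) {
                indent.append("  ");
            }
            System.out.println(indent
                + ctx.getAccessibleRole().toDisplayString()
                + ": " + ctx.getAccessibleName());
            // Recurse over whatever children the application exposes.
            for (int i = 0; i < ctx.getAccessibleChildrenCount(); i++) {
                dump(ctx.getAccessibleChild(i), depth + 1);
            }
        }
    }

Calling dump() on any Swing component that implements Accessible (a JButton, a JTree, a whole JFrame) prints the subtree it exposes. When something is missing from that output, you know exactly which of the three parties (the API, the application, or the assistive technology) to go talk to.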
Received on Monday, 31 January 2000 20:15:45 UTC