- From: John M Slatin <john_slatin@austin.utexas.edu>
- Date: Thu, 11 Sep 2003 08:40:54 -0500
- To: <tcroucher@netalleynetworks.com>, <rscano@iwa-italy.org>
- Cc: <w3c-wai-gl@w3.org>
Some screen readers *do* actually read the source code and walk the DOM rather than simply pulling out what's on the screen; JAWS is an example. One way to address this issue would be to provide support for the Aural CSS specification, which is part of CSS 2. My research group here at UT Austin is supporting a proof-of-concept project that has been going on since last spring, in collaboration with some classes in the Computer Sciences department here.

pwWebSpeak, IBM Home Page Reader, and now JAWS 5.0 (public beta) are doing what you might call "user-side ACSS" -- that is, these tools allow the end user to associate various sounds (.wav files, etc.) with specific elements such as headers. I would like to see ACSS supported by mainstream user agents because it would afford designers a valuable way to create richer user experiences. (That it could and would also be used to create disastrously bad user experiences is without question, but that's a whole different issue <grin>: the capacity should be there.)

John

"Good design is accessible design."
Please note our new name and URL!
John Slatin, Ph.D.
Director, Accessibility Institute
University of Texas at Austin
FAC 248C
1 University Station G9600
Austin, TX 78712
ph 512-495-4288, f 512-495-4524
email jslatin@mail.utexas.edu
web http://www.utexas.edu/research/accessibility/

-----Original Message-----
From: Tom Croucher [mailto:tcroucher@netalleynetworks.com]
Sent: Thursday, September 11, 2003 6:21 am
To: rscano@iwa-italy.org
Cc: w3c-wai-gl@w3.org
Subject: Re: Screen reader invisibility

This is a serious user agent issue in my opinion, one which T. V. Raman looks at in his book, Auditory User Interfaces. The key issue is the way screen readers deal with applications. By simply reading the screen as is, screen readers are attempting to adapt visual styles for PwDs rather than create (or use) more meaningful audio styles. What _should_ happen is that they look at the underlying structure of the code and work out for themselves what should be read.

I will admit that in proprietary applications this is not possible unless the manufacturers use an API to let the screen reader talk to the application at a level before visual presentation. Emacspeak is a great example of this, and as such is the only 'speaking browser' (although it is much more than that and encompasses the entire Emacs desktop) which gets this right.

The issue I have with the user agents is that, unlike interacting with desktop software, which requires the screen reader to provide an API and the software to use it, web technology is set up so that screen readers can inherently access what they need to render it correctly. That is of course not to say that all web sites comply with the relevant guidelines, but currently the screen readers are not giving the technology the level of support they should, the kind of support that _would_ make them better. If they wish to use Internet Explorer, Mozilla, XPCOM, or any other similar technology to interface with HTTP or parse the DOM, that's fine; simply reading the _visual_ output of something when they have easy access to its semantic source is not.
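Neither message above includes code; the following is a minimal, purely illustrative sketch of the behaviour both writers describe: walking the DOM and letting the user bind cue sounds to element types ("user-side ACSS") rather than re-reading the painted screen. It assumes a browser DOM environment, and the function, type, and .wav names are invented for the example; they are not taken from JAWS, Home Page Reader, pwWebSpeak, or the CSS 2 aural properties.

```typescript
// Sketch of a "DOM-first" auditory renderer: build the output from document
// structure instead of the painted screen, and let the user attach cue sounds
// to element types. All names and file names here are invented.

type SpeechStep =
  | { kind: "cue"; soundFile: string }   // play an earcon, e.g. heading-major.wav
  | { kind: "speak"; text: string };     // pass text to the speech synthesizer

// User-configurable mapping from element name to a cue sound (.wav).
const userCues: Record<string, string> = {
  h1: "heading-major.wav",
  h2: "heading-minor.wav",
  a: "link.wav",
  li: "bullet.wav",
};

function buildSpeechPlan(node: Node, plan: SpeechStep[] = []): SpeechStep[] {
  if (node.nodeType === Node.TEXT_NODE) {
    const text = node.textContent?.trim();
    if (text) plan.push({ kind: "speak", text });
  } else if (node.nodeType === Node.ELEMENT_NODE) {
    const el = node as Element;
    const tag = el.tagName.toLowerCase();

    // Ignore content that carries no meaning in an auditory rendering.
    if (tag === "script" || tag === "style") return plan;

    // Announce the element's role with the user's chosen sound, then recurse
    // into its children in document order.
    const cue = userCues[tag];
    if (cue) plan.push({ kind: "cue", soundFile: cue });
    el.childNodes.forEach((child) => buildSpeechPlan(child, plan));
  }
  return plan;
}

// Usage in a browser: the plan is derived from the markup's semantics,
// so it stays meaningful even if the visual presentation changes.
console.log(buildSpeechPlan(document.body));
```

Because the plan is built from element names rather than rendered pixels, the same traversal could also honour author-supplied aural rules (cue-before, voice-family, and the other CSS 2 aural properties) if mainstream user agents exposed them.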
> > I've found this URL on webstandards.org:
> >
> > http://css-discuss.incutio.com/?page=ScreenreaderVisibility
> >
> > and this is Joe's "point of view", which suggests keeping them all visible:
> >
> > http://joeclark.org/book/sashay/serialization/Chapter08.html#h4-2020
> >
> > ...but with problems for small screens:
> >
> > http://macedition.com/cb/resources/handheldbrowsercsssupport.html
> >
> > How will we approach this?
Received on Thursday, 11 September 2003 09:40:55 UTC