- From: Robert Burns <rob@robburns.com>
- Date: Fri, 20 Jul 2007 00:15:44 -0500
- To: HTML WG <public-html@w3.org>
On Jul 19, 2007, at 10:21 PM, Al Gilman wrote:
>
> At 6:59 PM -0500 19 07 2007, Robert Burns wrote:
>>
>> I would add Apple to your list (with VoiceOver) too as Apple has a
>> good measure of control over both the screen reader and the web
>> SDK. I'm not sure if the problem with these screen reader vendors
>> is a lack of awareness, or whether their users have expressed an
>> indifference to HTML and CSS specific accessibility features or
>> whether performance optimizations end up tossing out crucial DOM
>> accessibility information. It's a mystery.
>
> It's no mystery.
>
> Up until now, browsers have had one processing path to the DOM and
> another to the screen with divergent results.

Perhaps it is an awareness issue too.

On Jul 19, 2007, at 10:33 PM, Maciej Stachowiak wrote:

> The DOM can't give you complete information about what is onscreen,
> since that is partly under the control of CSS and element-specific
> rendering behaviors. For example, things like CSS :before/:after
> generated content aren't in the DOM. Neither are the counter values
> for ordered lists.

When I said DOM, I meant both the DOM0/DOM1/DOM2/DOM3 sense on one hand and the CSS Object Model (whether that's covered by DOM0-3 or elsewhere) on the other. Clearly, if a screen reader can tap into that, either through a standard OS API or through its own hand-built code, it can provide a much richer experience to users (though a reusable framework seems like a better way to go).

> On Mac OS X, VoiceOver does not use the DOM as the API for
> accessibility hooks into Safari. Instead, the browser (actually the
> WebKit engine) fulfills OS-wide accessibility APIs, and the
> information we present is based primarily on the render tree, but
> it also looks at the DOM.
>
> Overall, I don't think the DOM is the right API for accessibility.
> Assistive technologies need information that is not appropriate to
> expose to scripts in the web page.

It should really be more of a two-way street, so that the AT gets even more information from the DOM/CSSOM. I say this because W3C recommendations support much more accessibility detail than is conveyed by the content that actually gets displayed on the screen. The standard accessibility API alone can't get at that information, so in that sense the DOM/CSSOM is the right place for it. Moreover, there isn't any danger in exposing this information to scripts, since even a script could be an assistive technology script if the DOM is properly abstracted. I can't think of any security holes created by allowing a script to know what voice should be used for a particular passage in the document, or to associate the headers of a table with a data cell (see the sketch appended below).

> Also, assistive technologies generally work cross-process, which
> introduces new complexities.

I'm not sure what you mean here. Could you give an example of how assistive technologies work cross-process and how that introduces complexities? Once a DOM/CSSOM tree is built (or a portion of it is built incrementally), an AT agent can start doing its thing, right?

> And finally, to faithfully represent a web page, you need to
> include CSS info, and it's better if the AT can ask the browser how
> it actually rendered things rather than attempting to do its own
> style resolution.

So it sounds like you do support increased integration of screen readers with the DOM APIs. Am I understanding you correctly?

Take care,
Rob
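A minimal TypeScript sketch of the point above, not from the original thread: it assumes a browser context, a hypothetical cell id ("q1-sales"), and that the engine exposes the aural 'voice-family' property at all. It only illustrates that an ordinary page script can already read table header associations from the DOM and ask the engine for computed style, including ::before generated content that never appears in the DOM tree.

```typescript
// Illustrative only: ids and markup are hypothetical; this shows the kind
// of accessibility detail a page script can already query.

// Resolve the header cells associated with a data cell via the HTML
// `headers` attribute.
function headersForCell(cell: HTMLTableCellElement): HTMLElement[] {
  const ids = (cell.getAttribute("headers") ?? "").split(/\s+/).filter(Boolean);
  return ids
    .map((id) => document.getElementById(id))
    .filter((el): el is HTMLElement => el !== null);
}

// Ask the browser how an element was actually rendered, rather than
// re-doing style resolution: ::before generated content is not in the
// DOM tree, but its computed style is reachable through the CSSOM.
function renderedInfo(el: Element) {
  return {
    generatedBefore: getComputedStyle(el, "::before").content,
    // Aural CSS, if the engine supports it; empty string otherwise.
    voiceFamily: getComputedStyle(el).getPropertyValue("voice-family"),
  };
}

// Hypothetical usage against a data cell with id "q1-sales".
const cell = document.getElementById("q1-sales");
if (cell instanceof HTMLTableCellElement) {
  console.log(headersForCell(cell).map((h) => h.textContent));
  console.log(renderedInfo(cell));
}
```

Querying getComputedStyle for the pseudo-element mirrors the point that an AT is better off asking the browser how it actually rendered things than attempting its own style resolution; none of this requires access beyond what any page script already has.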
Received on Friday, 20 July 2007 05:15:58 UTC