On Sun, Jul 3, 2011 at 3:32 AM, Charles Pritchard <chuck@jumis.com> wrote:
>>> I don't see why you would have to duplicate the whole accessibility
>>> stack to provide focus tracking for a screen magnifier, can you
>>> explain this a bit further?
>>
>> The remote system access server would need to translate the remote
>> applications (as accessed by the accessibility tree plus custom
>> hooks) into DOM. To support custom views/controls for which we do not
>> have semantics in the web stack or to provide any
>> application-specific customizations, local AT would have to make
>> special interpretations of the DOM (either directly or as exposed to
>> the accessibility API). Thus, the accessibility stack (converting
>> remote applications into accessible interfaces) would need to be
>> duplicated.
>>
>> If you disagree, can you explain precisely what you think the remote
>> system access server on the one hand, and local AT on the other,
>> would need to do?
>
> They can limit their protocol to sending information to elements which
> gain focus, for high-latency/bandwidth constrained environments;

Doesn't detecting the size and position of the focused object require
the remote access server to inspect the accessibility tree *plus
custom hooks*? (A concrete sketch of this case follows after this
message.)

> they could enable a protocol along websockets which allows them to
> pass platform specific commands for Accessibility tree queries (such
> as Microsoft's UIA or Apple's UIAutomation);

Do you mean commands issued by client AT? Doesn't detecting the size
and position of the focused object require the client AT to send
commands to inspect the accessibility tree *plus custom hooks*?

> arbitrary amount of introspection and heuristics.

That's a bit hand-wavy!

--
Benjamin Hawkes-Lewis
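
To make the focus-tracking case concrete, here is a minimal TypeScript
sketch of the kind of message such a remote access server might push
over a WebSocket. Everything in it is hypothetical (the message shape,
the endpoint, the stubbed platform query); the stub makes explicit that
the bounds can only be filled in after the server has inspected the
accessibility tree plus any custom hooks.

    // Sketch only: every name here is hypothetical. A real server
    // would sit on UIA / AT-SPI / NSAccessibility, not on these stubs.

    interface Bounds { x: number; y: number; width: number; height: number; }

    interface FocusMessage {
      kind: "focus";
      role: string;   // e.g. "button", or an app-specific custom role
      name: string;   // accessible name, possibly empty
      bounds: Bounds; // screen coordinates of the focused object
    }

    // Stand-in for the platform accessibility query. Producing
    // role/name/bounds is exactly the accessibility-tree (plus custom
    // hooks) inspection under discussion.
    function queryFocusedNode(): FocusMessage {
      return {
        kind: "focus",
        role: "button",
        name: "OK",
        bounds: { x: 420, y: 310, width: 80, height: 24 },
      };
    }

    // Server side: on each platform focus change, push one small message.
    function onPlatformFocusChange(send: (json: string) => void): void {
      send(JSON.stringify(queryFocusedNode()));
    }

    // Client side: a screen magnifier pans to the reported bounds.
    const ws = new WebSocket("wss://remote-access.example/session"); // hypothetical endpoint
    ws.onmessage = (event) => {
      const msg: FocusMessage = JSON.parse(String(event.data));
      if (msg.kind === "focus") {
        // Stand-in for whatever pan/zoom call the local magnifier exposes.
        console.log("pan viewport to", msg.bounds);
      }
    };

Note that the client never sees the remote tree itself, only the
digested message, so any application-specific interpretation still has
to happen on the server side.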