Re: hit testing and retained graphics

On Sat, Jul 2, 2011 at 5:13 PM, Charles Pritchard <chuck@jumis.com> wrote:
> On 7/2/2011 6:32 AM, Benjamin Hawkes-Lewis wrote:
>> Would using a remote screen magnifier not work?
>
> Generally speaking, an AT is intended for the computer that the user is
> actually running on.

The goal is to enable people with disabilities to use a remote system.

Where the AT is running is not key to this goal.

> Further, we're just looking for screen coordinates here, so that the
> local AT will work.

Please explain in the context of remote *system* access.

Does "just" imply that converting remote applications into DOM would not be required?

If so, can you explain what would be required for remote system access
software and local AT?

> Of course we can set up a page to use CSS transform, or many other techniques
> to get a zoom effect.

I didn't suggest doing this.

> That does not serve the purpose of supporting the client-side AT.

I don't think that's realistic or essential for the remote system access
use-case, at least with respect to AT software as opposed to hardware.

> Supporting client side ATs through ARIA and through adequate population
> of the Accessibility tree is a part of the UAAG.
>
> http://www.w3.org/TR/UAAG20/
>
>
> UA vendors have a responsibility to support client-side AT.

UAAG conformance is not a precondition of HTML5 conformance.

In the case of canvas being used as the visual display component of a
remote access system, user agents that *choose* to conform to UAAG would
conform (primarily) by reporting the bounding dimensions and coordinates
of the canvas (2.1.6.a).
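To make that concrete, here is a hypothetical sketch of the coordinate mapping involved, assuming the user agent has reported the canvas's on-screen bounds and the page knows the remote framebuffer's dimensions. The function and parameter names are mine for illustration; nothing here comes from UAAG or the canvas spec:

```javascript
// Illustrative sketch only: map a point in remote-framebuffer
// coordinates to local screen coordinates, so a local magnifier
// could follow activity on a canvas-based remote desktop.
function remoteToScreen(remotePoint, remoteSize, canvasBounds) {
  // Scale from the remote framebuffer into the canvas's on-screen box.
  const scaleX = canvasBounds.width / remoteSize.width;
  const scaleY = canvasBounds.height / remoteSize.height;
  return {
    x: canvasBounds.left + remotePoint.x * scaleX,
    y: canvasBounds.top + remotePoint.y * scaleY,
  };
}

// Example: a 1920x1080 remote desktop shown in a 960x540 canvas
// whose top-left corner sits at (100, 50) on the local screen.
const p = remoteToScreen(
  { x: 960, y: 540 },
  { width: 1920, height: 1080 },
  { left: 100, top: 50, width: 960, height: 540 }
);
// p is { x: 580, y: 320 }
```

The point is that once the canvas's bounding box is exposed, the rest is arithmetic the remote-access page (or local AT) can do itself.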

What would make remote access useful to people with disabilities is
hooking up remote AT software to local hardware.

> As for your "less work" talk:  How is that relevant?

All other things being equal, reducing the work required to make
something happen makes it more likely that people will actually do it.

The amount of work required for various approaches is therefore relevant
to enabling people with disabilities to access remote systems.

>> I don't see why this needs to change when canvas is used for the
>> visuals rather than a native graphics API?  (Specifically, it sounds
>> like less work to fix the technical problems I mentioned above than
>> to duplicate the whole work of the accessibility stack in some sort
>> of complex conversion from remote platform APIs to DOM to local
>> platform accessibility APIs to local interface.)
>
> How is this relevant?

We have a way to enable magnification users to access remote
systems when a native graphics API is used.

If the goal is to enable magnification users to access remote
systems when canvas is used, it's obvious to ask why the existing
way is not sufficient.

Can you answer the question?

> We've already solved most of the a11y programmatic access issues via
> the canvas shadow DOM, drawFocusRing and other associated methods.

These features don't contribute to a practical approach to the remote
system access use-case.

> We're simply looking to solve the issue involved in spatial awareness
> for pointer events and for repositioning prior to/without requiring a
> focus event.
>
> It's a very specific problem.

Can you explain what that problem has to do with a practical approach to
the remote system access use-case?

--
Benjamin Hawkes-Lewis

Received on Sunday, 3 July 2011 02:10:12 UTC