Re: hit testing and retained graphics

Hi Ben,

If we wanted to provide remote access to a system, we would need to provide
access to the same accessibility information on that system. So, in
addition to the drawing calls coming across, you would need the additional
accessibility semantics.

This is a lot like Seamless Windows on OS/2. When I developed Windows
support in Screen Reader/2, I captured the drawing calls in Windows and fed
them through a socket to an OS/2 application that built the OSM (off-screen
model) on that side of the fence. Then I reproduced the entire Windows tree
and its semantic information in shared memory between Windows and OS/2 -
this is essentially the object tree. The canvas author could theoretically
dump that into fallback content and render the object positions on the
canvas. This would be an expensive undertaking, and we would not have the
benefit of shared memory to share the information. Events, such as focus
events, would also need to pass across the socket. With the HTML changes
for tabindex, this is also doable.

This would be a very large project (fraught with gotchas) - not unlike what
we did for Screen Reader/2. However, the result written to the fallback
content would be accessible across platforms.

Rich Schwerdtfeger
CTO Accessibility Software Group



From:	Benjamin Hawkes-Lewis <bhawkeslewis@googlemail.com>
To:	Steve Faulkner <faulkner.steve@gmail.com>
Cc:	Charles Pritchard <chuck@jumis.com>, Henri Sivonen
            <hsivonen@iki.fi>, Sean Hayes <Sean.Hayes@microsoft.com>, "E.J.
            Zufelt" <everett@zufelt.ca>, Paul Bakaus <pbakaus@zynga.com>,
            "Tab Atkins Jr." <jackalmage@gmail.com>, John Foliot
            <jfoliot@stanford.edu>, Charles McCathieNevile
            <chaals@opera.com>, Richard Schwerdtfeger/Austin/IBM@IBMUS,
            Cameron McCormack <cam@mcc.id.au>, Cynthia Shelly
            <cyns@microsoft.com>, "david.bolter@gmail.com"
            <david.bolter@gmail.com>, Frank Olivier
            <Frank.Olivier@microsoft.com>, "Mike@w3.org" <Mike@w3.org>,
            "public-canvas-api@w3.org" <public-canvas-api@w3.org>,
            "public-html@w3.org" <public-html@w3.org>,
            "public-html-a11y@w3.org" <public-html-a11y@w3.org>
Date:	07/03/2011 04:57 AM
Subject:	Re: hit testing and retained graphics



On Sun, Jul 3, 2011 at 9:20 AM, Steve Faulkner <faulkner.steve@gmail.com>
wrote:
> I am assuming that no one disagrees that the use of canvas to provide
> display and interaction with a remote system is a legitimate use case?

Do we want to allow people with disabilities to access remote systems
through their web browser, when canvas is used for visual display? Sure.

Like any use-case, this might or might not be one we can solve.

Do we want to allow people with disabilities to access remote systems
through their web browser with local AT, when canvas is used for
visual display? Sure.

But I'm pretty sure that *even* if we could provide features to enable
this, nobody is going to step up to use those features, because the amount
of work is huge and the benefits are slim.

This is very different from the situation where people are building their
own application that happens to use canvas for some custom controls, or
even a single canvas with widgets drawn directly onto it.

> If so, there is also the keyboard-only case. The remote access app I
> linked to works fine with the keyboard, except that when focus moves
> offscreen the view isn't modified to display the focused content. I think
> this could be fixed by the ability to define focusable areas on the
> canvas. Again this would not require access to the full remote
> accessibility stack.

You're talking about a scenario where the canvas does not display the
whole remote view? Wouldn't the best thing to do here be to zoom out
the view until it fits the canvas? (Then remote zoom features could be
used to follow cursor and keyboard focus around.)
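The viewport adjustment being discussed could be sketched as follows,
assuming the page tracks each focusable area's rectangle itself (the
function and field names here are illustrative, not an existing API). Given
the currently visible region of the remote display and the rectangle of the
newly focused control, it returns the new top-left corner to use for the
next canvas redraw, scrolling only as far as needed to bring focus into
view:

```javascript
// view:  {x, y, w, h} - the visible region of the remote display
// focus: {x, y, w, h} - the rectangle of the focused control
function scrollToReveal(view, focus) {
  var x = view.x, y = view.y;
  // Shift horizontally only if the focus rect lies outside the view.
  if (focus.x < view.x) x = focus.x;
  else if (focus.x + focus.w > view.x + view.w) x = focus.x + focus.w - view.w;
  // Same for the vertical axis.
  if (focus.y < view.y) y = focus.y;
  else if (focus.y + focus.h > view.y + view.h) y = focus.y + focus.h - view.h;
  return { x: x, y: y }; // new top-left for the next redraw
}

// Focus moved off the right edge of a 100x100 view: scroll right to 80,
// leave the vertical position alone.
var origin = scrollToReveal({ x: 0, y: 0, w: 100, h: 100 },
                            { x: 150, y: 20, w: 30, h: 10 });
```

This keeps keyboard focus visible without the full remote accessibility
stack, though it still depends on the page knowing where its focusable
areas are.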

--
Benjamin Hawkes-Lewis

Received on Tuesday, 5 July 2011 15:29:50 UTC