Re: Reasons of using Canvas for UI design

David,

The problem we are trying to solve here is giving the application
developer the ability to support the platform accessibility API in
the browser. Without this basic support we cannot provide
application developers the tools to support the native platform
accessibility service layers that companies, like Apple, have
worked so hard to develop on each platform. These API service
layers are the foundation of interoperability with assistive
technologies on every OS platform today. What we are doing here is
providing the plumbing for accessibility interoperability, not the
end-game solution.

In HTML, this can be done in a declarative form with much less
effort than writing directly to a native platform accessibility
API; most of the gory details are handled by the browser. ARIA was
created to fill gaps in HTML by providing the additional roles,
states, and properties needed for custom UI components and other
Web 2.0 concepts. There are gaps, however, that we still need to
fill in canvas because of its separation of content and
presentation.
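
For example, exposing a custom checkbox to assistive technology
takes nothing more than declarative markup; the browser does the
mapping to MSAA/IAccessible2, AT-SPI, or the Mac accessibility API
for you (the class name and label here are just illustrative):

    <div role="checkbox" aria-checked="false" tabindex="0"
         class="fancy-checkbox">
      Send me status updates
    </div>

Doing the equivalent against a native accessibility API would mean
implementing the platform's accessible-object interfaces by hand,
once per platform.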

The reasons we want to add hit testing to canvas, and how it
relates to accessibility, are as follows:

1. The fallback content in canvas is keyboard accessible, and its
contents are used to drive the accessibility object tree supported
by the user agent, just like visibly rendered HTML content.
2. Now that we have these objects, a screen reader or screen
magnifier needs to know their location and bounds on the drawing
surface. Unlike HTML, that information is not provided by default.
It is needed for the following reasons:
- A magnifier needs to be able to zoom to an object in the
accessible object tree. Zooming is based on the object's type, its
context (what the object is contained in), and its bounds. If you
do nothing to the elements in fallback content, you have no bounds
information.
- A screen reader uses the layout and positioning of an accessible
object to decide how to drive a Braille display. A Braille display
may have, say, 80 cells; the screen reader uses the position
information to organize what drives those cells into lines.
3. There are a couple of ways to provide the bounds of an object:
- Assigning an invisible CSS position, width, and height to the
fallback element. This is a lot of work for authors, as they
cannot see what they are positioning, and it requires them to do
all the transformations to screen coordinates that the canvas APIs
already handle. This is not something I relish having to tell
every canvas author to do.
- Associating a path on the canvas, representing the visual bounds
of the element, with the corresponding element in fallback
content.
4. Since supplying these bounds is extra work on the author's
part, we felt it necessary to give authors a tool (hit testing)
that makes their life easier and provides these bounds without
requiring an extra step just for accessibility (see the sketch
after this list). Hit testing:
- Has the author create a path and associate it with a fallback
element in the canvas element's subtree.
- Allows the author to push the hit testing to the user agent.
- Allows the author to dispatch the appropriate pointer event from
canvas to the same element used to handle keyboard input in
fallback content (like the browser does for HTML).
- By mapping a path to a fallback element, lets the browser use
that information to fill in the object's bounds in the
accessibility API. The browser has all the path transformation
information needed to do the job.
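
To make this concrete, here is a rough sketch of what the
authoring experience could look like. The method name
setElementPath() is made up for illustration; the exact shape of
the API is what we need to agree on:

    <canvas id="chart" width="400" height="300">
      <!-- Fallback content: keyboard accessible and used to
           build the accessibility tree -->
      <button id="zoomIn">Zoom in</button>
    </canvas>

    <script>
      var canvas = document.getElementById('chart');
      var ctx = canvas.getContext('2d');

      // Draw the visual representation of the button.
      ctx.fillStyle = '#ccc';
      ctx.fillRect(10, 10, 80, 30);

      // Trace the same region as a path and associate it with
      // the fallback element (hypothetical method). From this
      // one association the user agent could hit test, route
      // pointer events to the <button>, and report the bounds,
      // with all canvas transforms applied, to the platform
      // accessibility API.
      ctx.beginPath();
      ctx.rect(10, 10, 80, 30);
      ctx.setElementPath(document.getElementById('zoomIn'));
    </script>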

With the basic plumbing in place, we can then have the discussion
of how to make something accessible, and there are multiple
strategies that apply. The solution depends on the problem, and as
with HTML content (this is not limited to canvas), the author may
need to provide an alternative rendering based on the person's
abilities. So spending our time defining different use cases is a
bit of a red herring: we could go through an exhaustive list of
them and still come back to the basic plumbing problem we are
trying to address here.

Once we have the plumbing in place, we need to analyze the
different uses of each drawing technology (<canvas> and <svg>),
and we will potentially come up with:

- Different ontologies for drawings and ways to make them
accessible. The solutions will vary based on many factors,
including the type of device, the user's abilities, and the
environment in which they operate. At IBM I have an entire working
group in our Accessibility Architecture Board looking at this
problem as we speak; IBM makes use of a broad range of
visualizations in our data analytics that require this to be done.
- Accessibility API extensions in the underlying platforms
- Potentially new interoperability concepts. Before ARIA, we had
no concept of live regions on native platforms; that was new. That
said, we needed basic plumbing that already existed to make it
real.
- Ways of arbitrating and choosing alternative renderings. That is being
developed now in areas like Access For All standards (we discussed this
with media queries) and the Global Public Inclusive Infrastructure

I am sorry for this long explanation. People in the WHATWG claim I
like to write novellas on this, but it is very important to
understand why we are doing what we are doing. We are not asking
to add a huge retained-mode graphics capability to canvas. We are
asking for probably three additional methods and basic hit-testing
support to fill the gaps in our ability to support the native
platform plumbing that is common on each OS.



Rich Schwerdtfeger
CTO Accessibility Software Group



From:	David Singer <singer@apple.com>
To:	Richard Schwerdtfeger/Austin/IBM@IBMUS, Canvas
            <public-canvas-api@w3.org>
Cc:	Frank Olivier <Frank.Olivier@microsoft.com>, paniz alipour
            <alipourpaniz@gmail.com>, Cynthia Shelly <cyns@microsoft.com>,
            Steve Faulkner <faulkner.steve@gmail.com>
Date:	07/28/2011 05:04 PM
Subject:	Re: Reasons of using Canvas for UI design



I have tried to 'take a step back' and ask some basic questions, and look
for some basic principles, and then come up with some (probably basic)
points and ideas.

It seems that using Canvas is most interesting and useful when the
application (represented by the HTML context including Canvas, its
supporting scripts, and so on) is offering some user interaction model that
is NOT available when using HTML and/or SVG.  These are the applications
that must 'work' from an accessibility point of view, in my opinion. An
obvious example is the particle simulation one;  another thought experiment
I played with is below.  The issue is that if the application offers
something more than just a visual (e.g. one can learn something, build
something, or affect something) it ought to be accessible to the
visually-impaired user.

The canvas and what's drawn on it are just the visible manifestation of the
application; it's what those pixels mean, and what interaction with the
application means, that we need to make accessible. So rather than asking
'how do we make canvas accessible?' I think we need to ask 'how do we make
applications that use canvas accessible?'.

Ideally, the accessibility of those canvas-using applications is mostly
enabled by making the applications work at all; if there are extra,
special, provisions for accessibility, we know from experience that some
authors won't take the trouble to use those provisions, and accessibility
suffers.  I don't know how to achieve that for canvas-based applications,
but it's worth keeping in mind.

In a canvas-based application, it is the scripts (and their supporting
infrastructure) that constitute the application; the canvas surface is just
the visible rendering. So I think it is the scripts that should bear the
duty of providing accessibility.  Writing scripts that do hit
testing is currently somewhat of a pain; it may well be that if we
can provide access to optimized hit testing for scripts, we can
both ease the job of writing the applications and also provide
accessibility.  However, I do not
think that the accessibility sub-system should be interacting directly with
the 'model' that such a hit-testing support system might build. Rather, the
scripts should provide the final answer, supported (if they wish) by such a
sub-system.
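
To illustrate the pain: today the script has to rebuild and test
every candidate path itself on each pointer event, something like
the following ('shapes' and buildPath() are stand-ins for whatever
model the application keeps):

    var canvas = document.getElementById('surface');
    var ctx = canvas.getContext('2d');
    var shapes = [];  // the application's own display list

    canvas.addEventListener('click', function (e) {
      var rect = canvas.getBoundingClientRect();
      var x = e.clientX - rect.left;
      var y = e.clientY - rect.top;

      // Walk the display list front to back, rebuilding each
      // shape's path just to test it against the click point.
      for (var i = shapes.length - 1; i >= 0; i--) {
        ctx.beginPath();
        shapes[i].buildPath(ctx);        // stand-in helper
        if (ctx.isPointInPath(x, y)) {   // standard canvas API
          shapes[i].onClick(e);
          break;
        }
      }
    }, false);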

One thought experiment I took was this: the canvas represents a
bio-reactor, in which various populations of algae, fungi,
bacteria, etc.
are competing. The user can interact with it using a 'virtual pipette' --
adding to the pipette by picking up material in one place, and dropping
material from the pipette somewhere else (e.g. by right and left
click-and-hold). All the while, the organisms are reproducing, spreading,
dying, shrinking, co-existing, etc. In this, there are no 'paths' to test
against; rather, the application is modelling a fluid situation.
The user can learn what the effect is of dropping various
populations in the midst of others. Apart from a legend
"right-click and hold to pick up,
left-click and hold to drop" (outside the canvas) how does the application
convey what is being picked up, what's in the pipette, and what's going on
in the reactor, to an accessibility-needing user?  "There is a
check-box under the mouse, which says 'Remember me'" comes nowhere
close. This
application is not path-based, is not using 'standard controls' and so on.

Applications that can use standard controls should use them!  Even
overlaid on the canvas if need be (see the sketch below).  If they
can use (or get some
assistance from) path-based hit-testing, let's develop a support system
that they can use for that. If they are breaking new ground, let's ask what
the scripts need to be able to do, to make the resulting application
accessible.  I feel sure that if we can answer that question, many cases
will suddenly be seen to be workable.
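
As a trivial sketch of the overlay idea: a real, fully accessible
control simply positioned over the drawing surface, keeping all of
its built-in keyboard and accessibility behaviour:

    <div style="position: relative">
      <canvas id="surface" width="400" height="300"></canvas>
      <button style="position: absolute; left: 10px; top: 10px">
        Reset simulation
      </button>
    </div>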



David Singer
Multimedia and Software Standards, Apple Inc.

Received on Friday, 29 July 2011 14:05:59 UTC