Re: Reasons of using Canvas for UI design

On 7/28/2011 3:04 PM, David Singer wrote:
> I have tried to 'take a step back' and ask some basic questions, and look for some basic principles, and then come up with some (probably basic) points and ideas.
>
> It seems that using Canvas is most interesting and useful when the application (represented by the HTML context including Canvas, its supporting scripts, and so on) is offering some user interaction model that is NOT available when using HTML and/or SVG.  These are the applications that must 'work' from an accessibility point of view, in my opinion. An obvious example is the particle simulation one;  another thought experiment I played with is below.  The issue is that if the application offers something more than just a visual (e.g. one can learn something, build something, or affect something) it ought to be accessible to the visually-impaired user.
Even a purely visual application ought to offer something to the visually-impaired user. Too many SVG files lack even a basic title or description. Animated canvas applications might at least provide stop/start buttons, as in the sketch below. These requirements are defined in more detail by WCAG 2.0.
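A minimal sketch of such a control (the ids are hypothetical, and the loop body stands in for whatever the application animates):

<button id="pause" aria-pressed="false">Pause animation</button>
<script>
var running = true;
var btn = document.getElementById('pause');
btn.onclick = function () {
  running = !running;
  // aria-pressed reflects the toggle state for ATs.
  btn.setAttribute('aria-pressed', String(!running));
};
(function loop() {
  if (running) {
    // advance and redraw the animation here
  }
  setTimeout(loop, 40); // roughly 25 frames per second
})();
</script>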

> The canvas and what's drawn on it are just the visible manifestation of the application; it's what those pixels mean, and what interaction with the application means, that we need to make accessible. So rather than asking 'how do we make canvas accessible?' I think we need to ask 'how do we make applications that use canvas accessible?'.
We also need to make the spatial manifestation accessible -- we need 
both. Keep in mind that vision and space are not the same thing.
Richard brought up ZoomText as a visual-semantic use case -- a user may 
navigate through ARIA role="button" elements, as
well as highlight such elements with additional visual content -- this 
is very visual. Then there's the eyes-free use case I brought up,
which Apple has every reason to explore: VoiceOver on Mobile Safari 
works with spatial areas.

Both use cases relate to what is drawn on screen as well as to interaction, though in the latter use case pixel fidelity is not relevant.

I absolutely think that the latter question is important; it's something 
I've been working on most of the year.
WCAG 2.0 has been a terrific resource, and ARIA has been invaluable in 
marking up the semantics necessary
to drive existing assistive technology.

> Ideally, the accessibility of those canvas-using applications is mostly enabled by making the applications work at all; if there are extra, special, provisions for accessibility, we know from experience that some authors won't take the trouble to use those provisions, and accessibility suffers.  I don't know how to achieve that for canvas-based applications, but it's worth keeping in mind.
I understand that, for spec designers, authoring quality is a concern. Such concerns are important, but they should not hinder authors who are developing with accessibility in mind.

I know from experience that developers who do not take accessibility (WCAG) into account fail to develop accessible content, regardless of the formats and authoring tools being used.

I've heard from people, in relation to software released on Apple's iPhone App Store, that applications may start off accessible and then, in a later release, no longer be accessible. The original developers are likely oblivious to this, as they have not tested with VoiceOver.

From experience, I keep that issue in mind. Even "automatic" accessibility fails most of the time.

> In a canvas-based application, it is the scripts (and their supporting infrastructure) that constitute the application; the canvas surface is just the visible rendering. So I think it is the scripts that should bear the duty of providing accessibility.  Writing scripts that do hit testing is currently somewhat of a pain;  it may well be that if we can provide access to optimized hit testing, for scripts, we can both ease the job of writing the applications and also provide accessibility.  However, I do not think that the accessibility sub-system should be interacting directly with the 'model' that such a hit-testing support system might build. Rather, the scripts should provide the final answer, supported (if they wish) by such a sub-system.
The accessibility sub-system receives ARIA updates and interacts with the existing DOM model, sending device events. Accessibility sub-systems such as VoiceOver for Mobile Safari do work directly with the model, to get bounding-box information and to send focus and click events.
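As a rough sketch of what that means for an author (the ids, coordinates, and overlay technique here are my own illustration, in the spirit of David's note below about overlaying standard controls): a positioned control over the drawn region carries real layout, so the sub-system's bounding-box queries and focus/click events land on it directly.

<div style="position: relative">
  <canvas id="reactor" width="400" height="300"></canvas>
  <!-- A transparent, positioned control over the drawn region;
       the AT reads its bounding box from layout and sends it
       ordinary focus and click events. -->
  <button id="pickUp"
          style="position: absolute; left: 20px; top: 20px;
                 width: 60px; height: 60px; opacity: 0">
    Pick up material</button>
</div>
<script>
document.getElementById('pickUp').onclick = function () {
  // The script gives the final answer to the event:
  // update the simulation model and repaint the canvas here.
};
</script>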

Scripts certainly bear the duty of providing accessibility -- the authors must appropriately manage the canvas subtree, or their application will not be accessible and will not be of the quality that WCAG sets forward.

Scripts provide a final answer in what they do with events received from the DOM, as well as providing context when they set up HTML/ARIA/other markup semantics in the canvas subtree.

>
> One thought experiment I took was this: the canvas represents a bio-reactor, in which various populations of algae, fungi, bacteria etc. are competing. The user can interact with it using a 'virtual pipette' -- adding to the pipette by picking up material in one place, and dropping material from the pipette somewhere else (e.g. by right and left click-and-hold). All the while, the organisms are reproducing, spreading, dying, shrinking, co-existing, etc. In this, there are no 'paths' to test against; rather the application is modelling a fluid situation. The user can learn what the effect of dropping various populations in the midst of others, is. Apart from a legend "right-click and hold to pick up, left-click and hold to drop" (outside the canvas) how does the application convey what is being picked up, what's in the pipette, and what's going on in the reactor, to an accessibility-needing user?  "There is a check-box under the mouse, which says "Remember me"" comes nowhere close. This application is not path-based, is not using 'standard controls' and so on.
This is what the ARIA language was set up for, and what it will continue to address.

In your application, the placement of the materials is indeed based on spatial regions; they are path-based. Deconstruct the controls. Keep in mind that drag-and-drop is a new semantic, but one that is in ARIA.

With a pipette, I would have a list of materials: I could use a listbox, a grid, a select box, or a collection of buttons.

Consider this visual application you're speaking of -- how would it look in HTML 3.2? That's where I would start, for accessibility. I would push all of those elements into the subtree. Now I've got a bio-reactor that works in HTML 3.2 without requiring scripting, as sketched below.
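A rough sketch of that fallback (the element names and numbers are invented for illustration):

<canvas id="bioreactor" width="600" height="400">
  <!-- Plain-HTML fallback: usable without scripting, and it becomes
       the accessibility model once scripts start drawing. -->
  <form action="reactor.cgi" method="post">
    <p>Pipette contents:
      <select name="material">
        <option>Algae</option>
        <option>Fungi</option>
        <option>Bacteria</option>
      </select>
      <input type="submit" value="Drop into reactor">
    </p>
  </form>
  <table border="1">
    <caption>Populations by region</caption>
    <tr><th>Region</th><th>Algae</th><th>Fungi</th><th>Bacteria</th></tr>
    <tr><td>North-west</td><td>40%</td><td>10%</td><td>50%</td></tr>
  </table>
</canvas>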

Now I build from there. I set up events so that when an item is selected, it is marked up with aria-grabbed; I set up regions with labels and bind them to live data tables; I use aria-live when necessary. I ensure that the user can easily navigate between those data regions with their keyboard.

I use ARIA to describe to the user what is going on in the reactor, and 
what they are currently holding, regardless of canvas.
And it does not matter to ARIA whether it is in the canvas subtree or 
outside of it.
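A sketch of that wiring, reusing the hypothetical fallback markup above (the ids and the status text are invented):

<script>
var picker = document.querySelector('#bioreactor select');
var liveRegion = document.createElement('p');
// A polite live region: the AT finishes what it is reading
// before announcing the update.
liveRegion.setAttribute('aria-live', 'polite');
document.getElementById('bioreactor').appendChild(liveRegion);

picker.onchange = function () {
  var option = picker.options[picker.selectedIndex];
  // Mark the chosen material per the ARIA drag-and-drop model...
  option.setAttribute('aria-grabbed', 'true');
  // ...and describe what the pipette now holds.
  liveRegion.textContent = 'Pipette now holds ' + option.text + '.';
};
</script>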

With all of this done, I then open up ATs that are currently popular on the market, and I test across Windows and OS X machines, the two most popular OS platforms. I perform functional testing: I see if I can perform all of the actions that I could otherwise perform with a keyboard, mouse, and a large desktop screen with decent vision.

That is the process. I encourage you to collaborate with accessibility 
testers so that you may further understand their work.

> Applications that can use standard controls should use them!  Even overlayed on the canvas if need be.  If they can use (or get some assistance from) path-based hit-testing, let's develop a support system that they can use for that. If they are breaking new ground, let's ask what the scripts need to be able to do, to make the resulting application accessible.  I feel sure that if we can answer that question, many cases will suddenly be seen to be workable.
>
I've spent nearly six months this year working on accessibility in a web 
application.
From the perspective of modern accessibility software -- standard controls are merely defined by ARIA, nothing more. They are not visual.

With Canvas, we do use standard controls in the canvas subtree, for all sorts of reasons. Think of Canvas as another presentation layer, much like CSS. We still use the <select> element, because the native implementation is really handy -- but we style that element with Canvas.
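A sketch of that pattern (the ids are hypothetical): the native <select> keeps its keyboard and AT behaviour, and the canvas simply repaints its face when the selection changes.

<canvas id="skin" width="200" height="30"></canvas>
<select id="picker">
  <option>Algae</option>
  <option>Fungi</option>
</select>
<script>
var picker = document.getElementById('picker');
var ctx = document.getElementById('skin').getContext('2d');
function paint() {
  // Draw the control's face; the real widget handles all input.
  ctx.clearRect(0, 0, 200, 30);
  ctx.fillStyle = '#eef';
  ctx.fillRect(0, 0, 200, 30);
  ctx.fillStyle = '#000';
  ctx.fillText(picker.options[picker.selectedIndex].text, 8, 20);
}
picker.onchange = paint;
paint();
</script>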

As for what we need to do, that's covered here:
http://www.w3.org/WAI/eval/

As for breaking new ground: I do believe that the ARIA specification could be updated with new role types. Spreadsheets are not a new UI concept, but the ARIA concept of "grid" and the HTML concept of "table" do not convey "spreadsheet" to an AT. Currently, an AT knows it's a spreadsheet only if the application is MS Excel. Let's extend more UI roles onto the web. ARIA allows for fallback roles.

<div role="spreadsheet grid"> <!-- This works, it falls back to grid of 
spreadsheet is not supported by the AT  -->

I would love to answer "what scripts need to be able to do" -- I've 
certainly tried.

I believe that WCAG 2.0 and the WAI evaluation guidelines document what 
scripts need to do,
and ARIA provides semantics to accomplish that.


-Charles

Received on Thursday, 28 July 2011 23:48:47 UTC