- From: Philip Taylor <pjt47@cam.ac.uk>
- Date: Wed, 18 Feb 2009 16:14:05 +0000
- To: Robin Berjon <robin@berjon.com>
- CC: Steven Faulkner <faulkner.steve@gmail.com>, HTML WG <public-html@w3.org>, W3C WAI-XTECH <wai-xtech@w3.org>, Janina Sajka <janina@rednote.net>
Robin Berjon wrote:
> On Feb 18, 2009, at 15:52 , Philip Taylor wrote:
>> The main problem I see with adding built-in (as opposed to bolt-on)
>> accessibility to canvas is that I can't even begin to imagine any way
>> that could ever possibly work at all :-). That may be largely because
>> my imagination is limited - I'd be interested in concrete suggestions
>> of how it could be done. Otherwise I can't think of anything the spec
>> could say to help accessibility here.
>
> Well the canvas text API gets text in, so it's possible that one could
> get text out. The problem is: in what order. [...]

A more basic problem is deciding exactly which text to get out. Most interesting canvas examples have some animation or dynamic updates. (If they're static, you might as well use <img> instead.)

There is no explicit begin/end frame command -- you might use clearRect() to clear the canvas before drawing the next frame, or draw a solid white rectangle over the visible area before the next frame, or you might just draw the next frame immediately and have it overwrite the old content.

I suppose it would be possible to add a beginFrame command, which could hint to the UA that it should forget any strings it has previously seen and remembered for accessibility. But I don't think it would have any purpose other than accessibility, and so most authors would ignore it or misuse it, which is the problem with "bolt-on" accessibility features that we'd like to avoid whenever possible.

You also might only redraw a small portion of the canvas, which can be a very important optimisation -- e.g. if you type a character into Bespin then it only needs to redraw the current line of text and not the entire screen. beginFrame wouldn't help with that case, because you're never passing the entire screen's text into the API.

The UA just sees a stream of function calls, some of which are drawing text, but it doesn't have enough information to tell what text is part of the current display. I don't know how it could semi-reliably work that out without effectively requiring the author to provide it with a string representing the currently displayed text, at which point you might as well insert that string as HTML into the canvas element's content and you don't need to involve the canvas API at all.

> There could, furthermore, be additional
> pushAnnotation(string)/popAnnotation() calls added to the API that could
> provide alternative text information about what is being painted.

Would pushAnnotation provide any advantages over "canvas.innerHTML += string" as a way of offering a non-graphical representation of the canvas bitmap? If the annotation system becomes complex enough to indicate interactive buttons, it seems that doing "canvas.innerHTML += '<button>...'" (or the equivalent in the DOM API) would involve similar amounts of effort from authors and wouldn't require any extensions to the canvas API or to UAs. (There's a rough sketch of what I mean below.)

> Again,
> I'm not sure that that's a good idea, I'm just indicating that there
> /might/ be options.

I agree there might be some; the problem is I just haven't seen any yet that do seem like a good idea.

> At first thought I would tend to consider that a good first step would
> be to write an authoring guide providing guidance as to when to use
> canvas and when to use something else.

There's a rough attempt at http://wiki.whatwg.org/wiki/SVG_and_canvas to vaguely suggest when to use canvas and when to use SVG.
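(Going back to the fallback-content point above: here's a rough, untested sketch of the sort of thing I mean. The element id, the render() function and the "score" example are all made up for illustration; the only relevant parts are that fillText draws into the bitmap while ordinary DOM children of the canvas element carry the equivalent non-graphical representation.)

  <canvas id="demo" width="300" height="100">
    <!-- fallback content, kept in sync by the script below -->
  </canvas>
  <script>
    var canvas = document.getElementById('demo');
    var ctx = canvas.getContext('2d');

    function render(score) {
      // clear the previous frame (one of the patterns mentioned above)
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillText('Score: ' + score, 10, 20);

      // mirror the same information into the element's fallback content,
      // which is what non-graphical UAs present instead of the bitmap
      canvas.innerHTML = '';
      var p = document.createElement('p');
      p.textContent = 'Score: ' + score;
      canvas.appendChild(p);

      // an interactive control is just ordinary markup here, with no need
      // for the drawing API to describe it
      var restart = document.createElement('button');
      restart.textContent = 'Restart';
      restart.onclick = function () { render(0); };
      canvas.appendChild(restart);
    }

    render(0);
  </script>

Everything the pushAnnotation/popAnnotation idea would need to express seems expressible as ordinary markup inside the canvas element like this, without any change to the drawing API.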
It could be helpful to have a document focusing on reasons not to use canvas and other suitable alternatives -- the text in the HTML5 spec saying when canvas is inappropriate doesn't give any justification and seems likely to be ignored.

--
Philip Taylor
pjt47@cam.ac.uk
Received on Wednesday, 18 February 2009 16:14:42 UTC