Re: hit testing and retained graphics

Hi everyone,

I absolutely agree with TJ here. I think a lot of people here are trying
to solve theoretical issues right now, which is very dangerous. I have
watched the canvas landscape closely, and there are very few production
apps out there that would benefit significantly from increased
accessibility on the canvas object itself. Today's canvas applications
are mostly game demos and drawing apps.

If you are writing your GUI in canvas, you are doing it wrong. If you are
building a chart or graph and don't drive the canvas visualization from
data markup that was progressively enhanced, you are doing it wrong. Hell,
even if you are trying to create a full game on canvas today, you are
probably doing it wrong (unless you want really fancy particle effects and
can live with a very small canvas size). Canvas *is* an enhanced <img>.
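
To make the chart/graph point concrete, here is a minimal sketch of the
"canvas from markup" pattern I mean. The element ids and helper names are
hypothetical; the idea is simply that the accessible source of truth is a
plain HTML table, and the canvas bar chart is painted from that markup, so
assistive technology still gets the table even though sighted users see
the chart.

```javascript
// Pure helper: turn (label, value) pairs into bar rectangles for a
// canvas of the given size. Kept DOM-free so it is easy to test.
function layoutBars(data, width, height) {
  const max = Math.max(...data.map(d => d.value));
  const barWidth = width / data.length;
  return data.map((d, i) => ({
    label: d.label,
    x: i * barWidth,
    y: height - (d.value / max) * height,
    w: barWidth * 0.8,
    h: (d.value / max) * height,
  }));
}

// Progressive enhancement: read the data back out of the table markup,
// then paint it. Remove the canvas and the page still works.
function enhanceTable(tableId, canvasId) {
  const table = document.getElementById(tableId);
  const canvas = document.getElementById(canvasId);
  const data = [...table.querySelectorAll("tbody tr")].map(tr => ({
    label: tr.cells[0].textContent,
    value: parseFloat(tr.cells[1].textContent),
  }));
  const ctx = canvas.getContext("2d");
  for (const bar of layoutBars(data, canvas.width, canvas.height)) {
    ctx.fillRect(bar.x, bar.y, bar.w, bar.h);
  }
}
```

The point is that no accessibility information lives in the canvas itself;
it is all in the table the canvas was rendered from.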

Therefore, before moving on with this discussion, I would like you to take
a step back and think about animated GIFs. Every accessibility enhancement
you make to canvas should also apply to animated GIFs. If it doesn't, it's
probably over-engineered for the problem at hand, and canvas was the wrong
tech choice from the start. It's easy, really: "Would you create X for
animated GIFs so disabled people can enjoy animated GIFs? <your answer
here>".

Finally, I want to end this little rant by reiterating that canvas is
absolutely *not* used much in the wild. Granted, it has taken off a little
better than SVG because it's easy (and crappy, too!) to get a demo done in
no time, but most people realize after a couple of hours that it's probably
not a good idea to iterate over millions of pixels per frame in JavaScript
today.
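
For anyone who hasn't tried it: the cost I'm talking about is the naive
per-pixel loop that a getImageData-based effect runs every frame. A sketch
(the function name is mine, and I've written it over a plain RGBA array so
it needs no canvas to run): at 1920x1080 this loop touches over two
million pixels, i.e. eight million array slots, per frame.

```javascript
// Naive grayscale pass over a flat RGBA buffer (4 bytes per pixel),
// the shape of data that ctx.getImageData(...).data gives you.
function grayscale(rgba) {
  for (let i = 0; i < rgba.length; i += 4) {
    // Standard luma weights for R, G, B.
    const y = Math.round(0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2]);
    rgba[i] = rgba[i + 1] = rgba[i + 2] = y; // alpha left untouched
  }
  return rgba;
}
```

Run that at 60fps on a full-screen canvas and you see why the demos stay
small.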

On 29.06.11 01:40, "Tab Atkins Jr." <jackalmage@gmail.com> wrote:

>On Tue, Jun 28, 2011 at 2:32 PM, John Foliot <jfoliot@stanford.edu> wrote:
>> Tab Atkins Jr. wrote:
>>> The WHATWG wiki pages for Video Caption and Modal Dialog use cases
>>> exemplify what is meant by compiling clear use-cases:
>>> *
>>> <http://wiki.whatwg.org/wiki/Use_cases_for_timed_tracks_rendered_over_video_by_the_UA>
>>>
>>> They examine existing usage to discover what features are important,
>>> and give several examples of each.  This way we can tell directly
>>> whether the solutions we're crafting are adequate, by attempting to
>>> recreate the examples with the proposed solution.
>>
>> And right there, you've identified the disconnect. Those "use cases" for
>> video captioning are, frankly, bollocks, as they do not address a
>> significant amount of accessibility concerns.
>>
>> Tab I must tell you that most folk I know scoffed at that WHATWG wiki
>> page of anime examples and other screen-captures, as they barely
>> scratched the surface in terms of identifying use-cases, or rather
>> *user-requirements*. How could they? They are pictures, with no
>> examination of what is actually trying to be solved, or even a clear
>> understanding of what the problems are - at best those pictures
>> illustrate some visual design requirements. That wiki page accurately
>> exemplifies the slap-dash WHATWG approach to addressing any
>> accessibility problem on the web - the "close enough, we can fix it
>> later" approach.
>>
>> Contrast that collection of incomplete pictures with the detailed User
>> Requirements that the media sub-team created, and you will quickly see
>> that the incomplete "use-case" exercise that the WHATWG folks undertook
>> was woefully inadequate.
>>
>> (http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_User_Requirements)
>
>The WHATWG wiki page answers the question "What are authors doing with
>captions in other technologies?".  Knowing the answer to that helps us
>know what a new technology needs to enable, which helps us craft a
>solution.
>
>The WAI wiki page also endeavors to answer that question, in addition
>to answering "What caption-specific types of accessibility problems
>may exist?", which is also valuable.  (Separately, the WAI wiki page
>also has several problems, where requirements on different targets
>(authoring API vs browser UX) are intermixed, and some requirements
>don't appear to have adequate justification.)
>
>> Now today we have a <canvas> element that took the first approach
>> (rather than the second one), and so we are in a situation where
>> <canvas> is woefully inaccessible - in part because when it was being
>> designed it wasn't examined and thought through w.r.t. accessibility,
>> once again it was "close enough, we can fix it later". (I don't mean to
>> pick on Ben Galbraith or Dion Almaer, but Bespin epitomized this:
>> http://benzilla.galbraiths.org/2009/02/18/bespin-and-canvas-part-2/)
>>
>> Well, "later" is *now*.
>
>I agree.  Did you think I was defending the current <canvas> API?  My
>argument is simply that I don't think the approach taken in this
>thread of defining a minimally-invasive API atop the 2d context is
>good.  "Minimally-invasive" fixes usually result in bad solutions.  We
>should instead be starting from a list of problems to be solved, so we
>can determine if the best solution is to patch the 2d context, create
>a new canvas context that better solves the problems, or use a
>different technology entirely like SVG.
>
>~TJ
>

Received on Wednesday, 29 June 2011 13:33:56 UTC