W3C home > Mailing lists > Public > public-canvas-api@w3.org > October to December 2011

Re: draft of hit testing on active regions (actually paths) in canvas

From: Charles Pritchard <chuck@jumis.com>
Date: Fri, 28 Oct 2011 14:19:53 -0700
Message-ID: <4EAB1C79.8060807@jumis.com>
To: Jonas Sicking <jonas@sicking.cc>
CC: Richard Schwerdtfeger <schwer@us.ibm.com>, franko@microsoft.com, david.bolter@gmail.com, cyns@exchange.microsoft.com, public-html-a11y@w3.org, janina@rednote.net, jbrewer@w3.org, public-canvas-api@w3.org
> On Fri, Oct 28, 2011 at 1:02 AM, Charles Pritchard <chuck@jumis.com> wrote:
>>> On Thu, Oct 27, 2011 at 7:27 PM, Charles Pritchard <chuck@jumis.com>
>>>   wrote:
>>>> On 10/27/11 5:42 PM, Jonas Sicking wrote:
>>>>
>>>>> In general I think I prefer the API I proposed in [1]. Can you
>>>>> describe what problems you were trying to solve with your changes
>>>>> compared to that proposal?
>>>>>
>>>>> [1]
>>>>>
>>>>> http://lists.w3.org/Archives/Public/public-canvas-api/2011JulSep/0195.html
>>>> My primary concern here is path fidelity. Apart from the extra steps I
>>>> outlined earlier, would a mature setDrawingFor implementation keep all of
>>>> the paths for accessibility?
>>>>
>>>> setPathForElement instructs the implementation to maintain the already
>>>> constructed path on some kind of stack or other ordered object.
>>>>
>>>> setDrawingFor would -possibly- maintain a series of paths as opposed
>>>> to a series of subpaths.
>>>>
>>>> Basic implementations are most likely to just keep bounding box
>>>> information
>>>> in either case. More advanced implementations will maintain some kind of
>>>> additional information about the interactive region.
>>> I don't think keeping just bounding box information should be
>>> permitted. My idea is that when the user clicks a <canvas> which has
>>> used the setDrawingFor API, then we forward the click to the actual
>>> elements inside the canvas. Similar to how labels work. I.e. any event
>>> handlers registered on the element would fire. If the element is a
>> Yes, I like that idea.
>>
>>> <input type=checkbox> then it would flip its "checkedness", etc. But
>>> only pixels drawn while setDrawingFor is in effect act as a
>>> forwarding area.
>> This kind of implementation requires keeping a bitmap for each interactive
>> element.
>> It can be a small bitmap, just 1 bit per pixel.
> That would be one way to implement it. Another implementation strategy
> would be to just store vector information for each issued drawing
> command. This information is then used to do hit-testing internally in
> the implementation. So for a path-based drawing command the
> implementation would remember the path. For a drawImage command the
> implementation would remember the coordinates that make up the square.
>
> This is similar to how we implement image maps in Gecko where each
> <area> is parsed into a set of coordinates which are then used to do
> hit testing.

I recommend that browser vendors take the vector strategy.

Is this something that would only apply to "fill" commands?
fillText may be tricky -- I'd recommend treating it as a rectangle in 
the implementation.
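A sketch of that rectangle treatment for fillText (the helper name and the font-size-as-height shortcut are mine, not part of either proposal; a real implementation would use proper font metrics):

```javascript
// Approximate a hit-test rectangle for a fillText(text, x, y) call.
// The measured text width comes from something like ctx.measureText();
// the font size stands in for the text height (rough but simple).
function textHitRect(x, y, measuredWidth, fontSize) {
  // fillText's y is the text baseline, so the box extends upward.
  return { x: x, y: y - fontSize, width: measuredWidth, height: fontSize };
}

// e.g. for ctx.fillText("OK", 10, 50) with a 16px font measuring 24px wide:
var rect = textHitRect(10, 50, 24, 16);
```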

Should the current clip() be applied? It seems that it should be.

I'm trying to flesh out the implementation details.

Calculating clip against a path can be a bit costly -- having tried and 
done it poorly twice in implementations, I'd recommend that the clip() 
be added onto the path list and used before hit testing subsequent 
drawing commands.

That is, ctx.fillRect(..); ctx.clip(); ctx.fillRect(); would put 
two items on the hit-testing path list; the clip rect would have the 
special property of requiring that the event fall within it before hit 
testing subsequent paths.
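A rough sketch of that bookkeeping, restricted to rects to keep it short (the entry shapes and function names are mine):

```javascript
// Hit-test list where a clip entry gates all subsequent entries.
function inRect(r, x, y) {
  return x >= r.x && y >= r.y && x < r.x + r.w && y < r.y + r.h;
}

// Returns the index of the topmost hit entry, or -1 for no hit.
function hitTest(list, x, y) {
  var clip = null; // active clip rect, if any
  var hit = -1;
  for (var i = 0; i < list.length; i++) {
    var entry = list[i];
    if (entry.type === 'clip') {
      clip = entry.rect; // gates everything recorded after it
    } else if (!clip || inRect(clip, x, y)) {
      if (inRect(entry.rect, x, y)) hit = i;
    }
  }
  return hit;
}

// Mimics: ctx.fillRect(0,0,50,50); ctx.clip(/* 10x10 */); ctx.fillRect(0,0,50,50);
var list = [
  { type: 'fill', rect: { x: 0, y: 0, w: 50, h: 50 } },
  { type: 'clip', rect: { x: 0, y: 0, w: 10, h: 10 } },
  { type: 'fill', rect: { x: 0, y: 0, w: 50, h: 50 } }
];
```

A click at (30,30) hits only the first fill, because the second fill is gated by the 10x10 clip.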



>> This method works for simple UIs.
> Why wouldn't it work for complex UIs?

It's fine with vector coordinates. It works with 1-bit bitmaps, but I 
don't recommend that route.


>> It looks kind of like this, when implemented directly in canvas by authors:
>> mouseContext.getImageData(x,y), get the pixel color, use that pixel color
>> to access the appropriate listener in an event object.
>>
>> This method is fine for authors, but I don't recommend it for UAs and it's
>> not that nice for ATs.
>> It's actually helpful for sighted authors, sometimes, as they can set the
>> mouseContext canvas to display visibly and see the interactive regions in
>> color.
>>
>> A UA, and a more advanced app will maintain a list of paths and use various
>> indexing schemes to ensure
>> that list can grow and remain performant. It requires far less memory and is
>> easily serialized. It does have a slight trade-off for CPU usage, of course.
> Indeed. That is exactly the implementation strategy I would recommend.
> However the API doesn't require any particular implementation
> strategy, as long as it satisfies the desired behavior (exact hit
> testing) and a11y requirements (being able to find bounding box for
> screen magnifiers).
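For reference, the author-side color-picking technique quoted above can be sketched like this; I'm using a plain index buffer as a stand-in for the second (hidden) canvas and its getImageData call:

```javascript
// A WIDTHxHEIGHT index buffer stands in for the hidden "mouseContext"
// canvas; each interactive element paints its own index as a "color".
var WIDTH = 100, HEIGHT = 100;
var buffer = new Array(WIDTH * HEIGHT).fill(-1);
var listeners = [];

function registerRegion(x, y, w, h, onClick) {
  var index = listeners.length;
  listeners.push(onClick);
  for (var j = y; j < y + h; j++)
    for (var i = x; i < x + w; i++)
      buffer[j * WIDTH + i] = index; // "paint" the element's color
}

function dispatchClick(x, y) {
  var index = buffer[y * WIDTH + x]; // the getImageData(x, y) step
  if (index >= 0) listeners[index]();
}
```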

I'd like implementations to keep some kind of path details beyond a 
simple bounding box. I'm not sure "exact" hit testing is necessary as 
we're dealing in subpixels and complex curves. Close-enough is close 
enough for me.

The path data will be useful for touch-based interfaces as well as for 
debugging.

>> It's far more likely that vendors would re-use their existing hit-testing
>> code from SVG implementations.
> Yup. Or the hit-testing code from image maps.

Image maps don't have complex curves or clip(), but sure, it could be 
an easier source to borrow from. I'll stop worrying :-)

>>> This way authors have a much greater incentive to use the
>>> setDrawingFor API. I.e. they would get much simpler event handling for
>>> all their users. The fact that we'd be able to use the same
>>> information to drive screen magnifiers etc is "just" a convenient side
>>> effect from the authors point of view. This is the type of API which
>>> has worked the best historically. I.e. bolt-on solutions which are
>>> there only for a11y users have seen much less usage than APIs which add
>>> enough semantics to help all users. (This is why we prefer if people
>>> use HTML based UIs, rather than canvas based ones after all, right?)
>>> (Note, i'm not saying that bolt-on is bad or should be avoided. But
>>> when we can find better solutions, I think we should use them).
>> The setPathForElement method is providing the same incentive: simpler event
>> handling for users, while also driving screen magnifiers and other spatial
>> AT, such as AT running on touch-based devices.
>>
>> I'm afraid there may be a miscommunication, as I don't believe either of us
>> have suggested a "bolt-on solution".
>>
>> We're looking at the same pointer semantics for both proposals. Set up an
>> interactive region to delegate pointer events to an element within the
>> Canvas sub-tree.
> Hmm.. so is the intent for setPathForElement that it would forward
> clicks to the appropriate element inside the canvas too? If so, it
> doesn't seem like a bounding box would be enough to remember for the
> implementation.

You're correct: it would forward clicks too. You're also correct, a 
bounding box is rather crude. That said: VoiceOver on Mobile Safari uses 
bounding boxes from the accessibility tree.

I'd prefer all vendors go beyond simply tracking a bounding box. But, as 
a first step, if they can at least start with the bounding box, that'd 
bring Canvas a11y up to par with the rest of HTML5.

> If setPathForElement is indeed intended to work the same way as
> setDrawingFor, then it simply seems like a question of how much
> convenience we supply for page authors. I.e. do we request that they
> construct paths for all drawn "things" that they want hit testing for,
> or do we let them use any drawing command.
>
> My preference would be the latter.

As an author, my preference is the former; I'd be happy to have 
anything, and I do believe that both semantics are useful.

Now, my code is based on existing practices, so I'm already doing a lot 
of extra work for hit testing. Plugging in setPathForElement would be 
easier for me to support than setDrawingFor, but both can work.

>>> I think we'll see significantly more usage if all the author needs to
>>> do is to sprinkle setDrawingFor calls over the code to mark which
>>> drawing calls are made for which elements. If setPathForElement
>>> requires authors to create new paths specifically just to mark bounding
>>> boxes then I think fewer people will use it.
>>>
>> setPathForElement is less work in the code bases I work on.
> Same in Gecko. But I think it's more work for websites.

In many cases, setPathForElement will require fewer lines of code to be 
added.

Many existing projects already maintain their own hit testing code. In 
many cases the hit testing code is separate from the rendering code. 
That is, it's run in entirely separate methods. It'd be easier in my own 
projects to use setPathForElement, though I've shown you how I would use 
setDrawingFor for the same effect.
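A sketch of that retrofit: the project's existing hit-test registration stays as the fallback, and setPathForElement is called when available (feature detection as in my earlier example; registerHitPath and hitRegions are my own stand-ins for an app's existing hit-testing code):

```javascript
// App-side registry used when the native API is unavailable.
var hitRegions = [];

function registerHitPath(ctx, element, rect) {
  if ('setPathForElement' in ctx) {
    // Native path: the current path was just built; hand it to the UA.
    ctx.setPathForElement(element);
  } else {
    // Fallback: keep doing our own hit testing against the rect.
    hitRegions.push({ element: element, rect: rect });
  }
}
```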

>> I've shown a few examples of how and when it is more appropriate.
>>
>> Consider a simple button. In most cases of Canvas based UI, the author need
>> only convey the bounding box of the button, or the outer shape of the
>> button, the rest of the drawing calls are entirely presentational.
>>
>> That presentational information is certainly useful when/if serialized into
>> an SVG document. I do feel that setDrawingFor has a strong tie to the
>> component model being discussed in other lists.
> In the button UI example you are talking about, why wouldn't the page
> simply add a setDrawingFor(element) call before the code it already
> has to draw the button, and maybe a setDrawingFor(null) after?
>
> If it's really concerned about only getting the outermost bounding box
> covered, it can move the setDrawingFor calls to be just around the
> code that draws the button background.

Yes, that's correct, but that's one additional line of code.

The following example demonstrates that, with a 100x100 button.
>> Here is a basic example of supporting both methods in existing clients:
>>
>> // NOTE: real-world use would include polyfill methods for legacy clients
>> (using CSS overlays).
>>
>> // setDrawingFor for legacy clients
>> var hasDraw = Boolean('setDrawingFor' in ctx);
>> if(hasDraw) ctx.setDrawingFor(element);
>> ctx.fillRect(0,0,100,100);
>> if(hasDraw) ctx.setDrawingFor(null);
>>
>> // setPathForElement for legacy clients
>> var hasPath = Boolean('setPathForElement' in ctx);
>> ctx.fillRect(0,0,100,100);
>> if(hasPath) ctx.setPathForElement(element);
>>
>> // end.
> I'm not sure what this example intends to show.

element = document.createElement('button');
It shows how I might support buttons in my current code bases.

As demonstrated, setPathForElement requires one less line of code.
Given all the insertion points where I'd add the code, that adds up to 
a lot less code overall.


>> I can more easily introduce setPathForElement to existing Canvas projects as
>> well as existing Canvas implementations. It's intended to operate with the
>> same positive result as setDrawingFor -- pointer events are marshalled onto
>> the target element.
> Until someone speaks up and says "that proposal is too complex to
> implement" I don't think we should worry about implementation
> complexity and instead look at what's most useful for page authors.
>
>
Understood. It's my belief that setPathForElement is most useful for the 
projects I currently manage.

For implementors: if setPathForElement or setDrawingFor is implemented, 
it'd be great if debugger tools allowed us to see a visualization 
of the active element regions on the canvas. Sure, we can move our mouse 
around and see the cursor change, but seeing everything at once is quite 
helpful. Simply showing random colors at 0.3 opacity would do.
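Something along these lines, written against a plain 2D context (the region list shape and color choice are mine):

```javascript
// Paint each interactive region in a random translucent color so all
// regions are visible at once (debug aid only).
function showRegions(ctx, regions) {
  ctx.save();
  ctx.globalAlpha = 0.3;
  for (var i = 0; i < regions.length; i++) {
    var r = regions[i];
    ctx.fillStyle = 'hsl(' + Math.floor(Math.random() * 360) + ',80%,50%)';
    ctx.fillRect(r.x, r.y, r.w, r.h);
  }
  ctx.restore();
}
```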




-Charles
Received on Friday, 28 October 2011 21:20:32 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 28 October 2011 21:20:33 GMT